\section{Introduction} Deformations and extensions of the oscillator algebra have found many applications to physical problems, such as the description of systems with non-standard statistics, the construction of integrable lattice models, the investigation of nonlinearities in quantum optics, as well as the algebraic treatment of exactly solvable quantum models and of $n$-particle integrable systems.\par The generalized deformed oscillator algebras (GDOAs) (see e.g.\ Ref.\ \cite{cq95} and references quoted therein) arose from successive generalizations of the Arik-Coon~\cite{arik} and Biedenharn-Macfarlane~\cite{biedenharn} $q$-oscillators. Such algebras, denoted by ${\cal A}_q(G(N))$, are generated by the unit, creation, annihilation, and number operators $I$, $a^{\dagger}$, $a$, $N$, satisfying the Hermiticity conditions $\left(a^{\dagger}\right)^{\dagger} = a$, $N^{\dagger} = N$, and the commutation relations \begin{equation} \left[N, a^{\dagger}\right] = a^{\dagger}, \qquad [N, a] = - a, \qquad \left[a, a^{\dagger}\right]_q \equiv a a^{\dagger} - q a^{\dagger} a = G(N), \end{equation} where $q$ is some real number and $G(N)$ is some Hermitian, analytic function.\par On the other hand, $\cal G$-extended oscillator algebras, where $\cal G$ is some finite group, appeared in connection with $n$-particle integrable models. For the Calogero model~\cite{calogero}, for instance, $\cal G$ is the symmetric group $S_n$~\cite{poly}.\par {}For two particles, the $S_2$-extended oscillator algebra ${\cal A}^{(2)}_{\kappa}$, where $S_2 = \{\, I, K \mid K^2 = I \,\}$, is generated by the operators $I$, $a^{\dagger}$, $a$, $N$, $K$, subject to the Hermiticity conditions $\left(a^{\dagger}\right)^{\dagger} = a$, $N^{\dagger} = N$, $K^{\dagger} = K^{-1}$, and the relations \begin{eqnarray} \left[N, a^{\dagger}\right] & = & a^{\dagger}, \qquad [N, K] = 0, \qquad K^2 = I, \nonumber \\ \left[a, a^{\dagger}\right] & = & I + \kappa K \qquad (\kappa \in {\rm R}), \qquad a^{\dagger} K = - K a^{\dagger}, \end{eqnarray} together with their Hermitian conjugates.\par When the $S_2$ generator $K$ is realized in terms of the Klein operator $(-1)^N$, ${\cal A}^{(2)}_{\kappa}$ becomes a GDOA characterized by $q=1$ and $G(N) = I + \kappa (-1)^N$, known as the Calogero-Vasiliev~\cite{vasiliev} or modified~\cite{brze} oscillator algebra.\par The operator $K$ may alternatively be considered as the generator of the cyclic group $C_2$ of order two, since the latter is isomorphic to $S_2$. By replacing $C_2$ by the cyclic group of order $\lambda$, $C_{\lambda} = \{\, I, T, T^2, \ldots, T^{\lambda-1} \mid T^{\lambda} = I \,\}$, one then gets a new class of $\cal G$-extended oscillator algebras~\cite{cq98a}, generalizing that describing the two-particle Calogero model.
In the present communication, we will define the $C_{\lambda}$-extended oscillator algebras, study some of their properties, and show that they have some interesting applications to supersymmetric quantum mechanics (SSQM)~\cite{witten} and some of its variants.\par \section{\boldmath Definition and properties of $C_{\lambda}$-extended oscillator algebras} \setcounter{equation}{0} Let us consider the algebras generated by the operators $I$, $a^{\dagger}$, $a$, $N$, $T$, satisfying the Hermiticity conditions $\left(a^{\dagger}\right)^{\dagger} = a$, $N^{\dagger} = N$, $T^{\dagger} = T^{-1}$, and the relations \begin{eqnarray} \left[N, a^{\dagger}\right] & = & a^{\dagger}, \qquad [N, T] = 0, \qquad T^{\lambda} = I, \nonumber \\ \left[a, a^{\dagger}\right] & = & I + \sum_{\mu=1}^{\lambda-1} \kappa_{\mu} T^{\mu}, \qquad a^{\dagger} T = e^{-{\rm i}2\pi/\lambda}\, T a^{\dagger}, \label{eq:alg-def1} \end{eqnarray} together with their Hermitian conjugates~\cite{cq98a}. Here $T$ is the generator of (a unitary representation of) the cyclic group $C_{\lambda}$ (where $\lambda \in \{2, 3, 4, \ldots\}$), and $\kappa_{\mu}$, $\mu = 1$, 2, $\ldots$,~$\lambda-1$, are some complex parameters restricted by the conditions $\kappa_{\mu}^* = \kappa_{\lambda - \mu}$ (so that there remain altogether $\lambda-1$ independent real parameters).\par $C_{\lambda}$ has $\lambda$ inequivalent one-dimensional unitary irreducible representations (unirreps) $\Gamma^{\mu}$, $\mu = 0$, 1, $\ldots$,~$\lambda-1$, which are such that $\Gamma^{\mu}\left(T^{\nu}\right) = \exp({\rm i}2\pi \mu \nu/\lambda)$ for any $\nu = 0$, 1, $\ldots$,~$\lambda-1$. The projection operator onto the carrier space of~$\Gamma^{\mu}$ may be written as \begin{equation} P_{\mu} = \frac{1}{\lambda} \sum_{\nu=0}^{\lambda-1} e^{-{\rm i}2\pi \mu\nu/\lambda}\, T^{\nu}, \end{equation} and conversely $T^{\nu}$, $\nu=0$, 1, $\ldots$,~$\lambda-1$, may be expressed in terms of the $P_{\mu}$'s as \begin{equation} T^{\nu} = \sum_{\mu=0}^{\lambda-1} e^{{\rm i}2\pi \mu\nu/\lambda} P_{\mu}. \end{equation} \par The defining relations~(\ref{eq:alg-def1}) may therefore be rewritten in terms of $I$, $a^{\dagger}$, $a$, $N$, and~$P_{\mu}^{\vphantom{\dagger}} = P_{\mu}^{\dagger}$, $\mu=0$, 1, $\ldots$,~$\lambda-1$, as \begin{eqnarray} \left[N, a^{\dagger}\right] & = & a^{\dagger}, \qquad \left[N, P_{\mu}\right] = 0, \qquad \sum_{\mu=0}^{\lambda-1} P_{\mu} = I, \nonumber \\ \left[a, a^{\dagger}\right] & = & I + \sum_{\mu=0}^{\lambda-1} \alpha_{\mu} P_{\mu}, \qquad a^{\dagger} P_{\mu} = P_{\mu+1}\, a^{\dagger}, \qquad P_{\mu} P_{\nu} = \delta_{\mu,\nu} P_{\mu}, \label{eq:alg-def2} \end{eqnarray} where we use the convention $P_{\mu'} = P_{\mu}$ if $\mu' - \mu = 0\, {\rm mod}\, \lambda$ (and similarly for other operators or parameters indexed by $\mu$, $\mu'$). Equation~(\ref{eq:alg-def2}) involves $\lambda$ real parameters $\alpha_{\mu} = \sum_{\nu=1}^{\lambda-1} \exp({\rm i}2\pi \mu\nu/\lambda) \kappa_{\nu}$, $\mu=0$, 1, $\ldots$,~$\lambda-1$, restricted by the condition $\sum_{\mu=0}^{\lambda-1} \alpha_{\mu} = 0$.
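As a quick consistency check of the projection-operator relations above, the following minimal NumPy sketch (an illustration added here, not part of the original analysis) uses the $\lambda$-dimensional regular representation of $C_{\lambda}$, in which $T$ is the cyclic shift matrix, to verify the resolution of the identity, the orthogonality relations, and the inverse Fourier relation between the $T^{\nu}$'s and the $P_{\mu}$'s:
\begin{verbatim}
import numpy as np

lam = 4                                  # order of the cyclic group C_lambda
T = np.roll(np.eye(lam), 1, axis=0)      # cyclic shift matrix: T e_j = e_{j+1}

# P_mu = (1/lam) sum_nu exp(-2i pi mu nu / lam) T^nu
P = [sum(np.exp(-2j*np.pi*mu*nu/lam) * np.linalg.matrix_power(T, nu)
         for nu in range(lam)) / lam for mu in range(lam)]

assert np.allclose(sum(P), np.eye(lam))  # sum_mu P_mu = I
for mu in range(lam):                    # P_mu P_nu = delta_{mu,nu} P_mu
    for nu in range(lam):
        target = P[mu] if mu == nu else np.zeros((lam, lam))
        assert np.allclose(P[mu] @ P[nu], target)
for nu in range(lam):                    # T^nu = sum_mu exp(2i pi mu nu/lam) P_mu
    rhs = sum(np.exp(2j*np.pi*mu*nu/lam) * P[mu] for mu in range(lam))
    assert np.allclose(np.linalg.matrix_power(T, nu), rhs)
\end{verbatim}
Since the $\alpha_{\mu}$'s sum to zero, only $\lambda-1$ of them are independent.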
Hence, we may eliminate one of them, for instance $\alpha_{\lambda-1}$, and denote $C_{\lambda}$-extended oscillator algebras by ${\cal A}^{(\lambda)}_{\alpha_0 \alpha_1 \ldots \alpha_{\lambda-2}}$.\par The cyclic group generator $T$ and the projection operators $P_{\mu}$ can be realized in terms of $N$ as \begin{equation} T = e^{{\rm i}2\pi N/\lambda}, \qquad P_{\mu} = \frac{1}{\lambda} \sum_{\nu=0}^{\lambda-1} e^{{\rm i}2\pi \nu (N-\mu)/\lambda}, \qquad \mu = 0, 1, \ldots, \lambda-1, \label{eq:N-realize} \end{equation} respectively. With such a choice, ${\cal A}^{(\lambda)}_{\alpha_0 \alpha_1 \ldots \alpha_{\lambda-2}}$\ becomes a GDOA, ${\cal A}^{(\lambda)}(G(N))$, characterized by $q=1$ and $G(N) = I + \sum_{\mu=0}^{\lambda-1} \alpha_{\mu} P_{\mu}$, where $P_{\mu}$ is given in Eq.~(\ref{eq:N-realize}).\par {}For any GDOA ${\cal A}_q(G(N))$, one may define a so-called structure function~$F(N)$, which is the solution of the difference equation $F(N+1) - q F(N) = G(N)$, such that $F(0) = 0$~\cite{cq95}. For ${\cal A}^{(\lambda)}(G(N))$, we find \begin{equation} F(N) = N + \sum_{\mu=0}^{\lambda-1} \beta_{\mu} P_{\mu}, \qquad \beta_0 \equiv 0, \qquad \beta_{\mu} \equiv \sum_{\nu=0}^{\mu-1} \alpha_{\nu} \quad (\mu =1, 2, \ldots, \lambda-1). \end{equation} \par At this point, it is worth noting that for $\lambda=2$, we obtain $T=K$, $P_0 = (I + K)/2$, $P_1 = (I - K)/2$, and $\kappa_1 = \kappa_1^* = \alpha_0 = - \alpha_1 = \kappa$, so that ${\cal A}^{(2)}_{\alpha_0}$\ coincides with the $S_2$-extended oscillator algebra ${\cal A}^{(2)}_{\kappa}$ and ${\cal A}^{(2)}(G(N))$\ with the Calogero-Vasiliev algebra.\par In Ref.~\cite{cq99b}, we showed that ${\cal A}^{(\lambda)}(G(N))$\ (and more generally ${\cal A}^{(\lambda)}_{\alpha_0 \alpha_1 \ldots \alpha_{\lambda-2}}$) has only two different types of unirreps: infinite-dimensional bounded from below unirreps and finite-dimensional ones. Among the former, there is the so-called bosonic Fock space representation, wherein $a^{\dagger} a = F(N)$ and $a a^{\dagger} = F(N+1)$. Its carrier space $\cal F$ is spanned by the eigenvectors~$|n\rangle$ of the number operator~$N$, corresponding to the eigenvalues $n=0$, 1, 2,~$\ldots$, where $|0\rangle$ is a vacuum state, i.e., $a |0\rangle = N|0\rangle = 0$ and $P_{\mu} |0\rangle = \delta_{\mu,0} |0\rangle$. The eigenvectors can be written as \begin{equation} |n\rangle = {\cal N}_n^{-1/2} \left(a^{\dagger}\right)^n |0\rangle, \qquad n = 0, 1, 2, \ldots, \label{eq:vectors} \end{equation} where ${\cal N}_n = \prod_{i=1}^n F(i)$. The creation and annihilation operators act upon~$|n\rangle$ in the usual way, i.e., \begin{equation} a^{\dagger} |n\rangle = \sqrt{F(n+1)}\, |n+1\rangle, \qquad a |n\rangle = \sqrt{F(n)}\, |n-1\rangle, \end{equation} while $P_{\mu}$ projects on the $\mu$th component ${\cal F}_{\mu} \equiv \{\, |k\lambda + \mu\rangle \mid k = 0, 1, 2, \ldots\,\}$ of the ${\rm Z}_{\lambda}$-graded Fock space ${\cal F} = \sum_{\mu=0}^{\lambda-1} \oplus {\cal F}_{\mu}$. It is obvious that such a bosonic Fock space representation exists if and only if $F(\mu) > 0$ for $\mu=1$, 2, $\ldots$,~$\lambda-1$. This gives the following restrictions on the algebra parameters~$\alpha_{\mu}$, \begin{equation} \sum_{\nu=0}^{\mu-1} \alpha_{\nu} > - \mu, \qquad \mu = 1, 2, \ldots, \lambda-1. \label{eq:cond-Fock} \end{equation} \par In the bosonic Fock space representation, we may consider the bosonic oscillator Hamiltonian, defined as usual by \begin{equation} H_0 \equiv \case{1}{2} \left\{a, a^{\dagger}\right\}. 
\label{eq:H_0} \end{equation} It can be rewritten as \begin{equation} H_0 = a^{\dagger} a + \frac{1}{2} \left(I + \sum_{\mu=0}^{\lambda-1} \alpha_{\mu} P_{\mu}\right) = N + \frac{1}{2} I + \sum_{\mu=0}^{\lambda-1} \gamma_{\mu} P_{\mu}, \end{equation} where $\gamma_0 \equiv \frac{1}{2} \alpha_0$ and $\gamma_{\mu} \equiv \sum_{\nu=0}^{\mu-1} \alpha_{\nu} + \frac{1}{2} \alpha_{\mu}$ for $\mu = 1$, 2, \ldots,~$\lambda-1$.\par The eigenvectors of $H_0$ are the states~$|n\rangle = |k \lambda + \mu\rangle$, defined in Eq.~(\ref{eq:vectors}), and their eigenvalues are given by \begin{equation} E_{k\lambda+\mu} = k\lambda + \mu + \gamma_{\mu} + \case{1}{2}, \qquad k = 0, 1, 2, \ldots, \qquad \mu = 0, 1, \ldots, \lambda-1. \end{equation} In each ${\cal F}_{\mu}$ subspace of the ${\rm Z}_{\lambda}$-graded Fock space~$\cal F$, the spectrum of~$H_0$ is therefore harmonic, but the $\lambda$ infinite sets of equally spaced energy levels, corresponding to $\mu=0$, 1, $\ldots$,~$\lambda-1$, may be shifted with respect to each other by some amounts depending upon the algebra parameters $\alpha_0$, $\alpha_1$, $\ldots$,~$\alpha_{\lambda-2}$, through their linear combinations $\gamma_{\mu}$, $\mu=0$, 1, $\ldots$,~$\lambda-1$.\par {}For the Calogero-Vasiliev oscillator, i.e., for $\lambda=2$, the relation $\gamma_0 = \gamma_1 = \kappa/2$ implies that the spectrum is very simple and coincides with that of a shifted harmonic oscillator. For $\lambda\ge 3$, however, it has a much richer structure. According to the parameter values, it may be nondegenerate, or may exhibit some ($\nu+1$)-fold degeneracies above some energy eigenvalue, where $\nu$ may take any value in the set $\{1, 2, \ldots, \lambda-1\}$. In Ref.~\cite{cq99a}, we obtained for $\lambda=3$ the complete classification of nondegenerate, twofold and threefold degenerate spectra in terms of $\alpha_0$ and $\alpha_1$.\par In the remaining part of this communication, we will show that the bosonic Fock space representation of ${\cal A}^{(\lambda)}(G(N))$\ and the corresponding bosonic oscillator Hamiltonian $H_0$ have some useful applications to SSQM and some of its variants.\par \section{Application to supersymmetric quantum mechanics with cyclic shape invariant potentials} \setcounter{equation}{0} In SSQM with two supercharges, the supersymmetric Hamiltonian $\cal H$ and the supercharges $Q^{\dagger}$, $Q = \left(Q^{\dagger}\right)^{\dagger}$, satisfy the sqm(2) superalgebra, defined by the relations \begin{equation} Q^2 = 0, \qquad [{\cal H}, Q] = 0, \qquad \left\{Q, Q^{\dagger}\right\} = {\cal H}, \label{eq:SSQM} \end{equation} together with their Hermitian conjugates~\cite{witten}. In such a context, shape invariance~\cite{genden} provides an integrability condition, yielding all the bound state energy eigenvalues and eigenfunctions, as well as the scattering matrix.\par Recently, Sukhatme, Rasinariu, and Khare~\cite{sukhatme} introduced cyclic shape invariant potentials of period $p$ in SSQM. They are characterized by the fact that the supersymmetric partner Hamiltonians correspond to a series of shape invariant potentials, which repeats after a cycle of $p$ iterations. In other words, one may define $p$ sets of operators $\left\{{\cal H}_{\mu}, Q^{\dagger}_{\mu}, Q_{\mu}\right\}$, $\mu=0$, 1, \ldots,~$p-1$, each satisfying the sqm(2) defining relations~(\ref{eq:SSQM}). 
The operators may be written as \begin{equation} {\cal H}_{\mu} = \left(\begin{array}{cc} {\cal H}^{(\mu)} - {\cal E}^{(\mu)}_0 I & 0 \\ 0 & {\cal H}^{(\mu+1)} - {\cal E}^{(\mu)}_0 I \end{array}\right), \quad Q^{\dagger}_{\mu} = \left(\begin{array}{cc} 0 & A^{\dagger}_{\mu} \\ 0 & 0 \end{array}\right), \quad Q_{\mu} = \left(\begin{array}{cc} 0 & 0 \\ A_{\mu} & 0 \end{array}\right), \label{eq:super-op} \end{equation} where \begin{eqnarray} {\cal H}^{(0)} & = & A^{\dagger}_0 A_0, \nonumber \\ {\cal H}^{(\mu)} & = & A_{\mu-1} A^{\dagger}_{\mu-1} + {\cal E}^{(\mu-1)}_0 I = A^{\dagger}_{\mu} A_{\mu} + {\cal E}^{(\mu)}_0 I, \qquad \mu = 1, 2, \ldots, p, \nonumber \\ A_{\mu} & = & \frac{d}{dx} + W(x,b_{\mu}), \qquad A^{\dagger}_{\mu} = - \frac{d}{dx} + W(x,b_{\mu}), \qquad \mu = 0, 1, \ldots, p, \label{eq:hierarchy} \end{eqnarray} and ${\cal E}^{(\mu)}_0$ denotes the ground state energy of~${\cal H}^{(\mu)}$ (with ${\cal E}^{(0)}_0 = 0$). Here the superpotentials $W(x,b_{\mu})$ depend upon some parameters $b_{\mu}$, such that $b_{\mu+p} = b_{\mu}$, and they satisfy $p$ shape invariance conditions \begin{equation} W^2(x,b_{\mu}) + W'(x,b_{\mu}) = W^2(x,b_{\mu+1}) - W'(x,b_{\mu+1}) + \omega_{\mu}, \qquad \mu = 0, 1, \ldots, p-1, \label{eq:shape} \end{equation} where $\omega_{\mu}$, $\mu=0$, 1, \ldots,~$p-1$, are some real constants.\par {}From the solution of Eq.~(\ref{eq:shape}), one may then construct the potentials corresponding to the supersymmetric partners ${\cal H}^{(\mu)}$, ${\cal H}^{(\mu+1)}$ in the usual way, i.e., $V^{(\mu)} = W^2(x, b_{\mu}) - W'(x, b_{\mu}) + {\cal E}^{(\mu)}_0$, $V^{(\mu+1)} = W^2(x, b_{\mu}) + W'(x, b_{\mu}) + {\cal E}^{(\mu)}_0$. For $p=2$, Gangopadhyaya and Sukhatme~\cite{gango} obtained such potentials as superpositions of a Calogero potential and a $\delta$-function singularity. For $p\ge3$, however, only numerical solutions of the shape invariance conditions~(\ref{eq:shape}) have been obtained~\cite{sukhatme}, so that no analytical form of $V^{(\mu)}$ is known. In spite of this, the spectrum is easily derived and consists of $p$ infinite sets of equally spaced energy levels, shifted with respect to each other by the energies $\omega_0$, $\omega_1$, \ldots,~$\omega_{p-1}$.\par Since for some special choices of parameters, spectra of a similar type may be obtained with the bosonic oscillator Hamiltonian~(\ref{eq:H_0}) acting in the bosonic Fock space representation of ${\cal A}^{(p)}(G(N))$, one may try to establish a relation between the class of algebras ${\cal A}^{(p)}(G(N))$ and SSQM with cyclic shape invariant potentials of period~$p$.\par In Ref.~\cite{cq99a}, we proved that the operators ${\cal H}^{(\mu)}$, $A^{\dagger}_{\mu}$, and $A_{\mu}$ of Eqs.~(\ref{eq:super-op}) and~(\ref{eq:hierarchy}) can be realized in terms of the generators of $p$ algebras ${\cal A}^{(p)}(G^{(\mu)}(N))$, $\mu=0$, 1, \ldots,~$p-1$, belonging to the class $\left\{{\cal A}^{(p)}(G(N))\right\}$. The parameters of such algebras are obtained by cyclic permutations from a starting set $\{\alpha_0, \alpha_1, \ldots, \alpha_{p-1}\}$ corresponding to ${\cal A}^{(p)}(G^{(0)}(N)) = {\cal A}^{(p)}(G(N))$. 
Denoting by $N$, $a^{\dagger}_{\mu}$, $a_{\mu}$ the number, creation, and annihilation operators corresponding to the $\mu$th algebra ${\cal A}^{(p)}(G^{(\mu)}(N))$, where $a^{\dagger}_0 = a^{\dagger}$, and $a_0 = a$, we may write the fourth relation in the algebra defining relations~(\ref{eq:alg-def2}) as \begin{equation} \left[a_{\mu}, a^{\dagger}_{\mu}\right] = I + \sum_{\nu=0}^{p-1} \alpha^{(\mu)}_{\nu} P_{\nu}, \qquad \alpha^{(\mu)}_{\nu} \equiv \alpha_{\nu+\mu}, \qquad \mu=0, 1, \ldots, p-1, \end{equation} while the remaining relations keep the same form.\par The realization of ${\cal H}^{(\mu)}$, $A^{\dagger}_{\mu}$, $A_{\mu}$, $\mu=0$, 1, \ldots,~$p-1$, is then given by \begin{eqnarray} {\cal H}^{(\mu)} & = & F(N+\mu) = N + \mu I + \sum_{\nu=0}^{p-1} \beta_{\nu+\mu} P_{\nu} = H^{(\mu)}_0 - \case{1}{2} \sum_{\nu=0}^{p-1} \left(1 + \alpha^{(\mu)}_{\nu}\right) P_{\nu} + {\cal E}^{(\mu)}_0 I, \nonumber\\ A^{\dagger}_{\mu} & = & a^{\dagger}_{\mu}, \qquad A_{\mu} = a_{\mu}, \label{eq:hierarchy-realiz} \end{eqnarray} where $H^{(\mu)}_0 \equiv \frac{1}{2} \left\{a^{\vphantom{\dagger}}_{\mu}, a^{\dagger}_{\mu}\right\}$ is the bosonic oscillator Hamiltonian associated with ${\cal A}^{(p)}(G^{(\mu)}(N))$, ${\cal E}^{(\mu)}_0 = \sum_{\nu=0}^{\mu-1} \omega_{\nu}$, and the level spacings are $\omega_{\mu} = 1 + \alpha_{\mu}$. For this result to be meaningful, the conditions $\omega_{\mu} > 0$, $\mu=0$, 1, \ldots,~$p-1$, have to be fulfilled. When combined with the restrictions~(\ref{eq:cond-Fock}), the latter imply that the parameters of the starting algebra ${\cal A}^{(p)}(G(N))$ must be such that $-1 < \alpha_0 < \lambda-1$, $-1 < \alpha_{\mu} < \lambda - \mu -1 - \sum_{\nu=0}^{\mu-1} \alpha_{\nu}$ if $\mu=1$, 2, $\ldots$,~$\lambda-2$, and $\alpha_{\lambda-1} = - \sum_{\nu=0}^{\lambda-2} \alpha_{\nu}$.\par \section{\boldmath Application to parasupersymmetric quantum mechanics of order $p$} \setcounter{equation}{0} The sqm(2) superalgebra~(\ref{eq:SSQM}) is most often realized in terms of mutually commuting boson and fermion operators. Plyushchay~\cite{plyu}, however, showed that it can alternatively be realized in terms of only boson-like operators, namely the generators of the Calogero-Vasiliev algebra ${\cal A}^{(2)}(G(N))$\ (see also Ref.~\cite{beckers97}). Such an SSQM bosonization can be performed in two different ways, by choosing either $Q = a^{\dagger} P_1$ (so that ${\cal H} = H_0 - \frac{1}{2}(K + \kappa)$) or $Q = a^{\dagger} P_0$ (so that ${\cal H} = H_0 + \frac{1}{2}(K + \kappa)$). The first choice corresponds to unbroken SSQM (all the excited states are twofold degenerate while the ground state is nondegenerate and at vanishing energy), and the second choice describes broken SSQM (all the states are twofold degenerate and at positive energy).\par SSQM was generalized to parasupersymmetric quantum mechanics (PSSQM) of order two by Rubakov and Spiridonov~\cite{rubakov}, and later on to PSSQM of arbitrary order $p$ by Khare~\cite{khare93a}. In the latter case, Eq.~(\ref{eq:SSQM}) is replaced by \[ Q^{p+1} = 0 \qquad ({\rm with\ } Q^p \ne 0), \] \[ [{\cal H}, Q] = 0, \] \begin{equation} Q^p Q^{\dagger} + Q^{p-1} Q^{\dagger} Q + \cdots + Q Q^{\dagger} Q^{p-1} + Q^{\dagger} Q^p = 2p Q^{p-1} {\cal H}, \label{eq:PSSQM} \end{equation} and is retrieved in the case where $p=1$. 
The parasupercharges $Q$, $Q^{\dagger}$, and the parasupersymmetric Hamiltonian $\cal H$ are usually realized in terms of mutually commuting boson and parafermion operators.\par A property of PSSQM of order $p$ is that the spectrum of $\cal H$ is ($p+1$)-fold degenerate above the ($p-1$)th energy level. This fact and Plyushchay's results for $p=1$ hint at the possibility of representing $\cal H$ as a linear combination of the bosonic oscillator Hamiltonian $H_0$ associated with ${\cal A}^{(p+1)}(G(N))$ and some projection operators, as in Eq.~(\ref{eq:hierarchy-realiz}).\par In Ref.~\cite{cq99b} (see also Refs.~\cite{cq98a,cq98b}), we proved that PSSQM of order $p$ can indeed be bosonized in terms of the generators of ${\cal A}^{(p+1)}(G(N))$ for any allowed (i.e., satisfying Eq.~(\ref{eq:cond-Fock})) values of the algebra parameters $\alpha_0$, $\alpha_1$, \ldots,~$\alpha_{p-1}$. For such a purpose, we started from ans\"atze of the type \begin{equation} Q = \sum_{\nu=0}^p \sigma_{\nu} a^{\dagger} P_{\nu}, \qquad {\cal H} = H_0 + \case{1}{2} \sum_{\nu=0}^p r_{\nu} P_{\nu}, \end{equation} where $\sigma_{\nu}$ and $r_{\nu}$ are some complex and real constants, respectively, to be determined in such a way that Eq.~(\ref{eq:PSSQM}) is fulfilled. We found that there are $p+1$ families of solutions, which may be distinguished by an index $\mu \in \{0, 1, \ldots, p\}$ and from which we may choose the following representative solutions \begin{eqnarray} Q_{\mu} & = & \sqrt{2} \sum_{\nu=1}^p a^{\dagger} P_{\mu+\nu}, \nonumber\\ {\cal H}_{\mu} & = & N + \case{1}{2} (2\gamma_{\mu+2} + r_{\mu+2} - 2p + 3) I + \sum_{\nu=1}^p (p + 1 - \nu) P_{\mu+\nu}, \label{eq:PSSQM-sol} \end{eqnarray} where \begin{equation} r_{\mu+2} = \frac{1}{p} \left[(p-2) \alpha_{\mu+2} + 2 \sum_{\nu=3}^p (p-\nu+1) \alpha_{\mu+\nu} + p (p-2)\right]. \end{equation} \par The eigenvectors of ${\cal H}_{\mu}$ are the states~(\ref{eq:vectors}) and the corresponding eigenvalues are easily found. All the energy levels are equally spaced. For $\mu=0$, PSSQM is unbroken, otherwise it is broken with a ($\mu+1$)-fold degenerate ground state. All the excited states are ($p+1$)-fold degenerate. For $\mu=0$, 1, \ldots,~$p-2$, the ground state energy may be positive, null, or negative depending on the parameters, whereas for $\mu = p-1$ or $p$, it is always positive.\par Khare~\cite{khare93a} showed that in PSSQM of order $p$, $\cal H$ has in fact $2p$ (and not only two) conserved parasupercharges, as well as $p$ bosonic constants. In other words, there exist $p$ independent operators $Q_r$, $r=1$, 2, \ldots,~$p$, satisfying with $\cal H$ the set of equations~(\ref{eq:PSSQM}), and $p$ other independent operators $I_t$, $t=2$, 3, \ldots,~$p+1$, commuting with $\cal H$, as well as among themselves. In Ref.~\cite{cq99b}, we obtained a realization of all such operators in terms of the ${\cal A}^{(p+1)}(G(N))$ generators.\par As a final point, let us note that there exists an alternative approach to PSSQM of order $p$, which was proposed by Beckers and Debergh~\cite{beckers90}, and wherein the multilinear relation in Eq.~(\ref{eq:PSSQM}) is replaced by the cubic equation \begin{equation} \left[Q, \left[Q^{\dagger}, Q\right] \right] = 2Q {\cal H}.
\label{eq:cubic} \end{equation} In Ref.~\cite{cq98a}, we proved that for $p=2$, this PSSQM algebra can only be realized by those ${\cal A}^{(3)}(G(N))$ algebras that simultaneously bosonize the Rubakov-Spiridonov-Khare PSSQM algebra.\par \section{Application to pseudosupersymmetric quantum mechanics} \setcounter{equation}{0} Pseudosupersymmetric quantum mechanics (pseudoSSQM) was introduced by Beckers, Debergh, and Nikitin~\cite{beckers95} in a study of relativistic vector mesons interacting with an external constant magnetic field. In the nonrelativistic limit, their theory leads to a pseudosupersymmetric oscillator Hamiltonian, which can be realized in terms of mutually commuting boson and pseudofermion operators, where the latter are intermediate between standard fermion and $p=2$ parafermion operators.\par It is then possible to formulate a pseudoSSQM~\cite{beckers95}, characterized by a pseudosupersymmetric Hamiltonian $\cal H$ and pseudosupercharge operators $Q$, $Q^{\dagger}$, satisfying the relations \begin{equation} Q^2 = 0, \qquad [{\cal H}, Q] = 0, \qquad Q Q^{\dagger} Q = 4 c^2 Q {\cal H}, \label{eq:pseudoSSQM} \end{equation} and their Hermitian conjugates, where $c$ is some real constant. The first two relations in Eq.~(\ref{eq:pseudoSSQM}) are the same as those occurring in SSQM, whereas the third one is similar to the multilinear relation valid in PSSQM of order two. Actually, for $c=1$ or 1/2, it is compatible with Eq.~(\ref{eq:PSSQM}) or (\ref{eq:cubic}), respectively.\par In Ref.~\cite{cq99b}, we proved that pseudoSSQM can be bosonized in two different ways in terms of the generators of ${\cal A}^{(3)}(G(N))$ for any allowed values of the parameters $\alpha_0$, $\alpha_1$. This time, we started from the ans\"atze \begin{equation} Q = \sum_{\nu=0}^2 \left(\xi_{\nu} a + \eta_{\nu} a^{\dagger}\right) P_{\nu}, \qquad {\cal H} = H_0 + \case{1}{2} \sum_{\nu=0}^2 r_{\nu} P_{\nu}, \end{equation} and determined the complex constants $\xi_{\nu}$, $\eta_{\nu}$, and the real ones $r_{\nu}$ in such a way that Eq.~(\ref{eq:pseudoSSQM}) is fulfilled.\par The first type of bosonization corresponds to three families of two-parameter solutions, labelled by an index $\mu \in \{0, 1, 2\}$, \begin{eqnarray} Q_{\mu}(\eta_{\mu+2}, \varphi) & = & \left(\eta_{\mu+2} a^{\dagger} + e^{{\rm i} \varphi}\sqrt{4 c^2 - \eta_{\mu+2}^2}\, a\right) P_{\mu+2}, \nonumber \\ {\cal H}_{\mu}(\eta_{\mu+2}) & = & N + \case{1}{2} (2 \gamma_{\mu+2} + r_{\mu+2} - 1) I + 2 P_{\mu+1} + P_{\mu+2}, \label{eq:pseudoSSQM-sol} \end{eqnarray} where $0 < \eta_{\mu+2} < 2 |c|$, $0 \le \varphi < 2\pi$, and \begin{equation} r_{\mu+2} = \frac{1}{2c^2} (1 + \alpha_{\mu+2}) \left(|\eta_{\mu+2}|^2 - 2 c^2\right). \end{equation} Choosing for instance $\eta_{\mu+2} = \sqrt{2} |c|$ and $\varphi = 0$, hence $r_{\mu+2} = 0$ ($r_{\mu+2}$ merely produces an overall shift of the spectrum), we obtain \begin{eqnarray} Q_{\mu} & = & c \sqrt{2} \left(a^{\dagger} + a\right) P_{\mu+2}, \nonumber \\ {\cal H}_{\mu} & = & N + \case{1}{2} (2 \gamma_{\mu+2} - 1) I + 2 P_{\mu+1} + P_{\mu+2}. \label{eq:pseudoSSQM-solbis} \end{eqnarray} A comparison between Eq.~(\ref{eq:pseudoSSQM-sol}) or (\ref{eq:pseudoSSQM-solbis}) and Eq.~(\ref{eq:PSSQM-sol}) shows that the pseudosupersymmetric and $p=2$ parasupersymmetric Hamiltonians coincide, but that the corresponding charges are of course different. The conclusions relative to the spectrum and the ground state energy are therefore the same as in Sec.~4.
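These relations can be checked numerically in a truncated Fock space. The minimal sketch below (an illustration with arbitrary but allowed parameter values, not taken from the original papers; rows and columns near the truncation edge are excluded from the comparison) builds $F(N)$, $a^{\dagger}$, $a$, and $P_{\mu}$ for $\lambda=3$ and verifies the pseudoSSQM algebra~(\ref{eq:pseudoSSQM}) for the $\mu=0$ representative solution~(\ref{eq:pseudoSSQM-solbis}):
\begin{verbatim}
import numpy as np

lam, mu, c = 3, 0, 0.7               # lambda, solution family, real constant c
alpha = np.array([0.3, -0.1, -0.2])  # sums to zero, satisfies Eq. (cond-Fock)
dim = 3 * 12                         # truncated Fock-space dimension

n = np.arange(dim)
beta = np.concatenate(([0.0], np.cumsum(alpha)[:-1]))  # beta_0 = 0
F = n + beta[n % lam]                                  # structure function F(N)
ad = np.diag(np.sqrt(F[1:]), -1)     # a^dag |n> = sqrt(F(n+1)) |n+1>
a = ad.T
P = [np.diag((n % lam == m).astype(float)) for m in range(lam)]
gamma = beta + 0.5 * alpha

Q = c * np.sqrt(2.0) * (ad + a) @ P[(mu + 2) % lam]
H = (np.diag(n.astype(float))
     + 0.5 * (2 * gamma[(mu + 2) % lam] - 1) * np.eye(dim)
     + 2 * P[(mu + 1) % lam] + P[(mu + 2) % lam])

k = dim - lam                        # stay away from the truncation edge
assert np.allclose((Q @ Q)[:k, :k], 0)                  # Q^2 = 0
assert np.allclose((H @ Q - Q @ H)[:k, :k], 0)          # [H, Q] = 0
assert np.allclose((Q @ Q.T @ Q)[:k, :k],               # Q Q^dag Q = 4 c^2 Q H
                   4 * c**2 * (Q @ H)[:k, :k])
\end{verbatim}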
\par The second type of bosonization corresponds to three families of one-parameter solutions, again labelled by an index $\mu \in \{0, 1, 2\}$, \begin{eqnarray} Q_{\mu} & = & 2 |c| a P_{\mu+2}, \nonumber \\ {\cal H}_{\mu}(r_{\mu}) & = & N + \case{1}{2} (2 \gamma_{\mu+2} - \alpha_{\mu+2}) I + \case{1}{2} (1 - \alpha_{\mu+1} + \alpha_{\mu+2} + r_{\mu}) P_{\mu} + P_{\mu+1}, \end{eqnarray} where $r_{\mu} \in {\rm R}$ changes the Hamiltonian spectrum in a significant way. We indeed find that the levels are equally spaced if and only if $r_{\mu} = (\alpha_{\mu+1} - \alpha_{\mu+2} + 3)\, {\rm mod}\, 6$. If $r_{\mu}$ is small enough, the ground state is nondegenerate, and its energy is negative for $\mu=1$, or may have any sign for $\mu = 0$ or~2. By contrast, if $r_{\mu}$ is large enough, the ground state remains nondegenerate with a vanishing energy in the former case, while it becomes twofold degenerate with a positive energy in the latter. For some intermediate $r_{\mu}$ value, one gets a twofold or threefold degenerate ground state with a vanishing or positive energy, respectively.\par \section{Application to orthosupersymmetric quantum mechanics of order two} \setcounter{equation}{0} Mishra and Rajasekaran~\cite{mishra} introduced order-$p$ orthofermion operators by replacing the Pauli exclusion principle by a more stringent one: an orbital state shall not contain more than one particle, whatever the spin direction. The wave function is thus antisymmetric in the spatial indices alone, with the order of the spin indices frozen.\par Khare, Mishra, and Rajasekaran~\cite{khare93b} then developed orthosupersymmetric quantum mechanics (OSSQM) of arbitrary order $p$ by combining boson operators with orthofermion ones, for which the spatial indices are ignored. OSSQM is formulated in terms of an orthosupersymmetric Hamiltonian $\cal H$ and $2p$ orthosupercharge operators $Q_r$, $Q_r^{\dagger}$, $r = 1$, 2, \ldots,~$p$, satisfying the relations \begin{equation} Q_r Q_s = 0, \qquad [{\cal H}, Q_r] = 0, \qquad Q_r Q_s^{\dagger} + \delta_{r,s} \sum_{t=1}^p Q_t^{\dagger} Q_t = 2 \delta_{r,s} {\cal H}, \label{eq:OSSQM} \end{equation} and their Hermitian conjugates, where $r$ and $s$ run over 1, 2, \ldots,~$p$.\par In Ref.~\cite{cq99b}, we proved that OSSQM of order two can be bosonized in terms of the generators of some well-chosen ${\cal A}^{(3)}(G(N))$ algebras. As ans\"atze, we used the expressions \begin{equation} Q_1 = \sum_{\nu=0}^2 \left(\xi_{\nu} a + \eta_{\nu} a^{\dagger}\right) P_{\nu}, \qquad Q_2 = \sum_{\nu=0}^2 \left(\zeta_{\nu} a + \rho_{\nu} a^{\dagger}\right) P_{\nu}, \qquad {\cal H} = H_0 + \case{1}{2} \sum_{\nu=0}^2 r_{\nu} P_{\nu}, \end{equation} and determined the complex constants $\xi_{\nu}$, $\eta_{\nu}$, $\zeta_{\nu}$, $\rho_{\nu}$, and the real ones $r_{\nu}$ in such a way that Eq.~(\ref{eq:OSSQM}) is fulfilled. We found two families of two-parameter solutions, labelled by $\mu \in \{0, 1\}$, \begin{eqnarray} Q_{1,\mu}(\xi_{\mu+2}, \varphi) & = & \xi_{\mu+2} a P_{\mu+2} + e^{{\rm i} \varphi} \sqrt{2 - \xi_{\mu+2}^2}\, a^{\dagger} P_{\mu}, \nonumber \\ Q_{2,\mu}(\xi_{\mu+2}, \varphi) & = & - e^{-{\rm i} \varphi} \sqrt{2 - \xi_{\mu+2}^2}\, a P_{\mu+2} + \xi_{\mu+2} a^{\dagger} P_{\mu}, \nonumber\\ {\cal H}_{\mu} & = & N + \case{1}{2} (2 \gamma_{\mu+1} - 1) I + 2 P_{\mu} + P_{\mu+1}, \label{eq:OSSQM-sol} \end{eqnarray} where $0 < \xi_{\mu+2} \le \sqrt{2}$ and $0 \le \varphi <2\pi$, provided the algebra parameter $\alpha_{\mu+1}$ is taken as $\alpha_{\mu+1} = -1$.
As a matter of fact, the absence of a third family of solutions corresponding to $\mu=2$ comes from the incompatibility of this condition (i.e., $\alpha_0 = -1$) with conditions~(\ref{eq:cond-Fock}).\par The orthosupersymmetric Hamiltonian $\cal H$ in Eq.~(\ref{eq:OSSQM-sol}) is independent of the parameters $\xi_{\mu+2}$, $\varphi$. All the levels of its spectrum are equally spaced. For $\mu=0$, OSSQM is broken: the levels are threefold degenerate, and the ground state energy is positive. On the contrary, for $\mu=1$, OSSQM is unbroken: only the excited states are threefold degenerate, while the nondegenerate ground state has a vanishing energy. Such results agree with the general conclusions of Ref.\ \cite{khare93b}.\par {}For $p$ values greater than two, the OSSQM algebra~(\ref{eq:OSSQM}) becomes rather complicated because the number of equations to be fulfilled increases considerably. A glance at the 18 independent conditions for $p=3$ led us to the conclusion that the ${\cal A}^{(4)}(G(N))$ algebra is not rich enough to contain operators satisfying Eq.~(\ref{eq:OSSQM}). Contrary to what happens for PSSQM, for OSSQM the $p=2$ case is therefore not representative of the general one.\par \section{Conclusion} In this communication, we showed that the $S_2$-extended oscillator algebra, which was introduced in connection with the two-particle Calogero model, can be extended to the whole class of $C_{\lambda}$-extended oscillator algebras ${\cal A}^{(\lambda)}_{\alpha_0 \alpha_1 \ldots \alpha_{\lambda-2}}$, where $\lambda \in \{2,3, \ldots\}$, and $\alpha_0$, $\alpha_1$, \ldots,~$\alpha_{\lambda-2}$ are some real parameters. In the same way, the GDOA realization of the former, known as the Calogero-Vasiliev algebra, is generalized to a class of GDOAs ${\cal A}^{(\lambda)}(G(N))$, where $\lambda \in \{2,3, \ldots\}$, for which one can define a bosonic oscillator Hamiltonian $H_0$, acting in the bosonic Fock space representation.\par {}For $\lambda \ge 3$, the spectrum of $H_0$ has a very rich structure in terms of the algebra parameters $\alpha_0$, $\alpha_1$, \ldots,~$\alpha_{\lambda-2}$. This can be exploited to provide an algebraic realization of SSQM with cyclic shape invariant potentials of period~$\lambda$, a bosonization of PSSQM of order $p = \lambda-1$, and, for $\lambda=3$, a bosonization of pseudoSSQM and OSSQM of order two.\par
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Thanks to advances in computer-aided image analysis, radiological image data are now increasingly considered a valuable source of quantitative biomarkers. Body tissue composition is a long-known biomarker with high diagnostic and prognostic value in cardiovascular, oncological and orthopaedic diseases, but also in rehabilitation medicine or drug dosage. As obvious and simple as a quantitative determination of tissue composition based on modern radiological sectional imaging may seem, the actual extraction of this information in clinical routine is not feasible, since a manual assessment requires an extraordinary amount of human labour. A recent study has shown that some anthropometric measures can be estimated from simple and reproducible 2D measurements in CT using linear regression models \cite{Zopfs2019}. Another study showed that a fully automated 2D segmentation of CT sectional images at the level of the L3 vertebra into subcutaneous adipose tissue, muscle, viscera, and bone was possible using a 2D U-Net architecture \cite{Weston2019}. The determination of the tissue composition at the level of L3 is often used as a reference in clinical routine to limit the amount of work required for the assessment. However, even here this is only a rough approximation, since the inter-individual variability between patients is large and the section at the level of L3 is not necessarily representative of the entire human anatomy. Other dedicated techniques for analyzing body composition using dual-energy X-ray absorptiometry or magnetic resonance imaging exist \cite{Seabolt2015}, but they require additional, potentially time-consuming or expensive procedures to be performed. The aim of our study was therefore to develop a fully automated, reproducible and quantitative 3D volumetry of body tissue composition from standard CT examinations of the abdomen in order to be able to offer such valuable biomarkers as part of routine clinical imaging. \section{Materials and Methods} \subsection{Dataset} A retrospective dataset was collected, consisting of 40 abdominal CTs for training and 10 abdominal CTs for testing. The included scans were randomly selected from abdominal CT studies performed between 2015 and 2019 at the University Hospital Essen, Germany. Each CT volume has a slice thickness of 5mm and was reconstructed using a soft tissue convolutional reconstruction kernel. The data was annotated with six different labels: background (= outside the human body), muscle, bones, subcutaneous tissue, abdominal cavity, and thoracic cavity. For annotation, the ITK-SNAP \cite{Yushkevich2006} software (version 3.8.0) was used. Region segmentation was performed manually with a polygon tool. In order to reduce the annotation effort, every fifth slice was fully annotated. The remaining slices were marked with an ignore label, as visualized in Figure \ref{fig:annotation}. The final dataset contains 751 fully annotated slices for training and 186 for testing.
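Because only every fifth slice carries full annotations, voxels marked with the ignore label have to be excluded when computing a voxel-wise loss. A minimal TensorFlow sketch of such masking (the ignore-label encoding and the function name are hypothetical, chosen here for illustration):
\begin{verbatim}
import tensorflow as tf

IGNORE_LABEL = 255  # hypothetical encoding of the ignore regions

def masked_cross_entropy(y_true, logits):
    """Voxel-wise cross entropy that skips ignore-labelled voxels."""
    valid = tf.not_equal(y_true, IGNORE_LABEL)
    # Replace ignored labels by a valid dummy class before the loss call.
    safe = tf.where(valid, y_true, tf.zeros_like(y_true))
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=safe, logits=logits)
    mask = tf.cast(valid, loss.dtype)
    return tf.reduce_sum(loss * mask) / tf.maximum(tf.reduce_sum(mask), 1.0)
\end{verbatim}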
\begin{figure}[htbp] \includegraphics[width=\linewidth]{annotation-overview.png} \caption{Exemplary annotation of an abdominal CT, with subcutaneous tissue (red), muscle (yellow), bones (blue), abdominal cavity (green), thoracic cavity (purple), and ignore regions (white).} \label{fig:annotation} \end{figure} \subsection{Network Architectures} \label{sec:architecture} For this study, two different network architectures were chosen for training, namely the U-Net 3D \cite{Cicek2016} and a more recent variant, the multi-resolution U-Net 3D \cite{Ibtehaz2020}. The latter is shown in Figure \ref{fig:network}; the U-Net 3D is very similar, with residual path blocks replaced by identity operations and multi-resolution blocks replaced by two successive convolutions. The large memory footprint of volumetric data limits the batch size to a single example per batch. Therefore, instance normalization layers \cite{Ulyanov2017} were utilized in favor of batch normalization layers \cite{Ioffe2015}. In the original architectures, transposed convolutions were employed to upsample feature maps back to the original image size. However, transposed convolutions tend to generate checkerboard artifacts \cite{odena2016}. This is why trilinear upsampling followed by a convolution was used instead, which is computationally more expensive, but more stable during optimization. Additionally, different choices for the initial number of feature maps are evaluated: 16, 32, and 64. After each pooling step the number is doubled, resulting in 256, 512, and 1024 feature maps in the lowest resolution, respectively. \begin{figure}[htbp] \includegraphics[width=\linewidth]{body-composition-network.pdf} \caption{Schematic overview of the multi-resolution U-Net 3D architecture.} \label{fig:network} \end{figure} \subsection{Training Details} \label{sec:training} The implementation of network architectures and training was done using Tensorflow 2.0 \cite{Abadi2016} and the Keras API. NVIDIA Titan RTX GPUs with 24GB VRAM were used, which enable training of more complex network architectures when using large volumetric data. Adam \cite{Kingma2015} with decoupled weight decay regularization \cite{Loshchilov2019} was utilized, configured with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-7}$, and a weight decay of $10^{-4}$. An exponentially decaying learning rate with an initial value of $10^{-4}$, multiplied by 0.95 every 50 epochs, helped to stabilize the optimization process at the end of the training. For selecting the best model weights during training, 5-fold cross-validation was used on the training set and the average dice score was monitored on the respective validation splits. Since the training dataset consists of 40 abdominal CTs, each training run was performed using 32 CTs for training and 8 CTs for validation. During training, several data augmentations were applied in order to virtually increase the number of unique samples and train a generalizable network. First, random scale augmentation was applied with a scaling factor sampled uniformly between 0.8 and 1.2. Since this factor was sampled independently for the x- and y-axes, it also acts as an aspect-ratio augmentation. Second, random flipping was utilized to mirror volumes on the x-axis. Third, subvolumes of size $32\times256\times256$ were randomly cropped from the full volume with size $n\times512\times512$.
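A compact NumPy/SciPy sketch of these three augmentations (an illustration of the described pipeline rather than the original training code; the interpolation orders are assumptions):
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

def augment(volume, labels, rng):
    """Random scale, x-flip, and 32x256x256 crop for (z, y, x) arrays."""
    # 1) Independent x/y scale factors also act as aspect-ratio augmentation.
    fy, fx = rng.uniform(0.8, 1.2, size=2)
    volume = zoom(volume, (1.0, fy, fx), order=1)  # linear for the image
    labels = zoom(labels, (1.0, fy, fx), order=0)  # nearest for the labels
    # 2) Random mirroring on the x-axis.
    if rng.random() < 0.5:
        volume, labels = volume[..., ::-1], labels[..., ::-1]
    # 3) Random subvolume crop (the volume is assumed to be large enough).
    z0 = rng.integers(0, volume.shape[0] - 32 + 1)
    y0 = rng.integers(0, volume.shape[1] - 256 + 1)
    x0 = rng.integers(0, volume.shape[2] - 256 + 1)
    sl = (slice(z0, z0 + 32), slice(y0, y0 + 256), slice(x0, x0 + 256))
    return volume[sl], labels[sl]
\end{verbatim}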
During inference, the same number of slices was used, but with the x and y dimensions kept unchanged, and the whole volume was processed using a sliding window approach with 75\% overlap. To improve segmentation accuracy, predictions for overlapping subvolumes were aggregated in a weighted fashion, giving the central slices more weight than the outermost ones. Besides random data augmentations, additional pre-processing steps were performed before feeding the image data into the neural networks. Volumes were downscaled by a factor of 2 on the x/y axes, retaining a slice thickness of 5mm on the z-axis. CT images are captured as Hounsfield units (HU), which capture fine details and allow for different interpretations depending on which transfer function is used to map HUs to a color (e.g. black/white). Normally, when using floating point values, the typical scanner quantization of 12 bits can be stored losslessly and a network should be able to process all information without any problems. In this work, multiple HU windows [-1024, 4096], [-150, 250], and [-95, 155] were applied, with outliers clipped to the respective minimum and maximum values, and stacked as channels. Lastly, the network inputs were centered around zero with minimum value at -1 and maximum value at +1. For supervision, a combination of softmax cross entropy loss and generalized S{\o}rensen Dice loss \cite{Sudre2017} was chosen, similar to \cite{Isensee2019}. Both losses are defined as below: \begin{equation} \mathbb{L}_{XCE} = -\frac{1}{N} \cdot \sum_{n=1}^{N} \sum_{c=1}^{C} y_{c,n} \cdot \log\left(\hat{y}_{c,n}\right) \end{equation} \begin{equation} \mathbb{L}_{Dice} = 1.0 - \frac{1}{C-1}\cdot\sum_{c=2}^{C}\frac{\sum_{n=1}^{N}2\cdot\hat{y}_{c,n}\cdot y_{c,n} + \epsilon}{\sum_{n=1}^{N}\hat{y}_{c,n} + y_{c,n} + \epsilon} \end{equation} $C$ stands for the total number of classes, which equals six for the problem at hand. $\hat{y}_{c,n}$ and $y_{c,n}$ represent the prediction and the groundtruth label, respectively, for class $c$ at voxel location $n$. In this work, the background class is explicitly excluded from the dice loss in order to give the foreground classes more weight in the optimization process. This choice is well known for class-imbalanced problems, where the foreground classes cover only small areas compared to the background class. The final loss is an equally weighted combination of both losses: \begin{equation} \mathbb{L}_{SV} = 0.5 \cdot \mathbb{L}_{XCE} + 0.5 \cdot \mathbb{L}_{Dice} \end{equation} \subsection{Tissue Quantification} \label{sec:method:report} Various materials can be extracted from a CT by thresholding the HU to a specific intensity range. For quantifying tissues, the reporting system uses a mixture of classical thresholding and modern semantic segmentation neural networks for building the semantic relationships. During training, five models were optimized using cross-validation to evaluate the generalization performance. When using these for inference, the predictions of all five models are averaged and thus an ensemble model is constructed. The final output of the quantification system is a report about subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle volume. Muscular tissue is identified by thresholding the HU between -29 and 150. Adipose tissue is identified by thresholding the HU between -190 and -30. If an adipose voxel is within the abdominal cavity region, it is counted as VAT. If it is within the subcutaneous tissue region, it is counted as SAT.
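A minimal sketch of this conjunction of HU thresholds and predicted region labels (the label indices and the restriction of the muscle threshold to the muscle region are assumptions made for illustration):
\begin{verbatim}
import numpy as np

MUSCLE, SUBCUTANEOUS, ABDOMINAL_CAVITY = 2, 3, 4  # hypothetical label indices

def quantify(hu, regions, voxel_volume_ml):
    """Combine HU thresholds with predicted region labels."""
    adipose = (hu >= -190) & (hu <= -30)
    muscle = (hu >= -29) & (hu <= 150) & (regions == MUSCLE)
    sat = adipose & (regions == SUBCUTANEOUS)      # subcutaneous adipose tissue
    vat = adipose & (regions == ABDOMINAL_CAVITY)  # visceral adipose tissue
    return {name: mask.sum() * voxel_volume_ml
            for name, mask in (("muscle", muscle), ("SAT", sat), ("VAT", vat))}
\end{verbatim}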
\section{Results} \subsection{Model Evaluation} As described in Sections \ref{sec:architecture} and \ref{sec:training}, two different network architectures with varying initial numbers of feature maps were systematically evaluated using a 5-fold cross-validation scheme on the training dataset. The results are stated in Table~\ref{tab:model-evaluation}. First of all, all networks delivered promising results with average dice scores over 0.93. Second, multi-resolution U-Net variants achieved consistently higher scores compared to their respective U-Net counterparts. It is interesting to note that the improvements in scores were small compared to the increase in trainable parameters and thus in the time required to train and test the networks. A single optimization step took 294ms, 500ms, and 1043ms on an NVIDIA Titan RTX for the initial feature map counts of 16, 32, and 64, respectively. \begin{table}[htbp] \small \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.5} \caption{Evaluation for 5-fold cross-validation runs (stated as mean over all runs) and ensemble predictions on the test set. (AC) Abdominal Cavity, (B) Bones, (M) Muscle, (ST) Subcutaneous Tissue, (TC) Thoracic Cavity.} \begin{center} \begin{tabular}{llcrcccccc} \toprule & & & & \multicolumn{5}{c}{\textbf{Dice Score}}\\ \cline{5-9} &\textbf{Model} & $n_f$ & $n_{param}$ & AC & B & M & ST & TC & Average \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{5-fold CV}}} \quad & U-Net 3D & 16 & 5.34M & $0.9509$ & $0.9462$ & $0.9266$ & $0.9432$ & $0.8823$ & $0.9299$\\ & & 32 & 21.36M & $0.9669$ & $0.9540$ & $0.9379$ & $0.9574$ & $0.9336$ & $0.9500$\\ & & 64 & 85.43M & $0.9682$ & $0.9561$ & $0.9403$ & $0.9582$ & $0.9481$ & $0.9542$\\ \cline{2-10} & multi-res U-Net 3D & 16 & 5.82M & $0.9589$ & $0.9484$ & $0.9328$ & $0.9531$ & $0.9211$ & $0.9429$ \\ & & 32 & 21.24M & $0.9680$ & $0.9554$ & $0.9399$ & $0.9596$ & $0.9414$ & $0.9529$ \\ & & 64 & 85.10M & $0.9692$ & $0.9564$ & $0.9414$ & $0.9605$ & $0.9452$ & $0.9545$\\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Test Set}}}& U-Net 3D & 16 & 5.34M & $0.9609$ & $0.9340$ & $0.9229$ & $0.9553$ & $0.9172$ & $0.9381$\\ & & 32 & 21.36M & $0.9731$ & $0.9390$ & $0.9309$ & $0.9610$ & $0.9598$ & $0.9528$ \\ & & 64 & 85.43M & $0.9739$ & $0.9406$ & $0.9316$ & $0.9623$ & $0.9641$ & $0.9545$\\ \cline{2-10} & multi-res U-Net 3D & 16 & 5.82M & $0.9667$ & $0.9355$ & $0.9272$ & $0.9593$ & $0.9518$ & $0.9481$ \\ & & 32 & 21.24M & $0.9736$ & $0.9409$ & $0.9328$ & $0.9627$ & $0.9629$ & $0.9546$ \\ & & 64 & 85.10M & $0.9735$ & $0.9423$ & $0.9334$ & $0.9623$ & $0.9652$ & $0.9553$\\ \bottomrule \end{tabular} \end{center} \label{tab:model-evaluation} \end{table} For visual inspection of the ensemble segmentations, a few exemplary slices are shown in Figure \ref{fig:prediction-vs-groundtruth}. Most slices show almost perfect segmentation boundaries; however, the ribs in particular are problematic due to the partial volume effect. In 5mm CTs it is sometimes hard, even for human readers, to correctly assign one region or the other. \begin{figure}[htbp] \includegraphics[width=\linewidth]{body-composition-pred-vs-gt.pdf} \caption{Comparison of different slices, their respective groundtruth annotation and predictions of the ensemble formed from five trained models on cross-validation splits.} \label{fig:prediction-vs-groundtruth} \end{figure} \subsection{Ablation Study} During model development it was observed that the choice of HU window has an impact on optimization stability and the final achieved scores.
Therefore, a small ablation study was conducted in order to systematically evaluate the influence of different HU limits. Additional models were trained using the same training parameters, but with changed input pre-processing only. The results are stated in Table \ref{tab:ablation}. \begin{table}[htbp] \small \setlength{\tabcolsep}{3pt} \renewcommand{\arraystretch}{1.5} \caption{Evaluation of multi-resolution U-Nets with $n_f = 32$ trained on different mappings from Hounsfield units to the target intensity value range of $[-1, 1]$. Multi-Window stands for a combination of the theoretical value range of 12-bit CT scans, the abdomen window, and the liver window. (AC) Abdominal Cavity, (B) Bones, (M) Muscle, (ST) Subcutaneous Tissue, (TC) Thoracic Cavity.} \begin{center} \begin{tabular}{llcccccc} \toprule & & \multicolumn{5}{c}{\textbf{Dice Score}}\\ \cline{3-7} &\textbf{HU Window} & AC & B & M & ST & TC & Average \\ \midrule \multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{5-fold CV}}} \quad &Multi-Window & $0.9680$ & $0.9554$ & $0.9399$ & $0.9596$ & $0.9414$ & $0.9529$ \\ &$[-1024,3071]$ & $0.9561$ & $0.9403$ & $0.9217$ & $0.9494$ & $0.9254$ & $0.9386$ \\ &$[-1024,2047]$ & $0.9533$ & $0.9410$ & $0.9144$ & $0.9412$ & $0.9303$ & $0.9360$ \\ &$[-1024,1023]$ & $0.8731$ & $0.8778$ & $0.7875$ & $0.6959$ & $0.8696$ & $0.8208$ \\ &$[-150,250]$ & $0.8598$ & $0.8687$ & $0.7632$ & $0.7772$ & $0.8759$ & $0.8289$ \\ \midrule \multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Test Set}}}&Multi-Window & $0.9736$ & $0.9409$ & $0.9328$ & $0.9627$ & $0.9629$ & $0.9546$ \\ &$[-1024,3071]$ & $0.9682$ & $0.9392$ & $0.9261$ & $0.9606$ & $0.9532$ & $0.9495$ \\ &$[-1024,2047]$ & $0.9644$ & $0.9331$ & $0.9174$ & $0.9560$ & $0.9569$ & $0.9455$ \\ &$[-1024,1023]$ & $0.9329$ & $0.9002$ & $0.8412$ & $0.8879$ & $0.9066$ & $0.8938$ \\ &$[-150,250]$ & $0.8950$ & $0.8997$ & $0.8004$ & $0.8482$ & $0.9311$ & $0.8749$ \\ \bottomrule \end{tabular} \end{center} \label{tab:ablation} \end{table} Increasing the HU intensity range consistently improves dice scores. By combining multiple HU windows as separate input channels, the dice scores can be improved further, to over 0.95 on average on both the cross-validation and the test set. The lowest scores, 0.829 on average for cross-validation and 0.875 for the test set, were obtained with an abdominal HU window ranging from -150 to 250. \subsection{Tissue Quantification Report} As described in Section \ref{sec:method:report}, the segmentation models are intended to be used for assigning thresholded tissues to different regions, which is technically a logical conjunction. The achieved intraclass correlation coefficients for the derived SAT, VAT, and muscle volumes measured per slice are 0.999, 0.998, and 0.991, respectively ($p<0.001$). In order to visually inspect the quality of the tissue segmentation, a PDF report with sagittal and coronal slices is generated, in conjunction with a stacked bar plot showing the volumes of segmented muscle, SAT, and VAT per axial slice (see Figure \ref{fig:report}). This is only intended to give the human reader a first visual impression of the system output. For analysis, an additional table with all numeric values per slice is generated. The PDF file is encapsulated into DICOM and automatically sent back to the PACS, in order to make use of existing DICOM infrastructure. \begin{figure}[htbp] \includegraphics[width=\linewidth]{report_002.pdf} \caption{Final visual report of the tissue quantification system output.
SAT is shown in red, VAT is shown in green, and muscle tissue is shown in yellow.} \label{fig:report} \end{figure} \section{Discussion} Our study aimed to develop a fully automated, reproducible and quantitative 3D volumetry of body tissue composition from standard abdominal CT examinations in order to provide valuable biomarkers as part of routine clinical imaging. Our best approach, using a multi-resolution U-Net 3D with an initial feature map count of 64, was able to fully automatically segment abdominal cavity, bones, muscle, subcutaneous tissue, and thoracic cavity with a mean S{\o}rensen Dice coefficient of 0.9553 and thus yielded excellent results. The derived tissue volumetry had intraclass correlation coefficients of over 0.99. Further experiments showed high performance with heavily reduced parameter counts, which allows speed/accuracy trade-offs to be considered depending on the type of application. The choice of transfer function to map from HU to a normalized value range for feeding images into neural networks was found to have a large impact on segmentation performance. In a recent study, manual single-slice CT measurements were used to build linear regression models for predicting stable anthropometric measures \cite{Zopfs2019}. As the authors suggest, these measures may be important as biomarkers for several diseases, e.g.\ sarcopenia, but could also be used where the actual measurements are not available. However, manual single-slice CT measurements are still prone to intra-patient variability as well as inter- and intra-rater variability. With a fully automated approach, anthropometric measures derived from more than a single CT slice should, in theory, be more stable. Fully automated analysis of body composition has been attempted many times in the past. Older methods utilize classical image processing and binary morphological operations \cite{Kim2013,Kullberg2017,Mensink2011} in order to isolate SAT and VAT from total adipose tissue (TAT). Other studies use prior knowledge about contours and shapes and actively fit a contour or template to a given CT image \cite{Agarwal2017,Ohshima2008,Parikh2017,Pednekar2005,Popuri2016}. Those methods are prone to variations in intensity values and assume certain body structures for the algorithmic separation of SAT and VAT. Apart from purely CT-based studies, there have been efforts to apply similar techniques to magnetic resonance imaging (MRI) \cite{Joshi2013,Positano2004,Zhou2011}. However, MRI procedures are more costly and time-consuming than CT imaging in the clinical routine. Specific MRI procedures exist for body fat assessment, but they have to be performed explicitly. Our approach can be applied to routine CT imaging and may serve as supplementary material for diagnosis or screening purposes. Recently, deep learning based methods have been proposed \cite{Bridge2018,Weston2019}. In both studies, models were trained solely on single L3 CT slices. However, Weston et al. \cite{Weston2019} visually showed that their model was able to generalize well to other abdominal slices without being trained on such data. Nonetheless, they mentioned that extending the training and evaluation data to the whole abdomen would be beneficial for stability as well as for analysis capabilities. Our study uses annotated data for training and evaluation across the whole abdomen and is thus a true volumetric approach to body composition analysis.
In addition, they segmented SAT and VAT directly, whereas in our study the semantic body region was segmented and adipose tissue was subclassified using known HU thresholds. \begin{figure}[b] \includegraphics[width=\linewidth]{body-composition-foreign-objects.pdf} \caption{Beam hardening artifacts may not only harm segmentation quality (top), but also prevent accurate identification of tissues (bottom). (left) Strong beam hardening artifacts with faults in the segmentation output. (middle) Beam hardening artifacts with mostly accurate segmentation, but streaking artifacts prevent accurate muscle and SAT identification. (right) No beam hardening artifacts at all, but a metal foreign object is detected.} \label{fig:artifacts} \end{figure} One major disadvantage of the collected dataset is the slice thickness of 5mm. Several tissues, materials, and potentially air can be contained within a distance of 5mm; the resulting HU at a specific location is an average of all components. This is also known as the partial volume effect and can be counteracted by using a smaller slice thickness, ideally with isotropic voxel sizes. However, a reconstructed slice thickness of 5mm is common in clinical routine CT, and it is questionable whether the increased precision of calculating the tissue composition on 1mm slices would have clinical relevance. Nevertheless, we plan to investigate the influence of thinner slices in further studies, as reading on thin slices is becoming routine in more and more institutions. Another limitation is the differentiation between visceral fat and fat contained within organs. Currently, every voxel with HU in the fat intensity value range that is contained within the abdominal cavity region is counted as VAT. However, by definition, fat cells within organs do not count as VAT and thus should be excluded from the final statistics. Public datasets like \cite{Gibson2018,Gibson2018b} already exist for multi-organ semantic segmentation and could be utilized to postprocess the segmentation results from this study by masking organs in the abdominal cavity. It is quite common to find metal foreign objects like implants in abdominal CTs and thus to encounter beam hardening artifacts. Those artifacts, depending on how strong they are, may affect the segmentation quality, as shown in Figure \ref{fig:artifacts}. Even if the segmentation model is able to predict the precise boundary of the individual semantic regions, streaking and cupping artifacts make it impossible to threshold fatty or muscular tissue based on HU intensities, potentially invalidating quantification reports. In a future version of our tool we are therefore planning a functionality for the automatic detection and handling of image artifacts. In future work, we plan to extend the body composition analysis system to incorporate other regions of the body as well. For example, \cite{Kullberg2017} already showed an analysis of adipose tissue and muscle for the thighs. Ideally, the system should be capable of analysing the whole body in order to derive stable biomarkers. \section{Conclusion} In the present study we presented a deep learning based, fully automated volumetric tissue classification system for the extraction of robust biomarkers from clinical CT examinations of the abdomen. In the future, we plan to extend the system to thoracic examinations and to add important tissue classes such as pericardial adipose tissue and myocardium.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \vspace{-0.26cm} Ultra-relativistic heavy-ion collisions at the LHC produce a significant number of light (anti-)nuclei and (anti-)hypernuclei because of the large amount of energy deposited in a volume which is much larger than in pp collisions. This makes it possible to study the production mechanisms of nuclei and hypernuclei in detail. The two production mechanisms used to describe the measured yields are the coalescence model~\cite{Butler:1963pp,Kapusta:1980zz} and the statistical thermal model~\cite{Andronic:2010qu,Cleymans:2011pe}. The coalescence model is based on the simple assumption that (anti-)(hyper)nuclei are formed if (anti-)nucleons and/or (anti-)hyperons are close in the coordinate as well as in the momentum phase space. In the thermal model, the abundance of particles is determined by the thermodynamic equilibrium conditions. Since the baryo-chemical potential ($\mu_B$) is close to zero at LHC energies, the thermal model predicts the dependence of the particle yields on the chemical freeze-out temperature ($T_{\rm chem}$) through the relation d$N$/d$y$\ $\propto$ $\exp(-m/T_{\rm chem})$. The excellent particle identification capabilities of the ALICE experiment allow for the detection of these (anti-)(hyper)nuclei and for the search for exotic bound states like $\Lambda$n and the H-dibaryon. \vspace{-0.35cm} \section{Data analysis} \vspace{-0.27cm} Nuclei and anti-nuclei such as (anti-)deuterons, (anti-)tritons, (anti-)$^{3}{\mathrm{He}}$\ and (anti-)$^{4}{\mathrm{He}}$ are identified using the specific energy loss (d$E$/d$x$) measurement in the Time Projection Chamber (TPC). The measured energy-loss signal of a track is required to be within a 3$\sigma$ region around the expected value for a given particle species. This method provides a pure sample of $^{3}{\mathrm{He}}$\ in the transverse momentum interval of 2 to 8 GeV/$c$, while for deuterons it is limited to momenta up to 1.4 GeV/$c$. In order to extend the deuteron identification, the velocity measurement by the Time-Of-Flight detector (TOF) is used. The momentum range under study is then only limited by the available statistics and not by the detector performance. Secondaries due to knock-out from the detector material are rejected by applying a cut on the Distance-of-Closest Approach along the beam axis, $|$DCA$_Z|$ $<$ 1.0 cm. This selection removes a large fraction of the background for nuclei, but does not affect primary anti-nuclei. The measured raw spectra are then corrected for efficiency and detector acceptance. More details can be found in Ref.~\cite{Adam:2015vda} and references therein. The production of (anti-)hypertritons, $^{3}_{\Lambda}{\mathrm{H}}$\ and $^{3}_{\overline{\Lambda}}{\overline{\rm{H}}}$, has been measured in Pb--Pb collisions via the invariant-mass reconstruction of the mesonic weak decay channels $^{3}_{\Lambda}{\mathrm{H}}$\ $\rightarrow$ $^{3}{\mathrm{He}}$\ + $\pi^{-}$ and $^{3}_{\overline{\Lambda}}{\overline{\rm{H}}}$\ $\rightarrow$ $^{3}{\overline{\rm{He}}}$ + $\pi^{+}$, respectively. Topological selections are applied in order to identify the secondary decay vertex and to reduce the combinatorial background. More details about the analysis technique used can be found in Ref.~\cite{Adam:2015yta}.
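For illustration, a minimal NumPy sketch of the invariant-mass reconstruction for such two-body candidates (this is not the experiment's analysis code; the momenta are in GeV/$c$ and the masses are the known $^{3}{\mathrm{He}}$ and charged-pion values):
\begin{verbatim}
import numpy as np

M_HE3, M_PI = 2.80839, 0.13957  # masses in GeV/c^2

def invariant_mass(p_he3, p_pi):
    """m = sqrt((E1 + E2)^2 - |p1 + p2|^2) for 3He + pi candidate pairs.

    p_he3, p_pi: arrays of shape (n, 3), momentum components in GeV/c.
    """
    p_he3, p_pi = np.asarray(p_he3), np.asarray(p_pi)
    e = (np.sqrt(np.sum(p_he3**2, axis=-1) + M_HE3**2)
         + np.sqrt(np.sum(p_pi**2, axis=-1) + M_PI**2))
    p_tot = p_he3 + p_pi
    return np.sqrt(e**2 - np.sum(p_tot**2, axis=-1))
\end{verbatim}
A hypertriton signal would then appear as a peak near 2.991 GeV/$c^{2}$ on top of the combinatorial background.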
\vspace{-0.35cm} \section{Results and discussion} \vspace{-0.2cm} \subsection{Nuclei and anti-nuclei mass difference} \vspace{-0.1cm} The momentum-over-charge ($p/z$) measurement from the TPC and the velocity measurement using the TOF allow one to obtain mass-over-charge distributions for (anti-)deuterons and (anti-)$^{3}{\mathrm{He}}$. Figure~\ref{massDiff} shows the ALICE measurements of the mass-over-charge ratio difference for d-$\rm\overline{d}$\ and $^{3}{\mathrm{He}}$-$^{3}{\overline{\rm{He}}}$. The results are compared with the CPT invariance expectation and with the existing mass measurements. These measurements show that the masses and binding energies of nuclei and anti-nuclei are compatible within uncertainties, confirming CPT invariance for light nuclei~\cite{Adam:2015pna}. \begin{figure} \vspace{-0.2cm} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,center},capbesidewidth=5cm}}]{figure}[\FBwidth] {\caption{Mass-over-charge ratio difference for d-$\rm\overline{d}$\ and $^{3}{\mathrm{He}}$-$^{3}{\overline{\rm{He}}}$ compared with the CPT invariance expectation (dotted lines). The solid red points show the ALICE measurements and the open black circles show the existing mass difference measurements. Error bars represent the quadrature sum of the statistical and systematic uncertainties (standard deviations)~\cite{Adam:2015pna}.}\label{massDiff}} {\includegraphics[width=4.7cm]{final2-11441Cropped1.pdf}} \vspace{-0.7cm} \end{figure} \vspace{-0.25cm} \subsection{Transverse momentum distributions, ratios, and yields} \vspace{-0.1cm} The deuteron transverse momentum ($\ensuremath{p_{\rm T}}$) distributions are obtained for Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 2.76 TeV~\cite{Adam:2015vda} and for p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 5.02 TeV for various centrality classes, as well as for pp collisions at $\sqrt{s}$\ = 7 TeV~\cite{Adam:2015vda}. A hardening of the spectrum with increasing centrality is observed in both Pb--Pb and p--Pb collisions. The $\ensuremath{p_{\rm T}}$\ distributions of $^{3}{\mathrm{He}}$\ are obtained for two centrality classes in Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV~\cite{Adam:2015vda} and for non-single diffractive (NSD) p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV. As an example, Fig.~\ref{spectra} shows the $\ensuremath{p_{\rm T}}$\ distributions of deuterons and $^{3}{\mathrm{He}}$\ in p--Pb collisions. The V0A multiplicity classes (Pb-side) correspond to the measurement of the multiplicity in the Pb-going direction using the V0A detector. In order to extrapolate the yield into the unmeasured $\ensuremath{p_{\rm T}}$\ region, the spectra are fitted individually with a Blast-Wave function. \begin{figure} \vspace{-0.15cm} \includegraphics[width=7.3cm]{dSpectra-pPb.pdf} \includegraphics[width=6.3cm]{He3Spectra-pPb-eps-converted-to.pdf} \caption{The transverse momentum distributions of deuterons (left panel) and of $^{3}{\mathrm{He}}$\ (right panel) for p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = \mbox{5.02 TeV.} The boxes indicate systematic uncertainties, whereas the lines represent statistical uncertainties.} \label{spectra} \vspace{-0.25cm} \end{figure} If nuclei production is described by the thermal model, then the d/p ratio is expected to remain constant for all colliding systems and for all multiplicities (because $\mu_B$$\approx$0). Figure~\ref{D2p} shows the ratio of deuteron-to-proton yields as a function of event multiplicity for pp, p--Pb, and Pb--Pb collisions.
Within the uncertainties, no significant variation with multiplicity is observed in Pb--Pb, which is consistent with the thermal model expectations~\cite{Adam:2015vda}. In p--Pb, the d/p ratio increases with multiplicity, which is incompatible with the thermal model expectations for chemical freeze-out temperatures that do not depend on multiplicity. The ratio in pp collisions is a factor of 2.2 lower than in Pb--Pb. The left panel of Fig.~\ref{thermalModel} shows the mass dependence of the light nuclei yields for central Pb--Pb collisions and for NSD p--Pb collisions. The lines represent fits with an exponential function. The figure also shows the anti-alpha yield measured in Pb--Pb collisions. The nuclei yields follow an exponential decrease with the mass. The penalty factor, namely the reduction of the yield when adding one nucleon, is $\sim$300 for Pb--Pb collisions and $\sim$600 for NSD p--Pb collisions. The thermal model predicts an exponential dependence of the yield on the mass, d$N$/d$y$\ $\propto$ $\exp(-m/T_{\rm chem})$, so the slope of the exponential fits can be used to extract $T_{\rm chem}$. For Pb--Pb collisions, the obtained value is consistent with the thermal model expectation. However, the corresponding value is much lower for p--Pb collisions. Figure~\ref{thermalModel} (right) shows the thermal model fit to various particle yields, including light (hyper-)nuclei, for 0-10\% central Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 2.76 TeV. Different models~\cite{Andronic:2010qu,Wheaton:2004qb,Torrieri:2006xi} describe the particle yields, including those of deuterons, $^{3}{\mathrm{He}}$\ and $^{3}_{\Lambda}{\mathrm{H}}$, well using $T_{\rm chem}$$\sim$156 MeV. Excluding the nuclei from the fit does not cause any significant change in $T_{\rm chem}$. \begin{figure}[h] \vspace{-0.3cm} \begin{center} \includegraphics[width=8.5cm]{2015-Jul-07-doverp_ratio_PPfinal_PbPbfinal-eps-converted-to.pdf} \caption{d/p ratio as a function of charged particle multiplicity for different colliding systems at LHC energies.} \label{D2p} \end{center} \vspace{-0.6cm} \end{figure} \begin{figure} \vspace{-0.15cm} \includegraphics[width=6.5cm]{2015-Sep-23-mass_ordering_pPb_PbPb-eps-converted-to.pdf} \includegraphics[width=7.5cm]{2015-Jul-03-Fit_PbPb0010_Reference_final_SQM-eps-converted-to.pdf} \caption{Left panel: The production yield d$N$/d$y$\ of light nuclei as a function of the particle mass $m_{{\rm A}}$ measured for 0-20\% central Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 2.76 TeV and for NSD p--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 5.02 TeV. The boxes indicate systematic uncertainties, whereas the vertical bars represent statistical uncertainties; the lines represent fits with an exponential function. Right panel: Thermal model fit to various particle yields for 0-10\% central Pb--Pb collisions at $\sqrt{s_{\mathrm{NN}}}$\ = 2.76 TeV.} \label{thermalModel} \vspace{-0.25cm} \end{figure} \vspace{-0.5cm} \subsection{Searches for exotica} The thermal model describes the (hyper)nuclei yields well for Pb--Pb collisions and can therefore be used to predict the expected yields of weakly decaying exotic bound states like the H-dibaryon ($\Lambda$\lbd) and $\overline{\rm \Lambda n}$. The possible existence of these weakly bound states has been investigated via the decay channels $\Lambda$\lbd $\rightarrow$ $\Lambda$ + p + $\pi^-$ and $\overline{\rm \Lambda n}$ $\rightarrow$ $\bar{d}$ + $\pi^+$.
No evidence has been seen in the invariant-mass distributions for either the $\Lambda$\lbd\ or the $\overline{\rm \Lambda n}$ bound state. The upper limits on the yields of $\Lambda$\lbd\ and $\overline{\rm \Lambda n}$ are about a factor of 20 lower than the thermal model predictions when assuming reasonable lifetimes and branching ratios (BR). Figure~\ref{ExoticaModelComp} shows the experimentally determined upper limits for the $\Lambda$\lbd\ and $\overline{\rm \Lambda n}$ bound states compared with the model calculations as a function of the BR; see Ref.~\cite{Adam:2015nca} for more details. \begin{figure}[h] \begin{center} \includegraphics[width=11.0cm]{ExoticaModelComp1.pdf} \caption{Experimentally determined upper limits, under the assumption of the lifetime of a free $\Lambda$, shown for the $\Lambda$n bound state (upper panel) and for the H-dibaryon (lower panel). The theory lines are drawn for different branching ratios (BR)~\cite{Adam:2015nca}.} \label{ExoticaModelComp} \end{center} \vspace{-0.6cm} \end{figure} \vspace{-0.7cm} \section{Summary and conclusions} \vspace{-0.2cm} The production of (anti-)nuclei has been measured by the ALICE Collaboration in pp, p--Pb, and Pb--Pb collisions. A hardening of the deuteron spectra with increasing centrality is observed in p--Pb and Pb--Pb. The d/p ratio rises with multiplicity in p--Pb and remains almost constant in Pb--Pb. The nuclei yields follow an exponential decrease with mass. This exponential decrease in Pb--Pb reflects a temperature similar to the $T_{\rm chem}$ expected from the thermal fits of various produced particles, suggesting that the nuclei follow the thermal behavior. However, in p--Pb the obtained temperature is lower than the expected $T_{\rm chem}$, suggesting that nuclei might not follow the thermal behavior in this system. Statistical hadronization models such as THERMUS~\cite{Wheaton:2004qb} and SHARE~\cite{Torrieri:2006xi} describe the particle and light (hyper)nuclei yields well at $T_{\rm chem}$$\approx$156 MeV for Pb--Pb. The upper limits for $\Lambda\Lambda$\ and $\overline{\rm \Lambda n}$ are lower than the thermal model expectations by at least an order of magnitude. Therefore, the existence of such states with the BRs, masses, and lifetimes typically assumed in the literature is questionable. \vspace{-0.25cm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection{Handling reference trajectories with different durations} In the setting of Section \ref{sec:opti}, the reference trajectories are implicitly assumed to have the same duration $T$, but this may not be the case in practice. For instance, if one is interested in optimising the climb phase of an aircraft from a given altitude to a final one, then one may observe durations which differ by anywhere from a few seconds to a few minutes. In such a case, projecting these trajectories onto the same functional basis is not possible and the approach developed in Section \ref{sec:opti} cannot be applied. To deal with this problem, we propose to extend each reference trajectory $y_{R_i} \in L^2\big([0,T_i], \mathbb{R}^D \big)$ to a trajectory $\overline{y}_{R_i}$ over a larger time interval $[0, T]$ defined as follows: \begin{equation*} \overline{y}_{R_i}(t) := \left\{ \begin{array}{l} y_{R_i}(t) \; , \quad \text{if } t \in [0, T_i] \\[2mm] y_{fin} \; , \quad \text{if } t \in [T_i, T] \end{array} \right. \; , \end{equation*} where we set $\displaystyle T := \max_{i \in \{1, \dots, I\}} T_i$. In the climb optimisation setting, this consists in concatenating the beginning of an artificial cruise to each reference climb. Thanks to this procedure, it is then possible to project each component of the reference flights onto an orthonormal basis of $L^2\big([0,T], \mathbb{R} \big)$. In this setting, we then consider the following optimisation problem: \begin{equation*} \overline{y}^\star \in \argmin_{y \in \mathcal{Y}_\mathcal{K}(0,T) \cap \mathcal{B}(y_{init}, y_{fin})} \sum_{i=1}^I \omega_i \int_0^T \Big\| \Lambda^{\frac{1}{2}} \big( y(t) - \overline{y}_{R_i}(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \int_0^T \mathrm{FF}\big(y(t) \big) \, dt \; , \end{equation*} which is equivalent to an optimisation problem on $\mathbb{R}^K$ of the form \eqref{eq:opt_c2} according to \autoref{prop:min_pb_equiv_y_C}. On the other hand, the duration $T$ to reach the final state $y_{fin}$ from the initial one $y_{init}$ following the trajectory $\overline{y}^\star$ may be too large when compared to the durations $T_i$ of the best reference trajectories. Nevertheless, the trajectory $\overline{y}^\star$ may be close to (or even reach) the final state $y_{fin}$ before the final time $T$; for instance, this is likely to occur if the durations of the reference trajectories associated with the largest weights are smaller than $T$. We exploit this fact to propose a truncation-adjustment step leading to an optimised trajectory with a more realistic duration in this setting while remaining close to the optimised trajectory $\overline{y}^\star$. We proceed as follows: \begin{enumerate} \item \underline{Truncation:}\\ Suppose that the set \begin{equation*} \Big\{t \in [0,T] \, \Big| \, \Big\| \Lambda^{\frac{1}{2}} \big( \overline{y}^\star(t) - y_{fin} \big) \Big\|_{\mathbb{R}^D}^2 \leqslant \epsilon \Big\} \; , \end{equation*} where $\epsilon > 0$, is non-empty and take its minimal element (or infimum) $T_\epsilon$. If the error $\epsilon$ is acceptable from an expert point of view, then the truncated trajectory $\overline{y}^\star|_{[0,T_\epsilon]}$ is the output of our optimisation process. Otherwise, we consider the following two steps to adjust $\overline{y}^\star|_{[0,T_\epsilon]}$ so that the resulting trajectory satisfies the endpoint constraints. \item \underline{Projection:}\\ We set $y_\epsilon := \overline{y}^\star|_{[0,T_\epsilon]}$.
The truncated trajectory $y_\epsilon$ does not necessarily belong to a finite-dimensional space. Given $\varepsilon > 0$, we choose $\mathcal{K}_\varepsilon = \big\{K_{\varepsilon,1},\dots,K_{\varepsilon,D} \big\}$ such that the projection $\widetilde{y}_\epsilon$ of $y_\epsilon$ onto $\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon)$ satisfies \begin{equation*} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( \widetilde{y}_\epsilon(t) - y_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \leqslant \varepsilon \; . \end{equation*} We define \begin{equation*} c_\epsilon := \Phi \, y_\epsilon \in \mathbb{R}^{K_\varepsilon} \qquad , \qquad \widetilde{y}_\epsilon := \Phi|_{\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon)}^{-1} c_\epsilon \in \mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon) \; , \end{equation*} where $K_\varepsilon := \sum_{d=1}^D (1 + K_{\varepsilon,d})$. Note that the projected trajectory $\widetilde{y}_\epsilon$ is likely not to satisfy the endpoint constraints, \emph{i.e.} $\widetilde{y}_\epsilon \notin \mathcal{B}(y_{init}, y_{fin})$. \item \underline{Penalised adjustment:}\\ To obtain a final trajectory belonging to $\mathcal{B}(y_{init}, y_{fin})$, we slightly modify the projected trajectory $\widetilde{y}_\epsilon$ so that the endpoint constraints are satisfied. To do so, we propose to solve the following optimisation problem: \begin{equation} \label{eq:pen_adj} y_\eta^\star \in \argmin_{y \in \mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon) \cap \mathcal{B}(y_{init}, y_{fin})} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \eta \int_0^{T_\epsilon} \mathrm{FF}\big(y(t) \big) \, dt \; , \end{equation} where $\eta > 0$, which is equivalent to the following one according to \autoref{prop:min_pb_equiv_y_C}: \begin{equation} \label{eq:pend_adj_c} c_\eta^\star \in \argmin_{c \in \mathcal{C}(y_{init}, y_{fin})} \Big\| \overline{\Lambda}^{\frac{1}{2}} \big(c - c_\epsilon \big) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 + \eta \, \mathrm{TFC}(\Phi|_{\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon)}^{-1} c) \; . \end{equation} Here we choose $\eta > 0$ sufficiently small so that the cost function of the preceding optimisation problem, and hence the problem itself, is strictly convex. \end{enumerate} In the following theorem, we show that the total fuel consumption of $y_\eta^\star$ is necessarily smaller than that of $\widetilde{y}_\epsilon$. We further prove that the trajectories $y_\eta^\star$ and $y_\epsilon$ are close if $\eta$ is sufficiently small, if the projected trajectory $\widetilde{y}_\epsilon$ is sufficiently accurate and if the endpoint constraint errors of $\widetilde{y}_\epsilon$ are sufficiently small. \begin{theorem} \label{thm:stablity_adj} The solution $y_\eta^\star$ of the optimisation problem \eqref{eq:pen_adj} satisfies the following property: \begin{equation} \label{eq:TFC_adjust} \mathrm{TFC}(y_\eta^\star) \leqslant \mathrm{TFC}(\widetilde{y}_\epsilon) - \eta^{-1} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \;
\end{equation} Assume in addition that the matrix $A(0,T_\epsilon) \in \mathbb{R}^{2D \times K_\varepsilon}$ defined in \autoref{prop:linear_const} is full rank and that there exists $\delta > 0$ such that \begin{equation*} \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(0) - y_{init} \big) \Big\|_{\mathbb{R}^D}^2 + \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(T_\epsilon) - y_{fin} \big) \Big\|_{\mathbb{R}^D}^2 \leqslant \delta \; . \end{equation*} Let $\varepsilon > 0$. Then there exists $\eta_\varepsilon > 0$ such that for all $\eta \in (0, \eta_\varepsilon)$ we have \begin{equation} \label{eq:traj_adjust} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - y_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \leqslant 4 (\lambda_{max} + 1) \, \varepsilon + \frac{4 \kappa}{\sigma_{min}^2} \, \delta \; , \end{equation} where $\sigma_{min} > 0$ is the smallest singular value of the matrix $A(0,T_\epsilon)$ and \begin{equation*} \lambda_{max} := \max_{d \in \{1,\dots, D\}} \lambda^{(d)} \qquad , \qquad \kappa := \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\min_{d \in \{1,\dots, D\}} \lambda^{(d)}} \; . \end{equation*} \end{theorem} \begin{proof} The trajectory $y_\eta^\star$ is the solution of the minimisation problem \eqref{eq:pen_adj}. Since $\widetilde{y}_\epsilon$ also belongs to $\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon) \cap \mathcal{B}(y_{init},y_{fin})$, we have \begin{align*} & \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \eta \int_0^{T_\epsilon} \mathrm{FF}\big(y_\eta^\star(t) \big) \, dt \\ & \hspace{2cm} \leqslant \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( \widetilde{y}_\epsilon(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \eta \int_0^{T_\epsilon} \mathrm{FF}\big(\widetilde{y}_\epsilon(t) \big) \, dt \\ & \hspace{2cm} = \eta \int_0^{T_\epsilon} \mathrm{FF}\big(\widetilde{y}_\epsilon(t) \big) \, dt \; , \end{align*} which leads to inequality \eqref{eq:TFC_adjust}.\\ We now prove the second inequality \eqref{eq:traj_adjust}. First of all, we apply \autoref{lem:continuity_pert}, whose hypotheses are satisfied with \begin{itemize} \item $A = A(0,T_\epsilon) \in \mathbb{R}^{2D \times K_\varepsilon}$, which is surjective by hypothesis; \item $b = (y_{init}, y_{fin})^T$; \item $\displaystyle f(c) = \Big\| \overline{\Lambda}^\frac{1}{2} (c - c_\epsilon) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2$, which is a strictly convex $\mathcal{C}^\infty$-function; \item $g(c) = \mathrm{TFC}\big( \Phi|_{\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon)}^{-1} c \big)$, which is a $\mathcal{C}^\infty$-function. \end{itemize} This implies the continuity at $0$ of the solution $c_\eta^\star$ of the optimisation problem \eqref{eq:pend_adj_c} with respect to $\eta$. Hence there exists $\eta_\varepsilon > 0$ such that \begin{equation*} \forall \, \eta \in (0, \eta_\varepsilon) \qquad \big\| c_\eta^\star - c_0^\star \big\|_{\mathbb{R}^{K_\varepsilon}}^2 \leqslant \varepsilon \; . \end{equation*} By \autoref{prop:phi_unitary}, it follows that \begin{equation} \label{eq:ineq_1} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - y_0^\star(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt = \Big\| \overline{\Lambda}^\frac{1}{2} \big( c_\eta^\star - c_0^\star \big) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \leqslant \lambda_{max} \, \varepsilon \; , \end{equation} for all $\eta \in (0, \eta_\varepsilon)$.
Furthermore, the hypotheses of \autoref{lem:stability_adj} are satisfied, so we have \begin{equation} \label{eq:ineq_2} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_0^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \leqslant \frac{\kappa}{\sigma_{min}^2} \, \delta \; . \end{equation} Moreover, the set $\mathcal{K}_\varepsilon$ is chosen in such a way that \begin{equation} \label{eq:ineq_3} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( \widetilde{y}_\epsilon(t) - y_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \leqslant \varepsilon \; . \end{equation} By applying the parallelogram law twice, one has \begin{align*} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - y_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt & \leqslant 4 \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_\eta^\star(t) - y_0^\star(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \\ & \hspace{1cm} + 4 \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_0^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \\ & \hspace{2cm} + 4 \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( \widetilde{y}_\epsilon(t) - y_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \; . \end{align*} Inserting inequalities \eqref{eq:ineq_1}, \eqref{eq:ineq_2} and \eqref{eq:ineq_3} into the preceding one leads to the final result. \end{proof} \begin{lemma} \label{lem:stability_adj} Assume that there exists $\delta > 0$ such that \begin{equation*} \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(0) - y_{init} \big) \Big\|_{\mathbb{R}^D}^2 + \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(T_\epsilon) - y_{fin} \big) \Big\|_{\mathbb{R}^D}^2 \leqslant \delta \; , \end{equation*} and that the matrix $A(0,T_\epsilon) \in \mathbb{R}^{2D \times K_\varepsilon}$ is full rank. Then the solution of the optimisation problem \begin{equation} \label{eq:adj} y_0^\star \in \argmin_{y \in \mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon) \cap \mathcal{B}(y_{init}, y_{fin})} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \end{equation} satisfies \begin{equation*} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_0^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt \leqslant \frac{\kappa}{\sigma_{min}^2} \, \delta \; , \end{equation*} where $\kappa$ and $\sigma_{min}$ are defined in \autoref{thm:stablity_adj}. \end{lemma} \begin{proof} First of all, we consider the following minimisation problem: \begin{equation*} c_0^\star \in \argmin_{c \in \mathcal{C}(y_{init}, y_{fin})} \Big\| \overline{\Lambda}^{\frac{1}{2}} \big(c - c_\epsilon \big) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \; , \end{equation*} which is equivalent to the problem \eqref{eq:adj} following the lines of the proof of \autoref{prop:min_pb_equiv_y_C}.\\ Now consider the singular value decomposition of the matrix $A(0,T_\epsilon)$: \begin{equation*} A(0,T_\epsilon) = U \Sigma V^T \; , \end{equation*} where $\Sigma \in \mathbb{R}^{2D \times K_\varepsilon}$ is a rectangular diagonal matrix and $U \in \mathbb{R}^{2D \times 2D}$ and $V \in \mathbb{R}^{K_\varepsilon \times K_\varepsilon}$ are orthogonal matrices.
In particular, we can write \begin{equation*} U \Sigma V^T = U \Big( \Sigma_1 \hspace{3mm} 0_{2D, K_\varepsilon-2D} \Big) \left( \begin{array}{c} V_1^T \\[2mm] V_2^T \end{array} \right) = U \Sigma_1 V_1^T \; , \end{equation*} where $\Sigma_1 \in \mathbb{R}^{2D \times 2D}$ is a diagonal matrix whose diagonal elements $\{ \sigma_d \}_{d=1}^{2D}$ are the singular values in descending order and $V_1 \in \mathbb{R}^{K_\varepsilon \times 2D}$ is a semi-orthogonal matrix; note that $\Sigma_1$ is non-singular and that $\sigma_d > 0$ for all $d \in \{1,\dots,2D\}$ thanks to the hypothesis that $A(0,T_\epsilon)$ is full rank.\\ For all $c \in \mathcal{C}(y_{init}, y_{fin})$, we have the following equivalences: \begin{align} A(0,T_\epsilon) c = \left( \begin{array}{c} y_{init} \\ y_{fin} \end{array} \right) \quad & \Longleftrightarrow \quad U \Sigma_1 V_1^T c = \left( \begin{array}{c} y_{init} \\ y_{fin} \end{array} \right) \nonumber \\ & \Longleftrightarrow \quad V_1^T c = \Sigma_1^{-1} U^T \left( \begin{array}{c} y_{init} \\ y_{fin} \end{array} \right) \; . \label{eq:equiv_end_const} \end{align} Let us now define the vector $\widetilde{c} \in \mathbb{R}^{K_\varepsilon}$ through its image $V^T \widetilde{c}$: \begin{equation*} \left\{ \begin{array}{l} V_1^T \widetilde{c} = \Sigma_1^{-1} U^T \left( \begin{array}{c} y_{init} \\ y_{fin} \end{array} \right) \\[4mm] V_2^T \widetilde{c} = V_2^T c_\epsilon \end{array} \right. \; . \end{equation*} We deduce that $\widetilde{c}$ belongs to $\mathcal{C}(y_{init},y_{fin})$ from the equivalences \eqref{eq:equiv_end_const} and we have \begin{align} \Big\| A(0,T_\epsilon) (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 & = \Big\| U \Sigma V^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \nonumber \\ & = \Big\| \Sigma_1 V_1^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \label{eq:U_unit} \\ & = \sum_{d = 1}^{2D} \sigma_d^2 \Big| \big(V_1^T \widetilde{c}\big)_d - \big(V_1^T c_\epsilon\big)_d \Big|^2 \nonumber \\ & \geqslant \sigma_{2D}^2 \Big\| V_1^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \label{eq:min_sing_val} \\ & = \sigma_{2D}^2 \Big\| V^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \label{eq:V_2_zero} \\ & = \sigma_{2D}^2 \big\| \widetilde{c} - c_\epsilon \big\|_{\mathbb{R}^{K_\varepsilon}}^2 \label{eq:V_unit} \; ; \end{align} \begin{itemize} \item \eqref{eq:U_unit}: use the orthogonality of $U$ and the equality $U \Sigma V^T = U \Sigma_1 V_1^T$; \item \eqref{eq:min_sing_val}: use the fact that $\sigma_d \geqslant \sigma_{2D}$ for all $d \in \{1, \dots, 2D \}$; \item \eqref{eq:V_2_zero}: by the definition of $\widetilde{c}$, we have \begin{equation*} \Big\| V^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 = \Big\| V_1^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 + \Big\| V_2^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{K_\varepsilon-2D}}^2 = \Big\| V_1^T (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \; ; \end{equation*} \item \eqref{eq:V_unit}: use the orthogonality of $V$.
\end{itemize} It follows that \begin{align} & \Big\| \overline{\Lambda}^\frac{1}{2} (\widetilde{c} - c_\epsilon ) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \nonumber \\ & \hspace{2cm} \leqslant \max_{d \in \{1,\dots, D\}} \lambda^{(d)} \big\| \widetilde{c} - c_\epsilon \big\|_{\mathbb{R}^{K_\varepsilon}}^2 \nonumber \\ & \hspace{2cm} \leqslant \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2} \, \Big\| A(0,T_\epsilon) (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \label{eq:inv_ineq} \\ & \hspace{2cm} = \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2} \bigg( \Big\| A(0,T_\epsilon)_1 (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^D}^2 + \Big\| A(0,T_\epsilon)_2 (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^D}^2 \bigg) \label{eq:mat_two_parts} \\ & \hspace{2cm} = \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2} \Big( \big\| y_{init} - \widetilde{y}_\epsilon(0) \big\|_{\mathbb{R}^D}^2 + \big\| y_{fin} - \widetilde{y}_\epsilon(T_\epsilon) \big\|_{\mathbb{R}^D}^2 \Big) \label{eq:recov_init_cond} \\ & \hspace{2cm} \leqslant \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2 \min_{d \in \{1,\dots, D\}} \lambda^{(d)}} \Big( \Big\| \Lambda^\frac{1}{2} \big (y_{init} - \widetilde{y}_\epsilon(0) \big) \Big\|_{\mathbb{R}^D}^2 + \Big\| \Lambda^\frac{1}{2} \big( y_{fin} - \widetilde{y}_\epsilon(T_\epsilon) \big) \Big\|_{\mathbb{R}^D}^2 \Big) \label{eq:equiv_norms} \\ & \hspace{2cm} \leqslant \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2 \min_{d \in \{1,\dots, D\}} \lambda^{(d)}} \, \delta \; ; \label{eq:constraints_hyp} \end{align} \begin{itemize} \item \eqref{eq:inv_ineq}: use the inequality \begin{equation*} \sigma_{2D}^2 \big\| \widetilde{c} - c_\epsilon \big\|_{\mathbb{R}^{K_\varepsilon}}^2 \leqslant \Big\| A(0,T_\epsilon) (\widetilde{c} - c_\epsilon) \Big\|_{\mathbb{R}^{2D}}^2 \end{equation*} proved above and the fact that $\sigma_{2D} > 0$; \item \eqref{eq:mat_two_parts}: the matrices $A(0,T_\epsilon)_1, A(0,T_\epsilon)_2 \in \mathbb{R}^{D \times K_\varepsilon}$ are defined as follows: \begin{align*} & \bullet \quad A(0,T_\epsilon)_1 := \left( \begin{array}{ccccccc} \varphi_0(0) & \dots & \varphi_{K_{\varepsilon, 1}}(0) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_0(0) & \dots & \varphi_{K_{\varepsilon, D}}(0) \end{array} \right) \; ; \\ & \bullet \quad A(0,T_\epsilon)_2 := \left( \begin{array}{ccccccc} \varphi_0(T_\epsilon) & \dots & \varphi_{K_{\varepsilon, 1}}(T_\epsilon) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_0(T_\epsilon) & \dots & \varphi_{K_{\varepsilon, D}}(T_\epsilon) \end{array} \right) \; ; \end{align*} in particular, they satisfy \begin{equation*} A(0,T_\epsilon) = \left( \begin{array}{c} A(0,T_\epsilon)_1 \\[2mm] A(0,T_\epsilon)_2 \end{array} \right) \; ; \end{equation*} \item \eqref{eq:recov_init_cond}: by the definitions of the matrices $A(0,T_\epsilon)_1, A(0,T_\epsilon)_2$ and of the trajectory $\widetilde{y}_\epsilon = \Phi|_{\mathcal{Y}_{\mathcal{K}_\varepsilon}(0,T_\epsilon)}^{-1} c_\epsilon$, we have \begin{equation*} A(0,T_\epsilon)_1 \, c_\epsilon = \widetilde{y}_\epsilon(0) \qquad , \qquad A(0,T_\epsilon)_2 \, c_\epsilon = \widetilde{y}_\epsilon(T_\epsilon) \; , \end{equation*} and by the fact that the vector $\widetilde{c}$ belongs to $\mathcal{C}(y_{init},y_{fin})$, we obtain \begin{equation*} A(0,T_\epsilon)_1 \, \widetilde{c} = y_{init} \qquad , \qquad A(0,T_\epsilon)_2 \, \widetilde{c} = y_{fin} \; ; \end{equation*} \item \eqref{eq:equiv_norms}: use the following inequality \begin{equation*}
\forall x \in \mathbb{R}^D \qquad \min_{d \in \{1,\dots,D\}} \lambda^{(d)} \big\| x \big\|_{\mathbb{R}^D}^2 \leqslant \Big\| \Lambda^{\frac{1}{2}} x \Big\|_{\mathbb{R}^D}^2 \; ; \end{equation*} \item \eqref{eq:constraints_hyp}: use the hypothesis \begin{equation*} \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(0) - y_{init} \big) \Big\|_{\mathbb{R}^D}^2 + \Big\| \Lambda^\frac{1}{2} \big (\widetilde{y}_\epsilon(T_\epsilon) - y_{fin} \big) \Big\|_{\mathbb{R}^D}^2 \leqslant \delta \; . \end{equation*} \end{itemize} Using once again \autoref{prop:phi_unitary} and the fact that $c_0^\star$ is a minimiser, we finally obtain \begin{align*} \int_0^{T_\epsilon} \Big\| \Lambda^{\frac{1}{2}} \big( y_0^\star(t) - \widetilde{y}_\epsilon(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt & = \Big\| \overline{\Lambda}^\frac{1}{2} (c_0^\star - c_\epsilon ) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \\ & \leqslant \Big\| \overline{\Lambda}^\frac{1}{2} (\widetilde{c} - c_\epsilon ) \Big\|_{\mathbb{R}^{K_\varepsilon}}^2 \\ & \leqslant \frac{\max_{d \in \{1,\dots, D\}} \lambda^{(d)}}{\sigma_{2D}^2 \min_{d \in \{1,\dots, D\}} \lambda^{(d)}} \, \delta \; . \end{align*} \end{proof} \begin{lemma} \label{lem:continuity_pert} Let $A \in \mathbb{R}^{\widetilde{K} \times K}$ be a full rank matrix with $\widetilde{K} \leqslant K$ and let $b \in \mathbb{R}^{\widetilde{K}}$. Let $f, g :\mathbb{R}^K \longrightarrow \mathbb{R}$ be two $\mathcal{C}^2$-functions such that $f$ is strictly convex and such that the $\mathcal{C}^2$-function $h_\eta := f + \eta \, g$ is strictly convex for all $\eta \in \mathcal{N}(0) \subseteq \mathbb{R}$, where $\mathcal{N}(0)$ is an open neighbourhood of $0$. We define the function $x^\star: \mathcal{N}(0) \longrightarrow \mathbb{R}^K$ as \begin{equation} \label{eq:opti_abst} x^\star(\eta) := \argmin_{Ax = b} h_\eta(x) \; , \end{equation} which is well-defined. Then the function $x^\star$ is continuous at $0$. \end{lemma} \begin{proof} For all $\eta \in \mathcal{N}(0)$, the optimisation problem \eqref{eq:opti_abst} has a unique solution $x^\star(\eta) \in \mathbb{R}^K$ since the cost function is strictly convex and the constraint is affine. Further, the cost function $h_\eta$ is a $\mathcal{C}^2$-function and the matrix $A$ is full rank by hypothesis. By the Lagrange multiplier theorem \cite[p. 285]{F2000}, for all $\eta \in \mathcal{N}(0)$, there exists a unique vector $\lambda(\eta) \in \mathbb{R}^{\widetilde{K}}$ such that \begin{equation} \label{eq:lagrange} \nabla h_\eta \big( x^\star(\eta) \big) + A^T \lambda(\eta) = 0_{\mathbb{R}^K} \; . \end{equation} Now define $u : \mathbb{R}^K \times \mathcal{N}(0) \longrightarrow \mathbb{R}^K$ as follows: \begin{equation*} u(x, \eta) = \nabla h_\eta \big( x \big) + A^T \lambda(\eta) \; . \end{equation*} Then, by equality \eqref{eq:lagrange}, we have \begin{equation*} u\big(x^\star(\eta), \eta \big) = 0_{\mathbb{R}^K} \; , \end{equation*} and the gradient $\nabla_x u$ of $u$ with respect to its first variable is equal to the Hessian matrix of $h_\eta$. Since $h_\eta$ is assumed to be strictly convex for all $\eta \in \mathcal{N}(0)$, its Hessian is positive definite and hence $\nabla_x u$ is invertible. By the implicit function theorem, we deduce the continuity of $x^\star$ at $0$. \end{proof} \subsection{Misc.} \begin{proposition} \label{prop:phi_unitary} Let $\| \cdot \|_{\mathbb{R}^K}$ denote the Euclidean norm on $\mathbb{R}^K$ and let $\big\{ \lambda^{(d)} \big\}_{d=1}^D$ be a set of positive real numbers.
Define the matrices $\Lambda \in \mathbb{R}^{D \times D}$ and $\overline{\Lambda} \in \mathbb{R}^{K \times K}$ as \begin{equation*} \Lambda := \left( \begin{array}{ccc} \lambda^{(1)} & & \\ & \ddots & \\ & & \lambda^{(D)} \end{array} \right) \qquad , \qquad \overline{\Lambda} := \left( \begin{array}{ccc} \lambda^{(1)} \, I_{K_1 + 1} & & \\ & \ddots & \\ & & \lambda^{(D)} I_{K_D + 1} \end{array} \right) \end{equation*} where $I_n$ is the identity matrix of size $n$. Then the operator $\Phi|_{\mathcal{Y}_\mathcal{K}} : \mathcal{Y}_\mathcal{K} \longrightarrow \mathbb{R}^K$ is bijective and satisfies \begin{equation*} \forall \, y \in \mathcal{Y}_\mathcal{K} \qquad \int_0^T \Big\| \Lambda^{\frac{1}{2}} \, y(t) \Big\|_{\mathbb{R}^D}^2 \, dt = \Big\| \overline{\Lambda}^\frac{1}{2} \, \Phi|_{\mathcal{Y}_\mathcal{K}} y \Big\|_{\mathbb{R}^K}^2 \; . \end{equation*} \end{proposition} \begin{proof} Let $y \in \mathcal{Y}_\mathcal{K}$ and let $c := \Phi|_{\mathcal{Y}_\mathcal{K}} y \in \mathbb{R}^{K}$. By the definition of the matrix $\Lambda$ and the hypothesis $y \in \mathcal{Y}_\mathcal{K}$, we have \begin{equation*} \int_0^T \Big\| \Lambda^{\frac{1}{2}} \, y(t) \Big\|_{\mathbb{R}^D}^2 \, dt = \sum_{d=1}^D \lambda^{(d)} \int_0^T \Big| y^{(d)}(t) \Big|^2 \, dt = \sum_{d=1}^D \lambda^{(d)} \int_0^T \left| \sum_{k=0}^{K_d} c_k^{(d)} \, \varphi_k(t) \right|^2 dt \; , \end{equation*} and the orthonormality of the basis $\{\varphi_k\}_{k=0}^{+\infty}$ gives, for all $d \in \{1,\dots,D\}$, \begin{equation*} \int_0^T \left| \sum_{k=0}^{K_d} c_k^{(d)} \, \varphi_k(t) \right|^2 dt = \sum_{k=0}^{K_d} \Big| c_k^{(d)} \Big|^2 \int_0^T \big| \varphi_k(t) \big|^2 \, dt = \sum_{k=0}^{K_d} \Big| c_k^{(d)} \Big|^2 \; . \end{equation*} Using the definition of $\overline{\Lambda}$, we obtain \begin{equation*} \int_0^T \Big\| \Lambda^{\frac{1}{2}} \, y(t) \Big\|_{\mathbb{R}^D}^2 \, dt = \sum_{d=1}^D \lambda^{(d)} \sum_{k=0}^{K_d} \Big| c_k^{(d)} \Big|^2 = \Big\| \overline{\Lambda}^\frac{1}{2} \, c \Big\|_{\mathbb{R}^K}^2 \; . \end{equation*} As a direct consequence, the operator $\Phi|_{\mathcal{Y}_\mathcal{K}} : \mathcal{Y}_\mathcal{K} \longrightarrow \mathbb{R}^K$ is injective. It is surjective as well since for all $\widetilde{c} = \big( \widetilde{c}_0^{(1)}, \dots, \widetilde{c}_{K_1}^{(1)}, \dots, \; \widetilde{c}_0^{(D)}, \dots, \widetilde{c}_{K_D}^{(D)} \big) \in \mathbb{R}^K$, the trajectory $\widetilde{y} \in \mathcal{Y}_\mathcal{K}$ defined by \begin{equation*} \forall \, d \in \{1,\dots,D\} \qquad \widetilde{y}^{(d)} := \sum_{k=0}^{K_d} \widetilde{c}_k^{(d)} \, \varphi_k \end{equation*} satisfies $\Phi|_{\mathcal{Y}_\mathcal{K}} \widetilde{y} = \Phi \widetilde{y} = \widetilde{c}$. \end{proof} \begin{definition}[Total fuel consumption] Let $\mathrm{FF}: \mathbb{R}^D \longrightarrow \mathbb{R}_+$ denote the fuel flow. We define the total fuel consumption $\mathrm{TFC}(y) \in \mathbb{R}_+$ of a trajectory $y \in L^2\big([0,T], \mathbb{R}^D\big)$ as follows: \begin{equation*} \mathrm{TFC}(y) := \int_0^T \mathrm{FF}\big(y(t) \big) \, dt \; . \end{equation*} \end{definition} \begin{definition} \label{def:phi} We define the integer $K := \sum_{d = 1}^D K_d$ and the operator $\Phi : L^2\big([0,T], \mathbb{R}^D \big) \longrightarrow \mathbb{R}^K$ as follows: \begin{equation*} \Phi y := \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big) \; .
\end{equation*} \end{definition} \begin{definition} We define \begin{enumerate} \item the projection operator $\Phi : \mathcal{C}\big([0,T], \mathbb{R}^D \big) \longrightarrow \mathbb{R}^K$ as follows: \begin{equation*} \Phi y := \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T \; ; \end{equation*} \item the set $\widetilde{\mathcal{D}}(y_0, y_T) \subset \mathbb{R}^K$ as the set of vectors satisfying the following linear condition: \begin{equation} \label{eq:endpoint_syst} A(0,T) \, c = \Gamma \; , \end{equation} where the vector $\Gamma \in \mathbb{R}^{2D}$ is defined as \begin{equation*} \Gamma := \Big( y_0^{\ T} \quad y_T^{\ T} \Big)^T \; , \end{equation*} and the matrix $A(0,T) \in \mathbb{R}^{2D \times K}$ is defined as \begin{equation*} A(0, T) := \left( \begin{array}{ccccccc} \varphi_1(0) & \dots & \varphi_{K_1}(0) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_1(0) & \dots & \varphi_{K_D}(0) \\ \varphi_1(T) & \dots & \varphi_{K_1}(T) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_1(T) & \dots & \varphi_{K_D}(T) \end{array} \right) \; . \end{equation*} \end{enumerate} \end{definition} \begin{proposition} \label{prop:linear_const} \begin{enumerate} \item The restriction of the operator $\Phi$ to the subspace $\mathcal{Y}_\mathcal{K}$, namely $\Phi|_{\mathcal{Y}_\mathcal{K}} : \mathcal{Y}_\mathcal{K} \longrightarrow \mathbb{R}^K$, is bijective. \item A trajectory $y \in \mathcal{Y}_{\mathcal{K}}$ belongs to $\mathcal{D}(y_0, y_T)$ if and only if $\Phi y$ belongs to $\widetilde{\mathcal{D}}(y_0, y_T)$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item It is clear that each component $y^{(d)}$ of a trajectory $y \in \mathcal{Y}_\mathcal{K}$ is associated with a unique vector $c^{(d)} \in \mathbb{R}^{K_d}$ given by \begin{equation*} \forall \, k = 1, \dots, K_d \qquad c_k^{(d)} = \int_0^T y^{(d)}(t) \, \varphi_{k}(t) \, dt \; . \end{equation*} Since the restriction $\Phi|_{\mathcal{Y}_\mathcal{K}}$ is actually the Cartesian product of the bijective functions $\Phi^{(d)} : \text{span} \left\{ \varphi_{k} \right\}_{k = 1}^{K_d} \longrightarrow \mathbb{R}^{K_d}$ defined by \begin{equation*} \Phi^{(d)} y^{(d)} := c^{(d)} \; , \end{equation*} it inherits the bijective nature of its components. This proves the first point. \item Let $y \in \mathcal{Y}_\mathcal{K}$ and let $c := \Phi y \in \mathbb{R}^K$. By the definition of the matrix $A(0,T)$, we have \begin{align*} A(0,T) c & = A(0,T) \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T \\ & = \left( \sum_{k=1}^{K_1} c_k^{(1)} \varphi_k(0), \dots, \displaystyle \sum_{k=1}^{K_D} c_k^{(D)} \varphi_k(0), \dots, \sum_{k=1}^{K_1} c_k^{(1)} \varphi_k(T), \dots, \sum_{k=1}^{K_D} c_k^{(D)} \varphi_k(T) \right)^T \\ & = \big( y(0) \quad y(T) \big)^T \; . \end{align*} The conclusion follows directly from the preceding relation. \end{enumerate} \end{proof} \begin{remark} Throughout the paper, we will use the following two representations of a vector $c \in \mathbb{R}^K$ interchangeably: \begin{itemize} \item $c = (c_1, c_2, \dots, c_K)^T$; \item $c = \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T$.
\end{itemize} Indeed, for any $k \in \{1, \dots, K\}$, there exists a unique $\big(\widetilde{d}, \widetilde{k}\big)$ such that \begin{equation*} k = \widetilde{k} + \sum_{d = 1}^{\widetilde{d}-1} K_d \; , \end{equation*} with $\widetilde{d} \in \{1,\dots,D\}$ and $\widetilde{k} \in \{1, \dots, K_{\widetilde{d}} \}$. In this case, we have $c_k = c_{\widetilde{k}}^{(\widetilde{d})}$. \end{remark} \subsection{Interpretation} \begin{proposition} \label{prop:min_pb_equiv_y_C} A vector $c^\star \in \mathbb{R}^K$ is a solution of the optimisation problem \eqref{eq:opt_c} if and only if the trajectory $y^\star := \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c^\star \in \mathcal{Y}_\mathcal{K}$ is a solution of the following optimisation problem \begin{equation*} y^\star \in \argmin_{y \in \mathcal{Y}_\mathcal{K} \cap \mathcal{B}(y_0, y_T)} \sum_{i=1}^I \omega_i \int_0^T \Big\| \Lambda^{\frac{1}{2}} \big( y(t) - y_{R_i}(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \int_0^T \mathrm{FF}\big(y(t) \big) \, dt \; . \end{equation*} \end{proposition} \begin{proof} First of all, we define the maps $g_1: \mathbb{R}^K \longrightarrow \mathbb{R}$ as \begin{equation*} g_1(c) := \sum_{i=1}^I \omega_i \, \Big\| \overline{\Lambda}^{\frac{1}{2}} \big(c - c_{R_i} \big) \Big\|_{\mathbb{R}^K}^2 + \mathrm{TFC}\big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \big) \; , \end{equation*} and $g_2: L^2\big([0,T], \mathbb{R}^D \big) \longrightarrow \mathbb{R}$ as \begin{equation*} g_2(y) := \sum_{i=1}^I \omega_i \int_0^T \Big\| \Lambda^{\frac{1}{2}} \big( y(t) - y_{R_i}(t) \big) \Big\|_{\mathbb{R}^D}^2 \, dt + \int_0^T \mathrm{FF}\big(y(t) \big) \, dt \; . \end{equation*} Let $c \in \mathbb{R}^K$ and let $y := \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \in \mathcal{Y}_\mathcal{K}$ be its associated trajectory. By the definition of $y$, we clearly have \begin{equation*} \mathrm{TFC}\big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \big) = \mathrm{TFC}(y) = \int_0^T \mathrm{FF}\big(y(t) \big) \, dt \; . \end{equation*} Furthermore, Proposition \ref{prop:phi_unitary} implies \begin{equation*} \sum_{i=1}^I \omega_i \, \Big\| \overline{\Lambda}^{\frac{1}{2}} \big(c - c_{R_i} \big) \Big\|_{\mathbb{R}^K}^2 = \sum_{i=1}^I \omega_i \int_0^T \Big\| \Lambda^{\frac{1}{2}} \Big( y(t) - y_{R_i}(t) \Big) \Big\|_{\mathbb{R}^D}^2 \, dt \; , \end{equation*} showing that $g_1(c) = g_2\big(y\big)$. Since the set $\mathcal{C}(y_0, y_T)$ is the image of $\mathcal{Y}_\mathcal{K} \cap \mathcal{B}(y_0, y_T)$ under the bijection $\Phi|_{\mathcal{Y}_\mathcal{K}}$ by definition, it follows that minimising the map $g_1$ over the set $\mathcal{C}(y_0, y_T)$ amounts to minimising the function $g_2$ over $\mathcal{Y}_\mathcal{K} \cap \mathcal{B}(y_0, y_T)$. \end{proof} \subsection{Quadratic cost for a convex optimisation problem} \label{subsec:quad_model} We focus on the case where the function $f$ (defining the cost $F$) is quadratic. In this setting, explicit formulas for the objective functions appearing in the optimisation problems \eqref{eq:opt_proj} and \eqref{eq:opt_proj_2} can be established. In particular, this makes it possible to derive sufficient conditions on the parameter $\kappa > 0$ under which the optimisation problems are equivalent to quadratic programs \citep[Sec. 4.4]{convexopt}, namely problems with convex quadratic objective functions. In practice, this allows efficient convex optimisation libraries to be used to solve the problems numerically, as illustrated by the sketch below.
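As an illustration, once the problem has been reduced to a convex quadratic objective with affine equality constraints, it can be solved directly through its KKT system. The following generic sketch (not the \texttt{PyRotor} implementation; the matrices are toy placeholders) shows the principle:

\begin{verbatim}
import numpy as np

# Solve  min_c  c^T P c + q^T c  subject to  A c = b,
# with P symmetric positive semidefinite and A full rank.
# The KKT conditions reduce to a single linear system:
#   [2P  A^T] [c ]   [-q]
#   [A    0 ] [mu] = [ b]
def solve_eq_qp(P, q, A, b):
    K, m = P.shape[0], A.shape[0]
    kkt = np.block([[2 * P, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    return np.linalg.solve(kkt, rhs)[:K]

# Toy example: 3 coefficients, one endpoint-type equality constraint.
P = np.diag([1.0, 2.0, 3.0])     # plays the role of kappa*Qbar + Sigma^+
q = np.array([-1.0, 0.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]])  # plays the role of A(0, T)
b = np.array([1.0])
c_star = solve_eq_qp(P, q, A, b)
print(c_star, A @ c_star)        # constraint satisfied up to round-off
\end{verbatim}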
Throughout this subsection, we suppose that the function $f$ defining the cost $F$ is quadratic, \emph{i.e.} \begin{equation} \label{eq:quad_f} f(x) = x^T Q x + w^T x + r = \sum_{d_1, d_2 = 1}^D Q_{d_1 d_2} \, x^{(d_1)} \, x^{(d_2)} + \sum_{d = 1}^D w_d \, x^{(d)} + r \; , \end{equation} where $x \in \mathbb{R}^D$, $Q \in \mathbb{R}^{D \times D}$ is symmetric, $w \in \mathbb{R}^D$ and $r \in \mathbb{R}$. We now define some vectors and matrices that will be used to prove the quadratic programming property. \begin{definition} \label{def:misc} We define \begin{enumerate} \item the map $\widetilde{\varphi} \in \mathcal{C}\big([0,T], \mathbb{R}^K\big)$ as \begin{equation*} \widetilde{\varphi} := \Big( \varphi_1, \dots, \varphi_{K_1}, \; \varphi_1, \dots, \varphi_{K_2}, \dots, \; \varphi_1, \dots, \varphi_{K_D} \Big)^T \; ; \end{equation*} \item the matrix $\widetilde{Q} \in \mathbb{R}^{K \times K}$ as \begin{equation*} \widetilde{Q} := \left( \begin{array}{ccc} Q_{11} \, J_{K_1, K_1} & \ldots & Q_{1D} \, J_{K_1, K_D} \\ \vdots & & \vdots \\ Q_{D1} \, J_{K_D, K_1} & \ldots & Q_{DD} \, J_{K_D, K_D} \end{array} \right) \; , \end{equation*} where $J_{n,m}$ is the all-ones matrix of size $n \times m$; \item the matrix $\overline{Q} \in \mathbb{R}^{K \times K}$ as \begin{equation*} \overline{Q}_{k_1 k_2} := \widetilde{Q}_{k_1 k_2} \int_0^T \widetilde{\varphi}_{k_1}(t) \, \widetilde{\varphi}_{k_2}(t) \, dt \; ; \end{equation*} \item the matrix $\overline{Q}_V := V^T \overline{Q} V \in \mathbb{R}^{K \times K}$ which can be written as follows: \begin{equation*} \overline{Q}_V = \left( \begin{array}{cc} \overline{Q}_{V, 11} & \overline{Q}_{V, 12} \\[2mm] \overline{Q}_{V, 21} & \overline{Q}_{V, 22} \end{array} \right) \; , \end{equation*} where $\overline{Q}_{V, 11} \in \mathbb{R}^{\sigma \times \sigma}$, $\overline{Q}_{V, 12} \in \mathbb{R}^{\sigma \times (K - \sigma)}$, $\overline{Q}_{V, 21} \in \mathbb{R}^{(K-\sigma) \times \sigma}$ and $\overline{Q}_{V, 22} \in \mathbb{R}^{(K-\sigma) \times (K-\sigma)}$; \item the vector $\widetilde{w} \in \mathbb{R}^K$ as \begin{equation*} \widetilde{w} := \big( w_1 \, J_{1, K_1} \quad \dots \quad w_D \, J_{1, K_D} \big)^T \; ; \end{equation*} \item the vector $\overline{w} \in \mathbb{R}^K$ as \begin{equation*} \overline{w}_k := \widetilde{w}_k \int_0^T \widetilde{\varphi}_k(t) \, dt \; ; \end{equation*} \item the vector $\overline{w}_V := V^T \overline{w} \in \mathbb{R}^K$ which can be written as follows: \begin{equation*} \overline{w}_V = \Big( \overline{w}_{V, 1}^{\ T} \quad \overline{w}_{V, 2}^{\ T} \Big)^T \end{equation*} where $\overline{w}_{V, 1} \in \mathbb{R}^\sigma$ and $\overline{w}_{V, 2} \in \mathbb{R}^{K-\sigma}$. \end{enumerate} \end{definition} \begin{remark} The matrix $\widetilde{Q}$ inherits the symmetry of $Q$. Indeed, since $J_{n, m}^{\ T} = J_{m, n}$ and $Q_{d_1 d_2} = Q_{d_2 d_1}$, we have \begin{equation*} \widetilde{Q}^T = \left( \begin{array}{ccc} Q_{11} \, J_{K_1, K_1}^{\ T} & \ldots & Q_{D1} \, J_{K_D, K_1}^{\ T} \\ \vdots & & \vdots \\ Q_{1D} \, J_{K_1, K_D}^{\ T} & \ldots & Q_{DD} \, J_{K_D, K_D}^{\ T} \end{array} \right) = \left( \begin{array}{ccc} Q_{11} \, J_{K_1, K_1} & \ldots & Q_{1D} \, J_{K_1, K_D} \\ \vdots & & \vdots \\ Q_{D1} \, J_{K_D, K_1} & \ldots & Q_{DD} \, J_{K_D, K_D} \end{array} \right) = \widetilde{Q} \; . \end{equation*} It follows that the matrices $\overline{Q}$ and $\overline{Q}_V$ are also symmetric.
\end{remark} The following lemma provides quadratic formulas for the costs $\check{F}: \mathbb{R}^K \longrightarrow \mathbb{R}$ and $\widetilde{F}: \mathbb{R}^\sigma \longrightarrow \mathbb{R}$ in the present setting. They will be used to establish sufficient conditions under which the optimisation problems are convex. \begin{lemma} \label{lem:ff} Suppose that the function $f$ is of the form \eqref{eq:quad_f}. Then \begin{enumerate} \item we have for all $c \in \mathbb{R}^K$, \begin{equation*} \check{F}(c) = c^T \overline{Q} c + \overline{w}^T c + r T \; ; \end{equation*} \item we have for all $\widetilde{c}_1 \in \mathbb{R}^\sigma$, \begin{equation*} \widetilde{F}(\widetilde{c}_1) = \widetilde{c}_1^T \, \overline{Q}_{V, 11} \, \widetilde{c}_1 + \Big( 2 \, \overline{Q}_{V, 12} \, \widetilde{c}_{2,3} + \overline{w}_{V, 1} \Big)^T \widetilde{c}_1 + \widetilde{c}_{2,3}^{\ T} \, \overline{Q}_{V, 22} \, \widetilde{c}_{2,3} + \overline{w}_{V, 2}^{\ T} \, \widetilde{c}_{2,3} + r T \; , \end{equation*} where $\widetilde{c}_{2,3} := \Big( c_{R_i}^{\ T} V_2 \quad \Gamma^T U \big(S_{A,2}^{-1}\big)^T \Big)^T \in \mathbb{R}^{K - \sigma}$ and $i$ is arbitrarily chosen in $\{1, \dots, I \}$. \end{enumerate} \end{lemma} \begin{proof} Let $c \in \mathbb{R}^K$ and let $y := \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \in \mathcal{Y}_\mathcal{K}$ be its associated trajectory, which can be represented as follows: \begin{equation*} \forall \, d \in \{1, \dots, D\} \qquad y^{(d)} = \sum_{k=1}^{K_d} c_k^{(d)} \, \varphi_k \; . \end{equation*} We also remark that each component of the vector \begin{equation*} c = \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T \end{equation*} can be simply described by a single parameter so that we can write $c = (c_1, c_2, \dots, c_K)^T$. \begin{enumerate} \item By inserting the preceding representation of $y$ into \eqref{eq:quad_f}, we obtain: \begin{align*} f\big( y(t) \big) & = \sum_{d_1, d_2 = 1}^D \sum_{k_1 = 1}^{K_{d_1}} \sum_{k_2 = 1}^{K_{d_2}} Q_{d_1 d_2} \, c_{k_1}^{(d_1)} c_{k_2}^{(d_2)} \, \varphi_{k_1}(t) \varphi_{k_2}(t) + \sum_{d = 1}^D \sum_{k=1}^{K_d} w_d \, c_k^{(d)} \, \varphi_k(t) + r \; , \end{align*} for all $t \in [0,T]$. The next step of the proof consists in changing the indices of the vectors and matrices. Using the above rewriting of $c$ as well as the matrix $\widetilde{Q}$ and the map $\widetilde{\varphi}$ given in \cref{def:misc} provides the following equality: \begin{equation*} \sum_{d_1, d_2 = 1}^D \sum_{k_1 = 1}^{K_{d_1}} \sum_{k_2 = 1}^{K_{d_2}} Q_{d_1 d_2} \, c_{k_1}^{(d_1)} c_{k_2}^{(d_2)} \, \varphi_{k_1}(t) \varphi_{k_2}(t) = \sum_{k_1, k_2 = 1}^{K} \widetilde{Q}_{k_1 k_2} \, c_{k_1} \, c_{k_2} \, \widetilde{\varphi}_{k_1}(t) \widetilde{\varphi}_{k_2}(t) \; . \end{equation*} By similar computations, we obtain \begin{equation*} \sum_{d = 1}^D \sum_{k=1}^{K_d} w_d \, c_k^{(d)} \, \varphi_k(t) = \sum_{k = 1}^{K} \widetilde{w}_k \, c_k \, \widetilde{\varphi}_k(t) \; , \end{equation*} leading to \begin{align*} f\big(y(t) \big) = \sum_{k_1, k_2 = 1}^{K} \widetilde{Q}_{k_1 k_2} \, c_{k_1} \, c_{k_2} \, \widetilde{\varphi}_{k_1}(t) \widetilde{\varphi}_{k_2}(t) + \sum_{k = 1}^{K} \widetilde{w}_k \, c_k \, \widetilde{\varphi}_k(t) + r \; .
\end{align*} Finally, integrating over $[0,T]$ gives \begin{align*} \check{F}(c) & = \int_0^T f\big(y(t)\big) \, dt \\ & = \sum_{k_1, k_2 = 1}^{K} \widetilde{Q}_{k_1 k_2} \int_0^T \widetilde{\varphi}_{k_1}(t) \, \widetilde{\varphi}_{k_2}(t) \, dt \, c_{k_1} \, c_{k_2} + \sum_{k = 1}^{K} \widetilde{w}_k \int_0^T \widetilde{\varphi}_k(t) \, dt \, c_k + r T \\ & = \sum_{k_1, k_2 = 1}^{K} \overline{Q}_{k_1 k_2} \, c_{k_1} \, c_{k_2} + \sum_{k = 1}^{K} \overline{w}_k \, c_k + r T \\ & = c^T \overline{Q} c + \overline{w}^T c + r T \; . \end{align*} \item By the definition of $\widetilde{F}$ given in \cref{subsec:modelling}, we have \begin{equation*} \widetilde{F}(\widetilde{c}_1) = \check{F}\Big( V \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big)^T \Big) \; . \end{equation*} We now use the result of the preceding point and the definitions of the matrix $\overline{Q}_V$ and the vector $\overline{w}_V$ to obtain \begin{align*} \widetilde{F}(\widetilde{c}_1) & = \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big) \big( V^T \overline{Q} V \big) \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big)^T + \big( V^T \overline{w} \big)^T \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big)^T + r T \\ & = \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big) \overline{Q}_V \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big)^T + \overline{w}_V^T \big( \widetilde{c}_1^T \quad \widetilde{c}_{2,3}^{\ T} \big)^T + r T \\ & = \widetilde{c}_1^T \, \overline{Q}_{V, 11} \, \widetilde{c}_1 + \widetilde{c}_1^T \, \overline{Q}_{V, 12} \, \widetilde{c}_{2,3} + \widetilde{c}_{2,3}^{\ T} \, \overline{Q}_{V, 21} \, \widetilde{c}_1 + \widetilde{c}_{2,3}^{\ T} \, \overline{Q}_{V, 22} \, \widetilde{c}_{2,3} \\ & \hspace{1cm} + \overline{w}_{V, 1}^{\ T} \, \widetilde{c}_{1} + \overline{w}_{V, 2}^{\ T} \, \widetilde{c}_{2,3} + r T \; . \end{align*} Rearranging the preceding terms and using the fact that $\overline{Q}_V$ is symmetric gives the result. \end{enumerate} \end{proof} In the present setting, the optimisation problem \eqref{eq:opt_proj} is then equivalent to the following quadratic one: \begin{equation} \label{eq:opt_proj_quad} \left\{ \begin{array}{l} \displaystyle \widetilde{c}_1^\star \in \argmin_{\widetilde{c}_1 \in \mathbb{R}^\sigma} \sum_{i=1}^I \omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) \\ \hspace{2cm} + \kappa \, \Big( \widetilde{c}_1^T \, \overline{Q}_{V, 11} \, \widetilde{c}_1 + \Big( 2 \, \overline{Q}_{V, 12} \, \widetilde{c}_{2,3} + \overline{w}_{V, 1}^{\ T} \Big)^T \widetilde{c}_1 \Big) \\[2mm] \widetilde{c}_2 = V_2^T c_{R_i} \\[2mm] \widetilde{c}_3 = S_{A,2}^{-1} \, U^T \Gamma \end{array} \; . \right. \end{equation} In the following result, we provide sufficient conditions on the parameter $\kappa > 0$ so that the problem \eqref{eq:opt_proj_quad} is a quadratic program. The proof uses the fact that the symmetric matrix associated with the quadratic objective function is now explicit and given by the sum of two matrices. A perturbation result for matrices is then applied to obtain a bound for $\kappa$ ensuring that the symmetric matrix is positive semidefinite.
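Before stating the result, the following small numerical sketch illustrates the eigenvalue bound underlying the proof; the matrices are generic synthetic stand-ins for $\overline{Q}_{V,11}$ and $\Lambda_{\Sigma,1}^{-1}$, not quantities estimated from flight data:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma = 4
B = rng.standard_normal((sigma, sigma))
Q = (B + B.T) / 2                       # stands in for Qbar_{V,11}
Q -= (np.linalg.eigvalsh(Q)[0] + 1.0) * np.eye(sigma)  # force rho_sigma = -1
lam = np.sort(rng.uniform(0.5, 2.0, sigma))[::-1]      # eigenvalues of Sigma
L_inv = np.diag(1.0 / lam)              # stands in for Lambda_{Sigma,1}^{-1}

rho_sigma = np.linalg.eigvalsh(Q)[0]    # smallest eigenvalue of Q (= -1 here)
kappa_max = -1.0 / (lam[0] * rho_sigma)
for kappa in (0.5 * kappa_max, 0.9 * kappa_max):
    mu_min = np.linalg.eigvalsh(kappa * Q + L_inv)[0]
    # Weyl's inequality: mu_min >= kappa*rho_sigma + 1/lambda_1 > 0
    assert mu_min >= kappa * rho_sigma + 1.0 / lam[0] - 1e-9
    print(f"kappa = {kappa:.3f} -> smallest eigenvalue {mu_min:.3f} > 0")
\end{verbatim}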
\begin{theorem} \label{thm:quad_prog} Let $\rho_1 \geqslant \rho_2 \geqslant \dots \geqslant \rho_\sigma$ and $\lambda_1 \geqslant \lambda_2 \geqslant \dots \geqslant \lambda_K$ be respectively the eigenvalues of the symmetric matrices $\overline{Q}_{V, 11}$ and $\Sigma$. \begin{enumerate} \item If $\rho_\sigma \geqslant 0$ then the optimisation problem \eqref{eq:opt_proj_quad} is a quadratic program for any $\kappa > 0$. \item If $\rho_\sigma < 0$ then the optimisation problem \eqref{eq:opt_proj_quad} is a quadratic program for any $\kappa > 0$ satisfying \begin{equation*} \kappa < - \frac{1}{\lambda_1 \, \rho_\sigma} \; . \end{equation*} \end{enumerate} \end{theorem} \begin{proof} We first note that all the eigenvalues of the matrix $\Sigma$ are non-negative (because $\Sigma$ is a covariance matrix) and that $\lambda_{\sigma + 1} = \dots = \lambda_K = 0$ (because $\rk \Sigma = \sigma$). In particular, the eigenvalue $\lambda_1$ is positive.\\ Standard calculations show that the symmetric matrix associated with the quadratic objective function of the problem \eqref{eq:opt_proj_quad} is given by \begin{equation*} M(\kappa) := \kappa \, \overline{Q}_{V, 11} + \Lambda_{\Sigma, 1}^{-1} \in \mathbb{R}^{\sigma \times \sigma} \; . \end{equation*} Let $\mu_1(\kappa) \geqslant \mu_2(\kappa) \geqslant \dots \geqslant \mu_\sigma(\kappa)$ denote the eigenvalues of $M(\kappa)$. Our goal is to prove that $\mu_\sigma(\kappa)$ is non-negative to ensure that $M(\kappa)$ is positive semidefinite. Since $M(\kappa)$ can be interpreted as a perturbed version of $\Lambda_{\Sigma, 1}^{-1}$, we can apply Weyl's inequality (see for instance \citet{wang201965}), which implies \begin{equation*} \kappa \, \rho_\sigma + \frac{1}{\lambda_1} \leqslant \mu_\sigma(\kappa) \; . \end{equation*} We now distinguish the two cases. \begin{enumerate} \item If $\rho_\sigma \geqslant 0$ (\emph{i.e.} the matrix $\overline{Q}_{V, 11}$ is positive semidefinite), then $\mu_\sigma(\kappa)$ is positive for any $\kappa > 0$. \item If $\rho_\sigma < 0$ and $\kappa \leqslant - \frac{1}{\lambda_1 \, \rho_\sigma}$, then $\mu_\sigma(\kappa)$ is non-negative. \end{enumerate} \end{proof} For the sake of completeness, we finish by rewriting the problem \eqref{eq:opt_proj_quad} as a quadratic optimisation problem in $\mathcal{V}_1 \subset \mathbb{R}^K$ with an affine constraint function. \begin{proposition} \label{prop:quad_min_pb} Suppose that the function $f$ is of the form \eqref{eq:quad_f}. Then the optimisation problem \eqref{eq:opt_proj_quad} is equivalent to the following one: \begin{equation} \label{eq:quad_min_pb} c^\star \in \argmin_{c \in \mathcal{V}_1} c^T \Big( \kappa \, \overline{Q} + \Sigma^\dagger \Big) c + \left( \kappa \, \overline{w} - 2 \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c \; . \end{equation} \end{proposition} \begin{proof} As explained above and according to \cref{prop:opt_equiv}, the problem \eqref{eq:opt_proj_quad} is equivalent to the problem \eqref{eq:opt_proj_2} in the present setting. Hence it is sufficient to show that the objective functions $g_1, g_2: \mathbb{R}^K \longrightarrow \mathbb{R}$ defined as \begin{itemize} \item $\displaystyle g_1(c) := \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \Sigma^\dagger \big( c - c_{R_i} \big) + \kappa \, \check{F}(c)$ , \item $\displaystyle g_2(c) := c^T \Big( \kappa \, \overline{Q} + \Sigma^\dagger \Big) c + \left( \kappa \, \overline{w} - 2 \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c$ .
\end{itemize} have the same minimisers. First, standard calculations give \begin{align*} \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \Sigma^\dagger \big( c - c_{R_i} \big) & = \sum_{i=1}^I \omega_i \, c^T \Sigma^\dagger c - 2 \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c + \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c_{R_i} \\ & = c^T \Sigma^\dagger c - \left( 2 \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c + \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c_{R_i} \; , \end{align*} for any $c \in \mathbb{R}^K$, where we have used $\sum_{i=1}^I \omega_i = 1$. Combining this equality with \cref{lem:ff} implies \begin{align*} g_1(c) & = c^T \Sigma^\dagger c - \left( 2 \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c + \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c_{R_i} + \kappa \, \Big( c^T \overline{Q} c + \overline{w}^T c + r T \Big) \\ & = c^T \Big( \kappa \, \overline{Q} + \Sigma^\dagger \Big) c + \left( \kappa \, \overline{w} - 2 \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c + \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c_{R_i} + \kappa \, r T \\ & = g_2(c) + \sum_{i=1}^I \omega_i \, c_{R_i}^{\ T} \Sigma^\dagger c_{R_i} + \kappa \, r T \; . \end{align*} Since the last two terms of the final right-hand side do not depend on $c$, we deduce that the objective functions $g_1$ and $g_2$ have the same minimisers. \end{proof} \subsection{Modelling} \label{subsec:appli_aero_model} Here the trajectories are supposed to lie in a vertical plane and are defined by the altitude $h$, the Mach number $\mathrm{M}$ and the engines rotational speed $\mathrm{N1}$ (expressed as a percentage of a maximal value). A trajectory $y$ in this setting is thus a continuous $\mathbb{R}^3$-valued map defined on $[0,T]$, where $T$ is a maximal climb duration fixed by the user: \begin{equation*} \forall \, t \in [0,T] \qquad y(t) := \big( h(t), \mathrm{M}(t), \mathrm{N1}(t) \big) \; . \end{equation*} The quantity to minimise is the total fuel consumption $\mathrm{TFC}: \mathcal{C}\big( [0,T], \mathbb{R}^3 \big) \longrightarrow \mathbb{R}_+$, which is defined via the fuel flow $\mathrm{FF}: \mathbb{R}^3 \longrightarrow \mathbb{R}_+$ as follows\footnote{In the notation of \cref{subsec:quad_model}, $\mathrm{FF}$ and $\mathrm{TFC}$ play respectively the role of $f$ and $F$.}: \begin{equation*} \mathrm{TFC}(y) := \int_0^T \mathrm{FF}\big(y(t)\big) \, dt \; . \end{equation*} Regarding the endpoints conditions, we require the trajectory to start at the altitude $h_0$ with Mach number $\mathrm{M}_0$ and to end at the altitude $h_T$ with Mach number $\mathrm{M}_T$. In particular, the reference trajectories we use have to verify these conditions. We also consider additional constraints which are conventional in the aeronautic setting: \begin{itemize} \item The rate of climb, \emph{i.e.} the time-derivative of the altitude, has to be upper bounded by a given maximal value $\gamma_{max}$ during the whole climb; \item The Mach number should not exceed a certain value called the maximum operational Mach ($\mathrm{MMO}$). \end{itemize} The final time of the climb is given by $T^\star \in [0,T]$, which is the first time at which the aircraft reaches $h_T$ with Mach number $\mathrm{M}_T$. Finally, we mention that the fuel flow model $\mathrm{FF}$ is here estimated.
To do so, we exploit the reference trajectories, which contain recorded altitude, Mach number, engines rotational speed and fuel flow for each second of the flight. Having access to these data, we are in a position to fit a statistical model. Following the numerical results in \cite{dewez2020industrywide}, which show that polynomials can accurately model aeronautic variables, we consider a polynomial model of degree 2 for the fuel flow. In particular, the requirements for the cost function in the current version of \texttt{PyRotor} are fulfilled. The prediction accuracy of the resulting estimated model is assessed in the following subsection. \subsection{Numerical results} We now present numerical results based on real flight data for the above aeronautic problem. Here we have access to 2,162 recorded short and medium-haul flights performed by the same narrow-body airliner type, provided by a partner airline; for commercial reasons, these data cannot be publicly released. The data was recorded by the Quick Access Recorder (QAR). Before considering the optimisation setting, we estimate a fuel flow model specific to the climb phase and to the considered airliner type. To do so, we extract the signals of the four variables of interest (altitude, Mach number, engines rotational speed and fuel flow) and keep the observations from the take-off to the beginning of the cruise, without level-off phases. Smoothing splines are then applied to the raw signals to remove the noise. We sample every 5 seconds to reduce the data set size without strongly impacting the accuracy of the resulting models. In the end, we obtain 494,039 observations which are randomly split into training and test sets to fit a polynomial model of degree 2 using the \texttt{scikit-learn} library. The RMSE and MAPE values of this model on the test set are respectively equal to $3.64 \times 10^{-2}$ kg.s$^{-1}$ and 1.73\%. Regarding the optimisation, we are interested in climb phases from 3,000~ft to 38,000~ft. We mention that we remove lower altitudes because operational procedures heavily constrain the trajectory during the very beginning of the climb. Further, the initial and final Mach numbers are required to be equal to 0.3 and 0.78. It is noteworthy that the optimisation solvers used in \texttt{PyRotor} allow linear inequality conditions, which permits slightly relaxing the endpoints conditions. Here we tolerate an error of 100~ft for the altitude and an error of 0.01 for the Mach number. The initial and final $\mathrm{N1}$ values are left unconstrained. Finally, $\mathrm{MMO}$ and $\gamma_{max}$ are respectively set to 0.82 and 3,600~ft.min$^{-1}$. The reference trajectories are given by the 48 recorded flights which satisfy the above climb endpoints conditions among the 2,162 available ones. All these selected flights are used to estimate the covariance matrix involved in the optimisation problem. On the other hand, we use only the 5 most fuel-efficient flights in the objective function, so as to focus on a domain containing the most efficient recorded flights. Further, the maximal duration $T$ is here fixed to the duration of the longest climb among these 5 most fuel-efficient flights. Legendre polynomials are used as the functional basis spanning the space in which the trajectories lie. Since we consider narrow-body airliners, polynomials are expected to be relevant to describe the slow variations of such aircraft.
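To make the projection step concrete, here is a minimal sketch (in Python with \texttt{numpy}; the helper names are ours, not the \texttt{PyRotor} API, and the signals are assumed to be sampled on a time grid \texttt{t}) of the computation of the coefficients \eqref{eq:def_c} of one component $y^{(d)}$ in the orthonormal Legendre basis on $[0,T]$:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_basis(K, t, T):
    # Orthonormal Legendre basis on [0, T]: phi_k, k = 1, ..., K,
    # evaluated on the time grid t; returns an array of shape (K, len(t)).
    x = 2.0 * t / T - 1.0                  # map [0, T] onto [-1, 1]
    return np.stack([np.sqrt((2 * k - 1) / T) * legval(x, np.eye(K)[k - 1])
                     for k in range(1, K + 1)])

def project_component(y_d, t, T, K):
    # c_k = <y^(d), phi_k>, approximated by the trapezoidal rule on t.
    phi = legendre_basis(K, t, T)
    dt = np.diff(t)
    w = np.concatenate(([dt[0]], dt[:-1] + dt[1:], [dt[-1]])) / 2.0
    return phi @ (w * y_d)
\end{verbatim}
Stacking such coefficients for the three components gives the reference vectors $c_{R_i}$ used below.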
Here the dimensions associated with the altitude, the Mach number and the engines rotational speed are given respectively by 4, 10 and 6. The reference vectors $c_{R_i}$ are then computed using the formula \eqref{eq:def_c}. In the end, this amounts to solving a constrained optimisation problem in a space of dimension 20. We are then in a position to apply the optimisation method developed in \cref{sec:opti} using the \texttt{PyRotor} library. First of all, a relevant value for $\nu_{max} > 0$ has to be fixed. In order to propose a realistic optimised climb, we choose a relatively small $\nu_{max}$ so that the optimised climb remains close to the reference ones. In particular, the quadratic objective function in \eqref{eq:opt_proj_3} turns out to be convex for all $\nu \in [0,\nu_{max}]$, which permits using the quadratic programming solver from the \texttt{CVXOPT} library imported in \texttt{PyRotor}. The preprocessing of the reference trajectories and the optimisation steps have been executed 100 times using \texttt{PyRotor} on a 6-core Intel Core i7 running at 2.2~GHz. The mean execution time for both steps is 3.76~s with standard deviation 0.11~s, illustrating that the library is time-efficient in this setting. A plot of the optimised trajectory obtained using \texttt{PyRotor} is given in \cref{fig:opti_climb}. We observe that the optimised trajectory seeks to reach the maximum altitude in the minimum amount of time; this is in accordance with the existing literature (see for instance \citet{dalmau2014} and references therein). In particular, the duration $T^\star$ is equal to 1,033 seconds, which is actually slightly shorter than the reference durations. We note also that the optimised Mach number shares a very similar pattern with the references. On the other hand, the optimised engines rotational speed tends to decrease slowly until the cruise regime before reaching the top of climb. This is not the case for the reference engines speed, which falls to the cruise regime just after reaching the final altitude. Most of the savings seem to be achieved in these last moments of the climb. Last but not least, the optimised trajectory presents a realistic pattern inherited from the reference trajectories. For a quantitative comparison, we refer to \cref{tab:savings}, which provides statistical information on the fuel savings. The mean savings of 16.54\%, together with the fact that the optimised trajectory verifies the additional constraints, show that these first results are promising and motivate further studies. For instance, one could model environmental conditions or take into account Air Traffic Control constraints for more realistic models.
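For the convex case just described, the following minimal sketch (in Python; the matrices are assumed to be precomputed and the names are illustrative, so this is not the actual \texttt{PyRotor} implementation) solves a quadratic program of the form \eqref{eq:quad_min_pb} with the \texttt{CVXOPT} solver:
\begin{verbatim}
import numpy as np
from cvxopt import matrix, solvers

def solve_quadratic_trajectory(Q_bar, w_bar, Sigma_pinv, c_refs, weights,
                               kappa, G, h, A_eq, b_eq):
    # Minimise c'(kappa*Q_bar + Sigma_pinv)c + q'c subject to
    # G c <= h (e.g. relaxed endpoints, MMO, rate of climb) and
    # A_eq c = b_eq (membership in the affine subspace V_1).
    P = 2.0 * (kappa * Q_bar + Sigma_pinv)  # CVXOPT minimises (1/2)c'Pc + q'c
    q = kappa * w_bar - 2.0 * sum(w * (Sigma_pinv @ c)
                                  for w, c in zip(weights, c_refs))
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h),
                     matrix(A_eq), matrix(b_eq))
    return np.array(sol['x']).ravel()
\end{verbatim}
Note the factor 2 on $P$: \texttt{CVXOPT} includes a $1/2$ factor in the quadratic term of its objective.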
\begin{table} \caption{\label{tab:savings}Statistical description of the fuel savings of the optimised trajectory -- The savings are computed with respect to the 48 recorded flights satisfying the present endpoints conditions and the total consumption of the optimised trajectory is estimated using the statistical model for the fuel flow -- $Q_1$, $Q_2$ and $Q_3$ refer to the first, second and third quartiles.} \centering \fbox{% \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l*{6}{c}r} & Mean & Std & Min & $Q_1$ & $Q_2$ & $Q_3$ & Max \\ \hline Fuel savings [kg] & 260.38 & 86.21 & 71.79 & 202.40 & 261.87 & 330.32 & 393.73 \\ Percentage [\%] & 16.54 & 4.73 & 5.27 & 13.56 & 16.88 & 20.39 & 23.39 \end{tabular*}} \end{table} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/altitude_opti.png} \includegraphics[width=\textwidth]{figures/mach_opti.png} \includegraphics[width=\textwidth]{figures/n1_opti.png} \caption{Optimised and reference altitudes, Mach numbers and engines rotational speeds -- The optimised trajectory is represented by the blue curves.} \label{fig:opti_climb} \end{figure} \subsection{Modelling} \label{subsec:appli_phys_nautic} To model this problem, we suppose without loss of generality that the trajectories are defined on the (time-)interval $[0,1]$ and we let $V: \mathbb{R}^D \longrightarrow \mathbb{R}^D$ denote a vector field. Furthermore, the trajectories are assumed here to be continuously differentiable, \emph{i.e.} they belong to $\mathcal{C}^1\big([0,1], \mathbb{R}^D \big)$. The work of $V$ along a trajectory $y \in \mathcal{C}^1\big( [0,1], \mathbb{R}^D \big)$ is defined as \begin{equation*} W(y, \dot{y}) := \int_0^1 V\big(y(t)\big)^T \dot{y}(t) \, dt \; ; \end{equation*} here $\dot{y}$ denotes the derivative of $y$ with respect to the independent variable $t$. Moreover, using Hamilton's principle in Lagrangian mechanics, it can be shown that the trajectory with constant velocity (\emph{i.e.} a straight line travelled at constant speed) is the minimiser of the following functional, \begin{equation*} J(\dot{y}) = \int_0^1 \big\| \dot{y}(t) \big\|_2^2 \, dt \; , \end{equation*} where the starting and ending points of $y$ are fixed and different. This functional can then be used to control the travelled distance. It follows that minimising the cost function \begin{equation*} F_\alpha(y, \dot{y}) := \alpha J(\dot{y}) - W(y, \dot{y}) = \int_0^1 \alpha \big\| \dot{y}(t) \big\|_2^2 - V\big(y(t)\big)^T \dot{y}(t) \, dt \; , \end{equation*} where $\alpha \geqslant 0$ is arbitrarily chosen, is expected to lead to an optimised trajectory reflecting a trade-off between maximising the work and minimising the travelled distance. Further, we require the trajectory to stay in the hypercube $[0,1]^D$ and to start and to end respectively at $y_0 \in [0,1]^D$ and $y_1 \in [0,1]^D$. Now we remark that the above cost function involves the (time-)derivative $\dot{y}$. So one has to derive a formula to compute the derivative of any trajectory $y = \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \in \mathcal{Y}_\mathcal{K}$ from its associated vector $c \in \mathbb{R}^K$, in particular to compute $\check{F}(c)$. For instance, this can easily be achieved by assuming that each element of the functional basis is continuously differentiable. Indeed, we can in this case differentiate any $y \in \mathcal{Y}_\mathcal{K}$: \begin{equation*} \forall \, d = 1, \dots, D \qquad \dot{y}^{(d)} = \sum_{k=1}^{K_d} c_k^{(d)} \dot{\varphi}_k = \left( \frac{d}{dt} \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \right)^{(d)} \; .
\end{equation*} We then deduce the following formula for $\check{F}(c)$ in the present setting: \begin{equation*} \check{F}(c) := F_\alpha\left( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c, \frac{d}{dt} \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \right) \; . \end{equation*} Here the vector $c$ contains information on both position and velocity, which in particular keeps the problem dimension unchanged. To finish, let us remark that it is possible to make the above formula for $\check{F}$ explicit with respect to $c$ in certain settings. For instance, it is possible to derive an explicit quadratic formula for $\check{F}(c)$ when the integrand defining $F_\alpha$ is quadratic with respect to $y(t)$ and $\dot{y}(t)$; this formula is implemented in \texttt{PyRotor} and the arguments to obtain it are similar to those proving \cref{prop:quad_min_pb}. \subsection{Numerical results} Numerical results based on randomly generated data for the above physical application are presented in this section. First of all, we consider trajectories with two components $y^{(1)}$ and $y^{(2)}$ lying in the square $[0,1]^2$ for the sake of simplicity. We set the starting and ending points as follows: \begin{equation*} y^{(1)}(0) = 0.111 \quad , \quad y^{(2)}(0) = 0.926 \quad , \quad y^{(1)}(1) = 0.912 \quad , \quad y^{(2)}(1) = 0.211 \end{equation*} with a tolerated error of $1 \times 10^{-4}$, and the vector field $V: \mathbb{R}^2 \longrightarrow \mathbb{R}^2$ is here defined by \begin{equation*} V\Big(x^{(1)}, x^{(2)}\Big) = \Big( 0, x^{(1)} \Big)^T \; . \end{equation*} Given the above endpoints and the vector field, we observe that the force modelled by $V$ will be on average a force resisting the motion. Indeed, the force is oriented towards the top of the square while the moving point has to go downwards. Further, let us note that the integrand of the cost function $F_\alpha$ in the present setting is actually quadratic with respect to $y(t)$ and $\dot{y}(t)$, so that the explicit quadratic formula for $\check{F}(c)$ implemented in \texttt{PyRotor} is available. Here the reference trajectories are obtained through a random generation process. To do so, we define an arbitrary trajectory $y_R$ verifying the endpoints conditions and we compute its associated vector $c_R$; Legendre polynomials are once again used and the dimensions of $y^{(1)}$ and $y^{(2)}$ are here set to 4 and 6. Let us note that $y_R$ is designed in such a way that it has a relevant pattern but not the optimal one. Then we construct a set of reference trajectories by adding centered Gaussian noises to $c_R$. It is noteworthy that the noise is generated in such a way that it belongs to the null space of the matrix $A$ describing the endpoints conditions; the resulting noisy trajectories then satisfy these conditions. Further, the trajectories which go out of the square $[0,1]^2$ are discarded. In the end, we get 122 generated reference trajectories assumed to be realistic in this setting, each of them containing 81 time observations. Among these reference trajectories, we use the 10 most efficient ones with respect to the cost $F_\alpha$. In the present example, we set a relatively large $\nu_{max}$ to explore a large domain around the reference trajectories. In this case, the objective function of the optimisation problem \eqref{eq:opt_proj_3} may not be convex even though it is still quadratic. So we make use of the generic optimisation solver \texttt{minimize(method='trust-constr')} imported in \texttt{PyRotor}.
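As an illustration, a minimal sketch of such a call (in Python with \texttt{scipy}, from which this solver comes; the objective and constraint data are placeholders rather than the \texttt{PyRotor} internals) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def optimise_non_convex(cost, c0, A_endpoints, gamma, tol=1e-4):
    # cost: callable returning the (possibly non-convex) quadratic
    # objective at a coefficient vector c; A_endpoints c = gamma encodes
    # the endpoints conditions, relaxed by the tolerated error tol.
    endpoints = LinearConstraint(A_endpoints, gamma - tol, gamma + tol)
    result = minimize(cost, c0, method='trust-constr',
                      constraints=[endpoints])
    return result.x
\end{verbatim}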
Regarding the execution time, we have randomly and uniformly generated 100 values in the interval $[0,10]$ for the parameter $\alpha$ and executed \texttt{PyRotor} for each of them. The mean \texttt{PyRotor} execution time is 0.44~s with standard deviation 0.03~s on a 6-core Intel Core i7 running at 2.2~GHz. In \cref{fig:opti_work_velocity}, we plot 4 optimised trajectories associated with different values of $\alpha$: 0, 0.35, 1 and 10. As expected, the largest value of $\alpha$ gives the straightest trajectory, while the most curved one is associated with $\alpha = 0$. In particular, the latter tends to move to the left at the beginning, where the force $V$ is the smallest, before going to the ending point in a nearly straight line so that the force is perpendicular to the motion. This example especially illustrates that our optimisation approach may lead to optimised trajectories which differ from the reference ones in order to further reduce the cost. A quantitative comparison in terms of work gains for different values of $\alpha$ is provided in \cref{tab:max_work}. The results confirm the above observations on the curves and show that an appropriate value for $\alpha$ has to be chosen depending on the setting. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/opti_work_velocity.png} \caption{Optimised trajectories in the square $[0,1]^2$ for $\alpha \in \{0, 0.35, 1, 10 \}$ -- Optimised and reference trajectories are respectively given by solid and dotted curves -- Coloured dots indicate the power of the force at different points of the optimised trajectories and the bar shows the scale -- Red arrows represent the pattern of the vector field $V$.} \label{fig:opti_work_velocity} \end{figure} \begin{table} \caption{\label{tab:max_work}Statistical description of the work gains in percentage for $\alpha \in \{0, 0.35, 1, 10 \}$ -- The values have been computed using the 122 available reference trajectories -- Negative percentages indicate that no work gains have been obtained -- $Q_1$, $Q_2$ and $Q_3$ refer to the first, second and third quartiles.} \centering \fbox{ \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l*{6}{c}r} & Mean & Std & Min & $Q_1$ & $Q_2$ & $Q_3$ & Max \\ \hline $\alpha = 0$ & 73.43 & 2.36 & 68.63 & 71.90 & 73.25 & 74.67 & 80.69 \\ $\alpha = 0.35$ & 45.88 & 4.81 & 36.09 & 42.75 & 45.49 & 48.39 & 60.66 \\ $\alpha = 1$ & $-6.12$ & 9.43 & $-25.31$ & $-12.26$ & $-6.88$ & $-1.20$ & 22.87 \\ $\alpha = 10$ & $-34.54$ & 11.96 & $-58.87$ & $-42.32$ & $-35.50$ & $-28.30$ & 2.22 \\ \end{tabular*}} \end{table} \usepackage[top=3cm, bottom=3cm, left=3cm, right=3cm]{geometry} \usepackage{booktabs} \usepackage{xcolor} \usepackage{latexsym} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{multirow} \usepackage{tikz} \usetikzlibrary{arrows,positioning,shapes} \usepackage{tcolorbox} \usepackage[english]{babel} \RequirePackage[% pdfstartview=FitH,% breaklinks=true,% bookmarks=true,% colorlinks=true,% linkcolor= blue, anchorcolor=blue,% citecolor=blue, filecolor=blue,% menucolor=blue,% urlcolor=blue% ]{hyperref} \AtBeginDocument{% \hypersetup{% pdfauthor={Florent Dewez, Benjamin Guedj, Arthur Talpaert and Vincent Vandewalle},% urlcolor = blue,% linkcolor = blue,% citecolor = orange,% pdftitle={Title - compilation: \today}% } } \newcommand{\paren}[1]{\left( #1 \right)} \newcommand{\croch}[1]{\left[\, #1 \,\right]} \newcommand{\acc}[1]{\left\{ #1 \right\}}
\newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\norm}[1]{\left\Vert #1 \right\Vert} \newcommand{\todo}[1]{\textbf{\color{red}{[TODO: #1]}}} \usepackage[mathcal]{eucal} \usepackage{cleveref} \crefname{assumption}{Assumption}{Assumptions} \crefname{equation}{Eq.}{Eqs.} \crefname{figure}{Fig.}{Figs.} \crefname{table}{Table}{Tables} \crefname{section}{Sec.}{Secs.} \crefname{theorem}{Thm.}{Thms.} \crefname{lemma}{Lemma}{Lemmas} \crefname{corollary}{Cor.}{Cors.} \crefname{example}{Example}{Examples} \crefname{appendix}{Appendix}{Appendices} \crefname{remark}{Remark}{Remarks} \renewenvironment{proof}[1][\proofname]{{\bfseries #1.}}{\qed \\ } \makeatother \newcommand{\note}[1]{{\textbf{\color{red}#1}}} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{definition}[theorem]{Definition} \newtheorem{attempt}[theorem]{Attempt} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{property}[theorem]{Property} \newtheorem{properties}[theorem]{Properties} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{remark}[theorem]{Remark} \newtheorem{warning}[theorem]{\textcolor{red}{Warning}} \newtheorem{example}[theorem]{Example} \newtheorem{examples}[theorem]{Examples} \bibliographystyle{plainnat} \DeclareMathOperator*{\argmin}{arg\,min \:} \DeclareMathOperator*{\rk}{\text{rank} \,} \DeclareMathOperator*{\im}{\text{Im} \,} \usepackage{arydshln} \begin{document} \title{An end-to-end data-driven optimisation framework for constrained trajectories} \author{\textbf{Florent Dewez} \\ [2ex] Inria, Lille - Nord Europe Research centre, France \\\\ \textbf{Benjamin Guedj} \\ [2ex] Inria, Lille - Nord Europe Research centre, France \\ \emph{and} Centre for Artificial Intelligence,\\ Department of Computer Science,\\ University College London, United Kingdom \\\\ \textbf{Arthur Talpaert} \\ [2ex] Inria, Lille - Nord Europe Research centre, France \\\\ \textbf{Vincent Vandewalle} \\ [2ex] Inria, Lille - Nord Europe Research centre\\ \emph{and} Université de Lille, France\\ } \date{} \maketitle \input{abstract} \tableofcontents \section{Introduction} \label{sec:intro} \input{intro.tex} \section{An end-to-end optimisation workflow based on observed trajectories} \label{sec:opti} \input{optimisation.tex} \section{The Python library PyRotor} \label{sec:pyrotor} \input{pyrotor} \section{Application 1: trajectory optimisation for fuel-efficient aircraft} \label{sec:appli_aero} \input{appli_aero} \section{Application 2: trajectory optimisation to maximise the work of a force field} \label{sec:appli_nautic} \input{appli_nautic} \section{Conclusion / Discussion} \input{conclusion} \section*{Acknowledgements} \input{acknowledgments} \subsection{Fuel-efficient trajectories for aircraft} \label{subsec:appli_aero_model} In this subsection, we consider the aeronautic problem of reducing the total fuel consumption of an aircraft during the climb phase. This example illustrates the key role played by the reference trajectories since we are able to obtain very promising optimised trajectories thanks to a simple modelling involving few constraints; we refer to \cref{sec:appli_aero} for the numerical results.
This motivates further studies on this problem using our methodology. For instance, one could model environmental conditions or take into account Air Traffic Control constraints for more realistic models. Here the trajectories are supposed to lie in a vertical plane and are defined by the altitude $h$, the Mach number $\mathrm{M}$ and the engines rotational speed $\mathrm{N1}$ (expressed as a percentage of a maximal value). A trajectory $y$ in this setting is thus a continuous $\mathbb{R}^3$-valued function defined on $[0,T]$, where $T$ is a maximal climb duration fixed by the user: \begin{equation*} \forall \, t \in [0,T] \qquad y(t) := \big( h(t), \mathrm{M}(t), \mathrm{N1}(t) \big) \; . \end{equation*} The quantity to minimise is the total fuel consumption $\mathrm{TFC}: \mathcal{C}\big( [0,T], \mathbb{R}^3 \big) \longrightarrow \mathbb{R}_+$, which is defined via the fuel flow $\mathrm{FF}: \mathbb{R}^3 \longrightarrow \mathbb{R}_+$ as follows\footnote{In the notation of \cref{subsec:quad_model}, $\mathrm{FF}$ and $\mathrm{TFC}$ play respectively the role of $f$ and $F$.}: \begin{equation*} \mathrm{TFC}(y) := \int_0^T \mathrm{FF}\big(y(t)\big) \, dt \; . \end{equation*} Regarding the endpoints conditions, we require the trajectory to start at the altitude $h_0$ with Mach number $\mathrm{M}_0$ and to end at the altitude $h_T$ with Mach number $\mathrm{M}_T$. In particular, the reference trajectories we use have to verify these conditions. We also consider two additional constraints which are conventional in the aeronautic setting: \begin{itemize} \item The rate of climb, \emph{i.e.} the time-derivative of the altitude, has to be upper bounded by a given maximal value $\gamma_{max}$ during the whole climb; \item The Mach number should not exceed the maximum operational Mach ($\mathrm{MMO}$). \end{itemize} We also consider a third constraint which permits determining the duration of the climb: if we let $T^\star \in [0,T]$ denote the first time at which the aircraft reaches $h_T$, then we require $\mathrm{M}(T^\star)$ to be equal to $\mathrm{M}_T$. Note that this constraint is satisfied by the reference trajectories. Finally, we mention that the fuel flow model $\mathrm{FF}$ is here estimated. To do so, we exploit the reference trajectories which contain recorded altitude, Mach number, engines rotational speed and fuel flow for each second of the flight. Having access to these data, we are in a position to fit a statistical model. Here we consider a polynomial model of degree 2 so as to obtain a quadratic program, as explained in \cref{subsec:quad_model}. The prediction accuracy of the resulting estimated model is assessed in \cref{sec:appli_aero}. \subsection{Maximising the work of a physical force along a path} \label{subsec:appli_phys_nautic} Here we consider the following generic example: given a moving point in a force field, find a trajectory starting and ending at two different given points which maximises the work of the force along the trajectory while minimising the travelled distance. For instance, one may think of the motion of a point in a wind field, as in nautical applications. This second example demonstrates that our generic optimisation approach is flexible enough to take into account derivatives of trajectories and hence to cover dynamical settings. To model this problem, we suppose without loss of generality that the trajectories are defined on the (time-)interval $[0,1]$ and we let $V: \mathbb{R}^D \longrightarrow \mathbb{R}^D$ denote a vector field. Furthermore, the trajectories are assumed here to be continuously differentiable, \emph{i.e.} they belong to $\mathcal{C}^1\big([0,1], \mathbb{R}^D \big)$.
The work of $V$ along a trajectory $y \in \mathcal{C}^1\big( [0,1], \mathbb{R}^D \big)$ is defined as \begin{equation*} W(y, \dot{y}) := \int_0^1 V\big(y(t)\big)^T \dot{y}(t) \, dt \; ; \end{equation*} here $\dot{y}$ denotes the derivative of $y$ with respect to the independent variable $t$. Moreover, using Hamilton's principle in Lagrangian mechanics, it can be shown that the trajectory with constant velocity is the minimiser of the following functional, \begin{equation*} J(\dot{y}) = \int_0^1 \big\| \dot{y}(t) \big\|_2^2 \, dt \; , \end{equation*} where the starting and ending points of $y$ are fixed and different. This functional can then be used to control the velocity of the moving point. It follows that minimising the cost function \begin{equation*} F_\alpha(y, \dot{y}) := \alpha J(\dot{y}) - W(y, \dot{y}) = \int_0^1 \alpha \big\| \dot{y}(t) \big\|_2^2 - V\big(y(t)\big)^T \dot{y}(t) \, dt \; , \end{equation*} where $\alpha \geqslant 0$ is arbitrarily chosen, is expected to lead to an optimised trajectory reflecting a trade-off between maximising the work and minimising the travelled distance. Further, we require the trajectory to stay in the hypercube $[0,1]^D$ and to start and to end respectively at $y_0 \in [0,1]^D$ and $y_1 \in [0,1]^D$. Now we remark that the above cost function involves the (time-)derivative $\dot{y}$. So one has to derive a formula to compute the derivative of any trajectory $y = \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \in \mathcal{Y}_\mathcal{K}$ from its associated vector $c \in \mathbb{R}^K$, in particular to compute the cost $\check{F}(c)$ (introduced in \cref{def:cost} and involved in the generic optimisation problem \eqref{eq:opt_proj_2}). For instance, this can easily be achieved by assuming that each element of the functional basis is continuously differentiable. Indeed, we can in this case differentiate any $y \in \mathcal{Y}_\mathcal{K}$ as follows: \begin{equation*} \forall \, d = 1, \dots, D \qquad \dot{y}^{(d)} = \left( \frac{d}{dt} \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \right)^{(d)} = \sum_{k=1}^{K_d} c_k^{(d)} \dot{\varphi}_k \; . \end{equation*} We then deduce the following formula for $\check{F}(c)$ in the present setting: \begin{equation*} \check{F}(c) := F_\alpha\left( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c, \frac{d}{dt} \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \right) \; . \end{equation*} Here the vector $c$ contains information on both position and velocity, which in particular keeps the problem dimension unchanged. To finish, let us remark that it is possible to make the above formula for $\check{F}(c)$ explicit with respect to $c$ for certain cost functions $F_\alpha$. For instance, this is the case when $F_\alpha$ is quadratic, leading to a quadratic function $\check{F}$. \subsection{Admissible trajectories modelling} \label{sec:2-1} We start with definitions. \begin{definition}[Trajectory] Let $T > 0$ be a real number and let $D \geqslant 1$ be an integer. Any continuous $\mathbb{R}^D$-valued map $y$ defined on $[0,T]$, \emph{i.e.} $y \in \mathcal{C}\big([0,T], \mathbb{R}^D\big)$, is called a \emph{trajectory} over the time interval $[0,T]$. The $d$-th component of a trajectory $y$ will be denoted by $y^{(d)}$. As such, a trajectory is at least a continuous map on a finite interval. \end{definition} When optimising a trajectory with respect to a given criterion, the initial and final states are often constrained, that is to say the optimisation is performed in an affine subspace modelling these endpoints conditions.
This subspace is introduced just below. \begin{definition}[Endpoints conditions] \label{def:endpoints} Let $y_0, y_T \in \mathbb{R}^D$. We define the set $\mathcal{D}(y_0, y_T) \subset \mathcal{C}\big([0,T], \mathbb{R}^D\big)$ as follows: \begin{align*} y \in \mathcal{D}(y_0, y_T) \qquad & \Longleftrightarrow \qquad \left\{ \begin{array}{l} y(0) = y_0 \\[2mm] y(T) = y_T \end{array} \right. \; . \end{align*} \end{definition} In many applications, the trajectories have to satisfy some additional constraints defined by a set of (nonlinear) functions. For instance, these functions may model physical or user-defined constraints. In this paper, this set is not intended to include the dynamics of the system. We now define the set of trajectories verifying such additional constraints. \begin{definition}[Additional constraints] \label{def:constraints} For $\ell = 1,\dots,L$, let $g_\ell$ be a real-valued function defined on $\mathbb{R}^D$. We define the set $\mathcal{G} \subset \mathcal{C}\big([0,T], \mathbb{R}^D\big)$ as the set of trajectories over $[0,T]$ satisfying the following $L$ inequality constraints given by the functions $g_\ell$, \emph{i.e.} \begin{equation*} y \in \mathcal{G} \qquad \Longleftrightarrow \qquad \forall \, \ell = 1, \dots, L \quad \forall \, t \in [0,T] \qquad g_\ell\big(y(t)\big) \leqslant 0 \; . \end{equation*} \end{definition} To finish, we introduce the set of admissible trajectories, which satisfy both the endpoints conditions and the additional constraints. \begin{definition}[Admissible trajectory] \label{def:admissible} We define the set $\mathcal{A}_\mathcal{G}(y_0, y_T) \subset \mathcal{C}\big([0,T], \mathbb{R}^D\big)$ as follows: \begin{equation*} \mathcal{A}_\mathcal{G}(y_0, y_T) := \mathcal{D}(y_0, y_T) \cap \mathcal{G} \; . \end{equation*} Any element of $\mathcal{A}_\mathcal{G}(y_0, y_T)$ will be called an \emph{admissible trajectory}. \end{definition} \subsection{Projection for a finite-dimensional optimisation problem} \label{sec:2-2} In our approach, an optimisation problem posed in a finite-dimensional space is desired in order to reduce the inherent complexity of the problem. This can be achieved by decomposing the trajectories on a finite number of basis functions. While raw signals are unlikely to be described by a small number of parameters, this is not the case for smoothed versions of these signals, which capture the important patterns. In particular, given a family of smoothed observed trajectories, one may suppose that there exists a basis such that the error made by projecting any of these trajectories onto a certain number of basis functions is negligible. From now on, the trajectories we consider are assumed to belong to a space spanned by a finite number of basis functions. For the sake of simplicity, we assume in addition that all the components of the trajectories can be decomposed on the same basis but with different dimensions. Extension to different bases is straightforward and does not change our findings but would burden the notation. \begin{definition} \label{def:projected_set} Let $\{\varphi_k\}_{k=1}^{+\infty}$ be an orthonormal basis of $L^2\big([0,T], \mathbb{R} \big)$ with respect to the inner product \begin{equation*} \langle f, g \rangle = \int_0^T f(t) \, g(t) \, dt \; , \end{equation*} such that each $\varphi_k$ is continuous on $[0,T]$, and let $\mathcal{K} := \{K_d\}_{d=1}^D$ be a sequence of integers with $K := \sum_{d = 1}^D K_d$.
We define the space of projected trajectories $\mathcal{Y}_\mathcal{K}(0,T) \subset \mathcal{C}\big([0,T], \mathbb{R}^D\big)$ over $[0,T]$ as \begin{equation*} \mathcal{Y}_\mathcal{K}(0,T) := \prod_{d = 1}^D \text{span} \left\{ \varphi_{k} \right\}_{k = 1}^{K_d} \; . \end{equation*} If there is no risk of confusion, we write $\mathcal{Y}_\mathcal{K} := \mathcal{Y}_\mathcal{K}(0,T)$ for the sake of readability. \end{definition} \begin{remark} From the above definition, any projected trajectory $y \in \mathcal{Y}_\mathcal{K}$ is associated with a unique vector \begin{equation*} c = \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T \in \mathbb{R}^K \end{equation*} defined by \begin{equation} \label{eq:def_c} c_k^{(d)} := \big\langle y^{(d)}, \varphi_k \big\rangle = \int_0^T y^{(d)}(t) \, \varphi_{k}(t) \, dt \; . \end{equation} In other words, the vector $c$ is the image of the trajectory $y$ by the projection operator $\Phi : \mathcal{C}\big([0,T], \mathbb{R}^D \big) \longrightarrow \mathbb{R}^K$ defined by $\Phi y := c$, whose restriction $\Phi|_{\mathcal{Y}_\mathcal{K}}$ is bijective (as a Cartesian product of bijective operators). In particular, the spaces $\mathcal{Y}_\mathcal{K}$ and $\mathbb{R}^K$ are isomorphic, \emph{i.e.} $\mathcal{Y}_\mathcal{K} \simeq \mathbb{R}^K$. \end{remark} Regarding the endpoints conditions introduced in \cref{def:endpoints}, we prove in the following result that, for a projected trajectory, satisfying these conditions is equivalent to satisfying a linear system. \begin{proposition} \label{prop:linear_const} A trajectory $y \in \mathcal{Y}_{\mathcal{K}}$ belongs to $\mathcal{D}(y_0, y_T)$ if and only if its associated vector $c := \Phi y \in \mathbb{R}^K$ satisfies the linear system \begin{equation} \label{eq:endpoint_syst} A(0,T) \, c = \Gamma \; , \end{equation} where the matrix $A(0,T) \in \mathbb{R}^{2D \times K}$ and the vector $\Gamma \in \mathbb{R}^{2D}$ are defined as follows: \begin{equation*} A(0, T) := \left( \begin{array}{ccccccc} \varphi_1(0) & \dots & \varphi_{K_1}(0) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_1(0) & \dots & \varphi_{K_D}(0) \\ \varphi_1(T) & \dots & \varphi_{K_1}(T) & & & & \\ & & & \ddots & & & \\ & & & & \varphi_1(T) & \dots & \varphi_{K_D}(T) \\ \end{array} \right) \quad , \quad \Gamma := \left( \begin{array}{c} y_0\\ y_T \end{array} \right) \; . \end{equation*} \end{proposition} \begin{proof} Let $y \in \mathcal{Y}_\mathcal{K}$ and let $c := \Phi y \in \mathbb{R}^K$. By the definition of the matrix $A(0,T)$, we have \begin{align*} A(0,T) \, c & = A(0,T) \Big( c_1^{(1)}, \dots, c_{K_1}^{(1)}, \; c_1^{(2)}, \dots, c_{K_2}^{(2)}, \dots, \; c_1^{(D)}, \dots, c_{K_D}^{(D)} \Big)^T \\ & = \left( \sum_{k=1}^{K_1} c_k^{(1)} \varphi_k(0), \dots, \displaystyle \sum_{k=1}^{K_D} c_k^{(D)} \varphi_k(0), \dots, \sum_{k=1}^{K_1} c_k^{(1)} \varphi_k(T), \dots, \sum_{k=1}^{K_D} c_k^{(D)} \varphi_k(T) \right)^T \\ & = \left( \begin{array}{c} y(0)\\ y(T) \end{array} \right) \; . \end{align*} The conclusion follows directly from the preceding relation. \end{proof} \subsection{Reference trajectories modelling} \label{subsec:ref_traj} Let us now suppose that we have access to $I$ recorded trajectories $y_{R_1},\ldots,y_{R_I}$, called \emph{reference trajectories}, coming from some experiments. We propose here a statistical modelling for these reference trajectories, which in particular exhibits some linear properties.
This modelling will allow us to take advantage of the information contained in these recorded trajectories when deriving optimisation problems in the next subsection. These trajectories being recorded, they are in particular admissible, and we assume that they belong to the space $\mathcal{Y}_{\mathcal{K}}(0,T)$. As explained previously, they may be interpreted as smoothed versions of recorded signals. In particular, each reference trajectory $y_{R_i}$ is associated with a unique vector $c_{R_i} \in \mathbb{R}^K$. Moreover, we consider each reference trajectory as a noisy observation of a certain admissible and projected trajectory $y_*$. In other words, we suppose that there exists a trajectory $y_* \in \mathcal{Y}_{\mathcal{K}} \cap \mathcal{A}_\mathcal{G}(y_0, y_T)$ associated with a vector $c_* \in \mathbb{R}^K$ satisfying \begin{equation*} \forall \, i = 1, \dots, I \qquad c_{R_i} = c_* + \varepsilon_i \; . \end{equation*} The noise $\varepsilon_i$ is here assumed to be a centered Gaussian whose covariance matrix $\Sigma_i$ is of the form \begin{equation*} \Sigma_i = \frac{1}{2 \omega_i} \, \Sigma \; , \end{equation*} where $\Sigma \in \mathbb{R}^{K \times K}$. It is noteworthy that this matrix will not be known in most cases, but an estimated covariance matrix can be computed on the basis of the reference vectors. The positive real numbers $\omega_i$ are here considered as weights, so we require $\sum_{i=1}^I \omega_i = 1$; each $\omega_i$ actually plays the role of a noise intensity. Further, since the trajectory $y_*$ and all the reference trajectories $y_{R_i}$ verify the same endpoints conditions, we have $A \, c_{R_i} = \Gamma = A \, c_*$, and hence \begin{equation*} A \, c_{R_i} = A \, c_* + A \, \varepsilon_i \qquad \Longrightarrow \qquad A \, \varepsilon_i = 0_{\mathbb{R}^{2D}} \qquad \Longleftrightarrow \qquad \varepsilon_i \in \ker A \; , \end{equation*} for all $i = 1, \dots, I$ (we shorten $A(0,T)$ to $A$ when the context is clear). Hence the reference vector $c_*$ satisfies the following $I$ systems: \begin{equation} \label{eq:model_c} \left\{ \begin{array}{l} c_{R_i} = c_* + \varepsilon_i \\[2mm] \varepsilon_i \sim \mathcal{N}(0_{\mathbb{R}^K}, \Sigma_i) \\[2mm] \varepsilon_i \in \ker A \end{array} \right. \; . \end{equation} To establish a more explicit system which is equivalent to the preceding one, we require the following preliminary proposition. Here we diagonalise the matrices $\Sigma$ and $A^T A$ by exploiting the fact that the image of the first one is contained in the null space of the second one and vice versa; this is shown in the proof. This property is actually a consequence of the above modelling: the endpoints conditions modelled by $A$ imply linear relations within the components of the vectors, which should be reflected by the covariance matrix $\Sigma$. The following result will be helpful to establish \cref{prop:equiv_syst}. \begin{proposition} \label{prop:matrix_structure} We define $\sigma := \rk \Sigma$ and $a := \rk A^T A$.
In the setting of system \eqref{eq:model_c}, we have $\sigma + a \leqslant K$ and there exist an orthogonal matrix $V \in \mathbb{R}^{K \times K}$ and two matrices $\Lambda_\Sigma \in \mathbb{R}^{K \times K}$ and $\Lambda_A \in \mathbb{R}^{K \times K}$ of the following form: \begin{equation*} \Lambda_\Sigma = \left( \begin{array}{cc} \Lambda_{\Sigma,1} & 0_{\mathbb{R}^{\sigma \times (K-\sigma)}} \\[2mm] 0_{\mathbb{R}^{(K-\sigma) \times \sigma}} & 0_{\mathbb{R}^{(K-\sigma) \times (K-\sigma)}} \end{array} \right) \qquad , \qquad \Lambda_A = \left( \begin{array}{cc} 0_{\mathbb{R}^{(K-a) \times (K-a)}} & 0_{\mathbb{R}^{(K-a) \times a}} \\[2mm] 0_{\mathbb{R}^{a \times (K-a)}} & \Lambda_{A,2} \end{array} \right) \; , \end{equation*} where $\Lambda_{\Sigma,1} \in \mathbb{R}^{\sigma \times \sigma}$ and $\Lambda_{A,2} \in \mathbb{R}^{a \times a}$ are diagonal matrices with positive elements, such that \begin{equation*} \Sigma = V \Lambda_\Sigma V^T \qquad , \qquad A^T A = V \Lambda_A V^T \; . \end{equation*} \end{proposition} \begin{proof} The starting point of the proof is to remark that we have \begin{equation} \label{eq:commute_zero} \Sigma \, A^T A = A^T A \, \Sigma = 0_{\mathbb{R}^{K \times K}} \; . \end{equation} Indeed, using the hypothesis $\varepsilon_i \in \ker A$ for any $i=1,\dots,I$ gives \begin{equation*} \Sigma \, A^T A = 2 \omega_i \, \Sigma_i \, A^T A = 2 \omega_i \, \mathbb{E}(\varepsilon_i \varepsilon_i^T) \, A^T A = 2 \omega_i \, \mathbb{E}\big(\varepsilon_i \, (A \varepsilon_i)^T \big) \, A = 0_{\mathbb{R}^{K \times K}} \; ; \end{equation*} similar arguments prove the second equality in \eqref{eq:commute_zero}. From these equalities, we first deduce \begin{equation} \label{eq:inclusion} \im \Sigma \subseteq \ker A^T A \; , \end{equation} which leads to $\sigma \leqslant K - a$ by the rank-nullity theorem. The equalities \eqref{eq:commute_zero} also show that $\Sigma$ and $A^T A$ are simultaneously diagonalisable (since they commute), so there exists an orthogonal matrix $V \in \mathbb{R}^{K \times K}$ such that \begin{equation} \label{eq:diag} \Sigma = V \Lambda_\Sigma V^T \qquad , \qquad A^T A = V \Lambda_A V^T \; , \end{equation} where $\Lambda_\Sigma \in \mathbb{R}^{K \times K}$ and $\Lambda_A \in \mathbb{R}^{K \times K}$ are diagonal matrices. Permuting columns of $V$ if necessary, we can write the matrix $\Lambda_\Sigma$ as follows: \begin{equation} \label{eq:lambda_sigma} \Lambda_\Sigma = \left( \begin{array}{cc} \Lambda_{\Sigma,1} & 0_{\mathbb{R}^{\sigma \times (K-\sigma)}} \\[2mm] 0_{\mathbb{R}^{(K-\sigma) \times \sigma}} & 0_{\mathbb{R}^{(K-\sigma) \times (K-\sigma)}} \end{array} \right) \; ; \end{equation} in other words, the first $\sigma$ column vectors of $V$ span the image of $\Sigma$. From the inclusion \eqref{eq:inclusion}, we deduce that these vectors belong to the null space of $A^T A$. Hence the first $\sigma$ diagonal elements of $\Lambda_A$ are equal to zero and, up to a permutation of the last $K - \sigma$ column vectors of $V$, we can write \begin{equation*} \Lambda_A = \left( \begin{array}{cc} 0_{\mathbb{R}^{(K-a) \times (K-a)}} & 0_{\mathbb{R}^{(K-a) \times a}} \\[2mm] 0_{\mathbb{R}^{a \times (K-a)}} & \Lambda_{A,2} \end{array} \right) \; , \end{equation*} which ends the proof. \end{proof} \begin{remark} From the equalities \eqref{eq:commute_zero}, we can also deduce \begin{equation*} \im A^T A \subseteq \ker \Sigma \; , \end{equation*} showing that $\Sigma$ is singular. Consequently, the Gaussian noise $\varepsilon_i$ involved in \eqref{eq:model_c} is degenerate.
\end{remark} A new formulation of system \eqref{eq:model_c}, which makes explicit the constrained and unconstrained parts of a vector satisfying this system, is given in the following result. This is achieved by using the preceding result, which allows us to decompose the space $\mathbb{R}^K$ into three orthogonal subspaces. We prove that the restriction of the noise $\varepsilon_i$ to the first subspace is a non-degenerate Gaussian, showing that this first subspace corresponds to the unconstrained one. The two other subspaces describe affine relations coming from the endpoints conditions and from implicit relations within the vector components. These implicit relations, which may model for instance natural trends, are expected to be contained in the reference vectors $c_{R_i}$ and reflected by the (estimated) covariance matrix $\Sigma$.\\ Prior to this, let us write the matrix $V \in \mathbb{R}^{K \times K}$ introduced in \cref{prop:matrix_structure} as follows: \begin{equation*} V = \big( V_1 \quad V_2 \quad V_3 \big) \; , \end{equation*} where $V_1 \in \mathbb{R}^{K \times \sigma}$, $V_2 \in \mathbb{R}^{K \times (K-\sigma-a)}$ and $V_3 \in \mathbb{R}^{K \times a}$. We emphasise that the column vectors of the matrices $V_1$ and $V_3$ do not overlap thanks to the property $\sigma + a \leqslant K$ proved in \cref{prop:matrix_structure}. In particular, the matrix $V_2$ has to be considered only in the case $\sigma + a < K$. Finally, anticipating the forthcoming assumption that $A$ is full rank (so that $a = 2D$), we consider the singular value decomposition of $A$ coming from the diagonalisation of the symmetric matrix $A^T A$ with $V$: \begin{equation*} A = U S_A V^T \; , \end{equation*} where $U \in \mathbb{R}^{2D \times 2D}$ is orthogonal and $S_A \in \mathbb{R}^{2D \times K}$ is a rectangular diagonal matrix of the following form: \begin{equation} \label{eq:s_a} S_A = \big( 0_{\mathbb{R}^{2D \times (K-2D)}} \quad S_{A,2} \big) \; , \end{equation} with $S_{A,2} := \sqrt{\Lambda_{A,2}} \in \mathbb{R}^{2D \times 2D}$. \begin{proposition} \label{prop:equiv_syst} Suppose that the matrix $A$ is full rank, \emph{i.e.} $a = 2D$. Then for any $i=1, \dots, I$, system \eqref{eq:model_c} is equivalent to the following one: \begin{equation} \label{eq:model_c_tilde_2} \left\{ \begin{array}{l} \widetilde{c}_{R_i, 1} = \widetilde{c}_{*,1} + \widetilde{\varepsilon}_{i, 1} \\[2mm] \displaystyle \widetilde{\varepsilon}_{i, 1} \sim \mathcal{N} \left(0_{\mathbb{R}^\sigma}, \frac{1}{2 \omega_i} \, \Lambda_{\Sigma,1} \right) \\[3mm] \widetilde{c}_{*,2} = V_2^T c_{R_i} \\[2mm] \widetilde{c}_{*,3} = \, S_{A,2}^{-1} \, U^T \Gamma \end{array} \right. \; . \end{equation} \end{proposition} \begin{proof} We first prove that system \eqref{eq:model_c} is equivalent to \begin{equation} \label{eq:model_c_tilde} \left\{ \begin{array}{l} \widetilde{c}_{R_i} = \widetilde{c}_* + \widetilde{\varepsilon}_i \\[2mm] \displaystyle \widetilde{\varepsilon}_i \sim \mathcal{N} \left(0_{\mathbb{R}^K}, \frac{1}{2 \omega_i} \, \Lambda_\Sigma \right) \\[3mm] S_A \, \widetilde{c}_* = U^T \Gamma \end{array} \right. \; .
\end{equation} The matrix $V$ being orthogonal, it is non-singular, and so we have for all $i = 1, \dots, I$, \begin{equation*} c_{R_i} = c_* + \varepsilon_i \qquad \Longleftrightarrow \qquad \widetilde{c}_{R_i} = \widetilde{c}_* + \widetilde{\varepsilon}_i \; , \end{equation*} and, since $\Sigma_i = \frac{1}{2 \omega_i} \, \Sigma = \frac{1}{2 \omega_i} \, V \Lambda_\Sigma V^T$, we obtain \begin{equation*} \varepsilon_i \sim \mathcal{N}(0_{\mathbb{R}^K}, \Sigma_i) \qquad \Longleftrightarrow \qquad \widetilde{\varepsilon}_i \sim \mathcal{N} \left(0_{\mathbb{R}^K}, \frac{1}{2 \omega_i} \, \Lambda_\Sigma \right) \; . \end{equation*} Finally, since $A \, c_{R_i} = \Gamma$, the property $\varepsilon_i \in \ker A$ is equivalent to \begin{align*} A \, c_* = \Gamma \qquad & \Longleftrightarrow \qquad U S_A V^T c_* = \Gamma \\ & \Longleftrightarrow \qquad S_A \, \widetilde{c}_* = U^T \Gamma \; , \end{align*} proving that the systems \eqref{eq:model_c} and \eqref{eq:model_c_tilde} are equivalent. Now the fact that the last $K-\sigma$ diagonal elements of $\Lambda_\Sigma$ are zero implies that the corresponding components of $\widetilde{\varepsilon}_i$ vanish almost surely, so that the components $\widetilde{c}_{*,2} \in \mathbb{R}^{K-\sigma-2D}$ and $\widetilde{c}_{*,3} \in \mathbb{R}^{2D}$ are observed without noise. From the first equality of \eqref{eq:model_c_tilde}, we have on the one hand \begin{equation*} \widetilde{c}_{R_i, 2} = \widetilde{c}_{*,2} \qquad \Longleftrightarrow \qquad V_2^T c_{R_i} = \widetilde{c}_{*,2} \; , \end{equation*} for any $i = 1, \dots, I$. On the other hand, combining the last relation of the system \eqref{eq:model_c_tilde} with the form of the matrix $S_A$ given in \eqref{eq:s_a} yields \begin{align*} S_A \, \widetilde{c}_* = U^T \Gamma \qquad & \Longleftrightarrow \qquad S_{A,2} \, \widetilde{c}_{*,3} = U^T \Gamma \nonumber \\ & \Longleftrightarrow \qquad \widetilde{c}_{*,3} = \, S_{A,2}^{-1} \, U^T \Gamma \; , \end{align*} the last equivalence being justified by the hypothesis that the matrix $A$ is full rank (which implies that the diagonal matrix $S_{A,2}$ is non-singular). \end{proof} The above decomposition gives us access to the non-degenerate density of $\widetilde{c}_{R_i, 1}$ given $\widetilde{c}_{*,1}$, later denoted by $u(\widetilde{c}_{R_i, 1}|\widetilde{c}_{*,1})$. In the next subsection, we will assume a prior distribution on $\widetilde{c}_{*,1}$ with high density for low values of the cost function $F$. \subsection{A trajectory optimisation problem via a Maximum A Posteriori approach} \label{subsec:modelling} Before introducing the Bayesian framework, let us first recall that we are interested in minimising a certain cost function $F: \mathcal{C}\big([0,T], \mathbb{R}^D \big) \longrightarrow \mathbb{R}$ over the set of projected and admissible trajectories $\mathcal{Y}_\mathcal{K} \cap \mathcal{A}_\mathcal{G}(y_0, y_T)$. As explained previously, we propose here a methodology leading to a constrained optimisation problem based on the reference trajectories and designed to provide realistic trajectories. Technically speaking, we seek the mode of a \emph{posterior} distribution which contains information from the reference trajectories. The aim of this subsection is then to obtain the \emph{posterior} distribution via Bayes's rule, using in particular the precise modelling of the reference trajectories given in \cref{prop:equiv_syst} and defining an accurate prior distribution with high density for low values of the cost function $F$.
To do so, we first recall that all the trajectories considered here are assumed to belong to the space $\mathcal{Y}_\mathcal{K}$, which is isomorphic to $\mathbb{R}^K$. So each trajectory is here described by its associated vector in $\mathbb{R}^K$, which in particular permits defining distributions over finite-dimensional spaces. We also recall that the reference trajectories are interpreted as noisy observations of a certain trajectory $y_*$ associated with a vector $c_*$. According to \cref{prop:equiv_syst}, this vector complies with some affine conditions, which are described by the following subspace $\mathcal{V}_1$: \begin{equation} \label{eq:v1} c \in \mathcal{V}_1 \qquad \Longleftrightarrow \qquad \left\{ \begin{array}{l} V_2^T c = V_2^T c_{R_i} \\[2mm] V_3^T c = S_{A,2}^{-1} \, U^T \Gamma \end{array} \right. \; . \end{equation} Hence a vector $c$ belonging to $\mathcal{V}_1$ is described only through its component $\widetilde{c}_1 := V_1^T c$. In addition, we note that the definition of $\mathcal{V}_1$ does not actually depend on the choice of $i$ since $V_2^T c_{R_i}$ has been proved to be constant in \cref{prop:equiv_syst}. Further, we emphasise that the matrix $A$ is supposed to be full rank in this case and we have $\mathcal{V}_1 \simeq \mathbb{R}^\sigma$; we recall that $\sigma$ is the rank of the covariance matrix $\Sigma$. Let us now define the cost function $F$ over the spaces $\mathbb{R}^K$ and $\mathcal{V}_1$. This is necessary to define the \emph{prior} distribution and to establish our optimisation problem. \begin{definition}[Cost functions] \label{def:cost} Let $\check{F}: \mathbb{R}^K \longrightarrow \mathbb{R}$ and $\widetilde{F}: \mathbb{R}^\sigma \longrightarrow \mathbb{R}$ be the functions defined by \begin{itemize} \item $\displaystyle \check{F}(c) := F \big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \big)$ ; \item $\displaystyle \widetilde{F}(\widetilde{c}_1) := F\Big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} \, V \Big( \widetilde{c}_1^T \quad c_{R_i}^{\ T} V_2 \quad \Gamma^T U \, \big(S_{A,2}^{-1} \big)^T \Big)^T \Big)$ . \end{itemize} \end{definition} \begin{remark} From the preceding definition, we observe that for any $y \in \mathcal{Y}_\mathcal{K}$ and its associated vector $c \in \mathbb{R}^K$, we have \begin{equation*} \check{F}(c) = F \big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \big) = F(y) \; . \end{equation*} Further, for any $c \in \mathcal{V}_1$, we have \begin{align*} \check{F}(c) = F \big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \big) = F \big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} V \widetilde{c} \big) = F\Big( \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} \, V \Big( \widetilde{c}_1^T \quad c_{R_i}^{\ T} V_2 \quad \Gamma^T U \, \big(S_{A,2}^{-1} \big)^T \Big)^T \Big) = \widetilde{F}(\widetilde{c}_1) \; . \end{align*} We deduce that $\widetilde{F}$ is actually the restriction of $\check{F}$ to the subspace $\mathcal{V}_1$. \end{remark} From now on, the trajectory $y_*$ and the associated vector $c_*$ will be considered as random variables and will be denoted by $y$ and $c$. We are interested in the \emph{posterior} distribution \begin{equation*} u(\widetilde{c}_1 \, | \, \widetilde{c}_{R_1, 1}, \dots, \widetilde{c}_{R_I, 1}) \; , \end{equation*} which depends only on the free component $\widetilde{c}_1$ of $c \in \mathcal{V}_1$, the other two components $\widetilde{c}_2$ and $\widetilde{c}_3$ being fixed according to \eqref{eq:v1}.
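Before proceeding, note that the decomposition underlying \eqref{eq:v1} can be sketched numerically as follows (in Python with \texttt{numpy} and \texttt{scipy}; the names are ours, and sign or ordering conventions of the factorisations may differ from those of the text):
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def split_spaces(Sigma, A, Gamma, c_ref, tol=1e-10):
    # V1 spans im(Sigma) (free part), V3 spans im(A^T) (endpoints
    # conditions), V2 the remaining implicit relations; assumes A full
    # rank and Sigma A^T A = 0, as in the modelling above.
    lam, W = np.linalg.eigh(Sigma)             # ascending eigenvalues
    V1 = W[:, lam > tol]                       # dimension sigma
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    V3 = Vt.T                                  # dimension 2D
    V2 = null_space(np.vstack([V1.T, V3.T]))   # orthogonal complement
    c3 = np.diag(1.0 / s) @ U.T @ Gamma        # fixed by the endpoints
    c2 = V2.T @ c_ref                          # fixed implicit relations
    return V1, V2, V3, c2, c3
\end{verbatim}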
We use Bayes's rule to model the \emph{posterior} via the \emph{prior} and likelihood distributions, leading to \begin{equation*} u(\widetilde{c}_1 \, | \, \widetilde{c}_{R_1, 1}, \dots, \widetilde{c}_{R_I, 1}) \propto u(\widetilde{c}_{R_1, 1}, \dots, \widetilde{c}_{R_I, 1} \, | \, \widetilde{c}_1) \, u(\widetilde{c}_1) \; . \end{equation*} Assuming now that the vectors $\widetilde{c}_{R_i, 1}$ are independent conditionally on $\widetilde{c}_1$ gives \begin{equation*} u(\widetilde{c}_{R_1, 1}, \dots, \widetilde{c}_{R_I, 1} \, | \, \widetilde{c}_1) \, u(\widetilde{c}_1) = \prod_{i=1}^I u(\widetilde{c}_{R_i, 1} \, | \, \widetilde{c}_1) \, u(\widetilde{c}_1) \; . \end{equation*} The above likelihood is given by the modelling of the reference trajectories detailed in \cref{prop:equiv_syst}. In this case, we have \begin{equation*} u(\widetilde{c}_{R_i, 1} \, | \, \widetilde{c}_1) \propto \exp\Big( -\omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) \Big) \; . \end{equation*} The prior distribution is obtained by assuming that the most efficient trajectories (with respect to the cost function) are \emph{a priori} the most likely ones: \begin{equation} \label{eq:model_prior} u(\widetilde{c}_1) \propto \exp \Big( -\kappa^{-1} \widetilde{F}(\widetilde{c}_1) \Big) \; , \end{equation} where $\kappa > 0$. Putting everything together and taking the negative logarithm gives the following minimisation problem, whose solution is the Maximum \emph{A Posteriori} estimator: \begin{equation} \label{eq:opt_proj} \left\{ \begin{array}{l} \displaystyle \widetilde{c}_1^\star \in \argmin_{\widetilde{c}_1 \in \mathbb{R}^\sigma} \widetilde{F}(\widetilde{c}_1) + \kappa \sum_{i=1}^I \omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) \\[2mm] \widetilde{c}_2 = V_2^T c_{R_i} \\[2mm] \widetilde{c}_3 = S_{A,2}^{-1} \, U^T \Gamma \end{array} \; , \right. \end{equation} where $i$ is arbitrarily chosen in $\{1, \dots, I \}$. Let us now rewrite the above optimisation problem with respect to the variable $c = V \widetilde{c} \in \mathbb{R}^K$ in order to make it more interpretable. \begin{proposition} \label{prop:opt_equiv} The optimisation problem \eqref{eq:opt_proj} is equivalent to the following one: \begin{equation} \label{eq:opt_proj_2} c^\star \in \argmin_{c \in \mathcal{V}_1} \check{F}(c) + \kappa \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \Sigma^\dagger \big( c - c_{R_i} \big) \; , \end{equation} where $\Sigma^\dagger \in \mathbb{R}^{K \times K}$ denotes the pseudoinverse of the matrix $\Sigma$. \end{proposition} \begin{proof} From \eqref{eq:lambda_sigma}, we deduce \begin{align*} \sum_{i=1}^I \omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) & = \sum_{i=1}^I \omega_i \, \big( \widetilde{c} - \widetilde{c}_{R_i} \big)^T \Lambda_{\Sigma}^\dagger \big( \widetilde{c} - \widetilde{c}_{R_i} \big) \\ & = \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \, V \Lambda_{\Sigma}^\dagger V^T \, \big( c - c_{R_i} \big) \\ & = \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \Sigma^\dagger \big( c - c_{R_i} \big) \; . \end{align*} Moreover, from the proof of \cref{prop:equiv_syst}, the constraint $\widetilde{c}_3 = S_{A,2}^{-1} \, U^T \Gamma$ is equivalent to $A \, c = \Gamma$; together with the constraint $\widetilde{c}_2 = V_2^T c_{R_i}$, this shows that the feasible set of \eqref{eq:opt_proj} corresponds exactly to $\mathcal{V}_1$.
\end{proof} To conclude, let us comment on this optimisation problem. \begin{enumerate} \item To interpret the optimisation problem \eqref{eq:opt_proj_2} (or equivalently \eqref{eq:opt_proj}) from a geometric point of view, let us consider the following new problem: \begin{equation} \label{eq:primal} \begin{array}{ll} & \displaystyle \min_{\widetilde{c}_1 \in \mathbb{R}^\sigma} \widetilde{F}(\widetilde{c}_1) \\[2mm] \text{s.t. } & \displaystyle \sum_{i=1}^I \omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) \leqslant \widetilde{\kappa} \end{array} \end{equation} where $\widetilde{\kappa} \geqslant 0$. Here we suppose that $\widetilde{F}$ is strictly convex and that the problem \eqref{eq:primal} has a solution (which is then unique). By Slater's theorem \citep[Subsec. 5.2.3]{convexopt}, strong duality holds for the problem \eqref{eq:primal}. It can then be proved that there exists a certain $\lambda^\star \geqslant 0$ such that the solution of \eqref{eq:primal} is the minimiser of the strictly convex function \begin{equation*} \widetilde{c}_1 \longmapsto \widetilde{F}(\widetilde{c}_1) + \lambda^\star \sum_{i=1}^I \omega_i \, \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big)^T \Lambda_{\Sigma,1}^{-1} \big( \widetilde{c}_1 - \widetilde{c}_{R_i, 1} \big) \; , \end{equation*} which is exactly the objective function of the optimisation problem \eqref{eq:opt_proj} for $\kappa = \lambda^\star$. Hence the problem \eqref{eq:opt_proj} minimises the cost $\widetilde{F}$ over an ellipsoid centered on the weighted average of the reference trajectories. In particular, if the reference trajectories are close to an optimal one with respect to $\widetilde{F}$, then one may expect the solution of \eqref{eq:opt_proj} to coincide with this optimal trajectory. \item Further, the optimisation problem \eqref{eq:opt_proj_2} takes into account the endpoint conditions through the subspace $\mathcal{V}_1$, but not the additional constraints. However, as explained in the preceding point, the solution is close to realistic trajectories and so is likely to comply with the additional constraints for a well-chosen parameter $\kappa > 0$. We refer to \cref{subsec:iterations} for more details on an iterative method for tuning $\kappa$. In particular, an appropriate choice of this parameter is expected to provide an optimised trajectory with realistic behaviour. This is for instance illustrated in \cref{sec:appli_aero}. \item Taking into account the linear information from the available data through the covariance matrix $\Sigma$ makes it possible to restrict the search to the subspace $\mathcal{V}_1$ describing these relations. This is of particular interest when implicit relations (modelled by the sub-matrix $V_2$) are revealed by the estimation of $\Sigma$ from the reference trajectories; in this case, these implicit relations may not even be known to the expert. \item The optimisation problem \eqref{eq:opt_proj_2} has affine constraints and a quadratic penalty term. In particular, if the cost function $\check{F}$ is convex, then we obtain a convex problem for which efficient algorithms exist.
\end{enumerate} \subsection{Quadratic cost for a convex optimisation problem} \label{subsec:quad_model} In this short subsection, we focus on the particular case where the cost function $F$ is defined as the integral of an instantaneous quadratic cost, \emph{i.e.} \begin{equation} \label{eq:quad_f} \forall \, y \in \mathcal{C}\big([0,T], \mathbb{R}^D \big) \qquad F(y) = \int_0^T f(y(t)) \, dt \; , \end{equation} where $f : \mathbb{R}^D \longrightarrow \mathbb{R}$ is quadratic. Even though such a setting may appear restrictive, we emphasise that quadratic models may lead to highly accurate approximations of variables, as illustrated in \cref{sec:appli_aero}. For a quadratic instantaneous cost, the associated function $\check{F}: \mathbb{R}^K \longrightarrow \mathbb{R}$ can be proved to be quadratic as well and can be explicitly computed. In the following result, we provide a quadratic optimisation problem equivalent to \eqref{eq:opt_proj_2}. \begin{proposition} \label{prop:quad_min_pb} Suppose that the cost function $F$ is of the form \eqref{eq:quad_f} with $f$ quadratic. Then the optimisation problem \eqref{eq:opt_proj_2} is equivalent to the following one: \begin{equation} \label{eq:quad_min_pb} c^\star \in \argmin_{c \in \mathcal{V}_1} c^T \Big( \check{Q} + \kappa \, \Sigma^\dagger \Big) c + \left( \check{w} - 2 \kappa \sum_{i=1}^I \omega_i \, \Sigma^\dagger c_{R_i} \right)^{\hspace{-5pt}T} c \; , \end{equation} where $\check{Q} \in \mathbb{R}^{K \times K}$ and $\check{w} \in \mathbb{R}^K$ can be explicitly computed from $f$. \end{proposition} \begin{proof} We defer the proof to the supplementary material. \end{proof} In particular, this makes it possible to derive sufficient conditions on the parameter $\kappa > 0$ under which the optimisation problem is equivalent to a quadratic program \citep[Sec. 4.4]{convexopt}, namely a convex quadratic objective function together with affine constraints. In practice, this allows the use of efficient optimisation libraries to solve \eqref{eq:quad_min_pb} numerically.
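To fix ideas, the following minimal sketch solves such an equality-constrained quadratic program through its KKT system. The names are hypothetical: \texttt{P} and \texttt{q} are assumed to encode the quadratic and linear terms of \eqref{eq:quad_min_pb} (e.g.\ $P = \check{Q} + \kappa\,\Sigma^\dagger$), while \texttt{B} and \texttt{b} are assumed to stack the affine conditions \eqref{eq:v1} defining $\mathcal{V}_1$; it is a sketch under these assumptions, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def solve_eq_qp(P, q, B, b):
    """Minimise c^T P c + q^T c subject to B c = b (P symmetric PSD).

    Solves the KKT system
        [2P  B^T] [c ]   [-q]
        [B    0 ] [mu] = [ b]
    which characterises the minimiser when the objective is strictly
    convex on the feasible affine subspace.
    """
    K, m = P.shape[0], B.shape[0]
    kkt = np.block([[2 * P, B.T], [B, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:K]  # drop the Lagrange multipliers mu
\end{verbatim}
When the sufficient conditions on $\kappa$ mentioned above hold, this single linear solve returns the unique minimiser of \eqref{eq:quad_min_pb}.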
\subsection{Iterative process to comply with additional constraints} \label{subsec:iterations} As explained in \cref{subsec:modelling}, the trajectory optimisation problem \eqref{eq:opt_proj_2} is constrained by the endpoint conditions and by implicit linear relations revealed by the reference trajectories. Nevertheless, the additional constraints introduced in \cref{def:constraints} are not taken into account in this problem. In practice, such constraints ensure that natural or user-defined requirements are satisfied, so a trajectory which does not comply with them may be considered unrealistic. Our aim is then to ensure that the trajectory $y^\star = \Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c^\star$, where $c^\star \in \mathcal{V}_1$ is the solution of the optimisation problem \eqref{eq:opt_proj_2}, satisfies the additional constraints, \emph{i.e.} belongs to the set $\mathcal{G}$. A first solution would be to add the constraint $\Phi|_{\mathcal{Y}_\mathcal{K}}^{-1} c \in \mathcal{G}$ to the optimisation problem \eqref{eq:opt_proj_2}. However, depending on the nature of the constraint functions $g_\ell$, this may lead to nonlinear constraints which could be costly from a numerical point of view. The solution we propose instead exploits the degree of freedom offered by the parameter $\kappa > 0$ appearing in the problem \eqref{eq:opt_proj_2}. First of all, for the sake of presentation, let us divide the objective of \eqref{eq:opt_proj_2} by $\kappa$ to obtain the following equivalent problem: \begin{equation} \label{eq:opt_proj_3} c^\star \in \argmin_{c \in \mathcal{V}_1} \nu \, \check{F}(c) + \sum_{i=1}^I \omega_i \, \big( c - c_{R_i} \big)^T \Sigma^\dagger \big( c - c_{R_i} \big) \; , \end{equation} where $\nu := \kappa^{-1}$. On the one hand, we observe that the solution of the optimisation problem \eqref{eq:opt_proj_3} in the limit case $\nu = 0$ is given by $\sum_{i=1}^I \omega_i \, c_{R_i}$, the weighted average of the reference vectors. In this case, one may expect the associated average trajectory to comply with the constraints, but it is unlikely to optimise the cost function $F$. On the other hand, for very large $\nu > 0$, the second term of the objective function in \eqref{eq:opt_proj_3} can be considered negligible compared to the first. In this case, the cost of the solution is likely to be smaller than the costs of the reference trajectories, but no guarantee regarding the additional constraints can be established in a general setting. Given these observations, the task is then to find an appropriate value $\nu^\star > 0$ that reaches a trade-off between optimising the cost and remaining close to the reference trajectories so as to comply with the additional constraints. Many methods can be developed to find such a $\nu^\star$; among those based on iterative processes, linear or binary search algorithms can be considered. In this case, one first sets a maximal value $\nu_{max}$ such that the solution of \eqref{eq:opt_proj_3} with $\nu_{max}$ is unlikely to satisfy the constraints, and then performs the search over the interval $(0,\nu_{max})$. Since the solution for $\nu = 0$ is assumed to be admissible, we expect the binary search to find a $\nu^\star > 0$ leading to an optimised trajectory belonging to $\mathcal{G}$; such a binary search is sketched below.
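The following minimal sketch makes the procedure concrete. It assumes a hypothetical solver \texttt{solve(nu)} returning the minimiser of \eqref{eq:opt_proj_3} for a given $\nu$, a hypothetical predicate \texttt{is\_admissible(c)} checking membership of the associated trajectory in $\mathcal{G}$, and that admissibility holds for all $\nu$ below some threshold, which motivates the bisection.
\begin{verbatim}
def tune_nu(solve, is_admissible, nu_max, tol=1e-3):
    """Binary search for the largest nu in [0, nu_max] whose solution
    still satisfies the additional constraints (i.e. lies in G)."""
    lo, hi = 0.0, nu_max          # nu = 0 is assumed admissible
    c_best = solve(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        c = solve(mid)
        if is_admissible(c):
            lo, c_best = mid, c   # admissible: push the cost down further
        else:
            hi = mid              # not admissible: stay closer to references
    return lo, c_best
\end{verbatim}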
\subsection{Confidence bounds on the integrated cost} \label{sec:2-7} In practice, the cost function $F$ under consideration is an estimate of the true cost $F^{\star}$, a random variable which cannot be fully predicted from $y$. If the distribution of $F^{\star}(y)$ were known, it would be possible to deduce confidence bounds on $F^{\star}$. This is for instance possible by considering multivariate functional regression \citep{RHCC2007}. The simplest case from the estimation point of view is to consider that $F^{\star}$ is the integral of some instantaneous consumption function $f^{\star}$, as in \cref{subsec:quad_model}, and to estimate the parameters of the standard multivariate regression $$ f^{\star}(y(t)) = f(y(t)) + \varepsilon(t), $$ where the random noise $\varepsilon(t)$ is assumed to follow a centered Gaussian distribution with variance $\sigma^2$. In this case, $F^{\star}$ can be expressed as the integral of a stochastic process, $$ F^{\star}(y) := \int_{0}^{T} f^{\star}(y(t)) \, dt \; = F(y) + \int_{0}^{T} \varepsilon(t) \, dt \;. $$ Assuming further that the noise process $(\varepsilon(t))_{t \in [0,T]}$ is independent over time, we obtain $$ \int_{0}^{T} \varepsilon(t) \, dt \; \sim \mathcal{N}(0,T \sigma^2). $$ Thus $F^{\star}(y)$ follows a Gaussian distribution centered on $F(y)$ with variance equal to $T\sigma^2$. This makes it possible to compute confidence bounds on $F^{\star}(y)$. For a confidence level $1-u$, $u\in [0,1]$, a confidence interval for $F^{\star}(y)$ is obtained as $$ \texttt{CI}^{1-u}(F^{\star}(y)) = F(y) \pm \zeta_{1-\frac{u}{2}} \sqrt{T}\sigma,$$ where $\zeta_{1-\frac{u}{2}}$ is the quantile of order $1-\frac{u}{2}$ of the standard Gaussian distribution. The assumption that $f$ and $\sigma^2$ are known is reasonable since they are estimated from a large amount of training data. The assumption of white Gaussian noise may be seen as unrealistic; however, it appears to be the only route to explicit calculations. A more complex strategy could be derived using Gaussian processes, which is beyond the scope of this paper.
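For completeness, the interval above is straightforward to evaluate numerically; a minimal sketch follows (the function name is hypothetical, and the white Gaussian noise model of this subsection is assumed).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def cost_confidence_interval(F_y, sigma, T, u=0.05):
    """Two-sided (1-u) confidence interval for F*(y) under the model
    F*(y) ~ N(F(y), T * sigma^2)."""
    z = norm.ppf(1.0 - u / 2.0)       # quantile of order 1 - u/2
    half_width = z * np.sqrt(T) * sigma
    return F_y - half_width, F_y + half_width
\end{verbatim}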
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection*{Funding} National Natural Science Foundation of China (61631014, 61401036, 61471051, 61531003); National Science Fund for Distinguished Young Scholars of China (61225003); China Postdoctoral Science Foundation (2015M580008); Youth Research and Innovation Program of BUPT (2015RC12); PhD Students' Overseas Research Program of Peking University, China. \bibliographystyle{amsplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Wireless Physical-layer Identification (WPLI) is a promising wireless security solution. While software-level device identities (e.g., IP or MAC addresses) can be manipulated, physical-layer features cannot be modified without significant effort. WPLI extracts physical-layer features from the received signal to form radio frequency fingerprints (RFFs), which are rooted in the hardware imperfections of the analog (radio) circuitry of the transmitter device \cite{danev2012physical}. Fig.\ref{fig:system_overview} illustrates the processing procedures and typical application scenarios of WPLI. The identification system obtains signals from the devices through an acquisition setup. A feature extraction module then obtains selected identification-relevant features from the identification signal to form a fingerprint. A fingerprint matcher compares the fingerprints with reference fingerprints stored in a database using dimensionality-reduction classification techniques, and identities are classified and assigned to the devices. Two application scenarios are involved in WPLI: (i) the identification scenario, which distinguishes unauthorized imposters from the set of authorized users, and (ii) the classification scenario, which is the N-class identification among all authorized users within the network \cite{danev2012physical, danev2009physical}. \begin{figure} \centering \includegraphics [width=2.0in] {fig/diagrams.pdf} \vspace{-5pt} \caption{Typical logic procedures and application scenarios in WPLI.} \vspace{-15pt} \label{fig:system_overview} \end{figure} The hardware imperfections in which RFFs are rooted \cite{danev2012physical} include the nonlinearity of the RF front-end system \cite{polak2011identifying}, \cite{polak2011rf}, \cite{liu2008specific}, clock jitter \cite{zanetti2010physical, jana2010fast}, distortions due to the modulator sub-circuit \cite{brik2008wireless}, etc. The classification procedure of WPLI uses RFF features to estimate user identities. However, due to channel effects, in-band hardware noise, and resolution errors of the extraction algorithms, the features extracted by the receiver are random variables with certain distributions, which introduces uncertainty into the classification results. As more user classes are added to a WPLI system, the feature distributions of different identities are more likely to overlap. Hence the uncertainty between feature and identity increases, resulting in more classification errors. Current works in the WPLI research area mainly focus on demonstrating system feasibility through the classification error performance of a network with a fixed number of users, measured with high-quality receiving equipment. For instance, in \cite{danev2009transient}, 50 COTS Tmote Sky nodes and an oscilloscope are utilized to achieve a high sampling rate (4GS/s). In \cite{brik2008wireless}, 138 Network Interface Cards (NICs) are measured and a vector signal analyzer is used as the receiver. In \cite{scanlon2010feature}, 54 Universal Mobile Telecommunications System (UMTS) user equipment (UE) devices and a signal spectrum analyzer are utilized. To the best of our knowledge, no existing work has analyzed the number of users that WPLI can accommodate with different RFFs and receiving equipment under a given performance requirement, i.e., the user capacity of WPLI.
Moreover, existing research analyses are mainly conducted in disparate experimental scenarios, so their results are not reproducible across different individual experimental setups. To this end, in this paper, information-theoretic analyses are utilized to provide a theoretical tool that can be universally applied to various types of WPLI. This theoretical tool offers a fundamental approach to describing the uncertainty between feature and user class identity in WPLI. Specifically, entropy can be used as a measure of the uncertainty in the values taken by a feature member, while mutual information can be seen as the reduction in the uncertainty of one feature member due to knowledge of the user class identity \cite{scanlon2010feature,brown2009information}. Moreover, the classification error performance is restricted by the uncertainty that remains in the feature member after the reduction by the mutual information \cite{cover2012elements}. Based on these uncertainty relations, the key factors in WPLI, including feature, class identity, classification error performance, and user capacity, can be jointly analyzed. However, to date, no existing work in the WPLI research field has fully covered this research direction. In this paper, we establish a theoretical understanding of the user capacity of WPLI from an information-theoretic perspective. Based on the mutual information of the RFF, an information-theoretic approach is established to characterize the user capacity of WPLI. The RFF feature of WPLI is modeled according to the signal processing procedures of WPLI. The mutual information between an RFF feature member and the user identity is calculated. The ensemble mutual information (EMI) between the RFF and the identity is then obtained using an approximate calculation. Finally, the user capacity of WPLI is derived using the EMI and the class identity entropy. To illustrate the usage of this theoretical tool, we use an experiment-based approach to calculate the mutual information and then derive the achievable user capacity under the practical constraints of different application cases. Experiments on the classification error performance of a practical system are also conducted to validate the user capacity characterization for each application case setting. \section{Information-theoretic Analyses of User Capacity} \label{Modeling} In this section, we first provide the information-theoretic modeling of the RFF feature according to the processing procedures. The mutual information between a fingerprinting feature member and the user identity is then modeled, followed by the model for calculating the ensemble mutual information between the RFF and the identity. Finally, the user capacity is derived using the ensemble mutual information and entropy. \subsection{Modeling of RFF feature} The RFF classification procedure begins with the Analog-to-Digital Converter (ADC) sampling of the received signal. This signal can be either a baseband or a passband signal; accordingly, different signal types are utilized to extract the fingerprints, namely the baseband preamble \cite{zanetti2010physical,suski2008using} or the passband transient signal \cite{danev2009transient, barbeau2006detection}, respectively. The ADC sampling procedure can be modeled as \begin{gather} \label{eq: ADC} \textstyle s[n]=s(nT_s)+\eta+\xi_{ADC}, \end{gather} where $s(t)$ is the received analog signal and $s[n]$, $n\in\mathbf{N}$, is the sampled digital signal, with $\mathbf{N}$ the index set of all digital samples in this round of identification.
Further, $\eta$ is the in-band AWGN, which can be measured by the receiver, and $\xi_{ADC}$ is the random ADC quantization error; for $Q$-bit ADC quantization and an input dynamic range of $U$ Vp-p, the maximum quantization error is $ \delta_{ADC}=2^{-Q}U$. After the ADC sampling procedure, the signal sequence goes through the signal acquisition procedure, which extracts the valid part of the signal, i.e., the preamble or transient part. The next procedure is feature extraction, which obtains the fingerprinting feature from the signal and can be modeled as \begin{gather} \label{eq: feature} \textstyle \mathbf{X}_{1:M}=Feature(\mathbf{S}_{1:N}), \end{gather} where $\mathbf{S}$ is the $N$-point raw signal data vector set and $\mathbf{X}$ is the fingerprint feature set with feature dimensionality $M$. The feature dimensionality depends on the feature selection approach. For instance, the spectral feature in \cite{danev2009transient} is a high-dimensional feature for which $M$ is the number of FFT points. In \cite{zanetti2010physical}, two single-dimensional features, the TIE error and the average signal power, are utilized as a combined feature, while in \cite{brik2008wireless} a combined low-dimensional feature consisting of the frequency error, I/Q offset, magnitude error, and phase error is used as the fingerprinting feature. Since multi-dimensional features are widely applied in feature selection, dimensionality reduction techniques are applied to reduce the computational burden and to find more discriminant subspaces which highlight relevant features that may be hidden in noise \cite{danev2012physical}. Typical dimensionality reduction techniques from machine learning are currently applied to the WPLI classification scenario, including PCA \cite{danev2009physical}, Fisher LDA \cite{danev2009transient}, and Maximum Mutual Information (MMI) \cite{scanlon2010feature}. \subsection{Mutual information between RFF and identity} To derive the user capacity $N_C$, we first calculate the mutual information between the RFF feature and its identity. Specifically, the variable $X$ is a single-dimensional component of the feature vector, i.e., $X\in \mathbf{X}$, and $Y$ denotes the user class identity of this RFF feature. The values of $X$ and $Y$ vary for each testing RFF sample received by the WPLI system. Consequently, the number of $X$ values, $N_X$, equals the number of all test samples received by the WPLI system. The entropy of the feature values can be calculated as $ H(X) =-\sum_{i=1}^{N_X}p(x_i)\log(p(x_i)) $. The number of $Y$ values, $N_Y$, is the number of users connected to the WPLI system. The classification procedure of WPLI utilizes large numbers of samples of $X$ with different identities to decide the user identity $Y$. Hence the conditional entropy, which describes the uncertainty remaining in $X$ after the outcome of $Y$ is obtained, can be calculated as $H(X|Y) =-\sum_{j=1}^{N_Y} p(y_j) \sum_{i=1}^{N_X}p(x_i|y_j)\log(p(x_i|y_j))$. The mutual information between $X$ and $Y$ can finally be derived as \begin{align} \label{eq: MI} \textstyle I(X;Y)&=I(Y;X) =H(X)-H(X|Y) \\\textstyle &=\sum_{j=1}^{N_Y} \sum_{i=1}^{N_X} p(x_i, y_j)\log\Big(\frac{p(x_i, y_j)}{p(x_i)p(y_j)}\Big). \notag \end{align} For instance, if the whole signal spectrum is used as the RFF feature $\mathbf{X}$, each frequency point can be seen as one variable $X$ of the spectral feature, and $x_i$ is the magnitude value of that frequency point in the $i$th RFF sample. Hence the mutual information between each frequency point and the identity can be measured and calculated over a large number of tests.
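As a toy illustration, the plug-in estimate of \eqref{eq: MI} for one feature member can be obtained by discretizing the feature values; the following sketch (the bin count is an arbitrary assumption, and this is not the exact procedure of Section \ref{sec: app_case}) returns an estimate in bits.
\begin{verbatim}
import numpy as np

def mutual_information(x, y, n_bins=32):
    """Plug-in estimate of I(X;Y) in bits from paired samples.

    x : 1-D array of feature values (e.g., the magnitude of one
        frequency point across all test samples)
    y : 1-D array of integer class identities, same length as x
    """
    x_disc = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins))
    bins, classes = np.unique(x_disc), np.unique(y)
    # joint probabilities by relative frequency
    p_xy = np.zeros((len(bins), len(classes)))
    for i, b in enumerate(bins):
        for j, c in enumerate(classes):
            p_xy[i, j] = np.mean((x_disc == b) & (y == c))
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask]))
\end{verbatim}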
The specific measurement and calculation procedure will be discussed in detail through an application case in Section \ref{sec: app_case}. To provide a clear understanding of the mutual information between a feature member and the identity, Fig.\ref{fig_PSD_MI} presents the preamble PSDs of two Micaz sensor nodes and the corresponding mutual information between each frequency point and the signal identity. The difference between the spectra of different devices is the reason that spectral features can be utilized to classify identities. From the figures, we can see that the frequency points with larger differences in the spectrum also show larger mutual information with the identity. As discussed in \cite{scanlon2010feature} and \cite{brown2009information}, mutual information is a significant metric for characterizing the relevance of a feature to its identity. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//two_node_PSD.pdf} \label{fig_PSD}} \subfigure[]{ \includegraphics [width=1.6in] {fig//two_node_MI.pdf} \label{fig_MI}} \caption{ (a) Preamble PSDs of Micaz sensor nodes. (b) Mutual information between spectral feature member and identity.}\vspace{-15pt} \label{fig_PSD_MI} \end{figure} If only a single-dimensional feature is utilized to form an RFF, the mutual information between the RFF and the identity can already be calculated using equation (\ref{eq: MI}). However, as discussed above, single-dimensional features are often combined into a multi-dimensional feature $\mathbf{X}$ to form an RFF. Hence the Ensemble Mutual Information (EMI) between the ensemble feature and the class identity, $I(\mathbf{X};Y)$, needs to be calculated to characterize the relation between the ensemble RFF and the identity. In \cite{brown2009information, zhou2010multi}, a definition of the ensemble mutual information is given. However, as the dimensionality of the ensemble feature increases, the exponentially growing number of possible variable values makes this approach infeasible to compute in practice.
In \cite{ozertem2005detection,ozertem2006spectral}, the EMI between a high-dimensional feature and the class identities is given as \begin{align} \label{eq: EMI} \textstyle I(\mathbf{X};\it Y) &=\sum_Y \int p(\mathbf{x}, y) \log \frac{p(\mathbf{x}, y)}{p(\mathbf{x}) p(y)} d\mathbf{x} \\ \textstyle &=\sum_Y p(y) \mathbb{E}_{\mathbf{x}| y} \left [ \log \frac{p(\mathbf{x}| y)}{p(\mathbf{x})} \right ], \notag \end{align} where the pdfs $p(\mathbf{x}| y)$, $p(\mathbf{x})$ and the conditional expectation $\mathbb{E}_{\mathbf{x}| y} $ can be approximately calculated using a nonparametric Kernel Density Estimator (KDE) with kernel $K(\cdot)$ \cite{fasshauerkernel}, \begin{align} \label{eq: EMI_app} \textstyle I(\bf{X};\it Y) & \approx\sum_Y \frac{p(y)}{N_Y} \sum_{j=1}^{N_Y} \log \frac{(1/N_Y)\sum_{i=1}^{N_Y} K(\mathbf{x}_j^y-\mathbf{x}_i^y)}{(1/N_\mathbf{x})\sum_{i=1}^{N_\mathbf{x}} K(\mathbf{x}_j^y-\mathbf{x}_i)} \\ \textstyle &\approx \sum_Y \frac{p(y)}{N_Y} \sum_{j=1}^{N_Y} \log \left [ \frac{\bar{\varphi}^T(\mathbf{x}_j)\bar{\Lambda} \mathbf{\bar{\mu}}_y }{\bar{\varphi}^T(\mathbf{x}_j)\bar{\Lambda} \mathbf{\bar{\mu}}} \right ] \notag \end{align} where the kernel $K(\cdot)$ can be calculated with the eigenvectors $\mathbf{\varphi(x)}$ and the eigenmatrix $\mathbf{\bar{\Phi}_x}=[\mathbf{\varphi(x)}_1, ...,\mathbf{\varphi(x)}_N ]$, $ \mathbf{\bar{\mu}}_y=(1/N_Y)\mathbf{\bar{\Phi}_x}\mathbf{m}_y$ is the average eigenvector for class $y$, $ \mathbf{\bar{\mu}}=(1/N_\mathbf{x})\mathbf{\bar{\Phi}_x}\mathbf{1}$ is the average eigenvector for all the training samples, and $N_\mathbf{x}$ is the number of all RFF feature samples (in \eqref{eq: EMI_app}, $N_Y$ denotes the number of training samples of class $y$). \subsection{User capacity of WPLI} The WPLI system finally assigns the identity of a testing RFF to the class with the minimal feature distance score to the reference RFFs. After a large number of sample tests, the WPLI classification performance can be evaluated using the average classification error rate, denoted $P_e$, as the metric \cite{danev2009physical, zanetti2010physical}. With the EMI $I(\mathbf{X};Y)$ obtained, an important property of mutual information related to the classification error rate $P_e$ can be utilized to derive the user capacity of WPLI. In \cite{brown2009information,cover2012elements,zhou2010multi}, information-theoretic bounds on the classification error rate are given in detail using Fano's inequality (note that the inequality is valid for scenarios with three or more classes). Hence the classification error rate of WPLI can be bounded as \begin{align} \label{eq: bounds} \textstyle \frac{H(Y)-I(\mathbf{X};Y)-H(P_e)}{\log(N_Y-1)} \leq P_e \leq \frac{1}{2} \left (H(Y)-I(\mathbf{X};Y) \right). \end{align} These bounds state that no classifier can achieve an error rate below the lower bound, while there exists a classifier that achieves at most the upper bound. The bounds are determined by two terms: the ensemble mutual information between feature and identity, $I(\mathbf{X};Y)$, and the class identity entropy $H(Y)$. For a specific WPLI scenario, a stable mutual information between feature and identity can be measured with a large number of test samples \cite{scanlon2010feature}, and the EMI can be calculated using equation (\ref{eq: EMI}). Hence the error rate bounds are governed by the class identity entropy, which is directly related to the user number $N_Y$ of WPLI. Considering an equal-identity-probability WPLI system, i.e., $H(Y)=\log(N_Y)$, the upper-bound user capacity can then be derived as \begin{gather} \label{eq: up_UC} \textstyle N_C= \max(\mathbf{N}_Y) | \frac{\log(N_Y)-I(\mathbf{X};Y)-H(\lambda)}{\log(N_Y-1)}\leq\lambda, \end{gather} where $N_C$ is the user capacity, $\mathbf{N}_Y$ is the set of all possible user numbers, $Y$ is the user identity, and $\lambda$ is the performance threshold on the classification error rate $P_e$. This completes the theoretical tool for deriving the user capacity of WPLI.
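For illustration, \eqref{eq: up_UC} can be evaluated by a simple scan over candidate user numbers; the sketch below assumes base-2 logarithms and interprets $H(\lambda)$ as the binary entropy function.
\begin{verbatim}
import numpy as np

def binary_entropy(p):
    """H(p) in bits for p in [0,1], with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def user_capacity(emi_bits, lam, n_max=10**6):
    """Largest N_Y whose Fano lower bound on P_e stays below lam, i.e.
    (log2(N_Y) - EMI - H(lam)) / log2(N_Y - 1) <= lam, assuming
    equiprobable identities H(Y) = log2(N_Y)."""
    n_c = 0
    for n_y in range(3, n_max + 1):   # Fano bound needs >= 3 classes
        lower = (np.log2(n_y) - emi_bits - binary_entropy(lam)) \
                / np.log2(n_y - 1)
        if lower <= lam:
            n_c = n_y
        else:
            break  # the bound eventually grows with n_y (assumption)
    return n_c
\end{verbatim}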
\section{Application Case Study on User Capacity under Practical Constraints} \label{sec: app_case} To apply this theoretical tool, we conduct an application case study to illustrate its usage for a specific type of WPLI. We use an experiment-based approach to calculate the mutual information between feature and identity. The achievable user capacity of this type of WPLI is then derived under the practical constraints of different application case settings. Moreover, the effects of key system parameters on the user capacity are evaluated and analyzed. \subsection{Case overview} Most existing works aim to present the best performance attainable with high-quality receiving equipment. In contrast, we derive the user capacity and evaluate system feasibility using existing approaches under the practical constraints of off-the-shelf devices. The details of this application case are as follows. \begin{enumerate}[\IEEEsetlabelwidth{8)}] \item Feature selection: FFT spectrum of the baseband preamble \cite{danev2009physical,zanetti2010physical,scanlon2010feature,suski2008using}. \item Transmitter: Micaz, Imote2, and TelosB sensor nodes (3 typical models with the same ZigBee protocol radio chip). \item Receiver: USRP N210 with SBX daughter board (14-bit ADC). \item Sampling rates: 2$\sim$10 MS/s. \item Number of FFT points: 64$\sim$2048p. \item Communication channel: indoor AWGN channel (SNR=20$\sim$30dB). \item Number of transmitters (user class identities): 40 in all. \item Number of signal samples: 2000 samples per class. \end{enumerate} \subsection{User capacity characterization} After a one-time collection of the raw signals from the sensor nodes, the training samples of the RFF can be obtained. We calculate the ensemble mutual information between each RFF and its class identity using the nonparametric Kernel Density Estimation approach of equation (\ref{eq: EMI_app}). With the EMI obtained, the next step is to characterize the user capacity using equation (\ref{eq: up_UC}). Subsequently, we derive the user capacity under different constraints on the key parameters of the user capacity modeling and RFF formation, including the number of training transmitters $N_Y$, the in-band AWGN noise level, the ADC quantization bits $Q$, the number of FFT points $N_{FFT}$, and the receiver sampling rate $f_s$. We set the parameters according to different typical application scenarios of WPLI and derive the user capacity under the targeted performance, as shown in the following figures. In each case, we present the ensemble mutual information (EMI) (blue curve) together with the user capacity under 1\% and 10\% classification error rates, i.e., $N_C|P_e\leq1\%$ (black curve) and $N_C|P_e\leq10\%$ (red curve).
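As a simplified illustration of this step, the KDE plug-in estimate \eqref{eq: EMI_app} can be sketched with a plain Gaussian kernel, rather than the eigenvector-based form given above; the bandwidth \texttt{h} is an assumption to be tuned (e.g., by cross-validation).
\begin{verbatim}
import numpy as np

def gaussian_kernel(diff, h=1.0):
    """Isotropic Gaussian kernel on feature-space differences."""
    return np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))

def ensemble_mutual_information(X, y, h=1.0):
    """KDE plug-in estimate of I(X;Y) (in nats) following eq. (5).

    X : (n_samples, M) array of RFF feature vectors
    y : (n_samples,) array of integer class identities
    """
    n = len(y)
    emi = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        n_c = len(Xc)
        p_c = n_c / n
        for xj in Xc:
            p_x_given_c = np.mean(gaussian_kernel(xj - Xc, h))  # p(x|y)
            p_x = np.mean(gaussian_kernel(xj - X, h))           # p(x)
            emi += (p_c / n_c) * np.log(p_x_given_c / p_x)
    return emi
\end{verbatim}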
\subsubsection{Effects of number of training transmitters} Since we characterize the user capacity from a limited number of transmitters and a limited number of RFF samples, we first characterize the user capacity using RFF samples from 3 to 40 training transmitters, in order to determine the least number of class identities needed to characterize a stable user capacity for this system. We collect the raw RFF samples in a typical system setting with $f_s=4$MS/s, $N_{FFT}=512$p, and SNR$=24$dB. In Fig. \ref{fig_num_nodes}, the user capacity is presented for 3 to 25 training transmitters together with the EMI obtained from these samples. The EMI is directly related to the classification error rate and the user capacity, a relation which can easily be observed in the figure. When the number of training transmitters is too small, the obtained EMI is also small, which results in a user capacity close to $N_Y$. As $N_Y$ increases, the results grow unstably. Once the number of training transmitters exceeds 18, the characterization becomes stable and reliable, yielding $13|P_e\leq1\%$ and $22|P_e\leq10\%$. Hence, in order to characterize the user capacity, we should use at least 13 nodes to collect the raw training RFF samples. \begin{figure}[!t] \centering \includegraphics [width=2.0in] {fig//Num_class_MI_UC_RKHS_v4-eps-converted-to.pdf} \caption{ User capacity under different numbers of training transmitters.}\vspace{-15pt} \label{fig_num_nodes} \end{figure} \subsubsection{Effects of RFF noise level} We first present the user capacity under various noise effects in Fig.\ref{fig_min_d}. Here, we set the other parameters as $f_s=4$MS/s and $N_{FFT}=512$p. As modeled in equation (\ref{eq: ADC}), the noise level of the RFF feature is mainly contributed by the ADC quantization error and the in-band AWGN. The noise level significantly affects the classification performance of WPLI, resulting in a decrease of the obtained user capacity. We fix the ADC quantization bits to $Q=14$bits and simulate the noisy feature values within SNR$=0\sim 28$dB. The user capacity results are presented in Fig.\ref{fig_SNR}. The EMI and the user capacity decrease together as the AWGN SNR level decreases. It should be noted that, according to equation (\ref{eq: up_UC}), the user capacity we obtain is the upper bound over all classification methods and classifiers using these RFF samples. Hence, in high-SNR situations, the typical classification procedure of WPLI can achieve the user capacity quite accurately. However, in extremely low-SNR scenarios, it is hard for a single method or feature to achieve the upper bound of the user capacity. In that case, more combined features extracted from the RFF, together with multiple classifiers, should be utilized to achieve the upper bound of the user capacity, as in the works \cite{brik2008wireless,zanetti2010physical}. Moreover, in low-SNR scenarios, errors in the signal acquisition procedure of WPLI can also worsen the classification performance \cite{suski2008using}; this is out of the scope of this paper and can be addressed in future work. Then we fix SNR$=29$dB and simulate the feature values for ADC quantization errors within $Q=6\sim14$bits. The corresponding user capacity results are presented in Fig.\ref{fig_ADC}. The user capacity becomes stable when the ADC quantization is increased to 10 bits.
In practical applications, the effect of the AWGN noise level is usually more significant than that of the ADC quantization error, although the latter can become significant when the AWGN SNR is very high. Here we can only simulate the feature values for 14-bit quantization due to the constraints of the USRP daughter board, while in practice 16 or more quantization bits can be found in higher-grade equipment. With the development of device resolution, these effects can be kept minimal. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//SNR_MI_UC_RKHS_v1-eps-converted-to.pdf} \label{fig_SNR}} \subfigure[]{ \includegraphics [width=1.6in] {fig//ADC_MI_UC_RKHS_v1-eps-converted-to.pdf} \label{fig_ADC}} \caption{ (a) User capacity under different RFF noise levels. (b) User capacity under different ADC quantization bits.}\vspace{-10pt} \label{fig_min_d} \end{figure} \subsubsection{Effects of number of FFT points} Here we present the user capacity for various numbers of FFT points under the same sampling rate setting. The number of FFT points $N_{FFT}$ is the key parameter in forming the spectral RFF: it determines the resolution of the spectral feature $\mathbf{X}$ in equation (\ref{eq: feature}) and consequently affects the distribution of the RFF feature. The specific modeling of the number of FFT points for the spectral feature can be found in \cite{danev2009transient}. In Fig.\ref{fig_NFFT_PSD}, we present the preamble spectrum obtained with two different numbers of FFT points to illustrate the difference in resolution. Here, we set the other parameters as $f_s=8$MS/s, SNR$=29$dB, and $Q=14$bits. We simulate the results for numbers of FFT points $N_{FFT}=64\sim1024$p. The user capacity results are presented in Fig.\ref{fig_NFFT}. From the results, we can see that a larger number of FFT points increases the EMI of the feature and improves the performance and user capacity. However, once the resolution is sufficiently fine, the performance improvement is no longer significant. Since increasing the number of FFT points imposes a greater computational burden on WPLI, this involves a trade-off for the system designer. \begin{figure}[!t] \centering \includegraphics [width=2.5in] {fig//NFFT_PSD.pdf} \caption{Preamble spectrum of the signal under different numbers of FFT points.}\vspace{-10pt} \label{fig_NFFT_PSD} \end{figure} \begin{figure}[!t] \centering \includegraphics [width=2.0in] {fig//NFFT_MI_UC_RKHS_v1-eps-converted-to.pdf} \caption{ User capacity under different numbers of FFT points.}\vspace{-15pt} \label{fig_NFFT} \end{figure} \begin{figure}[!t] \centering \includegraphics [width=2.5in] {fig//Fs_PSD.pdf} \caption{Preamble spectrum of the signal under different sampling rates.}\vspace{-10pt} \label{fig_Fs_PSD} \end{figure} \begin{figure}[!t] \centering \includegraphics [width=2.0in] {fig//Sampling_MI_UC_RKHS_v1-eps-converted-to.pdf} \caption{ User capacity under different sampling rates.}\vspace{-15pt} \label{fig_fs} \end{figure} \subsubsection{Effects of sampling rate} Here we present the user capacity for various receiver sampling rates with the same frequency resolution; the sampling rate is the key parameter determining the bandwidth of the spectral feature $\mathbf{X}$ in equation (\ref{eq: feature}) and consequently affects its distribution. In Fig.\ref{fig_Fs_PSD}, we present the preamble spectrum obtained with two different sampling rates to illustrate the difference in spectrum bandwidth. In the case $f_s=2$MS/s, the spectrum bandwidth covers the main lobe of the signal PSD.
Higher sampling rates can cover more side-lobe information of the signal PSD, which is beneficial for WPLI performance. However, the choice of sampling rate also involves a trade-off: as the bandwidth increases, the noise bandwidth increases as well, which can decrease the signal SNR and thus worsen the WPLI performance. This phenomenon can be observed in the user capacity characterization and also in the experimental validations. Here, we set the other parameters as SNR$=29$dB (for $f_s=8$MS/s) and $Q=14$bits. We simulate the results for different sampling rates, $f_s=2\sim10$MS/s, with the same spectrum resolution ($N_{FFT}=512$p for $f_s=4$MS/s); the user capacity results are presented in Fig.\ref{fig_fs}. It can be inferred that when low-noise devices are used in high-SNR scenarios, a higher sampling rate can be applied for the WPLI system, whereas for low-SNR scenarios with low-quality devices, a sampling rate that tightly covers the main lobe of the spectrum should be chosen. \section{Experimental Validations for User Capacity} In this section, we conduct field experiments on the classification error performance according to the different case settings, to validate the user capacity characterization of Section \ref{sec: app_case}. We utilize the same equipment as in Section \ref{sec: app_case}. We use 1000 newly collected samples per class to train the LDA projection matrix and another 1000 samples per class for classification testing. For the classification procedure of WPLI, we select Fisher LDA \cite{danev2009transient} as the feature dimensionality reduction technique and the Mahalanobis distance as the distance metric \cite{danev2009physical,zanetti2010physical,danev2009transient}. We set a large LDA subspace dimensionality $\kappa=150$, despite the computation time, in order to achieve the optimal classification performance. We present the classification error performance of WPLI for selected user numbers near the upper-bound user capacity we obtained. If the classification error rate exceeds the threshold error rate, i.e., $P_e|N_Y > \lambda$, whenever the user number exceeds the user capacity, i.e., $N_Y > N_C|\lambda$, then the user capacity characterization is validated. Meanwhile, we also present the classification error performance when the user number is close to the user capacity bound, i.e., $N_Y \leq N_C$, to show the tightness of this bound. The classification results are shown in the following figures, where the x-axis is the number of test samples, the y-axis is the minimal distance score between a test sample and its reference, and the z-axis is the identity number assigned to the test samples. In addition, the color of each sample indicates its true identity, which helps the reader compare the classified identities of the test samples. \subsection{Effects of RFF noise level} Here, we use the experimental results to validate the user capacity characterization for the RFF noise level case. Because the ADC quantization bits cannot be changed for a given hardware setting, we fix them to $Q=14$bits according to the USRP daughter board setting. Following the case setting of Fig.\ref{fig_SNR}, we conduct the experiments at SNR=26dB, where the 1\% error rate user capacity is 13, i.e., $N_C=13|P_e\leq1\%$. In Fig.\ref{fig_class_4M_13c_Pe0_91}, the classification results of 13 classes are shown, with classification error rate $P_e=0.91\%|N_Y=13$.
In Fig.\ref{fig_class_4M_14c_Pe1_09}, the classification results of 14 classes are shown, with classification error rate $P_e=1.09\%|N_Y=14$. Hence the user capacity characterization at this point is validated accurately. Similarly, we change the SNR and the threshold error rate to validate another point of our user capacity curves. In Fig.\ref{fig_SNR}, when SNR=22dB, the user capacity is 12 with a 1\% error rate, i.e., $N_C=12|P_e\leq1\%$. In Fig.\ref{fig_class_4M_11c_Pe0_75}, the classification results of 11 classes are shown, with $P_e=0.75\%|N_Y=11$, while in Fig.\ref{fig_class_4M_12c_Pe1_79}, the classification results of 12 classes are shown, with $P_e=1.79\%|N_Y=12$. The user capacity characterization in this case is slightly larger than the experimental result. As discussed in the previous section, as the SNR level decreases, a single classifier with a single feature selection is not enough to achieve the upper-bound user capacity. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_4M_per_13_LDA150_Pe0_91-eps-converted-to.pdf} \label{fig_class_4M_13c_Pe0_91}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_4M_per_14_LDA150_Pe1_09-eps-converted-to.pdf} \label{fig_class_4M_14c_Pe1_09}} \caption{SNR=26dB, $f_s=4$MS/s, $N_{FFT}=512$p. (a) Classification results for 13 classes. (b) Classification results for 14 classes.}\vspace{-10pt} \label{fig_class_SNR} \end{figure} \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_4M_per_11_LDA150_Pe0_75-eps-converted-to.pdf} \label{fig_class_4M_11c_Pe0_75}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_4M_per_12_LDA150_Pe1_79-eps-converted-to.pdf} \label{fig_class_4M_12c_Pe1_79}} \caption{SNR=22dB, $f_s=4$MS/s, $N_{FFT}=512$p. (a) Classification results for 11 classes. (b) Classification results for 12 classes.}\vspace{-15pt} \label{fig_class_SNR_2} \end{figure} \subsection{Effects of number of FFT points} We use the experimental results to validate the user capacity characterization for the number of FFT points. We conduct the experiments following the case setting of Fig.\ref{fig_NFFT}. The user capacity characterization under $N_{FFT}=64$p is $N_C=2|P_e\leq10\%$; in Fig.\ref{fig_class_8M_3c_64}, the classification results of 3 classes are shown, with $P_e=15.67\%|N_Y=3$, which is beyond the user capacity. For $N_{FFT}=256$p, $N_C=13|P_e\leq1\%$; in Fig.\ref{fig_class_8M_14c_128}, the classification results of 14 classes are shown, with $P_e=1.25\%|N_Y=14$, which is still beyond the user capacity. For $N_{FFT}=1024$p, $N_C=15|P_e\leq1\%$; in Fig.\ref{fig_class_8M_15c_1024}, the classification results of 15 classes are shown, with $P_e=0.47\%|N_Y=15$, which is within the user capacity. Hence the experimental results match the discussion in the previous section very well. \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_64p_per_3_LDA64_Pe15_67-eps-converted-to.pdf} \label{fig_class_8M_3c_64}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_256p_per_14_LDA150_Pe1_25-eps-converted-to.pdf} \label{fig_class_8M_14c_128}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_1024p_per_15_LDA150_Pe0_47-eps-converted-to.pdf} \label{fig_class_8M_15c_1024}} \caption{SNR=29dB, $f_s=8$MS/s. (a) Classification results for 3 classes, $N_{FFT}=64$. (b) Classification results for 14 classes, $N_{FFT}=256$.
(c) Classification results for 15 classes, $N_{FFT}=1024$. }\vspace{-15pt} \label{fig_class_NFFT} \end{figure} \begin{figure}[!t] \centering \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_2M_per_17_LDA150_Pe1_17-eps-converted-to.pdf} \label{fig_class_2M_17c}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_6M_per_17_LDA150_Pe0_58-eps-converted-to.pdf} \label{fig_class_6M_17c}} \subfigure[]{ \includegraphics [width=1.6in] {fig//Classification_10M_per_12_LDA150_Pe1_51-eps-converted-to.pdf} \label{fig_class_10M_12c}} \caption{SNR=29dB (8MS/s). (a) Classification results for 17 classes, $f_s=2$MS/s. (b) Classification results for 17 classes, $f_s=6$MS/s. (c) Classification results for 12 classes, $f_s=10$MS/s. }\vspace{-15pt} \label{fig_class_fs} \end{figure} \subsection{Effects of sampling rate} We use the experimental results to validate the user capacity characterization for the sampling rate case. We conduct the experiments following the case setting of Fig.\ref{fig_fs}. The user capacity characterization under $f_s=2$MS/s is $N_C=16|P_e\leq1\%$; in Fig.\ref{fig_class_2M_17c}, the classification results of 17 classes are shown, with $P_e=1.17\%|N_Y=17$, which is beyond the user capacity. For $f_s=6$MS/s, $N_C=17|P_e\leq1\%$; in Fig.\ref{fig_class_6M_17c}, the classification results of 17 classes are shown, with $P_e=0.58\%|N_Y=17$, which is within the capacity. For $f_s=10$MS/s, $N_C=11|P_e\leq1\%$; in Fig.\ref{fig_class_10M_12c}, the classification results of 12 classes under $f_s=10$MS/s are shown, with $P_e=1.51\%|N_Y=12$, which is beyond the capacity. The analyses in our previous discussion are thus also validated. \vspace{-5pt} \section{Conclusion} \label{sec: C} In this work, we establish, for the first time, a theoretical understanding of the user capacity of Wireless Physical-layer Identification (WPLI) from an information-theoretic perspective. Specifically, the Radio Frequency Fingerprint (RFF) features of WPLI are analyzed from an information-theoretic perspective, covering feature selection, extraction, noise level, resolution, and bandwidth, which advances the understanding of RFF features. We then propose an information-theoretic approach, based on the mutual information between the RFF and the user identity, to characterize the user capacity of WPLI. Using this theoretical tool, the achievable user capacity of a WPLI system under practical constraints is characterized with data collected by off-the-shelf receiving devices. Various experiments on the classification error performance of a practical system are conducted to validate the accuracy and tightness of the information-theoretic user capacity characterization. \vspace{-5pt} \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this paper, we address the problem of \emph{distributed learning under communication constraints}, motivated primarily by distributed signal processing in wireless sensor networks (WSNs) and data mining with distributed databases. WSNs are \textit{a fortiori} designed to make inferences from the environments they are sensing; however they are typically characterized by constraints on energy and bandwidth, which limit the sensors' ability to share data with each other or with a centralized fusion center. In data mining with distributed databases, multiple agents (e.g., corporations) have access to possibly overlapping databases, and wish to collaborate to make optimal inferences; privacy or security concerns, however, may preclude them from fully sharing information. Nonparametric methods studied within machine learning have demonstrated widespread empirical success in many centralized (i.e., communication \emph{unconstrained}) signal processing applications. Thus, in both the aforementioned applications, a natural question arises: can the power of machine learning methods be tapped for nonparametric inference in distributed learning under communication constraints? In this paper, we address this question by formalizing a general model for distributed learning, and then deriving a distributed algorithm for collaborative training in regularized kernel least-squares regression. The algorithm can be viewed as an instantiation of successive orthogonal projection algorithms, and thus, insight into the statistical behavior of these algorithms can be gleaned from standard analyses in mathematical programming. \subsection{Related Work} Distributed learning has been addressed in a variety of other works. Reference \cite{KeaSeu95} considered a PAC-like model for learning with many individually trained hypotheses in a distribution-specific learning framework. Reference \cite{NguWaiJor04} considered the classical model for decentralized detection \cite{Var96} in a nonparametric setting. Reference \cite{PreKulPoo05a} studied the existence of consistent estimators in several models for distributed learning. From a data mining perspective, \cite{GamKegAim05} and \cite{LazObr01} derived algorithms for distributed boosting. Most similar to the research presented here, \cite{GueBodThiPasMad04} presented a general framework for distributed linear regression motivated by WSNs. Ongoing research in the machine learning community seeks to design statistically sound learning algorithms that scale to large data sets (e.g., \cite{BorErtWesBot05} and references therein). One approach is to decompose the database into smaller ``chunks", and subsequently parallelize the learning process by assigning distinct processors/agents to each of the chunks. In principle, algorithms for parallelizing learning may be useful for distributed learning, and vice-versa. To our knowledge, there has not been an attempt to parallelize reproducing kernel methods using the approach outlined below. A related area of research lies in the study of ensemble methods in machine learning; examples of these techniques include bagging, boosting, and mixtures of experts (e.g., \cite{FreSch97b} and others). Typically, the focus of these works is on the statistical and algorithmic advantages of learning with an ensemble and not on the problem of learning under communication constraints. 
To our knowledge, the methods derived here have not previously appeared in this related context, though future work in distributed learning may benefit from the many insights gleaned from this important area. Those familiar with the online learning framework may find our collaborative training algorithm reminiscent of the equations for additive gradient updates \cite{KivWar97}. Though both algorithms may be interpreted in the context of successive orthogonal projection algorithms, it does not appear possible to specialize the current model for distributed learning in a way that recovers the online learning framework (or vice versa). The research presented here generalizes the model and algorithm discussed in \cite{PreKulPoo05c}, which focused exclusively on the WSN application. Distinctions between the current and former work are discussed in more detail below. \subsection{Organization} The remainder of this paper is organized as follows. In Section II, we review preliminary background information necessary for the remainder of the work. In Section III, we describe a general model for distributed learning and propose a distributed algorithm for collaboratively training regularized kernel least-squares regression estimators. Subsequently, we analyze the algorithm's convergence properties and use these properties to gain insight into the statistical behavior of the estimator in a simplified setting. We conclude with a discussion of the method in Section IV. \section{Preliminaries} In this section, we briefly review the supervised learning model for nonparametric least-squares regression, reproducing kernel methods, and alternating projection algorithms. Since a thorough introduction to these models and methods is beyond the scope of this paper, we refer the reader to standard references on the topics; see, for example, \cite{CenZen97}, \cite{GyoKohKrzWal02}, \cite{SchSmo02} and references therein. \subsection{Nonparametric Least-squares Regression} Let $X$ and $Y$ be ${\cal{X}}$ and ${\cal{Y}}$-valued random variables, respectively. ${\cal{X}}$ is known as the feature, input, or observation space; ${\cal{Y}}$ is known as the label, output, or target space. For now, we allow ${\mathcal{X}}$ to be arbitrary, but take ${\cal{Y}}= {\rm I \kern-0.20em R}$. In the least-squares estimation problem, we seek a decision rule mapping inputs to outputs that minimizes the expected squared error. In particular, we seek a function $g:{\cal{X}}\rightarrow {\cal{Y}}$ that minimizes \begin{equation}\nonumber {\mathbf{E}}\{|g(X) - Y|^2\}. \end{equation} It is well-known that $\eta(x) = {\mathbf{E}}\{Y\,|X=x\}$ is the loss-minimizing rule. However, without prior knowledge of the joint distribution of $(X,Y)$, this regression function cannot be computed. In the supervised learning model, one is instead provided a database $S=\{(x_i, y_i)\}_{i=1}^n$ of training examples with $(x_i, y_i)\in{\cal{X}}\times{\cal{Y}}$ $\forall i\in\{1,\ldots,n\}$; the learning task is to use $S$ to estimate $\eta(x)$. \subsection{Regularized Kernel Methods} Regularized kernel methods \cite{SchSmo02} offer one approach to nonparametric regression. In particular, let ${\cal{H}}_K$ denote the \emph{reproducing kernel Hilbert space} (RKHS) induced by a \emph{positive semi-definite kernel} $K(\cdot, \cdot):{\cal{X}}\times{\cal{X}}\rightarrow{\rm I \kern-0.20em R}$; let $\|\cdot\|_{{\cal{H}}_K}$ denote the norm associated with ${\cal{H}}_K$.
In practice, the kernel $K$ is a design parameter, chosen as a similarity measure between inputs to reflect prior application-specific domain knowledge. The regularized kernel least-squares estimate is defined as the solution $f_{\lambda}\in{\cal{H}}_K$ of the following optimization problem: \begin{eqnarray} \label{kernel}\min_{f\in{\cal{H}}_K}\Big{[} \sum_{i=1}^n (f(x_i) - y_i)^2 + \lambda \| f \|_{{\cal{H}}_K}^2\Big{]}. \end{eqnarray} The statistical behavior of this estimator is well-understood under various assumptions on the stochastic process that generates the examples $\{(x_i, y_i)\}_{i=1}^n$ \cite{SchSmo02, Wah90}. In this paper, we focus primarily on algorithmic aspects of computing a solution to (\ref{kernel}) (or an approximation thereof) in distributed environments. To this end, consider the following ``Representer Theorem" proved originally in \cite{KimWah71}. \begin{thm}[\cite{KimWah71}] Let $f_{\lambda} \in{\cal{H}}_K$ be the minimizer of (\ref{kernel}). Then, there exists ${\mathbf{c}}_{\lambda}\in{\rm I \kern-0.20em R}^n$ such that \begin{equation} \nonumber f_{\lambda}(\cdot) = \sum_{i=1}^n c_{\lambda,i} K(\cdot, x_i). \end{equation} \end{thm} From a computational perspective, the result is significant because it states that while the objective function (\ref{kernel}) is defined over a potentially infinite dimensional Hilbert space, its minimizer must lie in a finite dimensional subspace. Finally, note that (\ref{kernel}) can be naturally interpreted as an orthogonal projection. In particular, by introducing an auxiliary vector ${\mathbf{z}}\in{\rm I \kern-0.20em R}^n$, (\ref{kernel}) can be rewritten as the following optimization program: \begin{eqnarray} \label{rlsqreg-r1}\min & \| {\mathbf{z}} - {\mathbf{y}}\|_2^2 + \lambda \| f \|_{{\cal{H}}_K}^2\\ \label{extraconstraints}{\textrm{s.t.}} & z_i = f(x_i) &\forall i\in\{1,...,n\}\\ \nonumber & {\mathbf{z}}\in{{\rm I \kern-0.20em R}^n}\\ \nonumber & f\in{\cal{H}}_K. \end{eqnarray} Through the constraints in (\ref{extraconstraints}), (\ref{kernel}) and (\ref{rlsqreg-r1}) are equivalent in the following sense: if $f_{\lambda}$ is the minimizer of (\ref{kernel}) and $({\mathbf{z}}^{\prime}, f_{\lambda}^{\prime})$ is the solution of (\ref{rlsqreg-r1}), then $f_{\lambda}^{\prime} = f_{\lambda}$. Therefore, through (\ref{rlsqreg-r1}), we can interpret the regularized kernel least-squares estimator as a projection of the vector $({\mathbf{y}}, 0)\in{\rm I \kern-0.20em R}^n\times{\cal{H}}_K$ onto the set \begin{equation}\nonumber \Big{\{}({\mathbf{z}}, f)\in{\rm I \kern-0.20em R}^n\times{\cal{H}}_K \, : \, z_i = f(x_i) \,\,\forall i\in\{1,...,n\}\Big{\}}\subset{\rm I \kern-0.20em R}^n\times{\cal{H}}_K. \end{equation} This simple observation will recur in the sequel.
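Combining the Representer Theorem with first-order optimality yields the well-known closed form ${\mathbf{c}}_{\lambda} = (K + \lambda I)^{-1}{\mathbf{y}}$, where $K_{ij} = K(x_i, x_j)$ is the Gram matrix. The following minimal sketch implements this closed form; a Gaussian kernel is assumed for concreteness, and the helper names are ours.
\begin{verbatim}
import numpy as np

def gaussian_kernel_matrix(X1, X2, gamma=1.0):
    """Gram matrix K[i,j] = exp(-gamma * ||x1_i - x2_j||^2)."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam, gamma=1.0):
    """Coefficients c of the minimizer f(.) = sum_i c_i K(., x_i)
    of the regularized least-squares objective (1):
        c = (K + lam * I)^{-1} y."""
    K = gaussian_kernel_matrix(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(X_train, c, X_new, gamma=1.0):
    """Evaluate f at new inputs via the Representer Theorem expansion."""
    return gaussian_kernel_matrix(X_new, X_train, gamma) @ c
\end{verbatim}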
\subsection{Alternating Projections Algorithms} Let $\cal{X}$ be a Hilbert space with a norm denoted by $\|\cdot\|$. Let $C_1, \ldots, C_m$ be closed convex subsets of $\cal{X}$ whose intersection $C = \cap_{i=1}^m C_i$ is nonempty. Let $P_{C}(\hat{x})$ denote the orthogonal projection of ${\hat{x}}\in {\cal{X}}$ onto $C$, i.e., \begin{eqnarray} \nonumber P_{C}( \hat{x}) \triangleq \arg \min_{x\in C} \| x - \hat{x}\|. \end{eqnarray} Define $P_{C_i}(\hat{x})$ analogously. Successive orthogonal projection (SOP) algorithms \cite{CenZen97} provide a natural way to compute $P_C(\cdot)$ given $\{P_{C_i}(\cdot)\}_{i=1}^m$. For example, the (unrelaxed) SOP algorithm is defined as follows: \begin{eqnarray}\label{SOP} x_{0}:=\hat{x}, & & x_{n} := P_{C_{(n \bmod m) + 1} }(x_{n-1}). \end{eqnarray} In words, the algorithm successively and iteratively projects onto each of the subsets. In the case where $C_i$ is a linear subspace for all $i\in\{1,\ldots,m\}$, this algorithm was first studied by von Neumann \cite{Von50}. Often examined in the context of the \emph{convex feasibility problem}, SOP has been generalized in various ways \cite{CenZen97} to address more general convex sets and non-orthogonal (e.g., Bregman) projections; accordingly, the algorithm often takes on other names (e.g., the von Neumann-Halperin algorithm, Bregman's algorithm). Much of the behavior of this algorithm can be understood through Theorem 2; the proof of this fundamental result can be found in \cite{BauBor96}. \begin{thm} Let $\{C_i\}_{i=1}^m$ be a set of closed, convex subsets of $\cal{X}$ whose intersection $C = \cap_{i=1}^m C_i$ is nonempty. Let $x_n$ be defined as in (\ref{SOP}). Then, for every $x\in C$ and every $n\geq 1$, \begin{equation} \nonumber\|x_n -x\| \leq \|x_{n-1} - x\|. \end{equation} Moreover, $\lim_{n\rightarrow\infty} x_n \in \cap_{i=1}^m C_i$. If $C_i$ are affine for all $i\in\{1,...,m\}$, then $\lim_{n\rightarrow\infty} \|x_n - P_C(\hat{x})\| = 0$. \end{thm}
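For illustration, the sketch below iterates (\ref{SOP}) over affine sets $C_i = \{x : A_i x = b_i\}$ in ${\rm I \kern-0.20em R}^d$, for which the orthogonal projection admits the closed form $P_{C_i}(x) = x - A_i^T (A_i A_i^T)^{-1}(A_i x - b_i)$ when $A_i$ has full row rank; by Theorem 2, the iterates then converge to $P_C(\hat{x})$. The helper names are hypothetical.
\begin{verbatim}
import numpy as np

def project_affine(x, A, b):
    # Orthogonal projection onto {z : A z = b}, assuming A has
    # full row rank: P(x) = x - A^T (A A^T)^{-1} (A x - b).
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def sop(x_hat, constraints, num_cycles=100):
    # Unrelaxed SOP: starting from x_0 = x_hat, cycle through
    # C_1, ..., C_m, projecting onto each set in turn.
    x = x_hat.copy()
    for _ in range(num_cycles):
        for A, b in constraints:
            x = project_affine(x, A, b)
    return x
\end{verbatim}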
\section{Distributed Kernel Regression} \subsection{The Model} In contrast to the model for supervised learning reviewed in Section II, suppose that each member of a collection of $m$ learning agents has limited access to the training database $S=\{(x_i, y_i)\}_{i=1}^n$. In particular, assume that learning agent $i$ has access only to the training examples in subset $S_i\subseteq S$. For convenience, we shall henceforth refer to $\{S_i\}_{i=1}^m$ as an \emph{ensemble}. A bipartite graph is a convenient way to represent an ensemble in this model for distributed regression. As depicted in Figure~1, nodes on the top level of the graph represent learning agents; nodes on the bottom level represent training examples. An edge between a learning agent $i$ and a training example $j$ signifies that agent $i$ has access to example $j$, i.e., $(x_j, y_j)\in S_i$. For now, we make no additional assumptions on the structural relationship between the agents' locally accessible training sets; for example, we do not require the ensemble $\{S_i\}_{i=1}^m$ to partition $S$, nor do we require the corresponding bipartite graph to be connected in any way. To be concrete, consider a few examples that illustrate special cases of the general model depicted in Figure 1. The standard centralized model for supervised learning can be represented by the graph in Figure 2, where each of the $m$ learning agents has access to all exemplars in the training database. Figure 3 illustrates an ensemble where a publicly available database is available to all the learning agents, each of which retains a private training set. In some applications, ${\mathcal{X}}$ may be endowed with a topology. For example, in wireless sensor networks, ${\mathcal{X}}={\rm I \kern-0.20em R}^2$ may model locations in a city; learning agents (i.e., sensors) may exist as points within ${\mathcal{X}}$, and query those examples that are ``nearby'' with respect to the underlying topology; such an ensemble is depicted in Figure 4. As mentioned earlier, the current model is a generalization of the work discussed in \cite{PreKulPoo05c}. Whereas \cite{PreKulPoo05c} focuses exclusively on the WSN application by assuming a topology on $\cal{X}$ and by modeling one agent per training observation, the present formulation allows a more general structure with multiple agents per training datum and an arbitrary input space. \begin{figure}[htbp] \centering \includegraphics[width=3.4in]{General.eps} \caption{A Bipartite Graph Representation of an Ensemble in this Model for Distributed Regression} \label{Fig1} \vspace{-6mm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=3.0in]{Centralized-c.eps} \caption{A ``Centralized'' Ensemble} \label{Fig2} \vspace{-6mm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=3.0in]{d-connected-b.eps} \caption{An Ensemble with a Public Database} \label{Fig3} \vspace{-6mm} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1.7in]{SN.eps} \caption{A Sensor Network: An Ensemble with Topology-Dependent Structure} \label{Fig4} \vspace{0mm} \end{figure} Presumably, each of the $m$ agents wishes to use nonparametric methods to estimate the regression function. One simple approach is for agent $i$ to compute $f_{\lambda_i}$ using only the exemplars in its local training database $S_i$. However, doing so ignores the structure of distributed regression and fails to exploit an opportunity to collaborate using the (partially) shared training database. We henceforth assume that after locally computing $f_{\lambda_i}\in{\mathcal{H}}_K$, agent $i$ may share $f_{\lambda_i}(x_j)\in{\rm I \kern-0.20em R}$ with any agent $k$ such that $(x_j, y_j)\in S_k$. In other words, neighboring agents (with respect to the bipartite graph) communicate point estimates for the training data they share. Using such limited communication, can the agents collaborate to jointly improve the accuracy of their estimates? In the next section, we derive a collaborative training algorithm in this model for distributed nonparametric regression. The algorithm is derived as an application of SOP algorithms applied to a relaxation of the classical regularized kernel least-squares estimator. Subsequently, we analyze its convergence properties and investigate its statistical properties in a simplified theoretical setting. \vspace{-2mm} \subsection{A Collaborative Training Algorithm} For technical convenience, let us introduce sets $\{\bar{S}_i\}_{i=1}^m$, such that $\bar{S}_i\subseteq \{1,\ldots,n\}$. Let $j\in\bar{S}_i$ if and only if $(x_j, y_j)\in S_i$. In other words, $\bar{S}_i$ contains the indices of the training examples in $S_i$ as enumerated in $S$. Analogously, let $\bar{S} = \{1,\ldots,n\}$. To begin, let us rewrite (\ref{kernel}) in a way that reveals the structure of distributed regression. To do so, first let us introduce a function $f_i\in{\mathcal{H}}_K$ for each agent $i\in\{1,\ldots, m\}$, and consider the following constrained optimization program: \begin{eqnarray} \label{rlsqreg-r2}\min & \|{\mathbf{z}} - {\mathbf{y}}\|_2^2 + \sum_{i=1}^m \lambda_i \| f_i \|_{{\mathcal{H}}_K}^2\\ \label{coupling1}\textrm{s.t.} &\hspace{-1cm} z_j = f_i(x_j) &\hspace{-1.6cm}\forall j\in \bar{S}, i\in\{1,...,m\}\\ \nonumber & \hspace{-1cm}f_i \in {\mathcal{H}}_K &\hspace{-1.6cm}i\in\{1,\ldots,m\} \end{eqnarray} Here, the optimization variables are ${\mathbf{z}}\in{\rm I \kern-0.20em R}^n$ and $\{f_i\}_{i=1}^m\subset {{\cal{H}}_K}$; $S=\{(x_i, y_i)\}_{i=1}^n$ and $\{\lambda_i\}_{i=1}^m\subset{\rm I \kern-0.20em R}$ are the program data.
The \emph{coupling constraints} in (\ref{coupling1}) dictate that for any feasible solution to (\ref{rlsqreg-r2}), all agents' associated functions agree when evaluated at $\{x_i\}_{i=1}^n$. As a result, one can think about (\ref{rlsqreg-r2}) as an equivalent form of (\ref{kernel}) in the following sense. \begin{lem} Let $({\mathbf{z}}, f_{\lambda_1},...,f_{\lambda_m})\in{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}$ denote the solution of (\ref{rlsqreg-r2}) and let $f_{\lambda}\in{{\cal{H}}_K}$ denote the solution of (\ref{kernel}). Assume that $\lambda_i>0\,\,\forall i\in\{1,...,m\}$. Then, $f_{\lambda_1} = \cdots = f_{\lambda_m}$. If $\sum_{i=1}^m \lambda_i = \lambda$, then $f_{\lambda} = f_{\lambda_1}$. \end{lem} This form of the regularized least-squares regression problem suggests a natural relaxation that allows us to incorporate the structure of the distributed regression model into the estimator. In particular, we relax the coupling constraints to require that agents agree only on training examples they share: \begin{eqnarray} \label{rlsqreg-r3}\min & \|{\mathbf{z}} - {\mathbf{y}}\|_2^2 + \sum_{i=1}^m \lambda_i \| f_i \|_{{\mathcal{H}}_K}^2\\ \label{coupling2}\textrm{s.t.} &\hspace{-1cm} z_j = f_i(x_j) &\hspace{-1.6cm}\forall j\in \bar{S}_i, i\in\{1,...,m\}\\ \nonumber & \hspace{-1cm}f_i \in {\mathcal{H}}_K &\hspace{-1.6cm}i\in\{1,\ldots,m\} \end{eqnarray} Thus, for any feasible solution to (\ref{rlsqreg-r3}), $f_i(x_j) = f_k(x_j)$ if $(x_j, y_j)\in S_i\cap S_k$. Looked at in this way, (\ref{rlsqreg-r2}) models the ``centralized ensemble'' depicted in Figure 2, while (\ref{rlsqreg-r3}) captures the more general structure in Figure 1. Note that just as (\ref{kernel}) can be interpreted as a projection via (\ref{rlsqreg-r1}), (\ref{rlsqreg-r3}) can be interpreted as a (weighted) projection of the vector $({\mathbf{y}}, 0, \ldots, 0)\in{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}$ onto the set $C = \cap_{i=1}^m C_i$, with \begin{eqnarray} C_i = \Big{\{}({\mathbf{z}}, f_1, \ldots, f_m)\,:\, f_i(x_j) = z_j \,\,\forall j\in \bar{S}_i, {\mathbf{z}}\in{\rm I \kern-0.20em R}^n,\\ \nonumber\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \{f_i\}_{i=1}^m\subset{{\cal{H}}_K }\Big{\}}\subset{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}. \end{eqnarray} The significance of this observation lies in the fact that the relaxed form of the regularized kernel least-squares estimator has been expressed as a projection onto the intersection of a collection of $m$ convex sets; in particular, note that each set $C_i$ is a subspace. Thus, by Theorem 2, the SOP algorithm can be used to solve the relaxed problem (\ref{rlsqreg-r3}). Moreover, computing $P_{C_i}(\cdot)$ requires agent $i$ to gather examples only within its locally accessible database. More precisely, note that for any ${\mathbf{v}}=({\mathbf{z}}, f_1, \ldots, f_m)\in{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}$, $P_{C_i}({\mathbf{v}}) = ({\mathbf{z}}^{\star}, f_1^{\star}, \ldots, f_m^{\star})$ where \begin{eqnarray} \nonumber f_j^{\star} & = & f_j \,\,\,\,\,\, \forall j\neq i\\ \nonumber f^{\star}_i &=& \arg\min_{f\in{{\cal{H}}_K}} \sum_{j\in \bar{S}_i} (f(x_j) - z_j)^2 + \lambda_i \| f - f_i\|_{{\cal{H}}_K}^2\\ \nonumber z_j^{\star} & = & z_j \,\,\,\,\,\,\forall j \textrm{ s.t. } j \notin \bar{S}_i\\ \nonumber z_j^{\star} & = & f_i^{\star}(x_j) \,\,\,\,\,\,\forall j \textrm{ s.t.
} j\in \bar{S}_i \end{eqnarray} To emphasize, computing $P_{C_i}({\mathbf{v}})$ leaves $z_j$ unchanged for all $j\notin \bar{S}_i$ and leaves $f_j$ unchanged for all $j\neq i$. The function associated with agent $i$, $f_{i}^{\star}$, can be computed using $f_i$ and $S_i$ after the training data labels $\{y_j\}_{j\in\bar{S}_i}$ have been \emph{updated} with the corresponding ``message variables'' $\{z_j\}_{j\in\bar{S}_i}$. Tying these observations together, we are left with an algorithm for collaborative regression estimation which solves a relaxed form of the regularized least-squares estimator (\ref{rlsqreg-r3}). The algorithm is summarized in pseudo-code in Table 1, depicted pictorially in Figure 5, and sketched in code below. In words, the algorithm iterates over each agent in turn, allowing each to compute a local kernel estimate and to update the labels in the training database accordingly. Multiple passes (in fact, $T$ cycles) over the agents are made.
\begin{table*}[htdp] \begin{center} \begin{tabular}{|ll|} \hline \textbf{Init:} & Agents agree on a positive semi-definite kernel $K(\cdot,\cdot):{\mathcal{X}}\times{\mathcal{X}}\rightarrow{\rm I \kern-0.20em R}$.\\ & Training database $S=\{(x_i, z_i)\}_{i=1}^n$ is initialized\\ &\,\,\,\,\, so that $z_i = y_i\,\, \forall i\in\{1,\ldots, n\}$. \\ &\\ \textbf{Train:} & for $t=1,\ldots, T$ \\ & \hspace{.5cm}for $i=1,\ldots, m$ \\ & \hspace{1cm}Agent $i$: \\ & \hspace{1.25cm} Retrieves database $S_i\subseteq S$\\ & \hspace{1.25cm} Computes $f_{i, t} := \arg\min_{f\in{{\cal{H}}_{K}}} \Big{[}\sum_{j\in \bar{S}_i} (f(x_j) - z_{j})^2 + \lambda_i \| f - f_{i, t-1}\|_{{\cal{H}}_{K}}^2\Big{]}$ \\ & \hspace{1.25cm} Updates database: $z_{j} \leftarrow f_{i, t}(x_j)\,\,\,\forall (x_j, z_j)\in S_i$\\ & \hspace{.5cm} {\textrm{end}}\\ & {\textrm{end}}\\ \hline \end{tabular} \end{center} \caption{An Algorithm for Training Collaboratively} \label{tab:algorithm} \vspace{-.30in} \end{table*}
\begin{figure}[htbp] \centering \includegraphics[width=3.3in]{Algorithm.eps} \caption{A Collaborative Training Algorithm} \label{Fig5} \vspace{-4mm} \end{figure}
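To make Table 1 concrete, observe that each local update reduces to an ordinary kernel ridge problem: substituting $g = f - f_{i,t-1}$ in the update for $f_{i,t}$ gives $\arg\min_{g\in{\cal{H}}_K} \sum_{j\in\bar{S}_i} (g(x_j) - r_j)^2 + \lambda_i \|g\|_{{\cal{H}}_K}^2$ with residuals $r_j = z_j - f_{i,t-1}(x_j)$, so that $f_{i,t} = f_{i,t-1} + g$. The serial sketch below exploits this reduction and reuses the hypothetical \texttt{gaussian\_kernel} helper from Section II; it is a schematic illustration, not a reference implementation.
\begin{verbatim}
import numpy as np

def collaborative_train(X, y, agent_indices, lambdas, num_cycles,
                        kernel=gaussian_kernel):
    # X: (n, d) inputs; y: (n,) labels; agent_indices[i] lists the
    # indices in bar{S}_i.  Agent i's estimate is stored as
    # coefficients c[i] supported on its own indices (cf. Theorem 4).
    z = y.astype(float)                     # shared message variables
    c = [np.zeros(len(idx)) for idx in agent_indices]
    K = kernel(X, X)
    for _ in range(num_cycles):             # T cycles
        for i, idx in enumerate(agent_indices):
            K_ii = K[np.ix_(idx, idx)]
            resid = z[idx] - K_ii @ c[i]    # r_j = z_j - f_{i,t-1}(x_j)
            g = np.linalg.solve(K_ii + lambdas[i] * np.eye(len(idx)),
                                resid)
            c[i] += g                       # f_{i,t} = f_{i,t-1} + g
            z[idx] = K_ii @ c[i]            # update shared labels
    return c, z
\end{verbatim}
When the ensemble and kernel are connected in the sense defined in the sequel, every agent's iterate approaches the same function (cf. Theorem 5).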
\subsection{Convergence} Note that the asymptotic behavior of the collaborative training algorithm is implied by the analysis of the SOP algorithm. In particular, we have the following. \begin{thm} Let $({\mathbf{z}}, f_{\lambda_1},\ldots, f_{\lambda_m})\in{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}$ be the solution to (\ref{rlsqreg-r3}) and let $\{f_{i,T}\}_{i=1}^m\subset {{\cal{H}}_K}$ be as defined in the algorithm described in Table 1. Then, \begin{equation} \nonumber\lim_{T\rightarrow\infty} f_{i,T} = f_{\lambda_i} \end{equation} for all $i\in\{1,\ldots, m\}$. \end{thm} This theorem follows from Theorem 2 and the fact that convergence in norm implies point-wise convergence in RKHSs. Given the structure of RKHSs and the general analysis in \cite{BauBor96}, the algorithm is expected to converge linearly for many kernels. We forego a discussion of this important, but technical, point for the sake of space. Observe that Theorem 3 characterizes the output of the collaborative training algorithm relative to (\ref{rlsqreg-r3}). This characterization is useful insofar as it sheds light on the relationship between the algorithm's output and (\ref{kernel}), the centralized regularized least-squares estimator. The following straightforward generalization of Theorem 1 is a step toward further understanding this important relationship. \begin{thm} Let $({\mathbf{z}}, f_{\lambda_1},\ldots, f_{\lambda_m})\in{\rm I \kern-0.20em R}^n\times{{\cal{H}}^m_K}$ be the solution to (\ref{rlsqreg-r3}). Then, for every agent $i\in\{1,\ldots,m\}$, there exists ${\mathbf{c}}_{\lambda_i}\in{\rm I \kern-0.20em R}^{|S_i|}$ such that \begin{equation}\label{newrepresent} f_{\lambda_i}(\cdot) = \sum_{j\in \bar{S}_i} c_{\lambda_i,j} K(\cdot, x_j). \end{equation} \end{thm} The proof of this theorem follows from the original Representer Theorem (applied to the update equation for $f_{i, t}$) and the fact that ${\cal{H}}_K$ is closed. The significance of Theorem 4 lies in the fact that the size of any agent's locally accessible database fundamentally limits the accuracy of that agent's estimate. In particular, an agent having access to only a few exemplars in an otherwise large training database will still be limited to estimates that lie in the span of functions determined by its local data; thus, local connectivity influences the agent's bias. Intuitively, however, the message-passing through the training database may optimize the estimator within that limited span if the ensemble is ``connected'' in some meaningful way. To bear out this intuition in a simplified theoretical setting, we consider a simple notion of connectedness in the next section. \subsection{A Simplified Setting} For a given ensemble and kernel $(\{S_i\}_{i=1}^m, K)$, let us construct an auxiliary graph as follows: let there be a node for every learning agent and let there be an edge between node (i.e., agent) $i$ and node $k$ if the following condition holds: \begin{eqnarray} \label{connected}{\textrm{span}}(\{ K(\cdot, x_j)\}_{j\in\bar{S}_i}) &=& {\textrm{span}}( \{ K(\cdot, x_j)\}_{j\in\bar{S}_k} )\\ \nonumber&=& {\textrm{span}}(\{ K(\cdot, x_j)\}_{j\in\bar{S}_i\cap \bar{S}_k}) \end{eqnarray} In other words, an edge connects two nodes when the training examples the corresponding agents share span the same space of functions in which, by Theorem 4, their estimates must lie. \begin{defn} Let us call the pair $(\{S_i\}_{i=1}^m, K)$ \emph{connected} if and only if the auxiliary graph so constructed is connected. \end{defn} This definition leads to the following theorem, which can be viewed as a straightforward generalization of Lemma 1. \begin{thm} Let $(\{S_i\}_{i=1}^m, K)$ be connected and suppose the ensemble employs the collaborative training algorithm using $\{\lambda_i\}_{i=1}^m$. Finally, let $f_\lambda$ denote the solution to (\ref{kernel}) for $\lambda = \sum_{i=1}^m \lambda_i$. Then, \begin{eqnarray} f_{\lambda} = \lim_{T\rightarrow\infty} f_{i, T} \end{eqnarray} for all $i\in\{1,\ldots, m\}$. \end{thm} Theorem 5 follows from Theorem 3 after noting that connectedness implies that the solution $({\mathbf{z}}, f_{\lambda_1},\ldots, f_{\lambda_m})$ to (\ref{rlsqreg-r3}) satisfies $f_{\lambda_1} = \cdots = f_{\lambda_m}$. To illustrate the significance of Theorem 5 and to tie it to the foregoing discussion, consider the following example. \begin{example} Suppose ${\mathcal{X}}={\rm I \kern-0.20em R}^d$ and that $K({\mathbf{x}}, {\mathbf{x}}^{\prime}) = {\mathbf{x}}^T{\mathbf{x}}^{\prime}$ is the \emph{linear kernel}; in this case, ${\mathcal{H}}_K$ is the set of linear functions on ${\mathcal{X}}$. If $\{S_i\}_{i=1}^m$ is an ensemble with a public database of $d$ \emph{linearly independent} examples (depicted in Figure 3 and discussed in Section III), then $(\{S_i\}_{i=1}^m, K)$ is connected.
Therefore, by Theorem 5, the collaborative training algorithm would allow agent $i$ to find the best linear fit to the \emph{entire data set} $S$ (for the particular choice of regularization parameter $\lambda$), despite the fact that only a fraction $(n_i + d)/(\sum_{k=1}^m n_k + d)$ of the data is locally accessible, where $n_i$ denotes the number of examples private to agent $i$. More generally, if a $p^{\textrm{th}}$-order polynomial kernel is used, then an analogous observation holds when $d^p$ examples are shared.\end{example} In this simple example, the potential utility of the collaborative training algorithm is revealed. Consider the extreme case when each agent has access to only a single example in addition to the public database. As the number of agents $m\rightarrow\infty$, the collaborative training algorithm would allow every agent to obtain a consistent estimate of the optimal linear least-squares estimator as long as $\sum_{i=1}^m \lambda_i \rightarrow 0$; this is true despite the fact that each agent retains local access to only $d+1$ examples for all $m$. \vspace{-3mm} \section{Discussion} As described in Table 1, the inner loop of the collaborative training algorithm iterates over agents in the ensemble serially. Note that the ordering is non-essential and parallelism may be introduced. In fact, two agents can train simultaneously as long as they do not share exemplars in their locally accessible training databases. In practical settings, multiple-access algorithms that are frequently studied in the communications literature (e.g., ALOHA) may be adapted to negotiate an ordering in a distributed fashion. Since the SOP algorithm and Theorem 2 have been generalized to a very general class of (perhaps random) control orderings \cite{BauBor96}, Theorem 3 can be extended in many cases. Experiments that validate the collaborative training algorithm in a WSN setting can be found in \cite{PreKulPoo05c}. In this paper, we have focused exclusively on regularized kernel least-squares regression. However, using Bregman's algorithm \cite{CenZen97}, the method and many of the theorems may be extended to more general loss functions and regularizers, including Bregman divergences. Those familiar with LDPC codes or Bayes networks may find the current model and algorithm reminiscent of message-passing algorithms such as belief propagation, which are frequently studied in those fields; variational interpretations of kernel methods in the context of Gaussian processes further suggest a relationship between these works. Formalizing such a connection would likely require one to interpret our ``relaxation'' in the context of dependency structures in Gaussian processes, and to connect alternating projection algorithms with the generalized distributive law \cite{AjiMce00}. \vspace{-4mm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Searches for two-body decays of heavy resonances have a rich history of important discoveries, from the $J/\psi$ to the Higgs boson. Such resonances can provide an unambiguous signature of a localized invariant mass peak and offer simple background estimation from sidebands, allowing for discovery without requiring full models of the signal or background processes. These experimental features, combined with compelling theoretical arguments, motivate much of the current program of resonance searches. The theoretical arguments for new resonances mostly consist of simple generic extensions to the Standard Model (e.g. a new $U(1)$) or modifications to the SM which address an outstanding theoretical problem (e.g. Kaluza-Klein gravitons). To date, most of the experimental searches have followed these theoretical arguments, leading to many searches for pairs of identical objects (e.g. $ee,\mu\mu, jj$) and in rarer cases for non-identical pairs (e.g. $e\mu, ZW$). However, the dramatic scale of the open theoretical questions facing particle physics suggests that a correct theory of Nature may not be one of the models currently in fashion or under specific consideration. This motivates an experimental program which is not narrowly focused on current models and the signatures they suggest, but which instead has a broad scope and a systematic approach capable of theoretically unanticipated discoveries. While there have been many proposals for model-independent search programs at hadron colliders (such as the framework of on-shell effective theories \cite{ArkaniHamed:2007fw}), they have been largely motivated by specific theoretical frameworks, and consequently many holes remain in the existing experimental program at the LHC. To make concrete progress, we propose a systematic search for new particles decaying into $n$-body resonances. In the $n=2$ case, this would consist of searches for resonances in all pairs of objects, even those which have no theoretical motivation or are theoretically disfavored. The typical difficulty facing searches without specific theoretical motivation is the large number of possible observables, which incurs a very large trials factor and greatly reduces the discovery sensitivity. Here, rather than relying on theoretical guidance, we propose to restrict the vast space of possible theories to those that align well with experimental strengths. We are interested in covering the intermediate ground between the very specific and the very general search programs, by focusing on well-defined topologies independent of specific theory considerations. This broadens the search program beyond favored theories, but not so much so as to compromise discovery potential. Given that the data exist and resonances are fairly easy to discover, we argue that the two-particle spectra are worth directly examining. In many cases, there are indirect constraints on such resonances from other experiments or subjective theoretical arguments, but there is no real substitute for a direct search. In this paper, we lay out the details of the implementation of such a search program and survey the existing experimental and theoretical landscape for exclusive $n=2$-body resonances, leaving $n\geq 3$ (as well as inclusive $n=2$ final states) for future work. We find that the majority of 2-body resonances have some indirect theoretical constraints but have received almost no experimental attention, leaving most of the landscape unexplored and a large potential for unanticipated discovery.
\section{Scope \& experimental searches} We consider resonances decaying to a basic set of identifiable light objects (charged leptons, photons, light-quark jets, $b$-tagged jets) as well as heavy objects (top quarks, weak bosons, Higgs bosons) which are routinely identified\footnote{One could imagine restricting the scope to light objects, categorizing the heavy objects as higher-level decays (e.g. $X\rightarrow WW\rightarrow 4j$ would be considered in the $n=4$ category rather than $X\rightarrow WW$ as $n=2$). This is equivalent, but allows us to call attention to these typical objects rather than considering them as special mass cases of higher-level decays.}. In the case of $n=2$ objects, this gives 55 unique pairs of exclusive final states; see Table~\ref{tab:res}. Final states with higher numbers of objects have a larger number of exclusive final states; we reserve these for future work. We examined experimental searches from ATLAS and CMS in data collected from proton-proton collisions with $\sqrt{s}=8$~TeV. We consider exclusive final states only in terms of the pairs of identifiable objects defined above. For example, in the $e\gamma$ category of this exclusive $n=2$ survey, we consider only searches for $e\gamma$, of which there are none, and do not consider searches for $e^+e^-\gamma$, of which there are several motivated by excited lepton models that give a resonance in $e\gamma$. The final state of $e^+e^-\gamma$ would be covered by an $n=3$ study, and extrapolation of those limits to the $n=2$ $e\gamma$ category requires theoretical assumptions about the production modes. The survey of $n=2$ final states is shown in Table~\ref{tab:res}, with the striking feature that most diagonal entries have existing searches, whereas most off-diagonal entries do not. In the case of the Higgs boson in particular, there are several unexamined resonance categories. Note that the lack of searches in these resonance categories is not for want of theory models. Examples of theories that populate the entire landscape of 2-body resonances are shown in Table~\ref{tab:restheory}. Even in cases where searches exist, there are often unexamined regions in the resonance mass. Figures~\ref{fig:jg} and~\ref{fig:lwh} show the strongest limits on the cross section times branching ratio as a function of the resonance mass for all results satisfying the requirements above. \section{Theoretical constraints} Various theoretical constraints may be imposed on $n$-body resonances, which in turn influence the likely production and decay modes at the LHC. In order to maintain the broadest possible scope, we consider only the most stringent constraints imposed by gauge invariance and Lorentz invariance, as experimental constraints on e.g.~flavor violation depend on the details of the underlying model and may in principle be evaded. Gauge invariance and Lorentz invariance restrict the possible statistics and quantum numbers of a resonance decaying to a specified 2-body final state. The statistics and possible $SU(3)_c$ and $U(1)_{em}$ numbers of 2-body resonances are enumerated according to their exclusive final state in Table~\ref{tab:quantumnumbers}. Note that we enumerate only $SU(3)_c \times U(1)_{em}$ quantum numbers rather than $SU(3)_c \times SU(2)_L \times U(1)_Y$ quantum numbers, because a large number of $SU(3)_c \times SU(2)_L \times U(1)_Y$ representations may share the same exclusive final state given additional insertions of the Higgs vacuum expectation value.
We also do not exhaustively list all possible $SU(3)_c$ representations, but for simplicity restrict our attention to states transforming in the fundamental or adjoint representation; resonances transforming in other representations of $SU(3)_c$ may have different pair production cross sections but do not lead to significantly different signatures. While a fermionic resonance with Standard Model quantum numbers generally contributes to gauge anomalies, these anomalies may be cancelled by additional particles that do not influence the collider signatures of the resonance. Gauge invariance and Lorentz invariance also dictate the structure of operators coupling a resonance to Standard Model particles, and in many cases the couplings must arise via irrelevant operators. For example, a resonance $X$ decaying to $tg$ cannot couple via a minimal gauge coupling $ \bar X \gamma^\mu G_\mu t$, but may couple via e.g.~a chromoelectric dipole operator of the form $\bar X \gamma^{\mu \nu} G_{\mu \nu} t$. In many cases, more than one Lorentz structure is allowed for a given coupling. The various possible Lorentz structures for each coupling have a modest impact on kinematic distributions for the production and decay of each resonance (see e.g.~\cite{ArkaniHamed:2007fw}), but they do not alter the key feature of interest in this work, namely a bump in the $n$-body invariant mass spectrum. Note that these conclusions may be altered in the presence of significant interference effects, which may lead to deficits or peak-dip structures in the invariant mass spectrum if the Standard Model continuum interferes with the signal process. The existence and structure of interference effects cannot be determined by quantum numbers alone, and depends additionally on both the Lorentz structure and phases of couplings between the resonance and Standard Model states. However, in the limit of weak coupling, interference between a narrow resonance and Standard Model continuum backgrounds is negligible and may be neglected. To good approximation, as an expansion at weak coupling, searches for $n$-body resonances may therefore be parameterized solely in terms of the resonance mass, width, and production cross section times branching ratio. Having specified the possible gauge quantum numbers of the 2-body resonance given the final state, gauge invariance and Lorentz invariance provide a guide to the possible production modes at the LHC. For each resonance there are three possibilities: \begin{enumerate} \item The particle can be {\it resonantly produced} either exclusively using its tree-level decay coupling (as in, e.g., a resonance decaying to $qq$ or $gg$); via loop-induced processes involving the decay coupling (as in, e.g., gluon fusion production of a $t \bar t$ resonance); or via additional couplings to quarks and gluons allowed by its quantum numbers. The presence of such additional couplings may lead to additional theoretical constraints discussed below. Such resonant production channels fall under the scope of the exclusive 2-body searches proposed here. \item The particle can be produced via {\it associated production} exclusively using its decay couplings. For example, a resonance $X$ coupling to $tW^+$ can be produced in the process $q g \to t q X$ using only the $X t W^+$ coupling and Standard Model gauge couplings. This assumes no additional couplings to quarks and/or gluons. 
Such associated production channels fall under the scope of $n \geq 3$ studies, with a feature in the appropriate 2-body invariant mass spectrum. \item The particle can be {\it pair produced} using its gauge quantum numbers (e.g. Drell-Yan via electroweak quantum numbers). This process is kinematically suppressed for heavier resonances, but may be appreciable if the gauge couplings are significantly larger than the decay couplings. Such pair production channels fall under the scope of $n=4$ studies, with features in the appropriately-paired 2-body invariant mass spectra. \end{enumerate} The possible production modes for each resonance are enumerated in Table \ref{tab:modes}. In principle, a given resonance may be produced in all three modes, with varying rates depending on the relative sizes of phase space factors and production and decay couplings. In each case the final state contains a peak in the appropriate 2-body invariant mass, but with varying amounts of additional event activity. In this sense, the associated- and pair-production modes may not qualify for the $n=2$ exclusive case considered above, but serve as a useful foundation for future $n>2$ studies. As is apparent in Table \ref{tab:modes}, there are several possible 2-body resonances for which resonant production is incompatible with Standard Model gauge invariance, in the sense that the quantum numbers of the final state cannot be produced by any initial state with appreciable parton density in proton-proton collisions. Nonetheless, searches for these 2-body resonances at the LHC remain motivated by the possibility of new physics that mimics a Standard Model final state in the LHC detectors (in the sense that, e.g., a long-lived neutral particle decaying to electron-positron pairs might be reconstructed as a photon). These states may also be produced in association with particles sufficiently soft that the event still appears as an exclusive 2-body resonance, or may originate from final states with two or more visible objects plus missing energy, which can appear in $n=2$ exclusive searches. Such states may also be resonantly produced at other colliders consistent with gauge invariance, such as in electron-proton collisions at HERA. Apart from gauge invariance and Lorentz invariance, less robust constraints may also apply. Many such constraints arise only when the resonance possesses both its decay coupling and additional couplings to quarks and/or gluons. Proton decay provides the strongest such constraint, as strong bounds on the proton lifetime imply that the couplings of resonances inducing proton decay are vanishingly small. For 2-body resonances, a coupling to a single pair of Standard Model particles will not by itself induce proton decay, but proton decay may be induced by the additional couplings to quarks required for resonant production at the LHC. Resonances for which this occurs are indicated in Table~\ref{tab:quantumnumbers}; in these cases it is reasonable to expect $n=2$ resonant production rates to be small. Beyond proton decay, there are a variety of constraints on flavor violation, lepton number violation, and other types of baryon number violation, but in practice even strong constraints may be avoided by appropriate symmetries, textures, or fortuitous cancellations (as in e.g.~maximal flavor violation \cite{BarShalom:2007pw} or diquark-type interactions \cite{Giudice:2011ak}). In these cases there is no substitute for a direct search.
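To illustrate the bookkeeping behind Table~\ref{tab:quantumnumbers}, the short sketch below derives the electromagnetic charge and statistics of a resonance $X$ decaying to each pair: charge conservation in $X \rightarrow a b$ fixes $Q_X = Q_a + Q_b$ (up to conjugation), and $X$ is a fermion precisely when the pair contains exactly one fermion. The script is an illustrative assumption for exposition, with up-type charges chosen for light quarks; it does not reproduce the color assignments or the starred production caveats of the table.
\begin{verbatim}
from fractions import Fraction as F
from itertools import combinations_with_replacement

# (name, electric charge, is_fermion) for the final-state objects;
# "q" is taken up-type here, down-type charges differ by -1.
objects = [
    ("e",     F(-1),    True),  ("mu", F(-1),    True),
    ("tau",   F(-1),    True),  ("gamma", F(0),  False),
    ("q",     F(2, 3),  True),  ("g",  F(0),     False),
    ("b",     F(-1, 3), True),  ("t",  F(2, 3),  True),
    ("W",     F(1),     False), ("Z",  F(0),     False),
    ("h",     F(0),     False),
]

for (n1, q1, f1), (n2, q2, f2) in combinations_with_replacement(objects, 2):
    charge = q1 + q2          # charge conservation in X -> a b
    fermionic = f1 != f2      # exactly one fermion => X is a fermion
    print(f"X -> {n1} {n2}: Q_X = {charge}, "
          f"{'fermion' if fermionic else 'boson'}")
\end{verbatim}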
\begin{table*} \caption{ Existing two-body exclusive final state resonance searches at $\sqrt{s}=8$ TeV. The $\varnothing$\ symbol indicates no existing search at the LHC.} \begin{tabular}{lccccccccccc} \hline \hline & $e$\ \ & $\mu$\ \ & \ \ $\tau$\ \ & \ \ $\gamma$\ \ & \ \ $j$ \ \ & \ \ $b$\ \ & \ \ $t$ \ \ & \ \ $W$ \ \ & \ \ $Z$ \ \ & \ \ $h$ \ \ \\ \hline $e$ & $\pm\mp$\cite{atlasdilepton8tev},$\pm\pm$\cite{atlassslep8tev} & $\pm\pm$\cite{atlassslep8tev,Khachatryan:2016ovq} $\pm\mp$\cite{Aad:2015pfa,Khachatryan:2016ovq} & \cite{Aad:2015pfa} & $\varnothing$ & $\varnothing$ & $\varnothing$& $\varnothing$& $\varnothing$& $\varnothing$& $\varnothing$\\ $\mu$ & & $\pm\mp$\cite{atlasdilepton8tev},$\pm\pm$\cite{atlassslep8tev} & \cite{Aad:2015pfa}& $\varnothing$ & $\varnothing$ & $\varnothing$ & $\varnothing$& $\varnothing$& $\varnothing$& $\varnothing$\\ $\tau$ & & & \cite{atlastautau8tev} & $\varnothing$ & $\varnothing$ & $\varnothing$ & \cite{Khachatryan:2015bsa}& $\varnothing$& $\varnothing$& $\varnothing$\\ $\gamma$ & & & & \cite{atlasdiphoton8tev} & \cite{cmsphotonjet8tev,atlasphotonjet8tev,Aad:2015ywd} & $\varnothing$ & $\varnothing$ & \cite{Aad:2014fha} & \cite{Aad:2014fha} & $\varnothing$\\ $j$ & & & & & \cite{atlasdijet8tev} & \cite{CMS-PAS-EXO-12-023} & \cite{Aad:2012em} & \cite{Khachatryan:2014hpa} & \cite{Khachatryan:2014hpa} & $\varnothing$\\ $b$ & & & & & & \cite{CMS-PAS-EXO-12-023} & \cite{Aad:2015typ} & $\varnothing$ & $\varnothing$ & $\varnothing$\\ $t$ & & & & & & & \cite{Aad:2015fna} & \cite{Aad:2015voa} & $\varnothing$ & $\varnothing$\\ $W$ & & & & & & & & \cite{Aad:2015agg,Aad:2015owa,Aad:2015ufa,Khachatryan:2014gha} & \cite{Aad:2015owa,Aad:2015ufa,Khachatryan:2014xja,Aad:2015ipg} & \cite{Aad:2015yza,Khachatryan:2016yji,Khachatryan:2015bma} \\ $Z$ & & & & & & & & & \cite{Aad:2015kna,Aad:2015owa,Khachatryan:2014gha} & \cite{Aad:2015yza,Khachatryan:2015lba,Khachatryan:2015ywa,Khachatryan:2015bma} \\ $h$ & & & & & & & & & & \cite{Aad:2015xja,Khachatryan:2015yea, CMS-PAS-EXO-15-008, Khachatryan:2016cfa} \\ \hline \hline \end{tabular} \label{tab:res} \end{table*} \begin{table*} \caption{ Theory models motivating two-body final state resonance searches. Here $Z'$ and $W'$ denote additional gauge bosons, $\not \! \! R$ denotes R-parity violating decays of sparticles in supersymmetry, $H^{\pm \pm}$ denotes doubly-charged Higgs bosons, $H$ denotes additional neutral scalar or pseudoscalar Higgs bosons, $L^*$ and $Q^*$ denote excited fermions, $X_{KK}$ denote various Kaluza-Klein excitations of gravitons or Standard Model fields, $\rho$ denotes neutral or charged techni-rhos, $LQ$ denotes leptoquarks, $T'$, $B'$, $Q'$ denote vector-like top, bottom, and light-flavor quarks, and $\mathcal{Q}$ denotes quirks. See also \cite{KCKong}. } \begin{tabular}{lccccccccccc} \hline \hline & $e$\ \ & $\mu$\ \ & \ \ $\tau$\ \ & \ \ $\gamma$\ \ & \ \ $j$ \ \ & \ \ $b$\ \ & \ \ $t$ \ \ & \ \ $W$ \ \ & \ \ $Z$ \ \ & \ \ $h$ \ \ \\ \hline $e$ & $Z',H^{\pm\pm} $ & $\not \! \! R,H^{\pm\pm} $ & $\not \! \! R,H^{\pm\pm} $ & $L^*$ & $LQ,\not \!\! R$ & $LQ, \not \!\! R$ & $LQ,\not \!\! R$ & $L^*, \nu_{KK}$ & $L^*, e_{KK}$ & $L^*$ \\ $ \mu $ & & $Z',H^{\pm\pm}$ & $\not \! \! R,H^{\pm\pm} $ & $L^*$ & $LQ,\not \!\! R$ & $LQ, \not \!\! R$ & $LQ,\not \!\! R$ & $L^*, \nu_{KK}$ & $L^*, \mu_{KK}$ & $L^*$ \\ $\tau$ & & & $Z',H,H^{\pm\pm}$ & $L^*$ & $LQ, \not \!\! R$ & $LQ,\not \!\! R$ & $LQ,\not \!\! 
R$ & $L^*, \nu_{KK}$ & $L^*, \tau_{KK}$ & $L^*$ \\ $\gamma$ & & & & $H, G_{KK}, \mathcal{Q}$ & $Q^*$ & $Q^*$ & $Q^*$ & $W_{KK}, \mathcal{Q}$ & $H, \mathcal{Q}$ & $Z_{KK}$\\ $j$ & & & & & $Z',\rho, G_{KK}$ & $W',\not \!\! R$ & $T', \not \!\! R$ & $Q^*, Q_{KK}$ & $Q^*, Q_{KK}$ & $Q'$\\ $b$ & & & & & & $Z',H$ & $W', \not \!\! R, H^\pm$ & $T',Q^*, Q_{KK}$ & $Q^*, Q_{KK}$ & $B'$\\ $t$ & & & & & & & $H,G',Z'$ & $T'$ & $T'$ & $T'$\\ $W$ & & & & & & & & $H, G_{KK}, \rho$ & $W', \mathcal{Q}$ & $H^\pm, \mathcal{Q}, \rho$ \\ $Z$ & & & & & & & & & $H, G_{KK}, \rho$ & $A, \rho$ \\ $h$ & & & & & & & & & & $H, G_{KK}$ \\ \hline \hline \end{tabular} \label{tab:restheory} \end{table*} \begin{figure*} \includegraphics[width=0.7\textwidth]{all_1.pdf} \includegraphics[width=0.7\textwidth]{all_2.pdf} \caption{ Existing limits on the cross section times branching ratio for resonances to various 2-body final states, as a function of the resonance mass. Top pane emphasizes hadronic final states, bottom pane emphasizes photonic final states. References for searches can be found in Table~\ref{tab:res}.} \label{fig:jg} \end{figure*} \begin{figure*} \includegraphics[width=0.7\textwidth]{all_4.pdf} \includegraphics[width=0.7\textwidth]{all_6.pdf} \includegraphics[width=0.7\textwidth]{all_8.pdf} \caption{ Existing limits on the cross section times branching ratio for resonances to various 2-body final states, as a function of the resonance mass. Top pane emphasizes leptonic final states, center pane emphasizes bosonic final states, and the bottom pane emphasizes Higgs final states. References for searches can be found in Table~\ref{tab:res}.} \label{fig:lwh} \end{figure*} \begin{table*} \caption{The possible QCD and EM quantum numbers of each 2-body resonance, indicated as ({\bf QCD},{\bf EM}). Alternate quantum number assignments are indicated in parentheses. Round (square) brackets indicate a bosonic (fermionic) resonance. An ${}^*$ indicates that there is no possible initial state for resonant production at the LHC. A $\diamondsuit$ ($\heartsuit$) indicates that this state would lead to $\Delta B=1$ ($\Delta L=1$) processes if it possessed a resonant production mode at the LHC from additional couplings to quarks or gluons. 
\label{tab:quantumnumbers}} \begin{center} \begin{tabular}{cccccccccc} \hline\hline & $\ell$ & $\gamma$ & $q$ & $g$ & $b$ & $t$ & $W^+$ & $Z$ & $h$ \\ \hline
$\ell$ & $\bnum{1}{2}^*$ & $\fnum{1}{1}^*$ & $\bnum{\bar 3}{\nicefrac{1(4)}{3}}^{\diamondsuit\heartsuit}$& $\fnum{8}{1}^*$& $\bnum{\bar 3}{\nicefrac{4}{3}}^{\diamondsuit\heartsuit}$& $\bnum{\bar 3}{\nicefrac{1}{3}}^{\diamondsuit\heartsuit}$& $\fnum{1}{0}^*$& $\fnum{1}{1}^*$ &$\fnum{1}{1}^*$ \\
$\bar \ell$ & $\bnum{1}{0}$ & $\fnum{1}{-1}^*$ & $\bnum{\bar 3}{-\nicefrac{2(5^*)}{3}}^{\diamondsuit\heartsuit}$& $\fnum{8}{-1}^*$& $\bnum{\bar 3}{-\nicefrac{2}{3}}^{\diamondsuit\heartsuit}$& $\bnum{\bar 3}{-\nicefrac{5}{3}}^*$& $\fnum{1}{-2}^*$ & $\fnum{1}{-1}^*$& $\fnum{1}{-1}^*$ \\
$\gamma$ & $\fnum{1}{1}^*$ & $\bnum{1}{0}$ & $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ & $\bnum{8}{0}$& $\fnum{\bar 3}{\nicefrac{1}{3}}$& $\fnum{\bar 3}{-\nicefrac{2}{3}}$ & $\bnum{1}{-1}$ & $\bnum{1}{0}$ & $\bnum{1}{0}$ \\
$ q$ & $\bnum{\bar 3}{\nicefrac{1(4)}{3}}^{\diamondsuit\heartsuit}$ & $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ & $\bnum{3}{\nicefrac{-1(2)(-4)}{3}}$ & $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ & $\bnum{3}{\nicefrac{-1(2)}{3}}$ & $\bnum{3}{\nicefrac{-1(-4)}{3}}$ & $\fnum{\bar 3}{\nicefrac{-2(-5^{*})}{3}}$& $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ & $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ \\
$ \bar q $ & $\bnum{3}{\nicefrac{2(5^*)}{3}}^{\diamondsuit\heartsuit}$ & $\fnum{3}{\nicefrac{-1(2)}{3}}$& $\bnum{1(8)}{0(-1)}$ & $\fnum{3}{\nicefrac{-1(2)}{3}}$ & $\bnum{1(8)}{0(-1)}$ & $\bnum{1(8)}{0(-1)}$ & $\fnum{3}{\nicefrac{-1(-4^*)}{3}}$ & $\fnum{3}{\nicefrac{-1(2)}{3}}$ & $\fnum{3}{\nicefrac{-1(2)}{3}}$ \\
$g$ & $\fnum{8}{1}^*$ & $\bnum{8}{0}$ & $\fnum{\bar 3}{\nicefrac{1(-2)}{3}}$ & $\bnum{1(8)}{0}$ & $\fnum{\bar 3}{\nicefrac{1}{3}}$ & $\fnum{\bar 3}{-\nicefrac{2}{3}}$ & $\bnum{8}{-1}$ & $\bnum{8}{0}$ & $\bnum{8}{0}$ \\
$b$ & & $\fnum{\bar 3}{\nicefrac{1}{3}}$ & $\bnum{3}{\nicefrac{-1(2)}{3}}$ & $\fnum{\bar 3}{\nicefrac{1}{3}}$ & $\bnum{3}{\nicefrac{2}{3}}$ & $\bnum{3}{-\nicefrac{1}{3}}$ & $\fnum{\bar 3}{-\nicefrac{2}{3}}$ & $\fnum{\bar 3}{\nicefrac{1}{3}}$ & $\fnum{\bar 3}{\nicefrac{1}{3}}$ \\
$\bar b$ & & & $\bnum{1(8)}{0(-1)}$ & $\fnum{3}{-\nicefrac{1}{3}}$ & $\bnum{1(8)}{0}$ & $\bnum{1(8)}{-1}$ & $\fnum{3}{-\nicefrac{4}{3}}^*$ & $\fnum{3}{-\nicefrac{1}{3}}$& $\fnum{3}{-\nicefrac{1}{3}}$\\
$t$ & & & & $\fnum{\bar 3}{-\nicefrac{2}{3}}$ & $\bnum{3}{-\nicefrac{1}{3}}$ & $\bnum{3}{-\nicefrac{4}{3}}$& $\fnum{\bar 3}{-\nicefrac{5}{3}}^*$ & $\fnum{\bar 3}{-\nicefrac{2}{3}}$ & $\fnum{\bar 3}{-\nicefrac{2}{3}}$ \\
$\bar t$ & & & & & $\bnum{1(8)}{1}$ & $\bnum{1(8)}{0}$ & $\fnum{3}{-\nicefrac{1}{3}}$ & $\fnum{3}{\nicefrac{2}{3}}$& $\fnum{3}{\nicefrac{2}{3}}$ \\
$W^+$ & & & & & & $\fnum{\bar 3}{-\nicefrac{5}{3}}^*$ & $\bnum{1}{-2}^*$ & $\bnum{1}{-1}$ & $\bnum{1}{-1}$ \\
$W^-$ & & & & & & & $\bnum{1}{0}$& $\bnum{1}{1}$& $\bnum{1}{1}$ \\
$Z$ & & & & & & & & $\bnum{1}{0}$ & $\bnum{1}{0}$ \\
$h$ & & & & & & & & & $\bnum{1}{0}$ \\ \hline\hline \end{tabular} \end{center} \end{table*}
\begin{table*} \caption{For each pair of Standard Model particles, three boxes indicate the existence of various possible production modes for the corresponding resonance. In the first box, an R indicates the existence of a resonant production mode at the LHC via the tree-level decay couplings, loop-induced processes involving the decay coupling, or the inclusion of additional couplings to quarks or gluons allowed by the quantum numbers of the resonance. In the second box, A$_1$, A$_2$, A$_3$, or A$_4$ indicates the leading production mode in association with one, two, three, or four Standard Model particles using the same coupling for production and decay in a four-flavor scheme; an R in the second box indicates that the decay coupling alone already permits resonant production with no associated particles. In the third box, a P indicates the {\it unavoidable} existence of a pair production mode via Standard Model gauge bosons. This box is left empty if there is a possible choice of resonance quantum numbers that does not lead to a pair production mode.}
\begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c||c|c|c||c|c|c||c|c|c||c|c|c||c|c|c||c|c|c||c|c|c||c|c|c||c|c|c|} \hline & \multicolumn{3}{|c||}{$\ell$} & \multicolumn{3}{|c||}{$\gamma$} & \multicolumn{3}{|c||}{$q$} & \multicolumn{3}{|c||}{$g$} & \multicolumn{3}{|c||}{$b$} & \multicolumn{3}{|c||}{$t$} & \multicolumn{3}{|c||}{$W^+$} & \multicolumn{3}{|c||}{$Z$} & \multicolumn{3}{|c|}{$h$} \\ \hline
$\ell$ & & A$_4$ & P & & A$_3$ & P & R & A$_2$ & P & & A$_2$ & P & R & A$_3$ & P & R & A$_3$ & P & & A$_3$ & & & A$_3$ & P & & A$_3$ & P \\ \hline
$\bar \ell$ & R & A$_4$ & & & A$_3$ & P & R & A$_2$ & P & & A$_2$ & P & R & A$_3$ & P & & A$_3$ & P & & A$_3$ & P & & A$_3$ & P & & A$_3$ & P \\ \hline
$\gamma$ & & A$_3$ & P & R & A$_2$ & & R & A$_1$ & P & R & A$_1$ & P & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & & R & A$_2$ & \\ \hline
$q$ & R & A$_2$ & P & R & A$_1$ & P & R & R & P & R & R & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P \\ \hline
$\bar q$ & R & A$_2$ & P & R & A$_1$ & P & R & R & P & R & R & P & R & A$_1$ & & R & A$_1$ & & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P \\ \hline
$g$ & & A$_2$ & P & R & A$_1$ & P & R & R & P & R & R & & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_1$ & P \\ \hline
$b$ & & & & R & A$_2$ & P & R & A$_1$ & P & R & A$_1$ & P & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$\bar b$ & & & & & & & R & A$_1$ & & R & A$_1$ & P & R & A$_2$ & & R & A$_2$ & P & & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$t$ & & & & & & & & & & R & A$_1$ & P & R & A$_2$ & P & R & A$_2$ & P & & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$\bar t$ & & & & & & & & & & & & & R & A$_2$ & P & R & A$_2$ & & R & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$W^+$ & & & & & & & & & & & & & & & & & A$_2$ & P & & A$_2$ & P & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$W^-$ & & & & & & & & & & & & & & & & & & & R & A$_2$ & & R & A$_2$ & P & R & A$_2$ & P \\ \hline
$Z$ & & & & & & & & & & & & & & & & & & & & & & R & A$_2$ & & R & A$_2$ & \\ \hline
$h$ & & & & & & & & & & & & & & & & & & & & & & & & & R & A$_2$ & \\ \hline
\end{tabular} \end{center} \label{tab:modes} \end{table*}
\section{Discussion} The data from the LHC are extraordinarily valuable, in that their collection required an enormous investment of financial and human resources and in their potential power to answer outstanding questions of particle physics. However, once those resources are spent and the data are collected, there remain difficult questions regarding how best to use them. Experimental analysis of a given final state draws on a limited pool of human and financial resources, and every additional search increases the field-wide trials factor, making any local excess less globally significant. Therefore, it is necessarily the case that some experimental territory will be left uncovered, and proposals for new experimental searches must have a compelling argument. Here we have argued that in addition to the usual stable of theoretically-motivated searches, a set of experimentally-motivated searches should be conducted. We propose a set of exclusive 2-body resonance searches, which naturally limits the number of final states and is well matched to experimental capabilities. This is in contrast to the strategy of general searches, which attempt to satisfy a broad set of theory motivations, but do not focus on experimental strengths and suffer a very large trials factor. The final states with matched objects have been examined, though there remain openings in the low- and high-mass regions. More significantly, we find that many of the mismatched pair final states have received no attention, despite the existence of theoretical models and the absence of strong theoretical constraints. \section*{Acknowledgements} We thank Mohammad Abdullah, Jahred Adelman, and Tim Tait for useful conversations. This research was supported in part by the National Science Foundation under Grant No.
NSF PHY11-25915, and by the US Department of Energy under grants DE-SC0014129 and DE-FG02-12ER41809. The authors are grateful to the Kavli Institute for Theoretical Physics, where some of this work was done.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $K$ be a homogeneous convolution kernel on a homogeneous group ${\hbox{\bf H}}$, so that $$ K(t \circ x) = t^{-N} K(x)$$ for all $x \in {\hbox{\bf H}}$, $t > 0$, where $t \circ x$ is the dilation operation on ${\hbox{\bf H}}$ and $t^N$ is the Jacobian of $x \mapsto t \circ x$. Let $K_0$ be the restriction of $K$ to the unit annulus $A_0 = \{x \in {\hbox{\bf H}}: 1 \leq \rho(x) \leq 2 \}$, where $\rho$ is a norm associated to the dilation structure. We consider the question of what the minimal conditions on $K_0$ are so that the convolution operator $T: f \mapsto f * K$ is of weak-type $(1,1)$. Here of course $$ f*K(x) = \int f(y) K(y^{-1} x)\ dy.$$ Among the necessary conditions known are that $K_0$ must be in $L^1(A_0)$, and $T$ must be bounded on $L^2$. This in turn necessitates that $K_0$ must have mean zero. When ${\hbox{\bf H}}$ is an isotropic Euclidean space, the classical theorem of Calder\'on and Zygmund \cite{calderon:rotations} shows that $T$ is indeed bounded on $L^2$ when $K_0$ has mean zero and is either odd and in $L^1(A_0)$, or even and in the Orlicz space $L \log L(A_0)$. In particular, we have boundedness on $L^2$ whenever $K_0$ has mean zero and is in $L \log L$. This last condition has been relaxed to $H^1$ and beyond; see \cite{stefanov:l2}. Unfortunately, these arguments rely on the method of rotations and therefore cannot be applied directly to the question of weak (1,1) boundedness (cf. the discussion by R. Fefferman \cite{rfeff:entropy}). It is natural to conjecture that analogous $L^2$ results hold for arbitrary homogeneous groups. If $K_0$ is odd and in $L^1$, one can use the method of rotations and the work of Ricci and Stein \cite{ricci:nilpotent} on convolution operators on singular sets in homogeneous groups to obtain $L^2$ boundedness. In the case when $K_0$ is merely in $L \log L$, we shall use a variant of Littlewood-Paley theory and an iterated $TT^*$ method to answer this conjecture affirmatively: \begin{theorem}\label{L2} If $K_0$ is in $L \log L$ and has mean zero, then $T$ is bounded on $L^2$. \end{theorem} The weak-type $(1,1)$ question in Euclidean space has been considered by several authors (\cite{christ:weak-1}, \cite{christ:rough}, \cite{hofmann:weak}, \cite{seeger:rough}); recently A. Seeger \cite{seeger:rough} has shown that $T$ is of weak-type $(1,1)$ on Euclidean space whenever $K_0$ is in $L \log L$ and has mean zero. The corresponding questions for odd $L^1$ or even $H^1$ kernels remain open. We remark that the corresponding $(H^1,L^1)$ conjecture is false by an example of Mike Christ. For this and further discussion, see the survey in \cite{stefanov:l2}. In this paper we generalize the result in \cite{seeger:rough} to arbitrary homogeneous groups. Specifically, we show that \begin{theorem}\label{weak-11} If $K_0$ is in $L \log L$ and $T$ is bounded on $L^2$, then $T$ is of weak-type $(1,1)$. \end{theorem} Combining the two results and using duality we thus have \begin{corollary} If $K_0$ is in $L \log L$ and has mean zero, then $T$ is bounded on $L^p$ for all $1 < p < \infty$, and is of weak-type $(1,1)$. \end{corollary} The methods in \cite{seeger:rough} rely on the Euclidean Fourier transform and do not appear to be adaptable to non-abelian settings. Our approach is more in the spirit of Christ and Rubio de Francia \cite{christ:rough}, in that one considers the expression $T f$ as an operator acting on the kernel $K_0$ rather than one acting on $f$.
One can then reduce weak $(1,1)$ boundedness to something resembling an $(L^2,L^2)$ estimate, which is now amenable to orthogonality techniques such as the $TT^*$ and $(TT^*)^M$ methods. The argument can also be used to treat the slightly smoother maximal and square function operators corresponding to $L \log L$ generators $K_0$, either by direct modification of the proof, or by using a Rademacher function argument based on the fact that the operator $f \mapsto \sum_i r_i(t)\, f * K_i$ is of weak-type $(1,1)$ uniformly in $t$. For these operators the mean zero condition is not required. One can also use the arguments in \cite{seeger:rough} to weaken the radial regularity on $K$ to a Dini-type continuity condition. We will not pursue these matters here. This work originated from the author's dissertation at Princeton University under the inspiring guidance of Eli Stein. The author is supported by NSF grant 9706764. \section{Notation} We will work exclusively with real-valued functions. The letters $C$ (resp. $c$, $\epsilon$) will always be used to denote large (resp. small) positive constants that depend only on the homogeneous group ${\hbox{\bf H}}$ and any other specified quantities. The values of these constants will change from line to line. We use $A <_\sim B$ to denote the statement that $A \leq CB$, and $A \sim B$ to denote the statement that $A <_\sim B$ and $B <_\sim A$. We define a homogeneous group to be a nilpotent Lie group ${\hbox{\bf H}} = {\hbox{\bf R}}^n$ with multiplication, inverse, dilation, and norm structures $$ (x,y) \mapsto xy, \quad x \mapsto x^{-1}, \quad (t,x) \mapsto t \circ x, \quad x \mapsto \rho(x)$$ for $x,y \in {\hbox{\bf H}}$, $t>0$, where the multiplication and inverse operations are polynomial and form a group with identity $0$, the dilation structure preserves the group operations and is given in co-ordinates by \be{dilate} t \circ (x_1, \ldots, x_n) = (t^{\alpha_1} x_1, \ldots, t^{\alpha_n} x_n) \end{equation} for some constants $0 < \alpha_1 \leq \alpha_2 \leq \ldots \leq \alpha_n$, and $\rho(x)$ equals one on the Euclidean unit sphere, and satisfies $\rho(t \circ x) = t \rho(x)$. It can be shown that Lebesgue measure $dx$ is a Haar measure for this group, and that $\rho(x) \sim \rho(x^{-1})$. For further properties of homogeneous groups see e.g. \cite{stein:large}. We call $n$ the Euclidean dimension of ${\hbox{\bf H}}$, and the quantity $N = \alpha_1 + \ldots + \alpha_n$ the homogeneous dimension of ${\hbox{\bf H}}$. We will always assume ${\hbox{\bf H}}$ to be a homogeneous group with Euclidean dimension $n > 1$; the case $n=1$ can of course be treated by classical methods. In addition to the homogeneous group structures mentioned above, we shall also exploit the corresponding Euclidean structures $$ (x,y) \mapsto x+y,\quad x \mapsto -x, \quad (t,x) \mapsto tx,\quad x \mapsto |x|,$$ together with the Euclidean inner product $(x,y) \mapsto x \cdot y$. We shall also use the Euclidean structure of the exterior algebra $\Lambda$ of ${\hbox{\bf R}}^n$. Recall that $\Lambda$ is spanned by basis elements of the form $$ e_P = e_{p_1} \wedge \ldots \wedge e_{p_r}$$ where $0 \leq r \leq n$ and $P = (p_1, \ldots, p_r)$ is an increasing subsequence of $1, \ldots, n$.
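For instance, when $n = 2$ we have $$ \Lambda = {\rm span}\{1,\ e_1,\ e_2,\ e_1 \wedge e_2\}, $$ corresponding to the subsequences $P = (), (1), (2), (1,2)$, with the convention that the empty wedge product is $1$; in general $\dim \Lambda = 2^n$.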
We give $\Lambda$ the usual inner product and norm structure $$ (\sum_P a_P e_P) \cdot (\sum_Q b_Q e_Q) = \sum_P a_P b_P$$ and $$ | \sum_P a_P e_P | = (\sum_P |a_P|^2)^{1/2}.$$ Later on we shall define some further structures on $\Lambda$ which are more compatible with the non-isotropic dilation \eqref{dilate}. We use $\prod_{i=1}^k x_i$ to denote the product $x_1 \ldots x_k$, and $\prod_{i=k}^1 x_i$ to denote the product $x_k \ldots x_1$. We define a left-invariant quasi-distance $d$ on ${\hbox{\bf H}}$ by $d(x,y) = \rho(x^{-1} y)$. A \emph{ball} $J = B(x_J, 2^j)$ with center $x_J$ and radius $2^j$ is defined to be any set of the form $$ J = \{ x: d(x, x_J) < 2^j\}$$ for some $x_J \in {\hbox{\bf H}}$ and $j \in {\hbox{\bf Z}}$. If $J$ appears in an expression, then $x_J$, $j$ are always understood to be defined as above. If $C > 0$, then $CJ$ denotes the ball with the same center as $J$ but $C$ times the radius. We use $J_\Delta$ to denote the annulus $CJ \backslash C^{-1} J$. If $E$ is a finite set, we use $\# E$ to denote the cardinality of $E$; if $E$ is a measurable set, we use $|E|$ to denote the Lebesgue measure of $E$. Note that $|t \circ E| = t^N |E|$ for all $t > 0$ and $E \subset {\hbox{\bf H}}$. For each $t$ define the scaling map $\Delta[t]$ by $$\Delta[t] f(y) = t^{-N} f(t^{-1} \circ y);$$ note that these operators are isometries on $L^1$. \section{Left-invariant differentiation structures} Let $f(t)$ be a smooth function from ${\hbox{\bf R}}$ to ${\hbox{\bf H}}$. The Euclidean derivative $\partial_t f(t)$ can of course be defined by Newton's approximation $$ f(t+\varepsilon) = f(t) + \varepsilon \partial_t f(t) + \varepsilon^2 O(1)$$ for $\varepsilon$ small. We shall also need a left-invariant derivative $\partial^L_t f(t)$ defined by $$ f(t+\varepsilon) = f(t)(\varepsilon \partial^L_t f(t)) + \varepsilon^2 O(1).$$ If $f(t)$ is bounded, then the operation of left multiplication of $f(t)$ is bilipschitz, and so we have \be{comparable} |\partial_t f(t)| \sim |\partial^L_t f(t)| \hbox{ whenever } |f(t)| <_\sim 1. \end{equation} We observe the product rule \be{group-diff} \partial^L_t (f(t)g(t)) = \partial^L_t g(t) + C[g(t)] \partial^L_t f(t) \end{equation} where the linear transformation $C[x]: {\hbox{\bf R}}^n \to {\hbox{\bf R}}^n$ is the derivative of the conjugation map $y \mapsto x^{-1} y x$ at the origin. In other words, for all $x \in {\hbox{\bf H}}$, $v \in {\hbox{\bf R}}^n$ we have $$ x^{-1}(\varepsilon v)x = \varepsilon C[x] v + \varepsilon^2 O(1).$$ The rule \eqref{group-diff} is easily verified by expanding $f(t+\varepsilon)g(t+\varepsilon)$ to first order in two different ways. We note the identities \be{c-twine} C[t \circ x] (t \circ v) = t \circ (C[x] v), \quad C[x]^{-1} = C[x^{-1}]. \end{equation} Since $C[x]$ and its inverse are both polynomial in $x$, we have \be{c-bound} |C[x] v| \sim |v| \hbox{ whenever } |x| <_\sim 1. \end{equation} Now suppose $F(x)$ is a smooth function from ${\hbox{\bf R}}^n$ to ${\hbox{\bf H}}$. We define the left-invariant derivative $D^L_x F(x)$ to be the matrix with columns given by $$D^L_x F(x) = (\partial^L_{x_1} F(x), \ldots, \partial^L_{x_n} F(x)).$$ In other words, we have the Newton approximation $$ F(x + \varepsilon v) = F(x) (\varepsilon D^L_x F(x) v) + \varepsilon^2 O(1).$$ Since $dx$ is a Haar measure, we see that the determinant of $D^L_x F(x)$ is equal to the Jacobian of $F$ at $x$ with respect to Lebesgue measure.
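As a concrete illustration (the standard model case, which plays no role in the sequel), take ${\hbox{\bf H}}$ to be the Heisenberg group ${\hbox{\bf R}}^3$ with multiplication $$ xy = (x_1+y_1,\ x_2+y_2,\ x_3+y_3+\tfrac{1}{2}(x_1 y_2 - x_2 y_1)) $$ and dilations $t \circ x = (tx_1, tx_2, t^2 x_3)$, so that $(\alpha_1, \alpha_2, \alpha_3) = (1,1,2)$ and $N = 4$. Expanding $f(t)(\varepsilon u)$ to first order and matching with $f(t+\varepsilon)$ gives $$ \partial^L_t f = \left(\partial_t f_1,\ \partial_t f_2,\ \partial_t f_3 - \tfrac{1}{2}(f_1 \partial_t f_2 - f_2 \partial_t f_1)\right), $$ which differs from $\partial_t f$ only by a shear that is bounded when $f$ is, in accordance with \eqref{comparable}.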
We note that \be{det-form} \det D^L_x F(x) (e_1 \wedge \ldots \wedge e_n) = \partial^L_{x_1} F(x) \wedge \ldots \wedge \partial^L_{x_n} F(x). \end{equation} The vector field $$X(x) = \partial^L_t (t \circ x)|_{t = 1}$$ shall be crucial in our arguments. An equivalent definition is \be{x-def} (1+\varepsilon) \circ x = x (\varepsilon X(x)) + \varepsilon^2 O(1). \end{equation} Note that $X$ commutes with dilation: \be{x-dil} X(t \circ x) = t \circ X(x). \end{equation} Since $X$ depends polynomially on $x$, we therefore have \be{x-bound} \rho(X(x)) <_\sim \rho(x). \end{equation} For comparison, we also observe the bound \be{trivx-bound} \rho(\partial_t (t \circ x)) \sim \rho(t \circ x) \end{equation} which follows immediately from \eqref{dilate}. The left-invariant derivative $\partial^L_t$ interacts with dilations via the formula \be{dil-diff} \partial^L_t (s(t) \circ f(t)) = s(t) \circ \partial^L_t f(t) + \frac{s'(t)}{s(t)} (s(t) \circ X[f(t)]) \end{equation} which is verified by expanding $s(t+\varepsilon) \circ f(t+\varepsilon)$ to first order in two different ways. Finally, we have \begin{lemma}\label{x-invert} The map $X: {\hbox{\bf R}}^n \to {\hbox{\bf R}}^n$ is a polynomial diffeomorphism with Jacobian comparable to 1. \end{lemma} \begin{proof} From the monotonicity assumptions on $\alpha_i$ and the assumption that dilations preserve the multiplication structure, it is easy to see that the multiplication law $(x,y) \mapsto xy$ on ${\hbox{\bf H}}$ must have the triangular form \be{mult} \begin{split} (xy)_1 &= x_1 + y_1\\ (xy)_2 &= x_2 + y_2 + P_2(x_1,y_1)\\ (xy)_3 &= x_3 + y_3 + P_3(x_1,x_2,y_1,y_2)\\ &\ldots \\ (xy)_n &= x_n + y_n + P_n(x_1, \ldots, x_{n-1}, y_1, \ldots, y_{n-1}) \end{split} \end{equation} where $P_2, \ldots, P_n$ are polynomials. From \eqref{dilate} and \eqref{x-def} we have $$ x + \varepsilon(\alpha_1 x_1, \ldots, \alpha_n x_n) = x (\varepsilon X(x)) + \varepsilon^2 O(1).$$ Inserting this into \eqref{mult} and solving recursively for the components of $X(x)$ we see that \begin{align*} X(x)_1 &= \alpha_1 x_1\\ X(x)_2 &= \alpha_2 x_2 + Q_2(x_1)\\ &\ldots\\ X(x)_n &= \alpha_n x_n + Q_n(x_1, \ldots, x_{n-1}) \end{align*} for some polynomials $Q_2, \ldots, Q_n$ which depend on the $\alpha_i$. The claim follows. \end{proof} \section{Proof of Theorem \ref{L2}. Kernel truncation and frequency localization.} We now begin the proof of Theorem \ref{L2}. The heart of the argument is an iterated $TT^*$ method, in the spirit of Christ and Rubio de Francia \cite{christ:rough}. We may normalize $\|K_0\|_{L \log L} = 1$. We first partition the kernel $K$ dyadically. From the identity $$ K = \frac{1}{\ln 2} \int \Delta[t] K_0\ \frac{dt}{t}$$ we have the decomposition $K = \sum_j S_j K_0$, where $S_j$ is the operator \begin{equation}\label{kj-def} S_j F = 2^{-j} \int \varphi(2^{-j} t) \Delta[t] F\ dt, \end{equation} and $\varphi$ is a bump function adapted to $\{t \sim 1\}$ such that $\sum_j 2^{-j} t \varphi(2^{-j} t) = \frac{1}{\ln 2}$. Note that \be{sj-est} \|S_j F\|_1 <_\sim \|F\|_1 \end{equation} uniformly in $j$. From the a priori assumptions on $K_0$ we see that the $S_j K_0$ are all $C^\infty_0$ functions. We need to show that $$ \| f * \sum_j S_j K_0 \|_2 <_\sim \|f\|_2.$$ Write $K_0 = \sum_{s \geq 0} K_0^s$, where $$K_0^s = \hat K_0^s - \frac{\chi_{A_0}}{|A_0|} \int_{A_0} \hat K_0^s$$ and $\hat K_0^s$ is the portion of $K_0$ on the set $$2^{2^s} \leq 1+|K_0| < 2^{2^{s+1}}.$$ Note that each $K_0^s$ has mean zero.
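We note in passing why these layers are summable against the weight $2^s$: one has $\log_2(1+|K_0|) \geq 2^s$ on the support of $\hat K_0^s$, and these supports are disjoint, so that $$ \sum_{s \geq 0} 2^s \|\hat K_0^s\|_1 \leq \int_{A_0} |K_0| \log_2(1+|K_0|) <_\sim \|K_0\|_{L \log L}. $$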
By the triangle inequality and the computation $$ \sum_{s \geq 0} 2^s \|K_0^s\|_1 <_\sim \sum_{s \geq 0} 2^s \|\hat K_0^s\|_1 <_\sim \|K_0\|_{L \log L} = 1$$ it thus suffices to show that $$ \| f * \sum_j S_j K_0^s \|_2 <_\sim \|f\|_2 (2^s \|K_0^s\|_1 + 2^s 2^{-\varepsilon 2^s})$$ for all $s \geq 0$. Fix $s$. For each integer $k$, let $T_k$ denote the operator $$ T_k f = f * \sum_{j=k2^s}^{(k+1)2^s - 1} S_j K_0^s.$$ Our task is to show the operator norm estimate $$ \| \sum_k T_k \| <_\sim 2^s \|K_0^s\|_1 + 2^s 2^{-\varepsilon 2^s}.$$ From Young's inequality and \eqref{sj-est} we have $$ \| T_k f \|_2 <_\sim \|f\|_2 2^s \|K_0^s\|_1$$ for all integers $k$. In particular, we have the operator norm estimates $$ \| T_k T_{k^\prime}^* \|, \| T_k^* T_{k^\prime} \| <_\sim (2^s \|K_0^s\|_1)^2$$ for all $k,{k^\prime}$. We will shortly show that \be{L2-targ} \| T_k T_{{k^\prime}}^* \|, \| T_k^* T_{k^\prime} \| <_\sim 2^{2s} 2^{-\varepsilon 2^s |k-{k^\prime}|} \end{equation} for $|k-{k^\prime}| \geq C$, where $C$ is a large constant to be determined later. From these estimates the desired bound on $\| \sum_k T_k \|$ follows from the Cotlar-Knapp-Stein lemma (see e.g. \cite{stein:large}). It remains to prove \eqref{L2-targ}. We prove only the first estimate, as the second is analogous. We rewrite this as $$ \|\sum_{j=k2^s}^{(k+1)2^s-1} \sum_{j'=k'2^s}^{(k'+1)2^s-1} f * S_j K_0^s * S_{j'} \tilde K_0^s \|_2 <_\sim 2^{2s} 2^{-\varepsilon 2^s |k-{k^\prime}|} \|f\|_2$$ where $\tilde F$ denotes the function $\tilde F(x) = F(x^{-1})$. By the triangle inequality it suffices to show that \be{decay} \| f * S_j K_0^s * S_{j'} \tilde K_0^s \|_2 <_\sim 2^{-\varepsilon |j-{j^\prime}|} \|f\|_2 \end{equation} for all integers $j,{j^\prime}$ for which $|j-{j^\prime}| > C2^s$. The next step is to introduce a form of Littlewood-Paley theory, although we shall avoid any explicit use of the Fourier transform. Fix a function $\phi$ on the unit ball with $\|\phi\|_{C^1} <_\sim 1$ which has unit mass. We may also assume that $\phi = \tilde \phi$. For each integer $k$, write $$ \Psi_k = \Delta[2^{k-1}] \phi - \Delta[2^k] \phi.$$ Note that $\Psi_k$ is supported on the ball of radius $C2^{k}$, has mean zero, and $\tilde \Psi_k = \Psi_k$. Since $\sum_k \Psi_k = \delta_0$ in the sense of distributions, we may write $$ f * S_j K_0^s * S_{j'} \tilde K_0^s = \sum_k \sum_{k^\prime} f * S_j K_0^s * \Psi_k * \Psi_{k^\prime} * S_{{j^\prime}} \tilde K_0^s.$$ Suppose for the moment that we could prove \begin{proposition}\label{L2-key} For any integers $j$, $k$, and any $L^\infty$ function $K_0$ on the unit annulus with mean zero, we have $$ \| f * S_j K_0 * \Psi_k \|_2 <_\sim 2^{-\varepsilon |j-k|} \|f\|_2 \|K_0\|_\infty.$$ \end{proposition} Then we would have the estimates $$ \| f * S_j K_0^s * \Psi_k \|_2 <_\sim 2^{2^{s+1}} 2^{-\varepsilon |j-k|} \|f\|_2$$ and (by duality) $$ \| g * \Psi_{k^\prime} * S_{j^\prime} \tilde K_0^s\|_2 <_\sim 2^{2^{s+1}} 2^{-\varepsilon |{k^\prime}-{j^\prime}|} \|g\|_2.$$ Combining these two estimates we see that \be{first-fss} \| f * S_j K_0^s * \Psi_k * \Psi_{k^\prime} * S_{j^\prime} \tilde K_0^s \|_2 <_\sim 2^{2^{s+2}} 2^{-\varepsilon |j-k|} 2^{-\varepsilon |{k^\prime}-{j^\prime}|} \|f\|_2.
\end{equation} On the other hand, by Young's inequality we also have the estimate $$ \| f * S_j K_0^s * \Psi_k * \Psi_{k^\prime} * S_{j^\prime} \tilde K_0^s \|_2 <_\sim \|f\|_2 \|S_j K_0^s\|_1 \|\Psi_k * \Psi_{k^\prime}\|_1 \|S_{j^\prime} \tilde K_0^s\|_1.$$ Since $K_0$ is in $L \log L$, $K_0^s$ is in $L^1$, and so by \eqref{sj-est} we have $$ \|S_j K_0^s\|_1, \|S_{j^\prime} \tilde K_0^s\|_1 <_\sim 1.$$ Also, from the smoothness and mean zero conditions on $\Psi_k$, $\Psi_{k^\prime}$ we have $$ \|\Psi_k * \Psi_{k^\prime}\|_1 <_\sim 2^{-\varepsilon |k-{k^\prime}|}.$$ Thus we obtain the bound $$ \| f * S_j K_0^s * \Psi_k * \Psi_{k^\prime} * S_{j^\prime} \tilde K_0^s \|_2 <_\sim 2^{-\varepsilon |k-{k^\prime}|} \|f\|_2.$$ Taking the geometric mean of this with \eqref{first-fss} we obtain $$ \| f * S_j K_0^s * \Psi_k * \Psi_{k^\prime} * S_{j^\prime} \tilde K_0^s \|_2 <_\sim 2^{2^{s+1}} 2^{-\varepsilon |j-k|/2} 2^{-\varepsilon|k-{k^\prime}|/2} 2^{-\varepsilon |{k^\prime}-{j^\prime}|/2} \|f\|_2.$$ If we then sum this in $k$ and ${k^\prime}$ we obtain $$ \| f * S_j K_0^s * S_{j^\prime} \tilde K_0^s \|_2 <_\sim 2^{2^{s+1}} |j-{j^\prime}| 2^{-\varepsilon |j-{j^\prime}|/2} \|f\|_2,$$ which gives \eqref{decay} for some $\varepsilon > 0$, if $|j-{j^\prime}| > C2^s$ for a sufficiently large $C$. \section{Proof of Theorem \ref{L2} continued. Iterated $TT^*$ methods.} It thus remains to prove Proposition \ref{L2-key}. We may normalize so that $\|K_0\|_\infty = 1$; by scale invariance we may assume that $j=0$. If $k \geq -C$ then from the mean zero condition on $K_0$ and the smoothness of $\Psi_k$ we have $$ \| S_0 K_0 * \Psi_k \|_1 <_\sim 2^{-\varepsilon k}$$ and the desired bound thus follows from Young's inequality. We may therefore assume that $k < -C$. Fix $k = -s$ for some $s>C$. Our task is now to show $$ \| f * S_0 K_0 * \Psi_{-s} \|_2 <_\sim 2^{-\varepsilon s} \|f\|_2.$$ It is possible to use Fourier techniques to handle this estimate, taking advantage of the microlocal regularity properties of $S_0 K_0$. However, we shall pursue a different approach based on the iterated $T^*T$ method, as we shall need these techniques later on for the (more difficult) weak (1,1) estimate. Roughly speaking, the idea is as follows. The kernel $S_0 K_0$ is smooth along the ``radial'' direction, but is otherwise rough. Thus there is no obvious way to exploit the cancellation properties of $\Psi_{-s}$. However, if one convolves $S_0 K_0$ with itself $n$ times, then one should obtain a kernel which is smooth in $n$ separate directions at any given point. Assuming that these directions are linearly independent, the iterated kernel thus has isotropic regularity, and one will pick up the desired $2^{-\varepsilon s}$ gain by exploiting the moment conditions of $\Psi_{-s}$. Of course, there will be an exceptional portion of the convolution in which the directions of smoothing are not independent. For this portion one cannot exploit cancellation and one must instead replace everything by absolute values. We now turn to the details.
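We shall repeatedly use the elementary fact that for a bounded self-adjoint operator $S$ on a Hilbert space one has $$ \|S^m\| = \|S\|^m \qquad \hbox{for all } m \geq 1, $$ since the norm of a self-adjoint operator equals its spectral radius; applied to $S = T^*T$, this yields the operator norm identity invoked below.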
By the $T^*T$ method, it suffices to show that $$ \| f * \Psi_{-s} * S_0 \tilde K_0 * S_0 K_0 * \Psi_{-s} \|_2 <_\sim 2^{-\varepsilon s} \|f\|_2.$$ From the operator norm identity $\|T^* T\| = \|(T^* T)^n\|^{1/n}$, it thus suffices to show that $$ \| f * \Psi_{-s} * S_0 \tilde K_0 * S_0 K_0 * \Psi_{-s} * \ldots * \Psi_{-s} * S_0 \tilde K_0 * S_0 K_0 * \Psi_{-s} \|_2 <_\sim 2^{-\varepsilon s} \|f\|_2$$ for a slightly different value of $\varepsilon > 0$, where the convolution is iterated $n = \dim {\hbox{\bf H}}$ times. By Young's inequality it suffices to show that $$ \| \Psi_{-s} * S_0 \tilde K_0 * S_0 K_0 * \Psi_{-s} * \ldots * \Psi_{-s} * S_0 \tilde K_0 * S_0 K_0 * \Psi_{-s} \|_1 <_\sim 2^{-\varepsilon s}.$$ The function $\Psi_{-s} * S_0 \tilde K_0$ is bounded in $L^1$, and is therefore an average of delta functions in $B(0,C)$. From Minkowski's inequality, it therefore suffices to show that $$ \| \delta_{w_1} * S_0 K_0 * \delta_{w_2} * S_0 K_0 * \ldots * \delta_{w_n} * S_0 K_0 * \Psi_{-s} \|_1 <_\sim 2^{-\varepsilon s}$$ uniformly for all $w_1, \ldots, w_n \in B(0,C)$. Fix $w=(w_1, \ldots, w_n)$. It suffices to show that $$ |\langle \delta_{w_1} * S_0 K_0 * \delta_{w_2} * S_0 K_0 * \ldots * \delta_{w_n} * S_0 K_0 * \Psi_{-s},g \rangle| <_\sim 2^{-\varepsilon s}$$ for all test functions $g$ which are normalized in $L^\infty$. Fix $g$. We write the left-hand side as $$ |\int \int \int \Psi_{-s}(x) g(\Phi_y(t) x) \prod_{q=1}^n K_0(y_q) \varphi(t_q)\ dy dt dx|$$ where $t = (t_1, \ldots, t_n) \in [C^{-1},C]^n$, $y = (y_1, \ldots, y_n) \in A_0^n$, and \be{Phi-def} \Phi_{y}(t) = \prod_{q=1}^n w_q (t_q \circ y_q). \end{equation} The treatment of this integral depends on whether the map $\Phi_y: {\hbox{\bf R}}^n \to {\hbox{\bf H}}$ is degenerate or not. This degeneracy is measured by the Jacobian $\det D^L_t(\Phi_y(t))$. Accordingly, we split our estimates into \be{non-cancel} |\int \int \int \Psi_{-s}(x) \eta(2^{n\varepsilon s} \det D^L_t(\Phi_{y})(t)) g(\Phi_{y}(t)x) \prod_{q=1}^n K_0(y_q) \varphi(t_q)\ dy dt dx| <_\sim 2^{-\varepsilon s} \end{equation} and \be{cancel} |\int \int \int \Psi_{-s}(x) [1-\eta(2^{n\varepsilon s} \det D^L_t(\Phi_{y})(t))] g(\Phi_{y}(t)x) \prod_{q=1}^n K_0(y_q) \varphi(t_q)\ dy dt dx| <_\sim 2^{-\varepsilon s}, \end{equation} where $\eta$ is a smooth non-negative bump function which equals $1$ near the origin. \section{Proof of Theorem \ref{L2} continued. The degenerate portion of the integral.} To show \eqref{non-cancel} we simply replace everything by absolute values, and use the bounds on $K_0$, $g$, and $\varphi$ to reduce to $$ \int \int_{[C^{-1},C]^n} \int_{A_0^n} |\Psi_{-s}(x)| \eta(2^{n\varepsilon s} \det D^L_t(\Phi_{y})(t))\ dy dt dx <_\sim 2^{-\varepsilon s}.$$ Performing the $x$ integration and taking supremums in the $t$ integral, we reduce to $$ \sup_{t \in [C^{-1},C]^n} \int_{y \in A_0^n: |\det D^L_t(\Phi_{y})(t)| <_\sim 2^{-n\varepsilon s}} dy <_\sim 2^{-\varepsilon s}.$$ Fix $t \in [C^{-1},C]^n$. From \eqref{det-form} we have $$ |\det D^L_t(\Phi_y)| = |\partial^L_{t_1} \Phi_y \wedge \ldots \wedge \partial^L_{t_n} \Phi_y|.$$ From \eqref{Phi-def}, \eqref{group-diff} and \eqref{dil-diff} we have \be{Phi-diff} \partial^L_{t_q} \Phi_y = t_q^{-1} C[Q_q] (t_q \circ X(y_q)) \end{equation} for all $q$, where $Q_q$ is the quantity $$ Q_q = \prod_{j=q+1}^n w_j (t_j \circ y_j).$$ Since $|Q_q| <_\sim 1$, $t_q \sim 1$, and $|y_q| \sim 1$, we see from \eqref{c-bound} that $$ |\partial^L_{t_q} \Phi_y| \sim 1.
$$ We therefore have \be{ei-decomp} \int_{y \in A_0^n: |\det D^L_t(\Phi_{y})(t)| <_\sim 2^{-n\varepsilon s}} dy_1 \ldots dy_n \leq \sum_{q=1}^{n-1} |E_q|, \end{equation} where $E_q$ is the set $$ E_q = \{ y \in A_0^n: | \partial^L_{t_q} \Phi_y \wedge \ldots \wedge \partial^L_{t_n} \Phi_y | < 2^{-\varepsilon s} | \partial^L_{t_{q+1}} \Phi_y \wedge \ldots \wedge \partial^L_{t_n} \Phi_y | \}.$$ It thus suffices to show that $|E_q| <_\sim 2^{-\varepsilon s}$ for each $q$. Fix $q$, and freeze all the $y_j$ variables except for $y_q$. From \eqref{Phi-diff} and \eqref{c-bound}, we see that in order for $y$ to be in $E_q$, $X(y_q)$ must live in a union of boundedly many $2^{-\varepsilon s}$-neighbourhoods of planes. These planes depend only on $Q_q$ and $\partial^L_{t_{q+1}} \Phi_y \wedge \ldots \wedge \partial^L_{t_n} \Phi_y$, and so are independent of $y_q$. Since $y_q$ is bounded, we thus see from Lemma \ref{x-invert} that $y_q$ lives in a finite union of $C2^{-\varepsilon s}$-neighbourhoods of compact hypersurfaces. In particular, the variable $y_q$ must range in a set of measure $O(2^{-\varepsilon s})$. The desired bound on $E_q$ follows by unfreezing the remaining $y$ variables. This concludes the proof of \eqref{non-cancel}. \section{Proof of Theorem \ref{L2} continued. The non-degenerate portion of the integral.} To finish the proof of Theorem \ref{L2} we must show \eqref{cancel}. Since $K_0$ is in $L^\infty(A_0)$, it is in $L^1$, and it suffices to show $$ |\int \int \Psi_{-s}(x) g(\Phi_{y}(t)x) [1-\eta(2^{n\varepsilon s} \det D^L_t(\Phi_{y})(t))] \prod_{q=1}^n \varphi(t_q)\ dt dx| <_\sim 2^{-\varepsilon s} $$ uniformly in $y$. Fix $y$. We now utilize the moment conditions on $\Psi_{-s}$ by rewriting $\Psi_{-s}$ as a (Euclidean) divergence of a function which is small in $L^1$. More precisely, we shall use \begin{lemma}\label{integ} Let $f$ be a function on $B(0,C)$ with mean zero and $\|f\|_1 <_\sim 1$. Then there exist functions $f_1, \ldots, f_n$ supported on a slightly larger ball $B(0,C)$ with $\|f_i\|_1 <_\sim 1$ and $$ f(x) = \sum_i \partial_{x_i} f_i(x).$$ \end{lemma} \begin{proof} Without loss of generality we may assume that $f$ is supported on the unit cube $[0,1]^n$. When $n=1$ the lemma is clear. For $n>1$ we write $$ f(x_1, \ldots, x_n) = \partial_{x_n} f_n(x_n) + F_{x_n}(x_1, \ldots, x_{n-1})$$ for all $x \in [0,1]^n$, where $$ f_n(x_n) = \int_{x'_n \leq x_n} f(x')\ dx'$$ and $$ F_{x_n}(x_1, \ldots, x_{n-1}) = f(x_1, \ldots, x_n) - \int_{x'_n = x_n} f(x')\ dx'.$$ Clearly $f_n$, $F_{x_n}$ have bounded $L^1$ norm on the unit cube, and $F_{x_n}$ has mean zero for each $x_n$. The lemma then follows by induction. \end{proof} Applying this lemma to $f = \Psi_0$ and then rescaling, we may write $$ \Psi_{-s}(x) = \sum_{i=1}^n \partial_{x_i} f_i(x)$$ where the functions $f_i$ are supported on $B(0,C 2^{-s})$ and satisfy \be{l1-gain} \|f_i\|_1 <_\sim 2^{-\alpha_i s}. \end{equation} We thus need to show that \be{integ-L2} | \int \int \partial_{x_i} f_i(x) g(\Phi_{y}(t)x) a(t)\ dt dx| <_\sim 2^{-\varepsilon s} \end{equation} for all $i=1,2,\ldots, n$, where $$ a(t) = [1-\eta(2^{n\varepsilon s} \det D^L_t(\Phi_{y})(t))] \prod_{q=1}^n \varphi(t_q).$$ Fix $i$. The idea is to use integration by parts to somehow move the derivative $\partial_{x_i}$ onto the smooth function $a$, so that one can exploit \eqref{l1-gain} and the $L^\infty$ control on $g$.
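For completeness, we indicate how the rescaling yields \eqref{l1-gain}. From the definition of $\Psi_k$ one has $\Psi_{-s}(x) = 2^{Ns} \Psi_0(2^s \circ x)$, so writing $\Psi_0 = \sum_i \partial_{x_i} g_i$ with $\|g_i\|_1 <_\sim 1$ as in Lemma \ref{integ}, the dilation \eqref{dilate} gives $$ \Psi_{-s}(x) = \sum_{i=1}^n \partial_{x_i} f_i(x), \qquad f_i(x) = 2^{Ns} 2^{-\alpha_i s} g_i(2^s \circ x), $$ and hence $\|f_i\|_1 = 2^{-\alpha_i s} \|g_i\|_1 <_\sim 2^{-\alpha_i s}$.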
If we integrate by parts in the $x_i$ variable, the left-hand side of \eqref{integ-L2} becomes $$ | \int \int f_i(x) \partial_{x_i} g(\Phi_{y}(t)x) a(t)\ dt dx|.$$ From \eqref{l1-gain}, it thus suffices (if $\varepsilon$ is chosen sufficiently small) to show that \be{integ-l3} | \int \partial_{x_i} g(\Phi_{y}(t)x) a(t)\ dt| <_\sim 2^{C \varepsilon s}, \end{equation} for all $x \in B(0,C)$. Fix $x$. We now apply the following consequence of the chain rule, which allows one to convert a derivative of one variable to a derivative on another variable, provided that a certain Jacobian is non-zero. \begin{lemma}\label{chain} Let $f: {\hbox{\bf R}} \times {\hbox{\bf R}}^n \to {\hbox{\bf H}}$ and $F: {\hbox{\bf H}} \to {\hbox{\bf R}}$ be smooth functions. Then \be{chain-rule} \partial_{s} F(f(s,t)) = \nabla_t F(f(s,t)) \cdot (D^L_t f(s,t))^{-1} \partial^L_{s} f(s,t) \end{equation} whenever $\det D^L_t f(s,t)$ is non-zero. \end{lemma} \begin{proof} For any small $\varepsilon$, we have the Newton approximations $$ F(f(s+\varepsilon,t)) = F(f(s,t)) + \varepsilon \partial_s F(f(s,t)) + \varepsilon^2 O(1)$$ $$ f(s+\varepsilon,t) = f(s,t) (\varepsilon \partial^L_s f(s,t)) + \varepsilon^2 O(1)$$ $$ f(s,t+\varepsilon v) = f(s,t) (\varepsilon D^L_t f(s,t) v) + \varepsilon^2 O(1)$$ $$ F(f(s,t + \varepsilon v)) = F(f(s,t)) + \varepsilon \nabla_t F(f(s,t)) \cdot v + \varepsilon^2 O(1).$$ Combining all these estimates with $v = (D^L_t f(s,t))^{-1} \partial^L_s f(s,t)$ and letting $\varepsilon \to 0$ gives the result. \end{proof} From this lemma, \eqref{integ-l3} becomes $$ | \int \nabla_t g(\Phi_y(t)x) \cdot (D^L_t (\Phi_y(t) x))^{-1} \partial^L_{x_i}(\Phi_y(t)x) a(t)\ dt| <_\sim 2^{C\varepsilon s}.$$ By another integration by parts and the fact that $g \in L^\infty$, it suffices to show the uniform estimate $$ | \nabla_t \cdot [(D^L_t (\Phi_y(t) x))^{-1} \partial^L_{x_i}(\Phi_y(t)x) a(t)] | <_\sim 2^{C\varepsilon s}.$$ But this is easily verified, since all variables are compactly supported and all functions are smooth, with norms at most $O(2^{C\varepsilon s})$. The $(D^L_t (\Phi_y(t) x))^{-1}$ term is well-behaved since $$ |\det D^L_t(\Phi_y(t) x)| \sim |\det D^L_t(\Phi_y(t))| \gtrsim 2^{-n\varepsilon s}$$ on the support of $a(t)$. This completes the proof of Proposition \ref{L2-key} and thus Theorem \ref{L2}. \section{Proof of Theorem \ref{weak-11}. Truncation of the kernel and strong-type estimates.} We now begin the proof of Theorem \ref{weak-11}. The arguments will be similar in flavor to the ones used to prove Theorem \ref{L2}, but with two major differences. Firstly, because the function $f$ is now only controlled in $L^1$, one is forced (as in \cite{christ:rough}) to perform the $TT^*$ method with respect to $K_0$ rather than $f$. Secondly, the Littlewood-Paley operators are not particularly useful in the $L^1$ setting, and we cannot reduce to an estimate on a single scale such as Proposition \ref{L2-key}. Instead, we are forced to consider the interactions between several scales. This will cause an increase in complexity in our arguments. We remark that if one were to treat the maximal function or square function instead of the singular integral, then one could again localize to a single scale; cf. the arguments in \cite{christ:rough}. Let $K$ be as in the statement of the theorem. We wish to show that $$ |\{ |f * K| \gtrsim \alpha \}| <_\sim \alpha^{-1} \|f\|_1 \|K_0\|_{L \log L}.$$ We may assume that $f$ is a $C^\infty_0$ function.
By linearity we may assume that $\alpha = 1$ and $\|K_0\|_{L \log L} = 1$. We perform the standard Calder\'on-Zygmund decomposition of $f$ at height $1$ to obtain $f = g + \sum_J b_J$, where $\|g\|_1 <_\sim \|f\|_1$, $\|g\|_\infty <_\sim 1$, the $J$ range over a collection of disjoint balls with $\sum_J |J| <_\sim \|f\|_1$, and for each $J$ the functions $b_J$ are supported on $CJ$ with \be{bj-prop} \|b_J\|_{L^1(CJ)} <_\sim |J|, \quad \int b_J = 0. \end{equation} Since $f$ is smooth, the collection of $J$ is finite. We may arrange matters so that the $b_J$ are smooth. We now proceed with the standard reduction argument as employed in \cite{christ:rough}, \cite{seeger:rough}. As in the proof of Theorem \ref{L2}, we decompose $K = \sum_j S_j K_0$. We need to estimate the set where $$ f * K = g * K + \sum_{s \leq C} \sum_J b_J * S_{j+s} K_0 + \sum_{s > C}\sum_J b_J * S_{j+s} K_0 $$ is essentially greater than 1; here and in the sequel, $j = j(J)$ is the integer such that $J$ has radius $2^j$. The first term can be handled by the $L^2$ boundedness hypothesis and Chebyshev's inequality because $g$ is in $L^2$ with norm $O(\|f\|_1^{1/2})$. The second term is supported in $\bigcup_J CJ$, and so that contribution is acceptable since $\sum_J |CJ| <_\sim \|f\|_1$. To handle the remaining term it suffices to show that $$ |\{ \sum_{s > C} |\sum_J b_J * S_{j+s} K_0| \gtrsim 1\}| <_\sim \sum_J |J|. $$ We introduce a cutoff to emphasize the fact that $b_J * S_{j+s} K_0$ is supported on the annulus $(2^s J)_\Delta$. Namely, we rewrite the above as \be{weak-est} |\{ \sum_{s > C} |\sum_J \psi_J(b_J * S_{j+s} K_0)| \gtrsim 1\}| <_\sim \sum_J |J| \end{equation} where $\psi_J(x) = \psi(2^{-j-s} \circ (x_J^{-1} x))$ and $\psi$ is a suitable cutoff function supported on (a slight thickening of) the unit annulus $A_0$. We now claim that \eqref{weak-est} will follow from \begin{proposition}\label{first} Let $s \geq C$, ${\cal J}$ be a non-empty finite collection of disjoint balls such that \be{size} \sum_J |J| <_\sim 1, \end{equation} and $b_J$ be a collection of smooth functions satisfying \eqref{bj-prop}. Let $\psi_J$ be defined as above. Let $1 < p < 2$ be an exponent. Then there exists an exceptional set $E = E_s$ such that $|E| <_\sim 2^{-\varepsilon s}$ and \begin{equation}\label{first-reduction} \| \sum_J \psi_J(b_J * S_{j+s} F_J) \|_{L^p(E^c)} <_\sim 2^{-\varepsilon s} (\sum_J |J| \|F_J\|_2^2)^{1/2} \end{equation} for all functions $F_J$ in $L^2({\hbox{\bf H}})$. \end{proposition} We will prove this proposition in later sections. For now, we show why Proposition \ref{first} implies \eqref{weak-est}. It suffices by dilation invariance to verify \eqref{weak-est} in the case when $\sum_J |J| \sim 1$. In particular, we may assume that \eqref{size} holds. For each $s > C$ we decompose $K_0$ as $K_0 = K^{\leq s} + K^{>s}$, where $K^{\leq s}$ is the portion of $K_0$ supported on the set where $|K_0| \leq 2^{\varepsilon s/2}$. We have to show that \be{former} |\{ |\sum_{s > C} \sum_J \psi_J(b_J * S_{j+s} K^{>s}) | \gtrsim 1\}| <_\sim 1 \end{equation} and \be{latter} |\{ |\sum_{s > C} \sum_J \psi_J(b_J * S_{j+s} K^{\leq s}) | \gtrsim 1\}| <_\sim 1.
\end{equation} To show \eqref{former} it suffices by Chebyshev's inequality to show that $$\| \sum_{s > C} \sum_J \psi_J(b_J * S_{j+s} K^{> s}) \|_1 <_\sim 1.$$ But from \eqref{sj-est}, \eqref{bj-prop}, and Young's inequality one sees that $$ \| \psi_J(b_J * S_{j+s} K^{> s}) \|_1 <_\sim |J| \|K^{> s}\|_1,$$ and the desired estimate follows from \eqref{size} and the observation that $$\sum_{s > C} \|K^{> s}\|_1 <_\sim \|K_0\|_{L \log L} = 1.$$ To show \eqref{latter} it suffices by Chebyshev and the observation $|\bigcup_{s > C} E_s| <_\sim 1$ to show that $$ \| \sum_{s > C} \sum_J \psi_J(b_J * S_{j+s} K^{\leq s})\|_{L^p((\bigcup_{s>C} E_s)^c)} <_\sim 1.$$ By the triangle inequality it suffices to show $$ \| \sum_J \psi_J(b_J * S_{j+s} K^{\leq s})\|_{L^p(E_s^c)} <_\sim 2^{-\varepsilon s/2}$$ for each $s$. But this follows from \eqref{first-reduction} with $F_J = K^{\leq s}$ for all $J$, since $$(\sum_J |J| \|K^{\leq s}\|_2^2)^{1/2} <_\sim \| K^{\leq s} \|_2 <_\sim \|K^{\leq s}\|_\infty <_\sim 2^{\varepsilon s/2}.$$ This completes the derivation of \eqref{weak-est} from Proposition \ref{first}. It remains only to prove Proposition \ref{first}. \section{Proof of Theorem \ref{weak-11} continued. Bounded overlap of dilated balls.} In the remainder of the argument, $s>C$ and $1 < p < 2$ will be fixed. To prove Proposition \ref{first}, we first prove it under a natural multiplicity assumption on the overlap of the sets $2^s J$. More precisely, we will show in later sections that \begin{proposition}\label{next} Let ${\cal J}$ be a non-empty finite collection of disjoint balls such that \eqref{size} and \be{infty-count} \| \sum_J \chi_{C2^s J}\|_{\infty} <_\sim 2^{Ns} \end{equation} hold. Let $b_J$ be a collection of smooth functions satisfying \eqref{bj-prop}, and let $\psi_J$ be defined as above. Then we have \begin{equation}\label{next-reduction} \| \sum_J \psi_J(b_J * S_{j+s} F_J) \|_p <_\sim 2^{-\varepsilon s} (\sum_J |J| \|F_J\|_2^2)^{1/2} \end{equation} for all functions $F_J$ in $L^2({\hbox{\bf H}})$. \end{proposition} In this section we show how Proposition \ref{next} can be used to imply Proposition \ref{first}. Suppose that we are in the situation of Proposition \ref{first}. We first observe a useful lemma which will also be needed much later in this argument. \begin{lemma}\label{bmo-mult} Let $B \subset B(0,C)$ be any Euclidean ball of radius at least $2^{-\varepsilon s}$, and define the functions $\psi_{J,B}$ by $$ \psi_{J,B}(x) = \psi_B(2^{-j-s} \circ (x_J^{-1} x))$$ where $\psi_B$ is any bump function which is adapted to $B$. Then we have $$ | \{ \sum_J \psi_{J,B}(x) > s^3 2^{Ns} |B|\}| <_\sim 2^{-\varepsilon s^2}.$$ \end{lemma} \begin{proof} It suffices to show the two estimates \be{l1-count} \| \sum_J \psi_{J,B}\|_1 <_\sim 2^{Ns} |B| \end{equation} and \be{bmo-count} \| \sum_J \psi_{J,B}\|_{BMO} <_\sim s 2^{Ns} |B| \end{equation} where BMO is defined with respect to the ball structure of the homogeneous group. The desired distributional estimate follows from \eqref{l1-count} and \eqref{bmo-count} thanks to the inequality $$ | \{ |f| \geq \alpha \}| <_\sim e^{-c\alpha/\|f\|_{BMO}} \frac{\|f\|_1}{\alpha} $$ which follows immediately from the John-Nirenberg inequality and the Calder\'on-Zygmund decomposition. The estimate \eqref{l1-count} follows trivially from the triangle inequality and \eqref{size}.
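Indeed, by left invariance each term contributes $\|\psi_{J,B}\|_1 <_\sim 2^{N(j+s)} |B| \sim 2^{Ns} |J| |B|$, so that $$ \| \sum_J \psi_{J,B} \|_1 <_\sim 2^{Ns} |B| \sum_J |J| <_\sim 2^{Ns} |B| $$ by \eqref{size}.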
To show \eqref{bmo-count}, it suffices to show that $$ \sum_J {\hbox{osc}}_I \psi_{J,B} <_\sim 2^{Ns} |B|$$ for all balls $I$, where ${\hbox{osc}}_I f = \frac{1}{|I|} \int_I |f - f_I|$ and $f_I$ is the mean of $f$ on $I$. Fix $I$, and suppose that $I$ has radius $2^i$. We divide into three cases, depending on the relative sizes of $I$, $J$ and $2^s J$. We first consider the case where $I$ is larger than $2^s J$. In this case ${\hbox{osc}}_I \psi_{J,B}$ vanishes unless $J$ is in $CI$, in which case the oscillation is $O(2^{Ns}|J| |B|/|I|)$. Since the $J$ are disjoint and live in $CI$ the total contribution from these balls is acceptable. Next, we consider the case where $I$ has size between $J$ and $2^s J$ inclusive. For each scale $j$, there are at most $O(2^{Ns}|B|)$ balls $J$ of size $2^j$ which give a non-zero contribution. Since each ball contributes at most $O(1)$, we are done. Finally, we consider the case where $I$ has size smaller than $J$. For each scale $j$, there are at most $O(2^{Ns}|B|)$ balls $J$ which contribute. But from the smoothness of $\psi_{J,B}$ we see that each ball gives a contribution of $O(2^{-\varepsilon(j+s-i)})$ for some $\varepsilon > 0$. Summing in $j$ we see that this contribution is also acceptable. \end{proof} By applying this lemma with a ball of size roughly 1 and a non-negative cutoff, we obtain \be{bigmult} | \{ \sum_J \chi_{C 2^s J} \gtrsim s^3 2^{Ns}\}| <_\sim 2^{-\varepsilon s^2}. \end{equation} To pass from this to \eqref{infty-count} we shall use a sieving argument of C\'ordoba \cite{cordoba:sieve}. For any ball $J \in {\cal J}$, define the \emph{height} $h(J)$ to be the number $$ h(J) = \# \{ J' \in {\cal J}: 2J \subset 2J' \}.$$ We first deal with the contribution of those balls in ${\cal J}$ with height at least $s^3 2^{Ns}$. Clearly, the counting function $\sum_{J \in {\cal J}} \chi_{C2^s J}$ is at least $s^3 2^{Ns}$ on these balls. By the above lemma, the total measure of these balls is $O(2^{-\varepsilon s^2})$. This implies that the contribution of these balls to the left-hand side of \eqref{first-reduction} is supported on a set of measure $O(2^{Ns} 2^{-\varepsilon s^2})$, which can safely be placed in the exceptional set $E$. We now consider for each $a = 0, 1, \ldots, s^3-1$ the contribution of those balls in ${\cal J}$ of height between $a 2^{Ns}$ and $(a+1) 2^{Ns}$. If we denote this collection of balls by ${\cal J}_a$, then we claim that ${\cal J}_a$ obeys \eqref{infty-count}. The estimate \eqref{first-reduction} would then follow from $s^3$ applications of \eqref{next-reduction} and the triangle inequality. It remains to verify \eqref{infty-count}. Let $x$ be an arbitrary point and let ${\cal J}^x$ be the set of all $J \in {\cal J}_a$ for which $x \in 2^s J$. We wish to show that $\# {\cal J}^x <_\sim 2^{Ns}$. We may of course assume that ${\cal J}^x$ is non-empty. Let $J_0$, $J_1$ be elements of ${\cal J}^x$ with minimal and maximal radius $2^{j_0}$ and $2^{j_1}$ respectively. We observe that there are at most $O(2^{Ns})$ balls in ${\cal J}^x$ of radius comparable to $2^{j_0}$ since the balls are disjoint. Similarly there are at most $O(2^{Ns})$ balls in ${\cal J}^x$ of radius comparable to $2^{j_1}$. So it only remains to show that there are at most $O(2^{Ns})$ balls in ${\cal J}^x$ of radius much larger than $2^{j_0}$ and much smaller than $2^{j_1}$. But each such ball makes a contribution of $1$ to $h(J_0) - h(J_1)$, which is $O(2^{Ns})$ by assumption. This completes the derivation of Proposition \ref{first} from Proposition \ref{next}.
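We remark, as an illustration not needed in the sequel, that the height sieve is efficient in the following model situation. Suppose ${\cal J}^x$ contained disjoint balls $J^{(1)}, \ldots, J^{(M)}$ with $2J^{(i)} \subset 2J^{(i')}$ whenever $i < i'$. Every ball counted in $h(J^{(i+1)})$ is also counted in $h(J^{(i)})$, while $J^{(i)}$ itself is counted in $h(J^{(i)})$ but not in $h(J^{(i+1)})$, so $$ h(J^{(1)}) > h(J^{(2)}) > \ldots > h(J^{(M)}); $$ hence at most $2^{Ns}$ of these balls can have heights in any single window $[a 2^{Ns}, (a+1) 2^{Ns})$, which is exactly the mechanism by which membership in a single class ${\cal J}_a$ enforces \eqref{infty-count}.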
\section{Proof of Theorem \ref{weak-11} continued. Iterated $TT^*$ methods.} Let ${\cal J}$, $b_J$ satisfy the conditions of Proposition \ref{next}. To finish the proof of Theorem \ref{weak-11} it suffices to show \eqref{next-reduction}. By duality, it suffices to show $$ (\sum_J |J|^{-1} \| S_{j+s}^*( \widetilde{b_J} * (\psi_J F) ) \|_2^2)^{1/2} <_\sim 2^{-\varepsilon s} \|F\|_{p'}$$ for all test functions $F$ on ${\hbox{\bf H}}$, where $\widetilde{b_J}(x) = b_J(x^{-1})$. By the $TT^*$ method, it therefore suffices to show that $$ \| \sum_J |J|^{-1} \psi_J(b_J * S_{j+s} S_{j+s}^* (\widetilde{b_J} * (\psi_J F)))\|_{p} <_\sim 2^{-\varepsilon s} \|F\|_{p'}.$$ Since $S_{j+s} S_{j+s}^* = 2^{-Ns} |J|^{-1} S_0 S_0^*$, we may rewrite this as \be{new-targ} \| TF \|_{p} <_\sim 2^{-\varepsilon s} \|F\|_{p'}, \end{equation} where $$ T = 2^{-Ns} \sum_J \psi_J T_J \psi_J$$ and $T_J$ is the self-adjoint operator $$ T_J F = \frac{b_J}{|J|} * S_0 S_0^* (\frac{\widetilde{b_J}}{|J|} * F).$$ Define the smooth functions $c_J$ supported on the ball $B(0,C)$ by $$ c_J(v) = |J|^{-1} b_J(d_J(v))$$ where $d_J: B(0,C) \to CJ$ is the map \be{dj-def} d_J(v) = x_J (2^j \circ v); \end{equation} from \eqref{bj-prop} we see that \be{cj-prop} \|c_J\|_{L^1(B(0,C))} <_\sim 1, \quad \int_{B(0,C)} c_J = 0. \end{equation} Also, note that $$S_0 S_0^* F(x) = \int \tilde\varphi(t) F(t \circ x)\ dt$$ where $\tilde\varphi$ is a bump function adapted to $\{t \sim 1\}$. We may rewrite $T_J$ as \be{aj-def} T_J F(x) = \int \int \int c_J(v) \tilde\varphi(t) c_J(w) F(d_J(w) t \circ (d_J(v)^{-1} x))\ dw dt dv. \end{equation} We now define a slightly larger, and non-cancellative, version of $T_J$. For each $J$, let $\psi^+_J$ be a slight enlargement of $\psi_J$ which is positive on the support of $\psi_J$. Also, apply Lemma \ref{integ} to find functions $c_J^1, \ldots, c_J^n$ supported on $B(0,C)$ such that \be{div} c_J = \sum_{i=1}^n \partial_{x_i} c_J^i \end{equation} and $\|c_J^i\|_1 <_\sim 1$. If one then defines $$ c_J^+ = |c_J| + \sum_{i=1}^n |c_J^i|,$$ then one sees that $c_J^+$ is a non-negative function on $B(0,C)$ with \be{cjp-prop} \| c_J^+\|_1 <_\sim 1. \end{equation} Finally, we choose $\varphi^+$ to be any enlargement of $\tilde\varphi$ which is strictly positive on the support of $\tilde\varphi$, and obeys the condition $\varphi^+(t) = t^{-N} \varphi^+(t^{-1})$. We then define the self-adjoint operator $T_J^+$ by $$ T_J^+ F(x) = \int \int \int c^+_J(v) \varphi^+(t) c^+_J(w) F(d_J(w) t \circ (d_J(v)^{-1} x))\ dw dt dv,$$ and $$ T^+ = 2^{-Ns} \sum_J \psi^+_J T^+_J \psi^+_J.$$ Clearly we have the pointwise bounds $T_J F(x) \leq T_J^+ F(x)$ and $T F(x) \leq T^+ F(x)$ for all $J$ and non-negative $F$. The operator $$ F(\cdot) \mapsto F(d_J(w) t \circ (d_J(v)^{-1} \cdot))$$ is bounded on every $L^p$ uniformly in all variables. From this, \eqref{aj-def}, \eqref{cjp-prop} and Minkowski's inequality we therefore have $\| T_J^+ F \|_p <_\sim \|F\|_p$ uniformly in $p$ and $J$. We now show \be{ap-bound} \| T^+ F \|_{p} <_\sim \|F\|_q \end{equation} for all $1 \leq p \leq q \leq \infty$; note that this would imply \eqref{new-targ} were it not for the $2^{-\varepsilon s}$ factor. By interpolation and duality it suffices to verify this for $q = \infty$. Since $T^+$ is positivity preserving it suffices to verify this when $F$ is identically 1, i.e. we need to show that $$ \| 2^{-Ns} \sum_J \psi^+_J T_J^+ \psi^+_J \|_{p} <_\sim 1.
$$ By \eqref{infty-count} and H\"older's inequality we have the pointwise estimate $$ 2^{-Ns} \sum_J \psi^+_J T_J^+ \psi^+_J <_\sim 2^{-Ns/p} (\sum_J (T_J^+ \psi^+_J)^p)^{1/p}.$$ Thus to show \eqref{ap-bound} it suffices to show $$ 2^{-Ns/p} (\sum_J \|T_J^+ \psi^+_J\|_p^p)^{1/p} <_\sim 1.$$ But this follows from the $L^p$ boundedness of $T_J^+$ and \eqref{size}. To finish the proof of Theorem \ref{weak-11} we must obtain the gain of $2^{-\varepsilon s}$ in \eqref{new-targ}. To obtain this gain, we will iterate $T$ $m$ times as in the proof of Theorem \ref{L2}, until the kernel is smooth enough to profitably interact with the derivatives in \eqref{div}. As long as there is enough isotropic smoothing, we hope to bound (in an appropriate sense) $T^m$ by $2^{-\varepsilon s} (T^+)^m$. As before, there will be an exceptional portion of $T^m$, but we hope to show that this part is also very small. Naively, one expects this to work with $m=n$. However, one runs into two difficulties with this choice. Firstly, if one composes the operator $T_I$ with $T_J$, where $I$ is much larger than $J$, the cutoff functions $\psi_J$ corresponding to $J$ (which localize to $(2^s J)_\Delta$) can seriously truncate the smoothing effect from $T_I$. Secondly, when ${\hbox{\bf H}}$ is a general homogeneous group, the smoothing effects of the $T_J$ will tend to be along almost parallel directions, rather than being isotropically dispersed. Although each of these obstacles is individually tractable, the combined effect of these two obstacles may restrict the smoothing effects discussed above to very short, very parallel arcs, which will not give much isotropic regularity\footnote{ These obstructions also show that in the non-Euclidean case the smoothing effect is global rather than local, and one cannot exhibit this effect by naive microlocal methods (as used in the standard proof of the averaging lemma); this is in contrast with the Fourier transform analysis of \cite{seeger:rough} in the Euclidean case.}. To avoid this problem we shall iterate $T$ considerably more than $n$ times to ensure the existence of at least $n$ untruncated arcs. In fact we shall iterate $m = 2^{2n - 3}$ times. We now turn to the details. Let $m$ be a large number to be chosen later. To show \eqref{new-targ} it suffices to show that \be{am-bound} \| T^m F \|_p <_\sim 2^{-\varepsilon s} \| F\|_{p^\prime} \end{equation} for some $\varepsilon > 0$. To see this, observe from the $TT^*$ method and the self-adjointness of $T$ that \eqref{am-bound} implies $$ \| T^{m/2} F \|_p <_\sim 2^{-\varepsilon s} \|F\|_2$$ with a slightly worse value of $\varepsilon$. On the other hand, from many applications of \eqref{ap-bound} we have $$ \| T^{m/2} F \|_p <_\sim \|F\|_q$$ for all $q \geq p$. By interpolation we thus obtain $$ \| T^{m/2} F \|_p <_\sim 2^{-\varepsilon s} \|F\|_{p^\prime}$$ for an even worse value of $\varepsilon$. By iterating this argument $2n-3$ times we thus obtain \eqref{new-targ} (for a very small value of $\varepsilon$). It remains to show \eqref{am-bound}. Since $T^m$ is bounded on $L^2$ by \eqref{ap-bound}, it suffices by interpolation to prove this for $p=1$. By expanding $T^m$, we thus reduce to showing that $$ \|2^{-Nms} \sum_{J_1, \ldots, J_m \in {\cal J}} (\prod_{i=1}^m \psi_{J_i} T_{J_i} \psi_{J_i}) F \|_1 <_\sim 2^{-\varepsilon s} \|F\|_\infty. $$ We use $2^{j_i}$ to denote the radius of $J_i$. The balls $J_i$ may be of radically different sizes, and need not be arranged in any sort of monotone order.
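(In the lowest case $n = 2$, where $m = 2^{2n-3} = 2$, the extraction below is trivial: any pair $J = (J_1, J_2)$ satisfies $$ j_1 \leq j_2 \ \Rightarrow\ J \nearrow (1,2), \qquad j_2 \leq j_1 \ \Rightarrow\ J \searrow (1,2) $$ in the notation of the definition below; the difficulty arises only for larger $n$.)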
Nevertheless, we can still extract a subsequence of $n$ balls whose sizes do increase monotonically, and have no smaller balls between elements of the sequence. More precisely, we have \begin{definition} Let $k = (k_1, \ldots, k_n)$ be a strictly increasing $n$-tuple of integers in $\{1, \ldots, m\}$. We say that an $m$-tuple $J = (J_1, \ldots, J_m)$ of balls is \emph{ascending} with respect to $k$ if $$ j_{k_q} \leq j_l \hbox{ for all } k_q \leq l \leq k_n,$$ and write this as $J \nearrow k$. Similarly, we say that $J$ is \emph{descending} with respect to $k$ if $$ j_{k_q} \leq j_l \hbox{ for all } k_1 \leq l \leq k_q,$$ and write this as $J \searrow k$. \end{definition} \begin{lemma} If $m \geq 2^{2n-3}$ and $J \in {\cal J}^m$, then there exists a sequence $k$ such that either $J \nearrow k$ or $J \searrow k$. \end{lemma} \begin{proof} We first construct an auxiliary sequence $l_1, \ldots, l_{2n-2}$ of integers and a sequence $S_1, \ldots, S_{2n-2}$ of intervals of integers by the following iterative procedure. Let $S_1$ be the interval $\{1, \ldots, m\}$. For each $p = 1, \ldots, 2n-2$ in turn, we choose $l_p \in S_p$ so that $J_{l_p}$ has minimal radius among all the balls $\{J_l: l \in S_p\}$. Removing the element $l_p$ from $S_p$ divides the remainder into two intervals $\{l \in S_p: l < l_p\}$ and $\{l \in S_p: l > l_p\}$; we choose $S_{p+1}$ to be the larger of the two intervals. We then increment $p$ and iterate the above construction. One can easily show inductively that $|S_p| \geq 2^{2n-2-p}$ for all $p$, so that all the $l_p$ are well defined. Furthermore, for each $q$ one has $j_{l_q} \leq j_l$ for all $l$ between $l_q$ and $l_{2n-2}$. One of the sets $\{p: l_p \leq l_{2n-2}\}$, $\{p: l_p \geq l_{2n-2}\}$ has a cardinality of at least $n$. If the former set is larger, we let $k$ consist of the values $l_p$ for the first $n$ elements $p$ of this set, arranged in increasing order, and observe that $J \nearrow k$. Otherwise we let $k$ consist of the values $l_p$ for the first $n$ elements $p$ of the latter set, arranged in increasing order, and observe that $J \searrow k$. \end{proof} Temporarily set $m = 2^{2n-3}$. Order the sequences $k$ lexicographically, so in particular we have $k < k'$ whenever $k_1 < k'_1$. For all $J \in {\cal J}^m$, let $k_{max}(J)$ be the largest sequence with respect to this ordering so that either $J \nearrow k$ or $J \searrow k$. From the above lemma we see that $k_{max}(J)$ is well-defined. Since the number of sequences is finite, it suffices to show that $$\|2^{-Nms} \sum_{J_1, \ldots, J_m \in {\cal J}: k_{max}(J_1, \ldots, J_m) = k} (\prod_{i=1}^m \psi_{J_i} T_{J_i} \psi_{J_i}) F \|_1 <_\sim 2^{-\varepsilon s} \|F\|_\infty$$ for each $k$. Fix $k$. The purpose of the following (somewhat technical) discussion is to enable us to reduce to the case when $k_1 = 1$ and $k_n = m$. We observe from the lexicographical ordering that the property that $k_{max}(J_1, \ldots, J_m) = k$ is independent of the choices of $J_i$ for $1 \leq i < k_1$.
We thus abuse notation and write $$ k_{max}(J_{k_1}, \ldots, J_m) = k \hbox{ instead of } k_{max}(J_1, \ldots, J_m) = k.$$ The desired estimate can then be factored as $$\|2^{-N(m-k_1+1)s} \sum_{J_{k_1}, \ldots, J_m \in {\cal J}: k_{max}(J_{k_1}, \ldots, J_m) = k} (\prod_{i=k_1}^m \psi_{J_i} T_{J_i} \psi_{J_i}) T^{k_1-1} F \|_1 <_\sim 2^{-\varepsilon s} \|F\|_\infty.$$ By \eqref{ap-bound}, $T$ is bounded on $L^\infty$, and it suffices to show that $$2^{-N(m-k_1+1)s} \|\sum_{J_{k_1}, \ldots, J_m \in {\cal J}: k_{max}(J_{k_1}, \ldots, J_m) = k} (\prod_{i=k_1}^m \psi_{J_i} T_{J_i} \psi_{J_i}) F \|_1 <_\sim 2^{-\varepsilon s} \|F\|_\infty.$$ The left-hand side is majorized by \be{tmp} 2^{-N(m-k_1+1)s} \| \sum_{J_{k_1}, \ldots, J_m \in {\cal J}: J \nearrow k \hbox{ or } J \searrow k } |(\prod_{i=k_1}^m \psi_{J_i} T_{J_i} \psi_{J_i}) F|\|_1 \end{equation} where $J = (J_1, \ldots, J_m)$, and the choices of $J_1, \ldots, J_{k_1-1}$ are irrelevant. Since the properties $J \nearrow k$, $J \searrow k$ do not depend on $J_{k_n+1}, \ldots, J_m$, we may estimate \eqref{tmp} crudely by $$ 2^{-N(k_n-k_1+1)s} \| (T^+)^{m-k_n} \sum_{J_{k_1}, \ldots, J_{k_n} \in {\cal J}: J \nearrow k \hbox{ or } J \searrow k } |(\prod_{i=k_1}^{k_n} \psi_{J_i} T_{J_i} \psi_{J_i}) F|\|_1. $$ By \eqref{ap-bound} we may discard the $(T^+)^{m-k_n}$ operator. By a re-labelling of $J$ and $k$, and reducing $m$ to $k_n-k_1+1$, it thus suffices to show that $$ 2^{-Nms} \| \sum_{J_{1}, \ldots, J_{m} \in {\cal J}: J \nearrow k \hbox{ or } J \searrow k } |(\prod_{i=1}^{m} \psi_{J_i} T_{J_i} \psi_{J_i}) F|\|_1 <_\sim 2^{-\varepsilon s} \|F\|_\infty $$ for all $m \leq 2^{2n-3}$ and all $k$ such that $k_1 = 1$, $k_n = m$. Fix $m$, $k$. By duality it suffices to show that \be{bilinear} 2^{-Nms} \sum_{J_{1}, \ldots, J_{m} \in {\cal J}: J \nearrow k \hbox{ or } J \searrow k } |\langle (\prod_{i=1}^{m} \psi_{J_i} T_{J_i} \psi_{J_i}) F_J, G_J \rangle| <_\sim 2^{-\varepsilon s} \end{equation} for all functions $F_J$, $G_J$ in the unit ball of $L^\infty$. It suffices to consider the contribution of $J \nearrow k$, since the other contribution then follows by self-adjointness. For each $J \nearrow k$, we expand the inner product in \eqref{bilinear} as \be{expand} \int \int \int \int G_J(x_0) F_J(x_m) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde\varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i))\ dx_0 dw dt dv \end{equation} where $v = (v_1, \ldots, v_m)$, $w = (w_1, \ldots, w_m)$ range over $B(0,C)^m$, $t = (t_1, \ldots, t_m)$ ranges over $[C^{-1},C]^m$, $dw = \prod_{i=1}^m dw_i$, $dv = \prod_{i=1}^m dv_i$, $x_0$ ranges over ${\hbox{\bf H}}$, and $x_1, \ldots, x_m$ are defined recursively by \be{xi-def} x_i = d_{J_i}(w_i)\, t_i \circ (d_{J_i}(v_i)^{-1} x_{i-1}) \hbox{ for } i=1, \ldots, m. \end{equation} Note that each $x_i$ is a function of $x_0$ and $J_l, v_l, t_l, w_l$ for all $l = 1, \ldots, i$. We call the variables $x_0$ and $v_l, t_l, w_l$ for $l=1, \ldots, m$ \emph{integration variables}. There are many variables of integration here, but the only ones that we shall actively use are the dilation parameters $t_{k_1}, \ldots, t_{k_n}$ and the translation parameter $v_1$. Accordingly, we define new variables $\tau = (\tau_1, \ldots, \tau_n), y = (y_1, \ldots, y_n) \in {\hbox{\bf R}}^n$ by $\tau_q = t_{k_q}$ and $y = v_1$. Each $\tau_q$ integration is smoothing in one direction. The combined smoothing effect of all the $\tau$ variables shall be beneficial provided that the Jacobian $$\det D^L_\tau(x_m)$$ is sufficiently large.
As will become clear later, the natural size for $\det D^L_\tau(x_m)$ is $2^{M_n}$, where the quantities $M_0, \ldots, M_n$ are defined by \be{M-def} M_q = \sum_{i=1}^q \alpha_i (j_{k_i}+s). \end{equation} Accordingly, we shall decompose the $J \nearrow k$ portion of \eqref{bilinear} into \be{non-cancel-l1} \begin{split} \sum_{J \in {\cal J}^m: J \nearrow k} |&\int\int\int\int G_J(x_0) F_J(x_m) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde\varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i))\\ & \eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m)) \ dx_0 dw dt dv| <_\sim 2^{-\varepsilon s} 2^{Nms} \end{split} \end{equation} and \be{cancel-l1} \begin{split} \sum_{J \in {\cal J}^m: J \nearrow k} |&\int\int\int\int G_J(x_0) F_J(x_m) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde\varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i)) \\ &(1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m))) \ dx_0 dw dt dv| <_\sim 2^{-\varepsilon s} 2^{Nms}. \end{split} \end{equation} Here $\delta > 0$ is a small number to be chosen later, and $\eta$ is a bump function which equals $1$ near the origin. We shall prove these two estimates in later sections. But first we must introduce some preliminaries to treat the Jacobian $\det D^L_\tau(x_m)$, which is the wedge product of $n$ vectors of vastly different sizes. \section{The exterior algebra and non-isotropic scaling} It shall be necessary to define some artificial structures on the exterior algebra $\Lambda$ of ${\hbox{\bf R}}^n$. Define a quasi-order $\precsim$ on $\Lambda$ by $$ \sum_P a_P e_P \precsim \sum_P b_P e_P \quad \iff \quad |a_P| <_\sim b_P \hbox{ for all } P,$$ and write $w \approx w'$ if $w \precsim w'$ and $w' \precsim w$. We define an absolute value by $$ \|\sum_P a_P e_P\| = \sum_P |a_P| e_P.$$ Of course, $\|a\| = |a|$ for scalars $a$. We let ${\hbox{\rm \bf 1}}$ denote the vector ${\hbox{\rm \bf 1}} = (1, \ldots, 1)$. We define a non-cancellative analogue $\diamond$ of the wedge product by $$ (\sum_P a_P e_P) \diamond (\sum_Q b_Q e_Q) = \sum_P \sum_Q a_P b_Q \|e_P \wedge e_Q\|.$$ Note that $\diamond$ is bilinear and associative. This operation dominates the wedge product in the following sense: if $\omega_1 \precsim a_1$ and $\omega_2 \precsim a_2$ then \be{wedge-order} \|\omega_1 \wedge \omega_2\| \precsim a_1 \diamond a_2, \quad |\omega_1 \cdot \omega_2| <_\sim a_1 \cdot a_2. \end{equation} Finally, we observe that if $1 \leq r \leq n$ and $i_1, \ldots, i_r$ is any non-decreasing sequence of integers, then \be{diamond} (2^{i_1} \circ {\hbox{\rm \bf 1}}) \diamond \ldots \diamond (2^{i_r} \circ {\hbox{\rm \bf 1}}) \approx \sum_{1 \leq p_1 < \ldots < p_r \leq n} 2^{\alpha_{p_1} i_1 + \ldots + \alpha_{p_r} i_r} e_{p_1} \wedge \ldots \wedge e_{p_r}. \end{equation} In particular, from \eqref{M-def} and the hypothesis $J \nearrow k$ we have \be{m-size} (2^{j_{k_1}+s} \circ {\hbox{\rm \bf 1}}) \diamond \ldots \diamond (2^{j_{k_n}+s} \circ {\hbox{\rm \bf 1}}) \approx 2^{M_n} e_1 \wedge \ldots \wedge e_n.
\end{equation} If $F(x)$ is a form-valued function of $x$ and $C > 0$, define $$ \| (1 + C \nabla_x)F(x) \| = \| F(x) \| + C \sum_{i=1}^n \| \partial_{x_i} F(x)\|.$$ More generally, we define $$ \| (A + B \nabla_x + C \nabla_y) F(x,y) \| = A \|F(x,y)\| + B \sum_{i=1}^n \|\partial_{x_i} F(x,y)\| + C \sum_{q=1}^n \| \partial_{y_q} F(x,y) \|.$$ From the product rule and \eqref{wedge-order} we observe that \be{product} \| (1 + C\nabla_x) (F \cdot G) \| <_\sim \| (1 + C \nabla_x) F \| \cdot \| (1 + C\nabla_x) G\| \end{equation} and \be{wedgehog} \| (1 + C\nabla_x) (F \wedge G) \| \precsim \| (1 + C \nabla_x) F \| \diamond \| (1 + C\nabla_x) G\|. \end{equation} We record the following estimates on the size and derivatives of $x_l$ and of $\det D^L_\tau(x_m)$. \begin{lemma}\label{derivs} If $t_l \sim 1$ and $x_l \in (2^s J_l)_\Delta$ for all $1 \leq l \leq m$, then \begin{align} \|(1 + \nabla_\tau) \partial^L_{y_i} x_l\| &\precsim 2^{j_{1} + s - cs} \circ {\hbox{\rm \bf 1}} \label{xl-yi}\\ \|(1 + 2^{cs} \nabla_y + \nabla_\tau) \partial^L_{\tau_q} x_m\| &\precsim 2^{j_{k_q} + s} \circ {\hbox{\rm \bf 1}} \label{xl-tq-v}\\ \|(1 + 2^{cs} \nabla_y + \nabla_\tau) \det D^L_\tau(x_m)\| &<_\sim 2^{M_n}\label{jac-y}\\ \|(1 + 2^{cs} \nabla_y + \nabla_\tau) \psi_{J_l}(x_{l'})\| &<_\sim \psi^+_{J_l}(x_{l'})\label{psi} \end{align} for all $i, q = 1, \ldots, n$, $1 \leq l \leq m$, $0 \leq l' \leq l$, where $c>0$ is a constant independent of $\varepsilon$, $\delta$. \end{lemma} \begin{proof} From \eqref{xi-def}, \eqref{dj-def}, \eqref{group-diff}, and \eqref{dil-diff} we have \be{yi-form} \partial^L_{y_i} x_l = (t_l \ldots t_0) \circ C[x_{J_1}^{-1} x_0] \partial^L_{y_i} (2^{j_1} \circ y^{-1}). \end{equation} By \eqref{c-twine} and \eqref{dil-diff}, this becomes $$ \partial^L_{y_i} x_l = (t_l \ldots t_0 2^{j_1 + s}) \circ C[2^{-j_1 - s} \circ x_{J_1}^{-1} x_0] (2^{-s} \circ \partial^L_{y_i} y^{-1}).$$ Since $x_0 \in (2^s J_1)_\Delta$, $2^{-j_1 - s} \circ x_{J_1}^{-1} x_0$ is bounded. From this, \eqref{c-bound}, and the observation that $|\partial^L_{y_i} y^{-1}| <_\sim 1$, we thus have $$ |C[2^{-j_1 - s} \circ x_{J_1}^{-1} x_0] (2^{-s} \circ \partial^L_{y_i} y^{-1})| <_\sim 2^{-cs}.$$ Inserting this into the previous estimate we thus obtain $$ \partial^L_{y_i} x_l \precsim 2^{j_1 + s - cs} \circ {\hbox{\rm \bf 1}}, $$ which is the first part of \eqref{xl-yi}. The $\nabla_\tau$ portion of \eqref{xl-yi} then follows from \eqref{yi-form} and \eqref{trivx-bound}. We now turn to \eqref{xl-tq-v}. From \eqref{xi-def}, \eqref{group-diff}, and \eqref{dil-diff}, we have \be{vi-formula} \partial^L_{\tau_q} x_m = t_{k_q}^{-1} ((t_m \ldots t_{k_q} 2^{j_{k_q} + s}) \circ X(u_q)) \end{equation} where $u_q$ is the quantity \be{uq-def} u_q = 2^{-j_{k_q} - s} \circ (d_{J_{k_q}}(v_{k_q})^{-1} x_{k_q-1}) = (2^{-j_{k_q}-s} \tau_q^{-1}) \circ (d_{J_{k_q}}(w_{k_q})^{-1} x_{k_q}). \end{equation} Since $x_{k_q-1} \in (2^s J_{k_q})_\Delta$, $u_q$ is bounded, and so the first part of \eqref{xl-tq-v} follows. To show the $2^{cs} \nabla_y$ portion of \eqref{xl-tq-v}, it suffices from \eqref{vi-formula} and the chain rule to show that $\|\partial^L_{y_i} u_q \| <_\sim 2^{-cs}$. But by \eqref{uq-def}, \eqref{group-diff}, and \eqref{dil-diff} we have $$ \partial^L_{y_i} u_q = (\tau_q^{-1} 2^{-j_{k_q}-s}) \circ \partial^L_{y_i} x_{k_q},$$ and the claim follows from \eqref{xl-yi} and the inequality $j_{k_1} \leq j_{k_q}$ arising from the hypothesis $J \nearrow k$. We now show the $\nabla_\tau$ portion of \eqref{xl-tq-v}.
We consider the $\partial_{\tau_{q'}}$ derivatives for $q' \geq q$ and $q' < q$ separately. If $q' \geq q$, then we see from \eqref{vi-formula} and \eqref{trivx-bound} that $$ \rho(\partial_{\tau_{q'}} \partial^L_{\tau_q} x_m) <_\sim \rho(\partial^L_{\tau_q} x_m),$$ so the claim follows from the first part of \eqref{xl-tq-v}. If $q' < q$, then from \eqref{vi-formula} we have $$ \partial_{\tau_{q'}} \partial^L_{\tau_q} x_m = t_{k_q}^{-1} (t_m \ldots t_{k_q} 2^{j_{k_q} + s}) \circ \partial_{\tau_{q'}} X(u_q).$$ Since $X$ is polynomial and $u_q$ is bounded, it thus suffices by \eqref{comparable} to show that $$ |\partial^L_{\tau_{q'}} u_q| <_\sim 1.$$ But from \eqref{uq-def}, \eqref{group-diff}, and \eqref{dil-diff}, we have $$ \partial^L_{\tau_{q'}} u_q = 2^{-j_{k_q} - s} \circ \partial^L_{\tau_{q'}} x_{k_q-1},$$ and the claim follows from the first part of \eqref{xl-tq-v}. We now turn to \eqref{jac-y}. It suffices to show that $$ |(1 + 2^{cs} \nabla_y + \nabla_\tau) (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m)| \precsim 2^{M_n} e_1 \wedge \ldots \wedge e_n.$$ From \eqref{wedgehog} and \eqref{wedge-order} we have \begin{align*} |(1 + 2^{cs} \nabla_y + \nabla_\tau) &(\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m)|\\ &\precsim |(1 + 2^{cs} \nabla_y + \nabla_\tau) \partial^L_{\tau_1} x_m| \diamond \ldots \diamond |(1 + 2^{cs} \nabla_y + \nabla_\tau) \partial^L_{\tau_n} x_m|. \end{align*} By \eqref{xl-tq-v} this is majorized by $$ (2^{j_{k_1}+s} \circ {\hbox{\rm \bf 1}}) \diamond \ldots \diamond (2^{j_{k_n}+s} \circ {\hbox{\rm \bf 1}}).$$ The claim then follows from \eqref{m-size}. Finally, we show \eqref{psi}. We can rewrite the desired estimate as $$ |(1 + 2^{cs} \nabla_y + \nabla_\tau) \psi(2^{-j_l-s} \circ (x_{J_l}^{-1} x_{l'}))| <_\sim \psi^+(2^{-j_l-s} \circ (x_{J_l}^{-1} x_{l'})).$$ From the support assumptions on $\psi$ and $\psi^+$ we have $|(1 + \nabla)\psi| <_\sim \psi^+$. Thus by the chain rule and \eqref{comparable}, it suffices to show that $$ |2^{cs} \partial^L_{y_i} (2^{-j_l-s} \circ (x_{J_l}^{-1} x_{l'}))| <_\sim 1$$ and $$ |\partial^L_{\tau_q} (2^{-j_l-s} \circ (x_{J_l}^{-1} x_{l'}))| <_\sim 1$$ for all $i,q = 1, \ldots, n$. We may of course assume that $1 \leq l'$ and $k_q \leq l'$ since the claims are trivial otherwise. But these estimates follow from \eqref{group-diff}, \eqref{dil-diff}, \eqref{xl-yi}, and \eqref{xl-tq-v}, noting that $j_1, j_{k_q} \leq j_{l}$ from the hypothesis $J \nearrow k$. \end{proof} \section{Proof of Theorem \ref{weak-11} continued. The degenerate portion of the integral.} We now prove \eqref{non-cancel-l1}. For this estimate we do not exploit any cancellation, and crudely majorize the left-hand side as \be{non-deg} \sum_{J \in {\cal J}^m: J \nearrow k} \int_{|\det D^L_\tau x_m| <_\sim 2^{-\delta s} 2^{M_n}} \prod_{i=1}^m \psi^+_{J_i}(x_{i-1}) c^+_{J_i}(v_i) \varphi^+(t_i) c^+_{J_i}(w_i) \psi^+_{J_i}(x_i)\ dx_0 dw dt dv. \end{equation} We discard the $\psi^+_{J_i}(x_i)$ multipliers. We may freeze the $t_i$, $v_i$, $w_i$ variables using \eqref{cjp-prop} and reduce ourselves to showing \be{nondeg-targ} \sum_{J \in {\cal J}^m: J \nearrow k} \int_{|\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m| <_\sim 2^{-\delta s} 2^{M_n}} \prod_{i=1}^m \psi^+_{J_i}(x_{i-1}) \ dx_0 <_\sim 2^{-\varepsilon s} 2^{Nms} \end{equation} uniformly over all choices of $t_i \sim 1$, $v_i \in B(0,C)$, $w_i \in B(0,C)$, where $v_i$ and $w_i$ are allowed to depend on $J_i$. Fix $t$, $w$, $v$.
To show \eqref{nondeg-targ}, we first exclude an exceptional set of $x$'s. \begin{definition} For each $x$ in ${\hbox{\bf H}}$, define the set $S(x)$ by $$ S(x) = \{ 2^{-j-s} \circ (x_J^{-1} x): J \in {\cal J}, x \in (2^s J)_\Delta \}.$$ A point $x$ is said to be \emph{good} if one has \be{equi} \#\left(S(x) \cap B\right) <_\sim s^3 2^{Ns} |B| \end{equation} for all balls $B$ of radius $2^{-\varepsilon s}$. \end{definition} From \eqref{infty-count} we see that $S(x)$ is contained in the unit annulus $A_0$ and that $\# S(x) <_\sim 2^{Ns}$ for all $x$. The property \eqref{equi} can thus be thought of as a statement about the uniform distribution of $S(x)$. Let $E$ denote the set of all points $x \in {\hbox{\bf H}}$ which are not good. Fortunately, $E$ is very small: \begin{lemma} If $\varepsilon$ is sufficiently small, we have \be{e-size} |E| <_\sim 2^{-\varepsilon s^2}. \end{equation} \end{lemma} \begin{proof} Since there are at most $O(2^{C s})$ finitely overlapping balls $B$ which need to be considered for \eqref{equi}, it suffices to show that $$ | \{ x \in {\hbox{\bf H}}: \#\left(S(x) \cap B\right) \gtrsim s^3 2^{Ns} |B| \}| <_\sim 2^{-\varepsilon s^2}$$ for each ball $B$. But this follows from Lemma \ref{bmo-mult} after some re-arranging. \end{proof} For each $i = 1,\ldots,m$, the contribution to \eqref{non-deg} of the case when $x_{i-1} \in E$ is bounded by $$\sum_{J \in {\cal J}^m} \int \left(\prod_{l=1}^m \psi^+_{J_l}(x_{l-1})\right) \chi_E(x_{i-1}) \ dx_0.$$ By \eqref{infty-count} this is bounded by $$ 2^{Nms} \int \chi_E(x_{i-1})\ dx_0.$$ Thus this contribution to \eqref{non-deg} is acceptable by \eqref{e-size} and the observation that $x_0 \mapsto x_{i-1}$ is a diffeomorphism with Jacobian \be{diffeo} \det D_{x_0}(x_{i-1}) = (t_1 \ldots t_{i-1})^N \sim 1. \end{equation} Thus it remains only to show that $$ \sum_{J \in {\cal J}^m: J \nearrow k} \int_{|\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m| <_\sim 2^{-\delta s} 2^{M_n}} \prod_{i=1}^m \psi^+_{J_i}(x_{i-1}) \chi_{E^c}(x_{i-1}) \ dx_0 <_\sim 2^{-\varepsilon s} 2^{Nms}. $$ For each $q=0,\ldots,n$, define $P_q$ to be the property that $$ 2^{-q\delta s/n} 2^{M_q} <_\sim | (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_q} x_m) \cdot (e_1 \wedge \ldots \wedge e_q)|,$$ where $M_q$ was defined in \eqref{M-def}. The desired estimate can thus be rewritten as $$ \sum_{J \in {\cal J}^m: J \nearrow k} \int_{P_n \hbox{ fails}} \prod_{i=1}^m \psi^+_{J_i}(x_{i-1}) \chi_{E^c}(x_{i-1}) \ dx_0 <_\sim 2^{-\varepsilon s} 2^{Nms}. $$ Since $P_0$ is vacuously true, it thus suffices to show \be{non-deg-targ} \sum_{J \in {\cal J}^m: J \nearrow k} \int_{P_{q-1} \hbox{ holds}, P_q \hbox{ fails}} \prod_{i=1}^m \psi^+_{J_i}(x_{i-1}) \chi_{E^c}(x_{i-1}) \ dx_0 <_\sim 2^{-\varepsilon s} 2^{Nms} \end{equation} for all $q=1,\ldots, n$ (cf. \eqref{ei-decomp}). Fix $1 \leq q \leq n$. We now make the key observation: \begin{proposition} If we fix $x_0$ and all the $J_i$ except for $J_{k_q}$, then we have \be{jkq-card} \# \{ J_{k_q} : J \in {\cal J}^m_k, P_{q-1} \hbox{ holds}, P_q \hbox{ fails} \} <_\sim 2^{-\varepsilon s} 2^{Ns} \end{equation} provided that $x_{k_q-1}$ is good. \end{proposition} \begin{proof} Suppose $J_{k_q}$ is in the set in \eqref{jkq-card}.
Since $P_q$ fails, we have $$ | (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_q} x_m) \cdot (e_1 \wedge \ldots \wedge e_q) | <_\sim 2^{-q\delta s/n} 2^{M_q}.$$ We rewrite this as \be{vec} | (2^{-j_{k_q}-s} \circ \partial^L_{\tau_q} x_m) \cdot a | <_\sim 2^{-q\delta s/n} \end{equation} where the vector $a = a_1 e_1 + \ldots + a_q e_q$ is defined by $$ a_l = 2^{\alpha_l (j_{k_q}+s)} 2^{-M_q} (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_{q-1}} x_m \wedge e_l) \cdot (e_1 \wedge \ldots \wedge e_q). $$ Since $M_q = M_{q-1} + \alpha_q (j_{k_q}+s)$, we may rewrite this as \be{al-def} a_l = \pm 2^{-(\alpha_q - \alpha_l)(j_{k_q}+s)} 2^{-M_{q-1}} (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{\tau_{q-1}} x_m) \cdot (e_1 \wedge \ldots \widehat{e_l} \ldots \wedge e_q) \end{equation} where $\widehat{e_l}$ denotes that the $e_l$ term is missing from the wedge product. Since $P_{q-1}$ holds, we thus see that \be{large} |a_q| \gtrsim 2^{-(q-1)\delta s/n}. \end{equation} Also, from \eqref{xl-tq-v} we see that $$ |a_l| <_\sim 2^{-(\alpha_q - \alpha_l)(j_{k_q}+s)} 2^{-M_{q-1}} ( (2^{j_{k_1}+s} \circ {\hbox{\rm \bf 1}}) \diamond \ldots \diamond (2^{j_{k_{q-1}}+s} \circ {\hbox{\rm \bf 1}}) ) \cdot (e_1 \wedge \ldots \widehat{e_l} \ldots \wedge e_q).$$ By \eqref{diamond}, we thus have $$ |a_l| <_\sim 2^{-(\alpha_q - \alpha_l)(j_{k_q}+s)} 2^{-M_{q-1}} (\prod_{l'=1}^{l-1} 2^{\alpha_{l'}(j_{k_{l'}}+s)}) (\prod_{l'=l}^{q-1} 2^{\alpha_{l'+1}(j_{k_{l'}}+s)}).$$ By \eqref{M-def}, this simplifies to $$ |a_l| <_\sim 2^{-(\alpha_q - \alpha_l)(j_{k_q}+s)} \prod_{l'=l}^{q-1} 2^{(\alpha_{l'+1}-\alpha_{l'})(j_{k_{l'}}+s)}.$$ Since $J \nearrow k$, we have $j_{k_{l'}} + s \leq j_{k_{q-1}} + s$. Applying this inequality, we obtain a telescoping product which simplifies to \be{al-est} |a_l| <_\sim 2^{-(\alpha_q - \alpha_l)(j_{k_q} - j_{k_{q-1}})}. \end{equation} From \eqref{al-def} we see that $a_l$ is independent of $j_{k_q}$ if $\alpha_l = \alpha_q$. If $\alpha_l < \alpha_q$, then $a_l$ can vary with $j_{k_q}$. However, from \eqref{al-est} we see that $a_l = O(2^{-Cs})$ unless $j_{k_q} = j_{k_{q-1}} + O(s)$. In both cases we thus conclude that, up to an error of $2^{-Cs}$, the quantities $a_l$ can each take at most $O(s)$ values. From this, \eqref{large}, and \eqref{vec}, we see that $2^{-j_{k_q}-s} \circ \partial^L_{\tau_q} x_m$ lies in a union of $O(s^C)$ $O(2^{-\delta s/n})$-neighbourhoods of hyperplanes. From \eqref{vi-formula} and the fact that the frozen quantities $t_i$ are comparable to 1, we thus see that $X(u_q)$ also lives in a union of $O(s^C)$ $O(2^{-\delta s/n})$-neighbourhoods of hyperplanes. From Lemma \ref{x-invert} and the boundedness of $u_q$, we thus see that $u_q$ lives in a union of $O(s^C)$ $O(2^{-\delta s/n})$-neighbourhoods of compact hypersurfaces. From \eqref{uq-def} we have $$ u_q = (2^{-s} \circ v_{k_q}^{-1}) 2^{-j_{k_q}-s} \circ (x_{J_{k_q}}^{-1} x_{k_q-1}),$$ and so $2^{-j_{k_q}-s} \circ (x_{J_{k_q}}^{-1} x_{k_q-1})$ also lives in a union of $O(s^C)$ $O(2^{-\delta s/n})$-neighbourhoods of compact hypersurfaces. The desired cardinality bound on the possible $J_{k_q}$ then follows from \eqref{equi} and a covering argument. \end{proof} From this proposition, we may estimate the left-hand side of \eqref{non-deg-targ} as $$ \sum_{(J_i)_{i \neq k_q} \in {\cal J}^{m-1}} \int 2^{-\varepsilon s} 2^{Ns} \prod_{i \neq k_q} \psi^+_{J_i}(x_{i-1})\ dx_0.$$ Choose an $i_0 \in \{1, \ldots, m\}$ not equal to $k_q$.
By applying \eqref{infty-count} to all the $J_i$ other than $J_{i_0}$, we estimate this by $$ 2^{-\varepsilon s} 2^{N(m-1)s} \sum_{J_{i_0} \in {\cal J}} \int \psi^+_{J_{i_0}}(x_{i_0-1})\ dx_0.$$ The estimate \eqref{non-deg-targ} then follows from \eqref{size} and \eqref{diffeo}. This concludes the proof of \eqref{non-cancel-l1}. \section{Proof of Theorem \ref{weak-11} continued. The non-degenerate portion of the integral.} It remains to show \eqref{cancel-l1}. By \eqref{ap-bound}, it suffices to show that \begin{align*} \sum_{J \in {\cal J}^m_k} |\int\int\int\int &G_J(x_0) F_J(x_m) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde\varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i))\\ &(1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m))) \ dx_0 dw dt dv| <_\sim 2^{-\varepsilon s} \langle (T^+)^m 1, 1\rangle. \end{align*} By expanding out $T^+$, we see that it suffices to show that \begin{align*} |\int \int\int \int &G_J(x_0) F_J(x_m) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde\varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i))\\ &(1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m))) \ dx_0 dw dt dv|\\ &<_\sim 2^{-\varepsilon s} \int\int\int\int \prod_{i=1}^m (\psi^+_{J_i}(x_{i-1}) c^+_{J_i}(v_i) \varphi^+(t_i) c^+_{J_i}(w_i) \psi^+_{J_i}(x_i))\ dx_0 dw dt dv \end{align*} for all $J \nearrow k$. Fix $J \nearrow k$. We freeze all the integration variables except for $\tau_1, \ldots, \tau_n$ and $y$. It thus suffices to show that \begin{align*} |\int\int &G_J(x_0) (\prod_{i=1}^m \psi_{J_i}(x_{i-1}) c_{J_i}(v_i) \tilde \varphi(t_i) c_{J_i}(w_i) \psi_{J_i}(x_i)) F_J(x_m)\\ & (1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m))) \, dy d\tau| \\ &<_\sim 2^{-\varepsilon s} \int \int \prod_{i=1}^m (\psi^+_{J_i}(x_{i-1}) c^+_{J_i}(v_i) \varphi^+(t_i) c^+_{J_i}(w_i) \psi^+_{J_i}(x_i))\ dy d\tau \end{align*} uniformly in the frozen variables. Fix all the frozen variables. Throwing out all the factors in the above expression which do not depend on $y$ or $\tau$, we reduce to $$ |\int\int c_{J_{1}}(y) F_J(x_m) a(y,\tau)\ dy d\tau| <_\sim 2^{-\varepsilon s} \int\int c^+_{J_{1}}(y) a^+(y,\tau)\ dy d\tau, $$ where \be{a-def} a(y,\tau) = (\prod_{l=1}^m \psi_{J_l}(x_{l-1}) \psi_{J_l}(x_l)) (1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m))) \prod_{q=1}^n \tilde \varphi(\tau_q) \end{equation} and \be{ap-def} a^+(y,\tau) = (\prod_{l=1}^m \psi^+_{J_l}(x_{l-1}) \psi^+_{J_l}(x_l)) \prod_{q=1}^n \varphi^+(\tau_q). \end{equation} We now repeat the argument used to treat \eqref{cancel}. By \eqref{div} it suffices to show that \be{cl1-targ} |\int\int \partial_{y_i} c_{J_{1}}^i(y) F_J(x_m) a(y,\tau)\ dy d\tau| <_\sim 2^{-\varepsilon s} \int\int c^+_{J_{1}}(y) a^+(y,\tau) \ dy d\tau \end{equation} for all $i=1, \ldots, n$. Fix $i$. By an integration by parts, the left-hand side of \eqref{cl1-targ} is majorized by \be{split} |\int\int c_{J_{1}}^i(y) F_J(x_m) \partial_{y_i}a(y,\tau)\ dy d\tau| + |\int\int c_{J_{1}}^i(y) (\partial_{y_i} F_J(x_m)) a(y,\tau)\ dy d\tau|. \end{equation} We now apply \begin{lemma} We have the pointwise estimate \be{a-deriv} |(1 + 2^{\varepsilon s} \nabla_y + \nabla_\tau) a(y,\tau)| <_\sim a^+(y,\tau).
\end{equation} \end{lemma} \begin{proof} From \eqref{product}, \eqref{a-def}, and \eqref{ap-def}, it suffices to verify \begin{align*} |(1 + 2^{\varepsilon s} \nabla_y + \nabla_\tau) \psi_{J_l}(x_{l-1})| &<_\sim \psi^+_{J_l}(x_{l-1})\\ |(1 + 2^{\varepsilon s} \nabla_y + \nabla_\tau) \psi_{J_l}(x_l)| &<_\sim \psi^+_{J_l}(x_l)\\ |(1 + 2^{\varepsilon s} \nabla_y + \nabla_\tau) \tilde \varphi(\tau_q)| &<_\sim \varphi^+(\tau_q)\\ |(1 + 2^{\varepsilon s} \nabla_y + \nabla_\tau) (1-\eta(2^{\delta s} 2^{-M_n} \det D^L_\tau(x_m)))| &<_\sim 1. \end{align*} The first two estimates follow from \eqref{psi}, while the third is trivial. The fourth estimate follows from the chain rule and \eqref{jac-y}, provided that $\delta \geq \varepsilon$. \end{proof} From this lemma we see that the first term of \eqref{split} is acceptable. To treat the second term, it suffices to show that \be{split-second} |\int (\partial_{y_i} F_J(x_m)) a(y,\tau)\ d\tau| <_\sim 2^{-\varepsilon s} \int a^+(y,\tau)\ d\tau \end{equation} uniformly in $y$. Fix $y$. By Lemma \ref{chain}, we can rewrite the left-hand side as $$ |\int \nabla_\tau F_J(x_m) \cdot (D^L_\tau x_m)^{-1} \partial^L_{y_i} x_m a(y,\tau)\ d\tau|. $$ Integrating by parts, we see that this is equal to $$ |\int F_J(x_m) \nabla_\tau \cdot ((D^L_\tau x_m)^{-1} \partial^L_{y_i} x_m a(y,\tau)) \ d\tau|.$$ Thus to show \eqref{split-second}, it suffices to verify the pointwise estimate $$ \|(1 + \nabla_\tau) ((D^L_\tau x_m)^{-1} \partial^L_{y_i} x_m a(y,\tau))\| <_\sim 2^{-\varepsilon s} a^+(y,\tau).$$ We may of course assume that $(y,\tau)$ is in the support of $a$, so that \be{ndeg} |\det D^L_\tau x_m| \gtrsim 2^{-\delta s} 2^{M_n}. \end{equation} By \eqref{a-deriv} and \eqref{product}, the left-hand side is majorized by $$ a^+(y,\tau) \|(1 + \nabla_\tau) ((D^L_\tau x_m)^{-1} \partial^L_{y_i} x_m)\|,$$ and so it suffices to show that $$ \|(1 + \nabla_\tau) ((D^L_\tau x_m)^{-1} \partial^L_{y_i} x_m)\| <_\sim 2^{-\varepsilon s}.$$ By Cramer's rule, it suffices to show that $$ \|(1 + \nabla_\tau) \frac{\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{y_i} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m} {\det D^L_\tau x_m}\| <_\sim 2^{-\varepsilon s}$$ for all $q$, where the numerator is the wedge product of all the $\partial^L_{\tau_{q'}} x_m$, $q' = 1, \ldots, n$, but with the $q^{th}$ term $\partial^L_{\tau_q} x_m$ replaced by $\partial^L_{y_i} x_m$. Fix $q$. From the quotient rule, \eqref{jac-y}, and \eqref{ndeg}, it suffices (if $\varepsilon$ and $\delta$ are sufficiently small) to show that $$ \|(1 + \nabla_\tau) (\partial^L_{\tau_1} x_m \wedge \ldots \wedge \partial^L_{y_i} x_m \wedge \ldots \wedge \partial^L_{\tau_n} x_m)\| <_\sim 2^{-cs} 2^{M_n}$$ for some constant $c > 0$. On the other hand, from \eqref{xl-yi} and the inequality $j_1 \leq j_{k_q}$ arising from the hypothesis $J \nearrow k$, we see that $$ \| (1 + \nabla_\tau) \partial^L_{y_i} x_m \| \precsim 2^{-cs} (2^{j_{k_q} + s} \circ {\hbox{\rm \bf 1}}).$$ Meanwhile, from \eqref{xl-tq-v} we have $$ \| (1 + \nabla_\tau) \partial^L_{\tau_{q'}} x_m \| \precsim (2^{j_{k_{q'}} + s} \circ {\hbox{\rm \bf 1}}).$$ The desired estimate thus follows from \eqref{wedgehog} and \eqref{m-size}. This concludes the proof of \eqref{cancel-l1} and thus of Theorem \ref{weak-11}. \hfill {\vrule height6pt width6pt depth0pt}\medskip
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Sub-GeV dark matter (DM) scenarios have received recent attention from various quarters, spurred by cosmological observations at the galactic scale in the context of structure formation \cite{Spergel:1999mh, Nakama:2017ohe, Bullock:2017xww}. Independently, a lot of effort has been made to update the direct detection experiments to target sub-GeV weakly interacting DM \cite{Crisler:2018gci, Abramoff:2019dfb, Essig:2011nj, Essig:2015cda, Lee:2015qva, Hochberg:2016sqx, Kurinsky:2019pgb, Hochberg:2015pha, Hochberg:2016ajh, Hochberg:2019cyy, Dror:2019onn}. These have led to a renewed interest in motivated model building for sub-GeV DM beyond the standard model \cite{Battaglieri:2017aum}. Generalizing the standard paradigm of $2\to 2$ annihilation to $N(>2) \to 2$ topologies naturally leads to light DM in the sub-GeV domain \cite{Dolgov:1980uu, Dolgov:2017ujf, Hochberg:2014dra, Hochberg:2014kqa, Dey:2016qgf, Bernal:2015xba, Lee:2015uva, Choi:2015bya, Bernal:2015bla, Hochberg:2015vrg, Kuflik:2015isi, Choi:2016tkj, Choi:2016hid, Bernal:2017mqb, Ho:2017fte, Cline:2017tka, Choi:2017zww, Kuflik:2017iqs, Hochberg:2018rjs, Hochberg:2018vdo, Bhattacharya:2019mmy, Chauhan:2017eck, Choi:2017mkk}. An effort in this direction is the so-called \textit{assisted annihilation} framework that was introduced in \cite{Dey:2016qgf}. The minimal version of this class of models has a sub-GeV stable thermal DM state along with assisters that can promptly decay to the SM. By construction, the annihilation to SM states in the early universe is dominated by an $N\to 2$ topology where a pair of DM particles annihilate with one or more assisters in the initial state. Interestingly, since the assisters are not charged under the same stabilizing symmetry as the DM, they can in principle be lighter, leading to a Boltzmann boost to the annihilation process \cite{Dey:2018yjt}. These new light states have non-trivial effects on the cosmology of the early Universe. Some of the relevant constraints on this framework from Big Bang Nucleosynthesis (BBN) and the Cosmic Microwave Background (CMB) have been explored in \cite{Dey:2018yjt}. Additionally, this class of models remains insulated from the present and proposed direct detection experiments due to an additional flux suppression. From the point of view of model building, the challenge is to have the flux-suppressed $3\to 2$ channel dominate over possible $2 \to 2$ processes. A possibility of eliminating the $2 \to 2$ channel by a combination of kinematic phase space and Boltzmann suppression was presented in \cite{Dey:2018yjt}. This required augmenting the minimal setup to include a heavy mediator in addition to the DM and assister states. The objective was to suppress the associated $2 \to 2$ process without tuning couplings. Keeping within this setup, in this paper we take the complementary view of boosting the assisted annihilation process by tuning it near a resonance peak. For universal couplings the resonant $s$-channel mediated $3 \to 2$ process can easily dominate over $2 \to 2$ processes. We find that for such scenarios it is easy to have an assisted annihilation dominated freeze-out of DM that saturates the observed relic abundance limits with perturbative couplings. In this paper, we present a simple scalar model of assisted annihilation containing a $\mathbb{Z}_2$-odd scalar DM, a photophilic scalar assister and a scalar mediator. We explore the region of parameter space where the assisted annihilation is near resonance.
We demonstrate that within this framework it is easy to match the observed relic density of DM with perturbative couplings. We briefly comment on the possibility of probing a part of the relic density allowed parameter space of this framework using indirect detection and beam dump experiments. The paper is organized as follows. In section \ref{sec:model} we present the details of the minimal scalar model for resonant assisted annihilation. We make a systematic study of the relic density of DM within this framework in section \ref{sec:relic}. We discuss the possibility of exploring this framework in indirect detection and beam dump experiments in section \ref{sec:pheno} before concluding. \section{Minimal Model for Resonant \textit{Assisted Annihilation}} \label{sec:model} The minimal real scalar model for resonant assisted annihilation contains three real scalar fields, viz.\ a stable DM ($\phi$), an assister ($A$) and a heavy mediator ($S$), all of which are singlets under the SM gauge symmetries. The stability of the DM is ensured by assigning it an odd charge under a discrete $\mathbb{Z}_2$ symmetry, while both the assister and the mediator can promptly decay to SM states, which are all even under the same $\mathbb{Z}_2$. The assister and the mediator also double up as a portal to the visible sector, keeping the DM in thermal equilibrium before freeze-out. The most general scalar potential for the dark sector, together with a photophilic portal coupling to the visible sector consistent with the aforementioned charge assignments, can be written as \begin{equation} \label{eq:pot1} \begin{split} \mathcal{L}_{\mbox{dark}} &= \frac{1}{2} m_{\phi}^2 \, \phi^2 \, + \, \frac{1}{2} m_{A}^2 \, A^2 \, + \frac{1}{2} \, m_{S}^2 \, S^2 \\ &+ \, \frac{\lambda_1}{4} \, \phi^2 A^2 \, + \, \frac{\lambda_2}{4} \, \phi^2 S^2 \, + \, \frac{\lambda_3}{2} \, \phi^2 AS \, + \, \frac{\lambda_4}{4}\, A^2 S^2 \, \\&+ \, \frac{\lambda_5} {6}\, A^3 S \, + \, \frac{\lambda_6}{6} \, S^3 A + \frac{\mu_1}{2} \phi^2 A + \frac{\mu_2}{2} \phi^2 S \\ &+ \frac{\mu_3}{6} A^3 \, + \, \frac{\mu_4}{6} S^3 \, + \, \frac{\mu_5}{2} A^2 S \, + \, \frac{\mu_6}{2} \, S^2 A \, \\ \mathcal{L}_{\mbox{portal}} &= \, c^{a}_{\gamma} AF^{\mu \nu}F_{\mu \nu} +c^{s}_{\gamma}SF^{\mu \nu}F_{\mu \nu}, \end{split} \end{equation} where $F^{\mu \nu}$ is the standard electromagnetic field strength tensor and $c^{a}_{\gamma}$, $c^{s}_{\gamma}$ have mass dimension minus one. The non-renormalizable portal coupling of the dark sector to the SM represents a special choice which enables us to extract the most interesting phenomenological implications of the framework. Generalizations are straightforward and do not affect the resonant assisted annihilation driven freeze-out of DM discussed in the next section.
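Before proceeding, it is worth recording the kinematic origin of the resonance condition invoked below (a standard observation, spelled out here for orientation): for nonrelativistic initial-state particles, the invariant mass of the $\phi \phi A$ system is
\begin{equation*}
\sqrt{s} \simeq 2 m_{\phi} + m_{A} + \mathcal{O}(\text{kinetic energies}),
\end{equation*}
so the $s$-channel exchange of $S$ in $\phi \phi A \to S \to AA/\gamma\gamma$ sits on the resonance peak precisely when $m_S \simeq 2 m_{\phi} + m_{A}$.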
\begin{figure}[th] \begin{center} \subfloat[\label{sf:2phito2A}]{ \includegraphics[scale=0.20]{2to2}~~~~ \includegraphics[scale=0.20]{2to2-med}}~~~~ \subfloat[\label{sf:2phito2g}]{ \includegraphics[scale=0.18]{2DM2yy}}~~~~ \subfloat[\label{sf:2phiAtoAA}]{ \includegraphics[scale=0.20]{3to2-all}}~~~~ \subfloat[\label{sf:2phiAto2g}]{ \includegraphics[scale=0.20]{2DM1A2yy}} \caption{Feynman diagrams of the relevant $2 \to 2$ and $3 \to 2$ processes.} \label{fig:fd} \end{center} \end{figure} As can be easily read off from the interactions given in equation \eqref{eq:pot1}, one of the annihilation channels for the DM proceeds through the novel $3 \to 2$ topology $\phi \phi A \rightarrow S \rightarrow A A / \gamma \gamma.$ Ordinarily this would be overwhelmed by a host of $2\to2$ processes like $\phi \phi \rightarrow SS / AA /AS$, etc. However, in certain regions of the parameter space, where the masses are tuned to put the assisted annihilation process on $s$-channel resonance, this channel can dominantly drive the freeze-out of DM. In the next section we will explore this possibility of resonant assisted annihilation setting the DM relic density. Subsequently, we will explore the phenomenological consequences of this framework. \section{Relic Density} \label{sec:relic} In this section we focus on the region of parameter space where the resonant assisted annihilation processes (shown in figures \ref{sf:2phiAtoAA} and \ref{sf:2phiAto2g}) set the required relic density of DM \cite{Aghanim:2018eyx}. To keep the discussion tractable we further assume that all the $\lambda_i$, $\mu_i$ and $c^{i}_{\gamma}$ of equation \eqref{eq:pot1} are universal and equal to $\lambda$, $\mu$ and $c_{\gamma}$ respectively. However, to have a handle on the relative strength of the $2 \to 2$ and $3 \to 2$ processes we keep $\lambda_3$ as an independent coupling. We call this four-parameter scenario the Benchmark Model. Admittedly, this requires a tuning of the masses of the dark sector states of the form $(2m_\phi + m_A ) \sim m_S.$ The relevant Boltzmann equation is given by \begin{subequations} \begin{align} \label{seq:boltzYY} \frac{dY_{\phi}}{dx} &= -\frac{s^{2} g_{*}}{x H} N_{\rm Bolt} \langle \sigma v^{2}\rangle_{3 \to 2} \left[Y_{\phi}^{2} Y_{\phi}^{\rm eq} - \left(Y_{\phi}^{\rm eq}\right)^{3} \right] -\frac{s g_{*}}{x H} \langle \sigma v \rangle_{2 \to 2} \left[Y_{\phi}^{2} - \left(Y_{\phi}^{\rm eq}\right)^{2} \right] \\ \label{seq:nbolt} N_{\rm Bolt} &= e^{x(1-\epsilon)}\epsilon^{3/2},~~g_{*} = 1 + \frac{1}{3} \frac{d(\text{ln}~g_s)}{d(\text{ln}~T)}, \end{align} \label{eq:boltzy} \end{subequations} where $x=m_{\phi}/T$, $\epsilon=m_{A}/m_{\phi}$, the entropy density is $s=2 \pi^2 g_s T^3/45$, the Hubble constant is $H= \sqrt{\pi^2 g_{\rho}/90}\left(T^2/M_{\rm Pl}\right)$, and $g_s$ and $g_{\rho}$ are the effective numbers of relativistic degrees of freedom corresponding to the entropy and energy density respectively. For the temperature dependence of $g_{\rho}$, $g_{s}$ and $g_{*}$ we have followed \cite{Drees:2015exa}. The thermally averaged cross section $\langle \sigma v^{2}\rangle_{3 \to 2}$ includes all $3 \to 2$ processes, while $\langle \sigma v \rangle_{2 \to 2}$ quantifies the sub-dominant $2 \to 2$ annihilation cross sections\footnote{In principle, to obtain the correct relic density the full set of coupled Boltzmann equations involving the DM, assisters and mediator should be considered. However, at resonance, equation \eqref{eq:boltzy} reproduces the results adequately.}.
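A minimal numerical sketch of how equation \eqref{eq:boltzy} can be integrated is given below. This is our illustration, not part of the original analysis: the thermally averaged cross sections are frozen to toy constant values, $g_s$ and $g_\rho$ are taken temperature independent (so $g_* = 1$), and the final line uses the standard conversion $\Omega h^2 \simeq 2.74\times 10^{8}\,(m_\phi/{\rm GeV})\,Y_\infty$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# --- illustrative inputs (natural units, GeV); placeholder values only ---
MPL  = 2.4e18      # reduced Planck mass
MPHI = 0.2         # DM mass m_phi
EPS  = 0.5         # epsilon = m_A / m_phi
GS = GRHO = 10.75  # g_s, g_rho held constant, so g_* = 1
SV32 = 1.0e5       # <sigma v^2>_{3->2} in GeV^-5 (toy value)
SV22 = 0.0         # <sigma v>_{2->2} in GeV^-2 (switched off here)

def Yeq(x):
    # non-relativistic equilibrium yield of one real scalar (g = 1)
    T = MPHI / x
    s = 2.0 * np.pi**2 * GS * T**3 / 45.0
    neq = (MPHI * T / (2.0 * np.pi))**1.5 * np.exp(-x)
    return neq / s

def rhs(x, Y):
    T = MPHI / x
    s = 2.0 * np.pi**2 * GS * T**3 / 45.0
    H = np.sqrt(np.pi**2 * GRHO / 90.0) * T**2 / MPL
    Ye = Yeq(x)
    nbolt = EPS**1.5 * np.exp(x * (1.0 - EPS))   # N_Bolt of the equation above
    d32 = s**2 / (x * H) * nbolt * SV32 * (Y[0]**2 * Ye - Ye**3)
    d22 = s / (x * H) * SV22 * (Y[0]**2 - Ye**2)
    return [-(d32 + d22)]

sol = solve_ivp(rhs, (1.0, 100.0), [Yeq(1.0)], method="Radau",
                rtol=1e-8, atol=1e-30)
print("Omega h^2 ~", 2.74e8 * MPHI * sol.y[0, -1])  # standard conversion
\end{verbatim}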
Note that for $\epsilon > 1$ there will be a Boltzmann suppression of the initial-state assister flux, while for $\epsilon < 1$ there is an enhancement of the effective cross section for similar reasons. This is a novel feature of the assisted annihilation framework that should be contrasted with the usual co-annihilation scenario, where by construction a Boltzmann suppression is obtained depending on the mass splitting of the co-annihilating states \cite{Dey:2018yjt}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{resonance} \caption{Relic density as a function of $\delta$, where $\delta$ has been defined in equation \eqref{eq:delta}. The light blue and light orange lines show $\Omega h^2$ estimated using the $2 \to 2$ and $3 \to 2$ channels for different choices of $\lambda_3$. For the solid, dashed and dot-dashed lines $\lambda_3$ has been fixed to $1,~0.5,~\rm and~ 0.1$ respectively. We set $m_{\phi} = 200$ MeV, $m_{A} = 100$ MeV, $\lambda =10^{-5}$, $\mu/m_{\phi}=10^{-5}$ and $c_{\gamma}=10^{-11}$ MeV$^{-1}$.} \label{fig:resonance} \end{center} \end{figure} The relevant Feynman diagrams of the $2 \to 2$ processes are shown in figures \ref{sf:2phito2A} and \ref{sf:2phito2g}, while the Feynman diagrams of the $3 \to 2$ assisted annihilation processes are shown in figures \ref{sf:2phiAtoAA} and \ref{sf:2phiAto2g}. Note that the cross sections of both the $3 \to 2$ and $2 \to 2$ processes depend on $\lambda,~\lambda_3,~\mu$ and $c_{\gamma}$. In spite of the strong constraint on $c_{\gamma}$ from fixed-target experiments \cite{Aloni:2019ruo}, with $\lambda$ and $\mu/m_{\phi} \sim \mathcal{O}(1)$ the $2 \to 2$ processes will in general dominate. However, in the region of parameter space where the masses are tuned so that $(2m_\phi + m_A ) \sim m_S$, the $3 \to 2$ assisted annihilation, now set at resonance, can dominantly drive freeze-out to saturate the required relic density bound. To illustrate the effect of the resonance on the relic density we define the following parameter: \begin{equation} \delta \equiv \frac{m_S^2-\left(2 m_\phi + m_A\right)^2 }{\left(2 m_\phi + m_A\right)^2 }. \label{eq:delta} \end{equation} The thermally averaged cross section near the pole, within the narrow width approximation, is given by \cite{Choi:2017mkk} \begin{equation} \langle \sigma v^2 \rangle \approx \frac{243 \pi \lambda_3 ^2 \,\delta^2 x^3 e^{-3x \delta/2 } }{64 \left(2m_{\phi} + m_A\right)^2 m^2_S \sqrt{m^2_S-4 m^2_{\phi}}} \rm Br(S \to AA/ \gamma \gamma)\Theta(\delta), \end{equation} where $\mu$ has been assumed to be small to suppress the contribution of the $2 \to 2$ processes. In figure \ref{fig:resonance} the relic density is plotted as a function of $\delta$, keeping $m_{\phi} = 200$ MeV, $\epsilon = 0.5$, $\lambda =10^{-5}$, $\mu/m_{\phi}= 10^{-5}, ~ c_{\gamma}=10^{-11}$ MeV$^{-1}$. The relic density obtained by keeping only the corresponding $2 \to 2$ processes in equation \eqref{eq:boltzy} is shown by the light blue lines. The solid, dashed and dot-dashed lines correspond to $\lambda_3=1,~0.5,~0.1$ respectively. The black solid band shows the allowed range of the DM relic density ($\Omega h^2 = 0.12 \pm 0.001$) \cite{Aghanim:2018eyx}. Clearly, a resonant $3 \to 2$ assisted annihilation can effectively drive freeze-out to obtain the required relic density. As is evident from the definition in equation \eqref{eq:delta}, $\delta$ determines how close a parameter point is to resonance and is therefore a measure of the tuning in the theory.
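The narrow-width expression above is simple enough to evaluate directly; the short sketch below (again ours, with illustrative parameter values and ${\rm Br}(S \to AA/\gamma \gamma)$ set to one) shows how the $\delta^2 e^{-3x\delta/2}$ structure controls the strength of the resonant enhancement:
\begin{verbatim}
import numpy as np

def sv32_nwa(delta, x, mphi=0.2, eps=0.5, lam3=3.0, br=1.0):
    # <sigma v^2> near the pole in the narrow-width approximation
    # (the expression displayed above); all defaults are illustrative (GeV).
    if delta <= 0.0:                 # the Theta(delta) factor
        return 0.0
    mA = eps * mphi
    mS = (2.0 * mphi + mA) * np.sqrt(1.0 + delta)  # from the definition of delta
    num = 243.0 * np.pi * lam3**2 * delta**2 * x**3 * np.exp(-1.5 * x * delta)
    den = 64.0 * (2.0 * mphi + mA)**2 * mS**2 * np.sqrt(mS**2 - 4.0 * mphi**2)
    return br * num / den            # units: GeV^-5

# scan the tuning parameter at x = 20, a typical freeze-out value:
for d in (1e-3, 1e-2, 1e-1):
    print(d, sv32_nwa(d, x=20.0))
\end{verbatim}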
As $\delta$ is set near the resonance, the $3 \to 2$ assisted annihilation contribution to the relic density starts dominating, while the contribution of the $2 \to 2$ processes becomes numerically insignificant, as evidenced in figure \ref{fig:resonance}. In figure \ref{fig:relic}, relic density allowed contours for DM masses $m_{\phi}=50$, $200$, $500$ and $1000$ MeV are displayed in the $\epsilon-\delta$ plane. In the plot we set $\lambda_3=3$, keeping it safely within the tree-level perturbativity limit of $4 \pi$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{relic} \caption{Relic density allowed contours in the $\epsilon$ vs $\delta$ plane. The light red, light violet, light brown and sky blue lines correspond to $m_{\phi} = 50$ MeV, $m_{\phi} = 200$ MeV, $m_{\phi} = 500$ MeV and $m_{\phi} = 1000$ MeV respectively. The other couplings are: $\lambda_3 =3, ~\lambda =10^{-5}$, $\mu/m_{\phi}=10^{-5}, ~ c_{\gamma}=10^{-11}$ MeV$^{-1}$.} \label{fig:relic} \end{center} \end{figure} \section{Phenomenology} \label{sec:pheno} Being immune to direct detection experiments, the $3 \to 2$ assisted annihilation framework is amenable to probing through indirect effects. First, we examine the cosmological implications of the light states, especially the late decay of the photophilic MeV-scale assisters. Beam dump experiments can put complementary constraints on the photophilic assisters. Finally, the $\gamma$-ray flux arising from the associated DM annihilation in the present-day universe may be of interest in the context of indirect detection experiments. We now elaborate on these phenomenological consequences of the resonant assisted annihilation framework in the context of the model presented in section \ref{sec:model}. \subsection{Cosmological Constraint} \label{subsec:bbn} Any coupling of the MeV-scale assister/mediator $(A/S)$ to the photon will have several cosmological implications. A detailed discussion can be found in \cite{Dey:2018yjt}. The late-time decay of the $A/S$ to photons may lead to photo-dissociation of the BBN products \cite{Protheroe:1994dt, Kawasaki:1994sc,Cyburt:2002uv, Jedamzik:2006xz,Poulin:2015opa, Hufnagel:2018bjp, Forestell:2018txr}. This essentially puts an upper bound on the lifetime of the decaying species. Here we use a conservative limit on the lifetimes of both the assister and the mediator, requiring them to be less than $1$ s, and fix $c_{\gamma}$ to $10^{-11}$ MeV$^{-1}$, which is also consistent with the beam dump experiments discussed next. Additionally, light degrees of freedom can increase the Hubble expansion rate, which may alter the BBN yields, constraining the masses to be greater than $1$ MeV \cite{Cyburt:2015mya}. Other than these, direct annihilation to photons after neutrino decoupling may change the photon-to-neutrino temperature ratio \cite{Kolb:1986nf, Serpico:2004nm, Nollett:2013pwa, Nollett:2014lwa, Depta:2019lbe}. Following \cite{Depta:2019lbe} we find that, with two light real scalar states in the dark sector, a DM mass below $\sim 8$ MeV is constrained by BBN and CMB observations. In the resonant assisted annihilation dominated regime, the associated $2 \to 2$ annihilation can inject energy during the dark ages, which could modify the anisotropies of the CMB through ionizing particles. The limit can be presented through the following parameter, \begin{equation} p_{\rm ann}= f_{\rm eff} \frac{\langle \sigma v\rangle}{m_{\phi}}, \label{eq:pann} \end{equation} which determines the amount of energy deposited through DM annihilations.
The weighted efficiency factor ($f_{\rm eff}$) has been calculated using only the photon spectra \cite{Slatyer:2015jla}, with the conservative choice $f^{\gamma}_{\rm eff} (E)=1$. The most robust bound, $p_{\rm ann}<3.2 \times 10^{-28}~\rm cm^3 \, s^{-1} \, GeV^{-1}$, has been given by the Planck result \cite{Aghanim:2018eyx}. For $\epsilon=0.5$ the light blue shaded region in figure \ref{fig:lamda-m} is excluded by the aforementioned bound. \subsection{Fixed Target Searches} \label{subsec:fixtarget} An alternative strategy to search for the photophilic $A/S$ is through fixed-target experiments. There are several experiments like P\textsc{rim}E\textsc{x} \cite{Aloni:2019ruo}, G\textsc{lue}X \cite{Aloni:2019ruo}, E137 \cite{Bjorken:1988as}, Belle-II \cite{Dolan:2017osp}, SHiP \cite{Alekhin:2015byh}, FASER2 \cite{Feng:2018noy}, SeaQuest \cite{Berlin:2018pwi}, and NA62 \cite{Dobrich:2015jyk} which may probe the effective photon coupling $c_{\gamma}$ of $A/S$. A conservative limit of $c_{\gamma} \leq 10^{-11}$ MeV$^{-1}$ is found to be in consonance with the exclusion bounds in the mass range of interest \cite{Dey:2018yjt}. \subsection{Indirect Detection} \label{subsec:ID} In our region of interest the dominant contribution to the relic density is driven by the $s$-channel resonant assisted annihilation processes shown in figures \ref{sf:2phiAtoAA} and \ref{sf:2phiAto2g}. However, this $3 \to 2$ annihilation process becomes inoperative once the assister number density plummets due to decay. Interestingly, the sub-dominant $2 \to 2$ processes given in figures \ref{sf:2phito2A} and \ref{sf:2phito2g} survive and provide a handle to explore this framework through indirect detection. In the most generic form of the Lagrangian in equation \eqref{eq:pot1} it is possible to tune the couplings to drive resonant assisted annihilation while keeping all the relevant cross sections negligible. However, in the benchmark model, due to the universal coupling choices, a sizeable assisted annihilation would lead to a correlated indirect detection cross section. It is in this context that we explore the possibility of probing the parameter space of the benchmark model through indirect detection. For the presented model, the photophilic assister may produce a potentially observable $\gamma$-ray flux. The differential photon flux from the annihilation of a self-conjugate DM is given by \cite{Slatyer:2017sev} \begin{equation} \label{eq:ID-flux1} \Phi_{\gamma}^{\prime} \left(E_{\gamma} \right) = \frac{\rho_{\odot}^2 r_{\odot}}{8 \pi m_{\phi}^2} \sum_{i} \langle \sigma v \rangle_{i} \, \frac{d N^{i}_{\gamma}}{dE_{\gamma}} \frac{J}{\Delta\Omega}, \end{equation} \begin{figure}[t] \begin{center} \textbf{$\boldsymbol{\epsilon < 1}$ \hspace{6.5 cm} $\boldsymbol{\epsilon > 1}$} \subfloat[\label{sf:Eg-phig-lt} Convoluted gamma-ray flux $\Phi_{\gamma} \left(E_{\gamma} \right)$ for two DM masses of $10$ MeV and $500$ MeV, shown by light blue and light green lines respectively. The solid lines represent $\epsilon = 0.99$ while the dashed lines are for $\epsilon = 0.5$.]{ \includegraphics[scale=0.25]{ID-epsilon-lt}}~~~~ \subfloat[\label{sf:Eg-phig-gt} Smeared gamma-ray flux $\Phi_{\gamma} \left(E_{\gamma} \right)$ for two DM masses of $10$ MeV and $200$ MeV with $\epsilon > 1$.
]{ \includegraphics[scale=0.26]{ID-epsilon-gt}} \caption{The chosen values of the remaining parameters are mentioned in figure \ref{fig:relic}.} \label{fig:ID-flux} \end{center} \end{figure} where $\langle \sigma v \rangle_i$ is the thermally averaged annihilation cross section, $d N^i_{\gamma}/dE_{\gamma}$ is the corresponding spectrum of $\gamma$-rays, $r_{\odot} \simeq 8.5$ kpc is the Sun's distance from the Galactic center, $\rho_{\odot} \simeq 0.3$ GeV/cm$^3$ is the local DM density, and $J$ is the standard $J$-factor, which integrates the squared DM density along the line of sight over the solid angle $\Delta\Omega$. We have used the NFW profile \cite{Navarro:1995iw, Navarro:1996gj} to calculate the $J$-factor for the considered indirect detection experiments \cite{Essig:2013goa}. The dominant contribution to the $\gamma$-ray flux comes from two different kinds of processes: \begin{enumerate} \item Two-body annihilation to assisters, as shown in figure \ref{sf:2phito2A}. In the center-of-mass frame of the DM, the subsequent decay of the assisters to photons produces a box-shaped photon spectrum. The spectrum of the photons can be written as \cite{Ibarra:2012dw, Boddy:2015efa} \begin{equation} \frac{d N_{\gamma}}{dE_{\gamma}} = \frac{4}{\Delta E} \, \Theta \left(E_{\gamma} - E_{-}\right) \Theta \left(E_{+} - E_{\gamma}\right), \label{eq:box} \end{equation} where \begin{equation*} E_{\pm}=\frac{m_{\phi}}{2} \left(1\pm\sqrt{1-\frac{m_A^2}{m^2_{\phi}}} \right) \end{equation*} are the edges of the box, $\Delta E$ is the difference between them, and $\Theta$ is the Heaviside step function. This channel is operative only for $\epsilon < 1$. \item Direct two-body annihilation to photons, as depicted in figure \ref{sf:2phito2g}, which is operative both for $\epsilon < 1$ and $\epsilon > 1$. In the center-of-mass frame of the DM, this gives rise to the following line spectrum of the photons: \begin{equation} \frac{d N_{\gamma}}{dE_{\gamma}} = 2 \, \delta\left(E_{\gamma}-m_{\phi} \right), \label{eq:line} \end{equation} where $\delta\left(E_{\gamma}-m_{\phi} \right)$ is the Dirac delta function. \end{enumerate} We have assumed a Gaussian detector response \cite{Bringmann:2008kj}, which spreads out the spikes and sharp kinematic edges of the flux $\Phi_{\gamma}^{\prime} \left(E_{\gamma} \right)$; the convoluted gamma-ray flux $\Phi_{\gamma} \left(E_{\gamma} \right)$ is then compared with the experimental results. There have been several gamma-ray satellites which search for such a flux from DM annihilation. Since we are exploring the phenomenology of DM masses in the sub-GeV range, we have used the results of low-energy gamma-ray detectors like HEAO-1 \cite{Gruber:1999yr}, INTEGRAL \cite{Bouchet:2008rp}, COMPTEL \cite{Weidenspointner:99, Kappadath:98}, EGRET \cite{Strong:2004de} and Fermi \cite{Ackermann:2012pya} to obtain constraints on the relevant parameters of the model presented in section \ref{sec:model}. We have used the central values of the observations of these experiments and interpolated them to obtain a continuous flux in their respective energy windows. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{ID-CMB} \caption{Allowed regions of $\lambda$ as a function of the DM mass $m_{\phi}$. The upper limits on $\lambda$ for $\epsilon=0.5$ from INTEGRAL, COMPTEL, EGRET and Fermi are shown by orange, purple, red, and blue lines respectively. The BBN and CMB excluded region is shown by light red shading.
The light blue shaded region denotes the CMB exclusion region from energy injection through DM annihilation. The other parameters are the same as in figure \ref{fig:relic}.} \label{fig:lamda-m} \end{center} \end{figure} In figures \ref{sf:Eg-phig-lt} and \ref{sf:Eg-phig-gt} we show the smeared differential gamma-ray flux ($\Phi_{\gamma} \left(E_{\gamma} \right)$) as a function of the gamma-ray energy for $\epsilon <1$ and $\epsilon >1$ respectively. For $\epsilon <1$ both $\phi \phi \to 4 \gamma$ through $A$ and $\phi \phi \to \gamma \gamma$ are operative. However, owing to the BBN and fixed-target constraints on $c_{\gamma}$ and to the choice $\mu/m_{\phi} = 10^{-5}$, the contribution of the latter to the total flux is numerically insignificant; this leads to the box-type spectrum shown in figure \ref{sf:Eg-phig-lt}. Apart from this, with the choice $\lambda = \mu/m_{\phi}=10^{-5}$, DM annihilation to assisters through the point interaction dominates over the same annihilation through $A/S$ mediation. For $\epsilon > 1$ the only channel to probe DM signals through indirect detection is $\phi \phi \to \gamma \gamma$. However, in the region of parameter space where resonant assisted annihilation is the dominant channel driving the freeze-out, the smallness of $\mu$ and $c_{\gamma}$ puts the differential gamma-ray flux beyond the reach of the current experimental sensitivity. This is shown in figure \ref{sf:Eg-phig-gt} for DM masses of $10$ and $200$ MeV. Since the thermally averaged cross section $\langle \sigma v \rangle_i$ of a particular channel is inversely proportional to $m_{\phi}^2$, for a fixed choice of the other couplings and masses the maximum value of the flux increases with decreasing DM mass. The upper limit on $\lambda$ for the benchmark model from INTEGRAL, COMPTEL, EGRET and Fermi as a function of the DM mass is shown in figure \ref{fig:lamda-m}. For completeness, four benchmark points which satisfy the relic density constraint are shown by a circle with a cross, a star, a diamond, and a cross. The details of the benchmark points are given in table \ref{tab:benchmark}.
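The box spectrum of equation \eqref{eq:box} and the effect of the assumed Gaussian detector response can be visualized with the following short sketch (ours, for illustration only; the $10\%$ relative energy resolution is an arbitrary placeholder and not the resolution of any particular instrument):
\begin{verbatim}
import numpy as np

def box_edges(mphi, mA):
    # edges E_+/- of the box spectrum, eq. (eq:box); masses in GeV
    r = np.sqrt(1.0 - (mA / mphi)**2)
    return 0.5 * mphi * (1.0 - r), 0.5 * mphi * (1.0 + r)

def dNdE_box(E, mphi, mA):
    Em, Ep = box_edges(mphi, mA)
    return np.where((E > Em) & (E < Ep), 4.0 / (Ep - Em), 0.0)

def smear(E, dNdE, sigma_rel=0.1):
    # Gaussian detector response of relative width sigma_rel (placeholder)
    dE = E[1] - E[0]
    out = np.zeros_like(E)
    for i, E0 in enumerate(E):
        sig = sigma_rel * E0
        kern = np.exp(-0.5 * ((E - E0) / sig)**2) / (np.sqrt(2*np.pi) * sig)
        out[i] = np.sum(kern * dNdE) * dE
    return out

mphi, mA = 0.2, 0.1                       # GeV, BP2-like masses (epsilon = 0.5)
E = np.linspace(0.02, 0.35, 1400)
spec = smear(E, dNdE_box(E, mphi, mA))
print("peak of smeared box spectrum:", spec.max())
\end{verbatim}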
\begin{table}[t]\caption{Benchmark points} \label{tab:benchmark} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline\hline BP & $m_{\phi}$ & $\lambda$ & $\lambda_3$ & $c_{\gamma}$ & $\mu/ m_{\phi}$ & $\epsilon$ & $\delta$ & Flux $ \Phi_{\gamma} \left(E_{+} \right)$ \\ & (MeV) & & & MeV$^{-1}$ & & & & (MeV s sr)$^{-1}$ cm$^{-2}$ \\ \hline \multirow{2}*{BP1 \textbf{$\times$}} & \multirow{2}*{50 } & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$3$} & \multirow{2}*{$10^{-11}$} & \multirow{2}*{$10^{-5}$} &\multirow{2}*{ $0.5$ } & $0.445 $ & \multirow{2}*{$1.3 \times 10^{-5}$ }\\ \cline{8-8} & & & & & & & $4.47 \times 10^{-4}$ & \\ \hline \multirow{2}*{BP2 $\blacklozenge$} & \multirow{2}*{200 } & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$3$} & \multirow{2}*{$10^{-11}$} & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$0.5$} & $0.282$ & \multirow{2}*{$1.3 \times 10^{-8}$} \\ \cline{8-8} & & & & & & & $2.79 \times 10^{-3}$ & \\ \hline \multirow{2}*{BP3 $\bigstar$} & \multirow{2}*{500 } & \multirow{2}*{$10^{-5}$} &\multirow{2}*{$3$} & \multirow{2}*{$10^{-11}$} & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$0.5$} & $0.174$ &\multirow{2}*{ $1.3 \times 10^{-10}$} \\ \cline{8-8} & & & & & & & $9.60 \times 10^{-3}$ & \\ \hline \multirow{2}*{BP4 $\bigotimes$} & \multirow{2}*{1000 } & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$3$} & \multirow{2}*{$10^{-11}$} & \multirow{2}*{$10^{-5}$} & \multirow{2}*{$0.5$} & $0.088$ &\multirow{2}*{ $4.2 \times 10^{-12}$} \\ \cline{8-8} & & & & & & & $0.030$ & \\ \hline \hline \end{tabular} \end{table} \section{Conclusions} \label{sec:summary} In this paper we have presented an alternative possibility to drive thermal freeze-out, through a multi-body $3 \to 2$ resonant assisted annihilation, where, along with a pair of DM particles, there is an SM-like assister in the initial state. The key challenge is to make these flux-suppressed multi-body $3 \to 2$ assisted annihilation processes overcome the contribution of the related $2 \to 2$ channels. We have presented a simple model having three real scalars, a stable DM ($\phi$), an assister ($A$) and a heavy mediator ($S$), where the latter two also double up as a portal to the visible sector. We find that in the region of parameter space where the masses are tuned to $(2m_\phi + m_A ) \sim m_S$, an $s$-channel $3 \to 2$ assisted annihilation process can hit the resonance and dominantly drive freeze-out to produce the observed relic density of DM. We show that in this tuned region the relic abundance is in the right ballpark for a DM mass between a few MeV and a few GeV with perturbative couplings. The resonant assisted annihilation channels are difficult to probe in direct detection experiments owing to their novel topology. In this article we have shown that, even in the distinctive resonance region, the correlated $2 \to 2$ annihilation channels can produce appreciable indirect detection signatures. Annihilation of the DM to assisters and the subsequent decay of the photophilic assister can be constrained by experiments like INTEGRAL, EGRET, COMPTEL and Fermi, and through the anisotropies of the CMB, in certain regions of the parameter space. \paragraph*{Acknowledgments\,:} We would like to thank Ujjal Kumar Dey for comments on the manuscript. TNM would like to thank MHRD, Government of India for a research fellowship. TSR is partially supported by the Department of Science and Technology, Government of India, under the Grant Agreement No. IFA13-PH-74 (INSPIRE Faculty Award). \bibliographystyle{JHEP}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} The deep connection between entanglement and geometry has the potential to provide profound insights into the inner workings of a nonperturbative theory of quantum gravity. This connection has been made especially manifest in the AdS/CFT duality, which relates certain conformal field theories (CFT) without gravitational dynamics to string theory on asymptotically (locally) anti-de Sitter (AdS) backgrounds~\cite{Mal97,Wit98a}. In this correspondence, the CFT lives on a representative of the conformal class of boundary metrics of the AdS space; we colloquially say that the CFT ``lives on the boundary of AdS''. In the limit where the string theory is well approximated by classical gravity, the dual CFT is strongly coupled (large~$\lambda$) with a large number of colors (large~$N$). Numerous observables in the CFT are dual in this limit to geometric objects in the (now classical) AdS space. In this context, an issue of considerable interest is that of bulk reconstruction. That is, given some CFT data, how much of the bulk data can be reconstructed, and how is this reconstruction performed? Understanding how this reconstruction works in the limit where the AdS bulk is classical may offer insights into how to reconstruct the bulk perturbatively in~$1/N$, and even potentially in a nonperturbative regime (\textit{i.e.}~finite~$N$). Because many CFT observables are dual to geometric bulk constructs in the large~$N$ limit, a fundamental bulk object to reconstruct is the geometry itself. A promising approach has focused on reconstructing the bulk using extremal codimension-two surfaces anchored to the boundary: according to the Ryu-Takayanagi (RT) and Hubeny-Rangamani-Takayanagi (HRT) conjectures~\cite{RyuTak06, HubRan07}, such extremal surfaces are dual to the entanglement entropy of regions of the CFT. In fact, arguments made by~\cite{CzeKar12, Wal12} suggest that the density matrix of a subregion of the CFT should be sufficient to reconstruct a portion of the bulk geometry (the domain of dependence of a set of relevant extremal surfaces~\cite{CzeKar12}, or the region bounded by a null hypersurface fired from an extremal surface~\cite{Wal12}). Indeed,~\cite{Czech:2014ppa,Czech:2015qta} explicitly offer such a construction for the spatial slices of AdS$_3$ by using the hole-ographic approach~\cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa} of reconstructing bulk surfaces from boundary-anchored extremal surfaces (see also~\cite{Czech:2014wka,HeaMye14,Myers:2014jia} for related constructions). The appeal of this approach stems from its conceptual simplicity: it relates (\textit{a priori}) any bulk surface to CFT observables. Specifically, the area of an arbitrary bulk surface~$\gamma$ is dual to the so-called differential entropy of a family of boundary intervals. The full range of validity of hole-ography remains unclear, though substantial headway in this direction was made in~\cite{HeaMye14}. In this paper, we continue this exploration: in any (2+1)-dimensional spacetime (or in any higher dimensional spacetime with a sufficient degree of symmetry), we will state and prove general theorems that constrain how well surfaces in the bulk spacetime can be reconstructed from extremal surfaces anchored to the AdS boundary. We interpret these constraints in terms of the so-called holographic screens introduced in~\cite{CEB2}. 
We emphasize that while our strongest theorems only apply to systems that are ``effectively (2+1)-dimensional'', they are otherwise covariant. In particular, while our results are constrained in more than two spatial dimensions to these effectively lower-dimensional setups, in (2+1)-dimensional bulk spacetimes we impose no restrictions except a generic condition and a condition on the Ricci tensor (the null curvature condition), which amounts to positivity of the stress tensor for a bulk obeying the Einstein equation. To give these statements some context, recall that the Hubeny-Rangamani-Takayanagi (HRT) conjecture~\cite{HubRan07} states that in the large-$N$ limit, the entanglement entropy of a region $\mathcal{R}$ in the CFT can be constructed as follows. Consider all bulk codimension-two extremal surfaces~$X$ homologous to the region~$\mathcal{R}$ on the AdS boundary\footnote{Note that the homology constraint (see~\textit{e.g.}~\cite{Headrick:2007km}) implies that~$X$ must be anchored to the AdS boundary on~$\partial\mathcal{R}$.}. Then the entanglement entropy of~$\mathcal{R}$ is \be \label{eq:HRT} S(\mathcal{R}) = \min_{X\sim \mathcal{R}} \frac{\mathrm{Area}(X)}{4G_N \hbar}, \ee where~$G_N$ is the bulk Newton's constant and~$\sim$ means ``homologous to''. Both the left- and right-hand sides of the above equation are na\"ively divergent and are understood to be regulated appropriately. A generalization of this prescription exists for perturbatively quantum bulk spacetimes~\cite{FauLew13, EngWal14}. \begin{figure}[t] \centering \includegraphics[page=1]{Figures-pics} \caption{An arbitrary closed curve~$\gamma$ on a static time slice of global~AdS$_3$. The set of all geodesics tangent to~$\gamma$ defines a family of regions on the boundary parametrized by a (possibly multi-valued) function~$\alpha(\theta)$. The differential entropy of these regions gives the length of~$\gamma$.} \label{fig:holeography} \end{figure} The key insight of hole-ography is that the HRT formula~\eqref{eq:HRT} can in certain cases be utilized to compute the area of arbitrary bulk surfaces. In the pure AdS$_3$ context, consider an arbitrary curve~$\gamma$ lying on a static time slice, as shown in Figure~\ref{fig:holeography}. At each point~$p$ on~$\gamma$, there is a unique geodesic tangent to~$\gamma$ at~$p$ anchored at the ends of some boundary interval~$I_\theta = (\theta - \alpha(\theta),\theta + \alpha(\theta))$; here information about~$\gamma$ is contained in the region function~$\alpha(\theta)$. By the RT (and HRT) conjectures, the length of this geodesic computes the entanglement entropy~$S(\alpha)$ of the interval~$I_\theta$. The result of~\cite{Balasubramanian:2013lsa} is that the length of~$\gamma$ can be computed from the boundary entanglement entropies as \be \label{eq:holeography} \frac{\mathrm{length}(\gamma)}{4G_N \hbar} = \frac{1}{2} \int_0^{2\pi} d\theta \, \left. \frac{dS(\alpha)}{d\alpha} \right|_{\alpha = \alpha(\theta)}. \ee This construction has been generalized to non-static contexts and higher dimensions (admitting a sufficient degree of symmetry) in~\cite{HeaMye14,Czech:2014wka}. The quantity on the right-hand side was termed ``differential entropy'' in~\cite{Myers:2014jia}, related but not equivalent to the residual entropy discussed in~\cite{Balasubramanian:2013rqa, Balasubramanian:2013lsa}.
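As a quick illustration of how~\eqref{eq:holeography} works (a standard consistency check, which we sketch here for orientation with the AdS length set to one), take~$\gamma$ to be the circle~$\rho = \rho_0$ on a static slice~$ds^2 = d\rho^2 + \sinh^2\!\rho\, d\theta^2$ of global AdS$_3$. A boundary-anchored geodesic with opening half-angle~$\alpha$ reaches a minimal radius~$\rho$ satisfying~$\cos\alpha = \tanh\rho$, so the geodesics tangent to~$\gamma$ have constant~$\alpha(\theta) = \alpha_0$ with~$\cot\alpha_0 = \sinh\rho_0$; their regulated length at a radial cutoff~$\rho_\infty$ is~$2\ln(2\sinh\rho_\infty \sin\alpha)$, whence~$dS/d\alpha = \cot\alpha/(2G_N\hbar)$. The right-hand side of~\eqref{eq:holeography} then evaluates to \[ \frac{1}{2} \int_0^{2\pi} d\theta \, \frac{\cot\alpha_0}{2 G_N \hbar} = \frac{2\pi \sinh\rho_0}{4 G_N \hbar} = \frac{\mathrm{length}(\gamma)}{4G_N \hbar}, \] recovering the circumference~$2\pi\sinh\rho_0$ of~$\gamma$.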
In particular,~\cite{Hub14} showed that the reconstruction of bulk curves from the residual entropy is subject to strong restrictions; the differential entropy is not subject to the same constraints~\cite{HeaMye14}. In order to use the hole-ographic approach for bulk reconstruction,~\cite{Czech:2014ppa} suggested that points in the bulk spacetime can be identified by effectively shrinking~$\gamma$ to arbitrarily small size around a point~$p$, so that the geodesics tangent to~$\gamma$ all intersect at~$p$; see Figure~\ref{subfig:holepoint}. The resulting region function~$\alpha_p(\theta)$ is an extremum of a boundary action constructed only from~$S(\alpha)$, and thus provides a definition of bulk points from boundary data. Similarly, to compute the geodesic distance between two points~$p$ and~$q$,~$\gamma$ is shrunk to a thin convex\footnote{In this context, a closed curve~$\gamma$ is convex if any geodesic connecting two points on~$\gamma$ lies entirely inside~$\gamma$.} curve that encircles~$p$ and~$q$, as shown in Figure~\ref{subfig:holedist}. The region function for such a curve can be constructed from those that define the points~$p$ and~$q$,~$\alpha_p(\theta)$ and~$\alpha_q(\theta)$, and is therefore also constructed purely from boundary data. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[page=2]{Figures-pics} \label{subfig:holepoint} } \hspace{1cm} \subfigure[]{ \includegraphics[page=3]{Figures-pics} \label{subfig:holedist} } \caption{\subref{subfig:holepoint}: reconstruction of bulk points via hole-ography. The curve~$\gamma$ is shrunk to be arbitrarily small and centered at~$p$, so that~$p$ is identified by the common intersection of all the geodesics generated by~$\alpha(\theta)$. \subref{subfig:holedist}: reconstruction of geodesic distances via hole-ography. The curve~$\gamma$ is shrunk to be an arbitrarily thin convex curve (thick red line) encircling two points~$p$ and~$q$. The geodesic distance between~$p$ and~$q$ is then given by the differential entropy of the resulting boundary intervals.} \label{fig:reconstruction} \end{figure} This approach is clean and elegant and has close ties to integral geometry~\cite{Czech:2015qta} and to tensor networks and MERA~\cite{Swingle:2009bg,Swingle:2012wq}. It is therefore quite natural to ask how much it can be generalized, and how much of the bulk it can reconstruct. One obvious impediment to this reconstruction is the presence of extremal surface barriers (or relatedly, bulk regions that cannot be reached by any HRT surfaces -- ``entanglement shadows''~\cite{Freivogel:2014lja}). These are surfaces that split the bulk spacetime in two such that no codimension-two extremal surface can be deformed to cross them~\cite{EngWal13}. Then anything behind the barrier cannot be probed via boundary entanglement entropy. Interestingly, in~\cite{Hubeny:2012ry} it was found that under appropriate restrictions, extremal surfaces anchored to one asymptotic boundary cannot be deformed to enter the event horizons of static black holes. This barrier phenomenon was characterized for arbitrary spacetimes in~\cite{EngWal13}; in particular, such barriers do not include the event horizons of dynamical black holes. Thus generically, an event horizon is a barrier only in a stationary setting. This is not so surprising: in a dynamical context, an event horizon is a global object, but from a local perspective, its only special property is the fact that its area is non-decreasing.
Since extremal surfaces are not sensitive to the global structure of the spacetime, there is no reason to expect the event horizon to generically play a special role in constraining their behavior. A much more promising alternative is that of \textit{local} analogues of the event horizon: it is common to consider dynamical horizons~\cite{AshKri02} or trapping horizons~\cite{Hay93}, but we will instead consider more general objects called holographic screens~\cite{CEB2}. These will be defined precisely in Section~\ref{sec:theorems} below, but they should roughly be thought of as objects that can be foliated by marginally trapped (or anti-trapped) surfaces. Holographic screens can be constructed from an arbitrary null foliation of a spacetime\footnote{This means a given spacetime may generally admit infinitely many holographic screens: one per null foliation.}; we will illustrate such a construction in Section~\ref{sec:theorems} (see Figure~\ref{fig:screenconstruction}). Our motivation for focusing on these screens is fourfold. First, there is a sense in which they are analogues of event horizons that are local in time and defined independently of an asymptotic boundary. Second, it was shown in~\cite{BouEng15a, BouEng15b} that under certain (fairly generic) assumptions, they obey an area law much like that obeyed by event horizons. Third, they have a holographic interpretation by the Bousso bound~\cite{CEB1}: their area places an upper bound on the total entropy lying on one of the null surfaces orthogonal to them. The fourth and last point is a technical one: holographic screens can be constructed from a null foliation of spacetime, and null congruences are very useful in constraining the behavior of codimension-two extremal surfaces. Thus it should be relatively straightforward to derive constraints on such surfaces in the presence of holographic screens. Interestingly, our results show that while there are indeed such constraints, they are subtle. Holographic screens need not be barriers: codimension-two extremal surfaces may enter them. However, we prove that when they do, the extremal surfaces must move through a certain subregion of the interior of a holographic screen monotonically\footnote{In the special case where the extremal surfaces are anchored on a connected boundary region, extremal surfaces must move monotonically through the entire interior of the holographic screen.}. That is, they may never become tangent to one of the leaves of the null foliation that was used to construct the screen. This puts a limit on how well hole-ographic approaches can reconstruct surfaces and geometry in the interior of a holographic screen, since any (sufficiently smooth) codimension-two spacelike surface~$\gamma$ lying inside the screen must be tangent to at least two of the null foliation surfaces. This implies that there are points -- and more generally open subsets -- on~$\gamma$ that cannot be tangent to any boundary-anchored codimension-two extremal surface. Thus we prove a no-go theorem for hole-ography: it cannot be used to reconstruct arbitrary surfaces contained in the interiors of holographic screens. At best, it can reconstruct only portions of them, yielding some ``coarse-grained'' form of reconstruction. The outline of this paper is as follows. We develop and state our main theorems in Section~\ref{sec:theorems}. In the interest of readability, we will defer the lengthier of our proofs to Appendix~\ref{app:proofs}.
In Section~\ref{sec:examples} we present some examples illustrating the ideas used in our construction, and highlighting previous instances in the literature where hints of our results first appeared. Finally, in Section~\ref{sec:discussion} we discuss the relevance of our results to bulk reconstruction, as well as some possible generalizations, and conclude. \section{Constraints on the Behavior of Extremal Surfaces} \label{sec:theorems} In this section, we will state the theorems discussed in Section~\ref{sec:intro}. For pedagogical reasons, some results (specifically Lemma~\ref{lem:NARWHAL} and Theorem~\ref{thm:main}) will be presented for~(2+1) dimensions first. Section~\ref{subsec:higherD} provides a generalization to higher dimensions. For this reason, we will continue to discuss ``codimension-two surfaces'' rather than ``curves'', so the generalization to higher dimensions is natural. Furthermore, while we will narrate the development of the theorems for purposes of pedagogy and clarity, we will leave a discussion and interpretation of their consequences to Sections~\ref{sec:examples} and~\ref{sec:discussion}. Terms in quotation marks are intended to provide intuition, and will be made precise in due course. \textbf{Preliminaries} We will always consider a spacetime~$M$ that obeys the null curvature condition: $R_{ab}k^{a}k^{b} \geq 0$ everywhere for any null vector $k^a$. Unless otherwise specified, we take all null vectors to be future-directed. The term \textit{extremal surface} will always be used to refer to spacelike, $C^{2}$, codimension-two extremal surfaces. A null hypersurface and the null geodesic congruence that generates it will be given the same name (\textit{e.g.}~$N$). The expansion of a congruence~$N$ will be denoted~$\theta(N)$, while the expansion of a spacetime-filling family of congruences~$\{N_s\}$ will be denoted~$\theta(\{N_s\})$. All unspecified conventions and definitions are as in~\cite{Wald}. \subsection{General Behavior of Null Hypersurfaces and Extremal Surfaces} First, we introduce a null foliation~$\{N_s\}$ of~$M$ into null hypersurfaces~$N_s$ which we shall call \textit{leaves}\footnote{Recall that the leaves~$\{N_s\}$ form a foliation of~$M$ if for every~$p \in M$,~$p$ lies on precisely one leaf~$N_s$. Also, note that this foliation is arbitrary; any spacetime admits infinitely many such foliations.}. The leaves are permitted to have cusps, but only at intersections of their generators; a generator leaves a leaf if and only if it encounters an intersection with another generator of the same congruence. Next, recall that any extremal surface~$X$ has two null normals, each of which generates a null congruence (as shown in Figure~\ref{fig:ExtremalCongruences}), and the extremality condition is simply the requirement that the expansions of the null geodesic congruences tangent to these normals vanish on~$X$. If~$X$ is tangent to a null hypersurface~$N$, the extremality of~$X$ constrains the expansion of~$N$: \begin{figure}[t] \centering \includegraphics[page=4]{Figures-pics} \caption{The two null congruences of an extremal surface~$X$ of codimension two anchored to a timelike boundary~$\partial M$.} \label{fig:ExtremalCongruences} \end{figure} \begin{lem} \label{lem:aron} Let $N$ be a null hypersurface in~$M$ and let $X$ be a codimension-two spacelike extremal surface which is tangent to $N$ at a point $p$; let~$\mathcal{O}_p$ be an open neighborhood of~$p$.
Then: \begin{itemize} \item If $X \cap \mathcal{O}_p$ is nowhere to the past of $N$, then $\left. \theta(N)\right|_{p}\leq 0$; \item If $X \cap \mathcal{O}_p$ is nowhere to the future of $N$, then $\left. \theta(N)\right|_{p}\geq 0$. \end{itemize} \end{lem} \begin{proof} As explained in~\cite{BouEng15b}, this follows directly from Theorem 1 of~\cite{Wal10QST} or Theorem 4 of~\cite{Wal12}. \end{proof} As a useful illustration of this lemma, consider extremal surfaces and light cones in flat space, as shown in Figure~\ref{fig:lemma}. \begin{figure}[t] \centering \includegraphics[page=5]{Figures-pics} \caption{An illustration for Lemma~\ref{lem:aron}. In Minkowski space, an extremal surface~$X$ is just a plane (drawn here as a straight line). If~$X$ is tangent to an expanding light cone, it lies nowhere to the cone's future, and the cone has positive expansion. If~$X$ is tangent to a shrinking light cone, it lies nowhere to the cone's past, and thus the shrinking light cone has negative expansion.} \label{fig:lemma} \end{figure} The converse of Lemma~\ref{lem:aron} is in general not true\footnote{We thank Aron Wall for pointing this out to us.}. However, in the restricted case of a~(2+1)-dimensional spacetime, we can indeed prove its converse -- see Section~\ref{subsec:higherD} for a generalization to higher dimensions: \begin{lem} \label{lem:NARWHAL} Let $N$ be a null hypersurface in a~(2+1)-dimensional spacetime $M$ and let $X$ be a codimension-two spacelike extremal surface which is tangent to $N$ at a point $p$. Then there exists a small neighborhood $\mathcal{O}_p$ of $p$ such that \begin{itemize} \item If $\left. \theta(N)\right|_{p}> 0$, then $X \cap \mathcal{O}_p$ is nowhere to the future of $N$; \item If $\left. \theta(N)\right|_{p}< 0$, then $X \cap \mathcal{O}_p$ is nowhere to the past of $N$. \end{itemize} \end{lem} \begin{proof} Consider the first case, where the expansion of $N$ is positive. At $p$, the null generator of $N$ agrees with a null normal of $X$; call this vector $k^a$. Also let~$v^a$ be the unit vector tangent to~$X$ at~$p$ (which will also be tangent to~$N$, since~$X$ is), and let~$\ell^a$ be the other null normal to~$X$ at~$p$ normalized so~$k \cdot \ell = -1$. Then the metric at~$p$ can be decomposed as \be g_{ab}|_p = -2k_{(a}\ell_{b)} + v_a v_b. \ee The expansion of~$N$ at~$p$ can then be written as \be \label{eq:expansioncondition} 0 < \theta(N)|_p = \left. \nabla_a k^a \right|_p = \left. \ ^{N}\!K_{ab}v^{a}v^{b} \right|_p, \ee where $^{N}\!K_{ab}$ is the extrinsic curvature of $N$\footnote{Recall that the extrinsic curvature of a null codimension-one hypersurface with normal~$k^a$ is given (up to scaling) by \be K_{ab} = \frac{1}{2} \pounds_k g_{ab}. \ee For a codimension-two surface with null normals~$k^a$ and~$\ell^a$, the extrinsic curvature gets an extra index: \be K^{a}_{\phantom{a}bc} = \frac{1}{2} \left(\ell^a \pounds_{k} g_{bc} + k^a \pounds_{\ell} g_{bc}\right). \ee}. Next, consider a spacelike surface~$\Sigma$ containing~$X$. Recall that~$^{N}\!K_{ab}v^{a}v^{b}|_p$ is a measure of how much~$N \cap \Sigma$ bends away from its tangent plane (\textit{i.e.}~the plane spanned by~$k^a$ and~$v^a$) with motion away from $p$ in the $v^{a}$ direction. By extremality, the trace of the extrinsic curvature of $X$ vanishes:~$^{X}\! K^{c}_{\phantom{c}ab}v^{a}v^{b}|_{p}=0$, so $X$ must curve away from its tangent plane less than $N \cap \Sigma$ on a small open neighborhood of $p$. 
But this immediately implies that $X\cap \mathcal{O}_p$ cannot lie in the future of $N$. The proof proceeds identically for the second case. \end{proof} Lemmata~\ref{lem:aron} and~\ref{lem:NARWHAL} give conditions on how extremal surfaces are allowed to be tangent to null hypersurfaces. Crucially, these conditions do not impose any restrictions on the global structure of the null hypersurface -- it may be a hypersurface of non-constant expansion on a global scale, but as long as it has definite expansion on an open set that contains $p$, both lemmata are applicable. This means that in any region of the spacetime with constant sign of~$\theta(\{N_s\})$ -- a scalar function on the spacetime -- an extremal surface can ``turn around'' at most once with respect to the foliation~$\{N_s\}$ (this notion will be made precise below). In order to understand the general behavior of extremal surfaces, it is therefore useful to divide the spacetime into those regions where~$\theta(\{N_s\})$ is positive, and those where~$\theta(\{N_s\})$ is negative. \subsection{Holographic Screens} The division between regions of positive and negative~$\theta(\{N_s\})$ is provided quite naturally by so-called \textit{preferred} holographic screens, first defined in~\cite{CEB2}. The idea is the following: given a spacetime foliation~$\{N_s\}$, move along each leaf~$N_s$ until its expansion changes sign. By the focusing theorems (see \textit{e.g.}~\cite{Wald}), this sign change can happen at most once (since the expansion of~$N_s$ is non-increasing). Thus, assuming a generic condition to be stated below, to each leaf~$N_s$ this procedure associates at most one codimension-two surface~$\sigma_s$ called a \textit{leaflet}\footnote{Note that this terminology goes against convention: typically the~$\sigma_s$ are referred to as ``leaves''. Here we reserve the term ``leaves'' for the null hypersurfaces of the spacetime foliation.}. The union of all such leaflets is a preferred holographic screen and provides the division we were looking for; see Figure~\ref{fig:screenconstruction} for an example of this construction. The term holographic screen is derived from the Bousso bound, which postulates that the leaflet is holographic: its area provides a bound on the entropy of $N_{s}$ \cite{CEB1, CEB2}. \begin{figure}[t] \centering \includegraphics[page=6]{Figures-pics} \caption{Constructing a preferred holographic screen from a null foliation of a spacetime. The dashed diagonal lines are the leaves of the foliation; the dot on each leaf marks the leaflet~$\sigma_s$ where the expansion of the leaf changes sign. The union of all the leaflets is a preferred holographic screen.} \label{fig:screenconstruction} \end{figure} Note that each leaflet~$\sigma_s$ has two null normal directions, each tangent to an associated null congruence. By construction, one of these congruences has zero expansion. We can use the sign of the expansion of the other congruence to label the ``type'' of holographic screen: in analogy with event and dynamical horizons, a screen will be called ``future'' (``past'') if it is foliated by marginally (anti-)trapped surfaces~\cite{BouEng15a, BouEng15b}. This notion is made precise by the following definition: \begin{defn} \textit{Preferred future holographic screen}. 
A preferred future holographic screen $H$ associated to a null spacetime foliation $\{N_{s}\}$ is a smooth hypersurface such that for each leaf~$N_s$, the intersection $H\cap N_{s}$ is either empty or a codimension-two achronal surface $\sigma_{s}$ such that the two orthogonal null directions~$k^a_s$ and~$\ell^a_s$ to~$\sigma_{s}$ obey: \bea \theta_{k_s} &= 0, \\ \theta_{\ell_s} &< 0, \eea where $\theta_{k_s,\ell_s}$ are the expansions of the null geodesic congruences fired off of~$\sigma_s$ in the~$k^a_s$ and~$\ell^a_s$ directions. The intersections $\sigma_{s}$ are called \textit{leaflets} of $H$, and the null normals~$k^a_s$ and~$\ell^a_s$ to all the leaflets define null vector fields~$k^a$ and~$\ell^a$ everywhere on~$H$. \end{defn} Past holographic screens are defined analogously, except that~$\theta_{\ell} > 0$, \textit{i.e.}~the leaflets are marginally \textit{anti}-trapped. All discussions and proofs for past holographic screens proceed identically to future holographic screens via time reversal (all future constructs become past-directed), so for the rest of this section we will refer only to future holographic screens. The above definition of holographic screens is too weak to guarantee that they be sufficiently well-behaved for our purposes. But by further imposing some mild conditions, it is possible to ensure that the screens obey certain ``nice'' properties. For this reason, we require the screen to be regular: \begin{defn} \label{def:regular} \textit{Regular future holographic screen}. A preferred future holographic screen is \textit{regular} if the following are true~\cite{BouEng15b}: \begin{itemize} \item The null expansion of leaflets in the $k^a$ direction immediately decreases away from $H$: $k^a_s\nabla_a \theta_{k_s}|_{\sigma_s} < 0$; \item The boundary of all spacelike subsets of $H$ within $H$ is the boundary of all timelike subsets of $H$ within $H$ (\textit{i.e.}~the only null portions of~$H$ are junctions between spacelike and timelike pieces); \item Every inextendible portion of $H$ with indefinite signature is either timelike or contains a complete leaflet; and \item Every leaflet is compact and splits a Cauchy surface containing it into two disjoint subsets. \end{itemize} \end{defn} The first two assumptions can be viewed as types of generic conditions\footnote{However, these do not reduce to the usual generic condition used in the singularity theorems, see \textit{e.g.}~\cite{Wald}.}. We will not have occasion to explicitly use the last two assumptions in this section, but they are required for certain properties of regular holographic screens to hold. Also note that we will occasionally use the word ``screen'' to refer to a regular holographic screen when it will cause no ambiguity. The screens on which we will focus must divide the spacetime into two disjoint regions so that we can sensibly refer to their ``interior'' and ``exterior''. Such screens will be referred to as \textit{splitting screens}; the holographic screen shown in Figure~\ref{fig:screenconstruction} is an example. Moreover, if the screen is regular, we can uniquely define its interior and exterior:~\cite{BouEng15a, BouEng15b} showed that when~$k^a$ points to one side of a regular screen, it is always the same side ($k^a$ may be tangent to the screen, but never switches from one side to the other).
Thus we will call the interior~$\mathrm{Int}(H)$ of a splitting future holographic screen~$H$ the region towards which the null vector field~$k^a$ points\footnote{This definition may seem backwards, since we typically think of the ``interior'' of a surface as the direction in which the expansion of its null normals is more negative. However, note that since marginally trapped surfaces must always lie behind (or possibly on) the future event horizon of the spacetime~$M$,~$\mathrm{Int}(H)$ can never have any intersection with the asymptotic region of~$M$. It is in this sense that this definition agrees with intuition. }. The exterior~$\mathrm{Ext}(H)$ will be the complement in~$M$. We are now equipped to make statements about the behavior of extremal surfaces in general spacetimes in the presence of holographic screens. We need one more definition to make precise what we mean by an extremal surface ``turning around'': \begin{defn} \textit{Turning and inflection points}. We say that an extremal surface $X$ has a \textit{pivot point} at a point $p$ if it is tangent to a leaf~$N_s$ at~$p$. $X$ is said to have a \textit{turning point} at $p$ if in a small neighborhood of $p$, $X$ lies nowhere to the past or nowhere to the future of~$N_s$. Otherwise,~$X$ is said to have an \textit{inflection point} at~$p$. Moreover, if an extremal surface~$X$ has a turning point in some region~$R \subset M$, then we say~$X$ \textit{turns around} in~$R$. See Figure~\ref{fig:pivot} for an illustration. \end{defn} Note that turning points and the notion of turning around are dependent on the foliation~$\{N_s\}$. Also note that by definition, if $N$ is any null splitting hypersurface, then any surface $Q$ which has a turning point on $N$ is (in some small neighborhood) in its past or future. In the former case, we will say $Q$ is \textit{tangent to $N$ from the past}, and in the latter we will say $Q$ is \textit{tangent to $N$ from the future}. \begin{figure}[t] \centering \includegraphics[page=7]{Figures-pics} \caption{An extremal surface tangent to a foliation leaf has a pivot point which can either be a turning point (left) or an inflection point (right).} \label{fig:pivot} \end{figure} \subsection{Theorems} We can now state our first theorem, which is simply a precise rephrasing of the heuristic discussion above: \begin{thm} \label{thm:traffic} Let $R$ be a region such that~$\theta(\{N_s\})$ has a definite sign everywhere in~$R$, and let~$X$ be a (codimension-two) extremal surface. Then any connected portion of $X$ in~$R$ can turn around at most once; if~$M$ is~(2+1)-dimensional, it also has no inflection points. In particular, if~$H$ is a regular splitting future holographic screen, any connected portion of~$X$ in~$\mathrm{Int}(H)$ can turn around at most once. \end{thm} For a detailed proof, see Appendix~\ref{subapp:traffic}. For a pictorial proof, see Figure~\ref{fig:traffic}: if a connected portion of~$X$ in~$R$ has more than one turning point, at least one such turning point must violate Lemma~\ref{lem:aron}. \begin{figure}[t] \centering \includegraphics[page=8]{Figures-pics} \caption{$X_{R}$ is any connected portion of~$X$ in~$R$. It cannot have multiple turning points without violating Lemma~\ref{lem:aron}.} \label{fig:traffic} \end{figure} Theorem~\ref{thm:traffic} and the lemmata make local statements: they make no use of the global structure of the spacetime~$M$.
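As a simple illustration of this local character, consider the following minimal sketch in~(2+1)-dimensional Minkowski space (the foliation and the surface here are our own illustrative choices). Foliate the spacetime by the future light cones of the points~$(t,r) = (s,0)$, \be N_s = \{\, t - r = s \,\}, \qquad \theta(\{N_s\}) \propto \frac{1}{r} > 0, \ee so that~$\theta(\{N_s\})$ has a definite sign everywhere. A straight spacelike geodesic~$X$ in the~$t=0$ plane at impact parameter~$b$ from the origin, parametrized as~$(t,x,y) = (0,b,\lambda)$, lies on the leaf of label \be s(\lambda) = -\sqrt{b^2 + \lambda^2}, \ee which has a single critical point at~$\lambda = 0$: there~$X$ is tangent to the leaf~$N_{-b}$, and everywhere else it crosses the leaves transversally. Since~$X$ lies nowhere to the future of~$N_{-b}$ and~$\theta(N_{-b}) = 1/b > 0$ at the tangent point, this turning point is consistent with Lemma~\ref{lem:aron}, and the bound of Theorem~\ref{thm:traffic} -- at most one turning point in a region of definite~$\theta(\{N_s\})$ -- is saturated.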
We now focus on the asymptotically locally AdS case\footnote{See~\cite{Fischetti:2012rd} for the definition of asymptotically locally AdS spacetimes.}, where~$M$ has a timelike boundary~$\partial M$ to which the extremal surfaces are anchored. Then within a subregion of the interior of a holographic screen, Theorem~\ref{thm:traffic} can be strengthened significantly. We will call this region the \textit{umbral} region\footnote{We will show that~$U(H)$ is similar to but more general than the partial shadows of~\cite{Freivogel:2014lja}, hence the nomenclature.} $U(H)$: \begin{defn} \textit{Umbral region}. Let $H$ be a regular splitting future holographic screen. Consider the null congruences~$\{L_s\}$ generated from each leaflet by firing null geodesics in the~$\ell^a$ direction\footnote{As with the~$N_s$, we will take the generators of~$L_s$ to leave~$L_s$ at local and non-local intersections.}, and suppose that these foliate~$\mathrm{Int}(H)$. Let $\sigma_{s,s'}=N_{s}\cap L_{s'}$. The umbral region $U(H)$ is the union of all those $\sigma_{s,s'}$ with no intersection with the past of $H$: \begin{equation} U(H) \equiv \bigcup \sigma_{s,s'} \ : \ \sigma_{s,s'}\cap I^{-}(H)=\varnothing . \end{equation} See Figure~\ref{fig:umbral} for an illustration. \end{defn} Note that if $H$ is achronal, $U(H)$ is the entire interior of $H$. \begin{figure}[t] \centering \includegraphics[page=24]{Figures-pics} \caption{In a collapsing star geometry, the future holographic screen $H$ (solid green) has indefinite signature. Its umbral region~$U(H)$ is the (green) shaded region in the interior of~$H$; by construction, the future of~$U(H)$ has no intersection with~$H$.} \label{fig:umbral} \end{figure} \begin{thm} \label{thm:main} Let $H$ be a regular splitting future holographic screen in a~(2+1)-dimensional asymptotically locally AdS spacetime~$M$. Then no boundary-anchored extremal surface can have a pivot point in~$U(H)$. In particular, if $H$ is achronal, no such extremal surfaces can have a pivot point in~$\mathrm{Int}(H)$. \end{thm} For a detailed proof, see Appendix~\ref{subapp:main}. For a sketch of part of the proof, consider an extremal surface~$X$ with a turning point on some leaf~$N_m$ in~$U(H)$. If at this turning point~$X$ is also tangent to a leaf~$L_m$, then Lemma~\ref{lem:NARWHAL} implies that~$X$ must lie to the future of~$N_m$ and~$L_m$, and therefore to the future of their intersection; see Figure~\ref{fig:mainsketch}. But by assumption, this future can have no intersection with $H$, implying that $X$ must live entirely in the interior of $H$; therefore $X$ cannot be boundary-anchored. The case when~$X$ is not tangent to a leaf~$L_m$ is more complicated, but similar in spirit. One of the appealing properties of regular future holographic screens found in~\cite{BouEng15a} is that they obey an area law even when they have indefinite signature. It is therefore natural to ask whether our theorem applies to such screens. It may well be the case that it does, but the approach used in the proof above cannot be used for non-achronal screens. To see why, consider Figure~\ref{fig:prooffail}: an extremal surface $X$ can have a turning point somewhere in the past of the screen, in which case it may exit the interior of the screen through a timelike portion. Theorem~\ref{thm:main} may still be true for regular future holographic screens of indefinite signature, but it is not clear to us how a proof of such a statement would proceed.
However, some progress can be made in higher dimensions, as we will now discuss. \newpage \begin{figure}[h] \centering \includegraphics[page=23]{Figures-pics} \caption{If an extremal surface~$X$ (solid blue) is tangent to leaves~$N_m$ (dashed black) and~$L_m$ (dotted orange) inside the umbral region of a holographic screen~$H$ (solid green), it must lie entirely in the future of their intersection (black dot), and therefore cannot be boundary-anchored. Note that we have suppressed a spatial direction in this figure, which is why~$X$ appears timelike and ends at the black dot. It is actually spacelike everywhere and tangent to the dot in the suppressed direction.} \label{fig:mainsketch} \end{figure} \vspace{2cm} \begin{figure}[h] \centering \includegraphics[page=12]{Figures-pics} \caption{Here we show how an extremal surface~$X$ (solid blue) may potentially have a turning point in the interior of a regular future holographic screen of indefinite signature. Because the turning point lies in the past of the screen, the extremal surface may exit the screen through the timelike portion; none of the turning points shown are forbidden by Lemma~\ref{lem:aron}. Note that~$X$ is everywhere spacelike; the apparent mixed signature here is only due to the suppression of the extra spatial dimension.} \label{fig:prooffail} \end{figure} \newpage \subsection{Higher Dimensions} \label{subsec:higherD} Theorem~\ref{thm:main} relies heavily on Lemma~\ref{lem:NARWHAL}, which is only valid in~(2+1)-dimensional spacetimes. If we were to attempt a na\"ive extension of it to~$(d+1)$ dimensions, we would encounter a problem: equation~\eqref{eq:expansioncondition} and the extremality condition would become \begin{subequations} \label{eqs:higherd} \bea 0 &< \ ^N \! K_{ab} \left. \left(v^a v^b + \sum_i \xi^a_{(i)} \xi^b_{(i)}\right)\right|_p, \\ 0 &= \ ^X \! K^a_{\phantom{a}bc} \left. \left(v^b v^c + \sum_i \xi^b_{(i)} \xi^c_{(i)}\right)\right|_p, \eea \end{subequations} where the sum runs over an additional~$(d-2)$ orthonormal spatial vectors~$\xi^a_{(i)}$ that are orthogonal to~$v^a$ and tangent to~$X$ and~$N$ at~$p$. These summed expressions do not allow us to separately compare the bending of~$X$ and~$N$ in different directions, so the proof does not go through as it did before. It is clear from the above considerations, however, that a version of Lemma~\ref{lem:NARWHAL} will remain true in higher dimensions if we require that all of the~$\xi^a_{(i)}$ have trivial contraction with the extrinsic curvatures of~$X$ and~$N$. In such a case, only the~$v^a$ terms in equations~\eqref{eqs:higherd} remain, and the proof of the lemma proceeds as in the~(2+1)-dimensional case. We therefore define: \begin{defn} \textit{Reducibility to~(2+1) dimensions}. Let~$S$ be a surface of codimension at most two in a~$(d+1)$-dimensional spacetime~$M$. We will say that~$S$ is \textit{reducible to~(2+1) dimensions} (or \textit{reducible} for short) if there exist~$(d-2)$ vector fields~$\xi_{(i)}^a$,~$i = 1,\ldots,d-2$ in~$M$ that are everywhere spacelike\footnote{The~$\xi_{(i)}^a$ may be singular on a set of measure zero.} such that~$S$ is tangent to the~$\xi^a_{(i)}$ everywhere, and for each~$i$ \be \ ^S K^a_{\phantom{a}bc} \, \xi^b_{(i)} \, \xi^c_{(i)} = 0, \ee where~$\ ^S K^a_{\phantom{a}bc}$ is the extrinsic curvature of~$S$. If multiple reducible surfaces share the~$\{\xi^a_{(i)}\}$, then they are \textit{simultaneously} reducible.
\end{defn} For an arbitrary surface~$S$, the reducibility condition is simply a constraint on its allowed behavior. However, we will specifically require that extremal surfaces be reducible: this will in general only be possible when the spacetime exhibits a sufficient amount of symmetry. In particular, note that in spacetimes obeying the generalized planar symmetry of~\cite{HeaMye14} (see their Section~3.3 for a full definition), extremal surfaces that have this symmetry are reducible. For example, spacetimes with planar or spherical symmetry provide a simple setup admitting reducible extremal surfaces. Most significantly, Lemma~\ref{lem:NARWHAL} still holds for reducible extremal surfaces and foliations, and therefore so does Theorem~\ref{thm:main}. We have therefore shown that \textit{Theorems~\ref{thm:traffic} and~\ref{thm:main} will hold in any~($d$+1)-dimensional spacetime if the foliations~$\{N_s\}$ and all extremal surfaces~$X$ under consideration are simultaneously reducible to~(2+1) dimensions}. As mentioned at the end of the previous section, we can actually do more in higher dimensions: for~$d > 2$, it is possible for extremal surfaces to ``cap off'' (for example, where the size of spheres spanned by the~$\xi^a_{(i)}$ shrinks to zero). Before we state and prove the theorem, however, we require a restricted notion that takes into account the global structure of the extremal surface. Specifically, we restrict our analysis to so-called \textit{H-deformable} extremal surfaces~\cite{EngWal13}: these are surfaces that can be deformed to lie entirely in the exterior of a screen~$H$ while still being kept extremal (for a precise definition, see Appendix~\ref{app:proofs}). We therefore have the following theorem, which holds for the entire interior of an arbitrary regular holographic screen (\textit{i.e.}~it is not restricted to the umbral region): \begin{thm} \label{thm:mainconnected} Let~$M$ be an asymptotically locally AdS spacetime, and let $H$ be a regular splitting future holographic screen constructed from a reducible foliation~$\{N_s\}$. Assume that there exists a foliation of the future of~$H$ with $L_s$ congruences, which are simultaneously reducible with the $\{N_{s}\}$ leaves. Let $X$ be a boundary-anchored, codimension-two spacelike extremal surface such that: \begin{enumerate} \item $X$ is reducible to~(2+1) dimensions simultaneously with $\{N_{s}\}$ and $\{L_{s}\}$; \item $\partial X$ is connected; and \item $X$ intersects~$\mathrm{Ext}(H)$ only on regions with $\theta(\{N_s\}) > 0$. \end{enumerate} Assume further that there exists an~$H$-deformation of~$X$ that obeys the above conditions as well. Then~$X$ cannot have a pivot point in~$\mathrm{Int}(H)$. \end{thm} Note that condition~(2) rules out geodesics, so this theorem is only nontrivial in~$d > 2$. Also, condition~(3) is meant to exclude possible pathological behavior from other holographic screens somewhere else in the spacetime. For a detailed proof of this theorem, see Appendix~\ref{subapp:mainconnected}. For a rough picture, consider the case where an extremal surface~$X$ is not tangent to an~$L_s$ leaf at its turning point, as in Figure~\ref{fig:prooffail}. In such a case,~$X$ must be anchored on the boundary at two places, \textit{i.e.}~$\partial X$ is disconnected. The case where~$X$ is tangent to an~$L_s$ leaf at its turning point is more subtle and requires invoking~$H$-deformability; we leave the details to the Appendix.
It is worth making some remarks about potential pitfalls in higher dimensional spacetimes in which the extremal surfaces and/or null foliations are not reducible. As noted above, Lemma~\ref{lem:NARWHAL} will then generally be false, and cannot be used to rule out inflection points. We suspect it should be possible to use only Lemma~\ref{lem:aron} to prove weaker versions of Theorems~\ref{thm:main} and~\ref{thm:mainconnected} that do not exclude inflection points. However, such constraints have minimal relevance for hole-ography. We should also note that while our proofs do not hold in non-reducible settings, we can think of no counterexamples to the statements of the theorems. It is possible that they hold in more generality, but if that is the case, they would need to be proven using a different approach than that taken here. \section{Examples} \label{sec:examples} Here we present examples illustrating the application and consequences of the theorems discussed in the previous section. \subsection{dS and AdS Spacetimes} \begin{figure}[t] \centering \subfigure[]{ \includegraphics[page=15]{Figures-pics} \label{subfig:dS} } \hspace{1cm} \subfigure[]{ \includegraphics[page=16]{Figures-pics} \label{subfig:AdS} } \caption{The conformal diagrams of de Sitter \subref{subfig:dS} and anti-de Sitter \subref{subfig:AdS} space. Each point on these diagrams corresponds to a suppressed sphere~$S^{d-1}$ whose area is parametrized by a radial coordinate~$r$. The null foliations shown are generated by light rays fired from~$r = 0$, \textit{i.e.}~the north and south poles of dS and the origin of AdS. The black arrows indicate the directions in which extremal surfaces are allowed to turn around (\textit{e.g.}~an arrow pointing down and to the right indicates that extremal surfaces may only be tangent to the dashed foliation from the past). They imply that extremal surfaces must bend away from~$\mathscr{I}_\mathrm{dS}$, but towards~$\mathscr{I}_\mathrm{AdS}$.} \label{figs:dSAdS} \end{figure} As an example of Theorem~\ref{thm:traffic} (which states that connected components of extremal surfaces can have no more than one turning point in a region of constant~$\theta(\{N_s\})$), consider the simplest cases of pure de Sitter (dS) or anti-de Sitter (AdS) spacetimes, whose conformal diagrams are shown in Figure~\ref{figs:dSAdS} (the analysis of Minkowski space is similar to that of AdS, so we will not discuss it separately). Both dS and AdS have a spherical isometry to which we have adapted the conformal diagrams; we introduce a coordinate~$r$ which parametrizes the areas of the spheres of symmetry\footnote{Specifically,~$r$ is the usual radial coordinate that appears in the slicing \be ds^2 = -\left(1 \pm r^2/\ell^2\right) dt^2 + \frac{dr^2}{1 \pm r^2/\ell^2} + r^2 d\Omega^2_{d-1}, \ee with the positive (negative) sign for the global (static) slicing of AdS (dS).}. In each spacetime we introduce two null foliations which we take to be adapted to its spherical isometry: these foliations are generated by light cones fired from~$r = 0$ towards the boundary~$r = \infty$. It is then easy to use Theorem~\ref{thm:traffic} to understand how extremal surfaces must behave. The cross-sectional area of the null foliations increases with~$r$, so the expansion along each foliation is positive in the direction of increasing~$r$. It then follows that the expansion along each sheet of the foliations never changes sign.
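To make the sign of the expansion explicit, here is a minimal sketch in the static coordinates of the previous footnote, valid where those coordinates apply (the normalization of the null generator is chosen for convenience; only the sign of~$\theta$ matters, and it is invariant under positive rescalings of the generator). Writing~$F(r) = 1 \pm r^2/\ell^2$, the outgoing radial null vector \be k^a \partial_a = \frac{1}{F(r)}\, \partial_t + \partial_r \ee is null with respect to the metric of the footnote, and the cross-sections of the cones are spheres of area~$\mathcal{A}(r) \propto r^{d-1}$. Hence \be \theta \propto \frac{1}{\mathcal{A}} \frac{d\mathcal{A}}{d\lambda} = (d-1)\, \frac{k^r}{r} = \frac{d-1}{r} > 0 \ee everywhere along each cone, confirming that~$\theta(\{N_s\})$ indeed has a definite sign along each foliation.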
This allows us to draw on Figure~\ref{figs:dSAdS} the directions in which extremal surfaces are allowed to turn with respect to these foliations. In particular, note that extremal surfaces in AdS must bend \textit{towards} the conformal boundary~$\mathscr{I}_\mathrm{AdS}$, while extremal surfaces in dS sufficiently near the boundary~$\mathscr{I}_\mathrm{dS}$ must bend \textit{away} from it. In principle, these claims only constrain the behavior of extremal surfaces with respect to the two null foliations introduced here. However, the high degree of symmetry of both dS and AdS allows us to conclude that \textit{all} extremal surfaces in AdS must be attracted to~$\mathscr{I}_\mathrm{AdS}$, while \textit{all} extremal surfaces in dS must be repelled from~$\mathscr{I}_\mathrm{dS}$. The former point is, of course, well-known: extremal surfaces anchored to the boundary of AdS come up frequently in holographic contexts, and necessarily bend towards the boundary. The latter point was made generally in~\cite{Fischetti:2014uxa} using similar considerations to the ones used here. In particular, it follows that no boundary-anchored extremal surfaces exist in dS, since they would necessarily need to bend towards the boundary. \subsection{AdS Spherical Collapse} The above simple examples of dS and AdS illustrate how Theorem~\ref{thm:traffic} puts constraints on the general behavior of extremal surfaces in arbitrary spacetimes, even those not containing splitting holographic screens. Our focus, however, is on applications to AdS/CFT and bulk reconstruction. To that end, let us now discuss how Theorem~\ref{thm:main} (which states that boundary-anchored extremal surfaces in the interior of achronal screens cannot have pivot points) explains some of the observations of~\cite{Liu:2013iza,Liu:2013qca, Hubeny:2013dea} in the context of null collapse in AdS. To briefly review, consider the formation of a black hole in Poincar\'e AdS by infalling null dust. In the holographic context, this process is dual to the thermalization of the boundary field theory following a perturbation (typically a form of a quantum quench). The bulk solution consists of two pieces: to the past of the null dust, the solution is a vacuum solution and therefore just (the Poincar\'e patch of) pure AdS. The portion of the bulk containing the dust and to the future of it is AdS-Vaidya: \be ds^2 = -f(r,v) dv^2 + 2 \, dv \, dr + \frac{r^2}{\ell^2} \, d\vec{x}_{d-1}^2, \ee where \be f(r,v) = \frac{r^2}{\ell^2}\left(1 - \frac{\mu(v)}{r^d}\right), \ee $d$ is the boundary spacetime dimension, and we can think of compactifying the planar directions~$\vec{x}$ into a torus (it is the planar symmetry of these directions that allows us to apply Theorem~\ref{thm:main} here). Here the mass function~$\mu(v)$ characterizes the profile of the dust; the null energy condition is satisfied when~$\mu'(v) \geq 0$. The full solution is shown in Figure~\ref{subfig:AdSVaidyathick}. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[page=17]{Figures-pics} \label{subfig:AdSVaidyathick} } \hspace{1cm} \subfigure[]{ \includegraphics[page=18]{Figures-pics} \label{subfig:AdSVaidyathin} } \caption{The formation of a black hole in pure AdS by infalling null dust (the shaded gray regions). To the past of the dust, the solution is pure AdS; the portion of the spacetime containing the dust is AdS-Vaidya. \subref{subfig:AdSVaidyathick}: for continuously infalling null dust, the spacetime contains an achronal future holographic screen (solid green curve).
\subref{subfig:AdSVaidyathin}: if the dust is taken to be a thin shell, the screen approaches two null pieces, with one lying on the event horizon and the other on the shell. However, if the shell has an arbitrarily small but nonzero thickness, and if an arbitrarily small but nonzero amount of matter continues to fall in after the shell, the screen will be achronal (and arbitrarily close to being null). Note that the null boundaries are actually the Poincar\'e horizons~$\mathcal{H}^\pm_\mathrm{Poin}$.} \label{fig:nullcollapse} \end{figure} Let us now consider the plane symmetric foliation of this spacetime generated by light cones fired from~$r = 0$. The cross-sectional areas of these sheets go like~$A \propto r^{d-1}$, so the expansion is positive in the direction of increasing~$r$. In particular, this means that the expansion of the right-moving null sheets to the future of the event horizon changes sign, giving rise to a future holographic screen. This screen coincides with the dynamical horizon at~$f(r,v) = 0$. In the context of holographic quantum quenches,~\cite{Liu:2013iza,Liu:2013qca} considered such a collapse scenario with the infalling null matter taken to be a thin shell. The resulting holographic screen technically violates the assumptions of our theorems, since it is null and therefore not regular. However, it is easy to consider a regulated solution in which the null shell is smeared out slightly and given a rapidly decaying tail all the way into the far future. The screen will then be slightly deformed into a regular achronal screen, as illustrated in Figure~\ref{subfig:AdSVaidyathin}. Then our theorems can be applied to the regulated collapse geometry. By taking the limit where the regulator goes to zero, we may expect our theorems to apply to the thin shell solution as well. Theorem~\ref{thm:main} asserts that any reducible boundary-anchored extremal surface cannot turn around inside the screen. This is precisely what~\cite{Liu:2013iza,Liu:2013qca} found, as shown in Figure~\ref{fig:LiuSuh}. Extremal surfaces anchored to strips on the boundary sometimes penetrate the screen, but the turning point never does. In particular, as the boundary strips are taken to later times, the turning point of the corresponding extremal surfaces traces out a curve which never enters the screen. Thus the interesting behavior of the turning point shown in Figure~\ref{fig:LiuSuh} is simply a consequence of our theorem. \begin{figure}[t] \centering \includegraphics[page=19]{Figures-pics} \caption{A family of extremal surfaces (solid blue lines) anchored to the boundary of AdS in thin shell Vaidya-AdS, as found in~\cite{Liu:2013iza,Liu:2013qca}. The dot at the end of each surface indicates the location of its turning point; the dotted black line follows the path of this turning point as the time at which the surface is anchored to~$\partial M$ is varied. Note that some surfaces in this family do enter the holographic screen~$H$, but the turning points never do.} \label{fig:LiuSuh} \end{figure} Ref.~\cite{Hubeny:2013dea} considered a similar problem, but in global AdS. In that case, the generalized planar symmetry is (a subset of) the full spherical symmetry. They too found extremal surfaces anchored to spherical boundary regions that penetrated the holographic screen, but never any that turned around in it.
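For completeness, the location of the screen in these collapse geometries follows from a one-line computation (a sketch using the planar AdS-Vaidya metric above; the spherically symmetric case of~\cite{Hubeny:2013dea} is analogous). The outgoing radial null vector \be k^a \partial_a = \partial_v + \frac{f(r,v)}{2}\, \partial_r \ee satisfies~$k^a k_a = -f + 2\cdot\tfrac{f}{2} = 0$, and the cross-sections of the outgoing sheets have area~$\mathcal{A} \propto r^{d-1}$, so \be \theta \propto (d-1)\, \frac{k^r}{r} = (d-1)\, \frac{f(r,v)}{2r}. \ee The expansion therefore changes sign precisely where~$f(r,v) = 0$, \textit{i.e.}~at~$r^d = \mu(v)$, reproducing the statement that the screen coincides with the dynamical horizon.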
\section{Ramifications for Hole-ography} \label{sec:discussion} The key questions of hole-ography are: does there exist an object in the CFT which is dual to the area of an arbitrary spacelike codimension-two bulk surface? If so, what are the limitations of this duality? The former question has been addressed in~\cite{Balasubramanian:2013rqa,Balasubramanian:2013lsa,Czech:2014wka,HeaMye14,Myers:2014jia}; in this paper, we have proven theorems that give a partial answer to the latter. \subsection{Incomplete Reconstruction Inside Screens} Recall that~\cite{HeaMye14} showed that under an appropriate set of assumptions (including generalized planar symmetry), if a given bulk spacelike codimension-two surface~$\gamma$ can be reconstructed from boundary-anchored extremal surfaces tangent to it, then the area of~$\gamma$ is given by the differential entropy of the boundary regions selected by the extremal surfaces. This direction was referred to as the ``bulk-to-boundary'' direction. Conversely, given a set of intervals on the boundary, the extremal surfaces anchored to them can be used to define at least one bulk surface~$\gamma$ whose area is equal to the differential entropy of the intervals; this is the ``boundary-to-bulk'' direction. We pause here to note an important subtlety: to get a good correspondence between the area of~$\gamma$ and the CFT differential entropy, the extremal surfaces must be the minimal-area ones that are picked out by the HRT formula (since there may exist more than one surface with the same boundary conditions). More generally, if the extremal surfaces used to reconstruct~$\gamma$ are not the minimal-area ones, they may be related to other CFT quantities such as entwinement~\cite{Balasubramanian:2014sra} (for example, minimal surfaces alone cannot be used to reconstruct the AdS$_3$ conical defect geometry or BTZ~\cite{Czech:2014ppa}). Here we will show that surfaces inside holographic screens cannot be fully reconstructed from \textit{any} boundary-anchored extremal surfaces, be they minimal-area or not. \begin{figure}[t] \centering \subfigure[]{ \includegraphics[page=20]{Figures-pics} \label{subfig:gammainH} } \hspace{1cm} \subfigure[]{ \includegraphics[page=21]{Figures-pics} \label{subfig:gammainHreconstruct} } \caption{The plane of the page is a time slice containing a spacelike curve~$\gamma$ (solid black line) in the interior of a holographic screen~$H$; the green oval shows the intersection of~$H$ with this particular time slice. \subref{subfig:gammainH}: the curve~$\gamma$ will always be tangent to at least two leaves of the foliation (dotted black lines); in this particular case, it is tangent to four of them at the marked points. \subref{subfig:gammainHreconstruct}: by our theorem, portions of~$\gamma$ in a neighborhood of these points cannot be reconstructed from boundary-anchored extremal surfaces.} \label{fig:curveinH} \end{figure} For example, consider the consequences of our results for a bulk-to-boundary construction: let~$\gamma$ be a sufficiently smooth spacelike closed curve\footnote{Here we will restrict the discussion to three bulk dimensions (so~$\gamma$ is just a curve), though our statements also hold in reducible setups.} that lies entirely in the interior of some regular holographic screen~$H$. Since~$\gamma$ is smooth, there must be some points at which~$\gamma$ is tangent to leaves of the null foliation used to construct~$H$. (Indeed, each point of~$\gamma$ lies on exactly one leaf~$N_s$; the leaf label~$s$, viewed as a continuous function on the closed curve~$\gamma$, must attain a maximum and a minimum, and at each such extremum~$\gamma$ is tangent to the corresponding leaf.)
We have illustrated this in Figure~\ref{subfig:gammainH}, where we have shown a spatial slice containing~$\gamma$ and its intersection with~$H$ and some leaves of the null foliation. Theorem~\ref{thm:main} implies that there cannot exist boundary-anchored geodesics tangent to~$\gamma$ at the marked points (any extremal surfaces tangent to~$\gamma$ there must \textit{e.g.}~end at a singularity). Moreover, if we slightly deform the null foliation, these points will shift slightly along~$\gamma$, so we find that there are open regions of~$\gamma$ to which no boundary-anchored geodesics are tangent. This is our main result:~$\gamma$ cannot be entirely reconstructed from any set of boundary-anchored geodesics, minimal or not. Generically, however, there will be regions of~$\gamma$ that \textit{can} be. Thus in this bulk-to-boundary approach,~$\gamma$ can only be \textit{partially} reconstructed from boundary-anchored geodesics (and therefore in principle from CFT observables dual to them). This is a form of coarse-graining: the boundary data dual to geodesics simply do not know how to reconstruct some pieces of~$\gamma$. This coarse-grained reconstruction is illustrated in Figure~\ref{subfig:gammainHreconstruct}. Recall, however, that the boundary-to-bulk approach of~\cite{HeaMye14} is slightly different: in order to reconstruct (the area of) a bulk curve~$\gamma$ from a set of boundary intervals, we do not need the corresponding geodesics to be tangent to~$\gamma$. Rather, we only require what~\cite{HeaMye14} call the ``null alignment condition'': where a geodesic meets~$\gamma$, their tangent vectors need not agree, but may simply span a null plane. This is a weaker constraint, and it is therefore natural to wonder if the boundary-to-bulk construction fares any better in this case. The answer is no. Suppose a smooth bulk curve~$\gamma$ constructed via the boundary-to-bulk approach is contained entirely inside~$H$. Consider the two null planes generated by congruences fired off of~$\gamma$ in its four orthogonal (past and future) null directions. The null alignment condition says that~$\gamma$ may be constructed from boundary-anchored geodesics that intersect~$\gamma$ and are tangent to one of these planes when they do. But since~$\gamma$ is smooth, by the same argument given above there must exist some points at which this null plane is tangent to a leaf of the foliation. Then proceeding as we did in the bulk-to-boundary construction, we conclude that~$\gamma$ must contain segments that cannot be constructed from boundary-anchored geodesics. The conclusion is that whether one takes the bulk-to-boundary or boundary-to-bulk approach, it is not possible to reconstruct an entire smooth\footnote{There may still be reconstruction issues even if~$\gamma$ has cusps, but we will not consider this case here.} curve~$\gamma$ contained inside a holographic screen from boundary-anchored geodesics. In fact, it is very plausible that there are curves of which only an arbitrarily small portion can be reconstructed. Of course, there is nothing preventing either approach from reconstructing a bulk curve that is only partly contained inside the holographic screen. However, a promising application of hole-ography was the reconstruction of the bulk geometry itself via the integral geometry approach of~\cite{Czech:2014ppa,Czech:2015qta}. In order to use this approach to reconstruct the spacetime inside a holographic screen, one would need to reconstruct arbitrary curves entirely contained within it.
It would thus appear that this approach to reconstructing the interior of a holographic screen will not succeed. \subsection{Quantum Effects} A possible objection to our conclusion is the following: why not use boundary-anchored extremal surfaces to reconstruct the geometry of a portion of a Cauchy slice~$\Sigma$ to the past of the screen, and then use the bulk equations of motion to evolve forward from~$\Sigma$ to reconstruct its entire causal development? In particular, this may include the interior of a holographic screen; such an example is shown in Figure~\ref{fig:EOMreconstruction}. \begin{figure}[t] \centering \includegraphics[page=22]{Figures-pics} \caption{Attempting to reconstruct the interior of a holographic screen by evolving forward from an initial time slice~$\Sigma$. The shaded yellow region shows the domain of dependence~$D(\Sigma)$ of~$\Sigma$; in principle, if we knew the equations of motion everywhere, we could reconstruct this entire domain just from data on~$\Sigma$. In particular, this can include the interior of a holographic screen~$H$ (green).} \label{fig:EOMreconstruction} \end{figure} In principle this is possible, but only if we know the equations of motion \textit{a priori}. However, there is a sense in which a ``full'' bulk reconstruction should reconstruct the equations of motion as well as the geometry \textit{ab initio}. This is especially relevant given that the interiors of holographic screens tend to contain singularities, that is, regions where quantum gravitational effects become important. As soon as quantum fluctuations are introduced into the metric, even perturbatively, the possibility of reconstructing the bulk from its equations of motion is lost, particularly in near-singularity regions. For this reason, we find it more natural to seek a way of reconstructing the bulk \textit{directly} from CFT data, without recourse to any equations of motion. While our work has hitherto been entirely classical, the appearance of quantum effects motivates the following observations: \begin{itemize} \item Recall that the interior of a holographic screen has a holographic interpretation in terms of bulk entropy via the Bousso bound~\cite{CEB1, CEB2}. The area of a leaflet of a future or past holographic screen gives a bound on the entropy of the leaf generating it: \be S(N_{s}) \leq \frac{\mathrm{Area}(\sigma_{s})}{2G_{N}\hbar}. \ee This raises an interesting question: can the holographic screen itself be reconstructed from boundary observables? More precisely, what is the CFT dual of a holographic screen, and how is it linked to bulk entropy? As discussed in the previous subsection, the holographic screen is an obstacle to complete hole-ographic reconstruction of its interior; perhaps the information that is lost in the ``coarse-graining'' discussed above is stored in extra degrees of freedom associated with the screen (similar to \textit{e.g.}~the superselection sectors of~\cite{MarWal12}). \item Even in the presence of quantum effects, the option of direct reconstruction from boundary observables remains: for a semiclassical bulk (\textit{i.e.}~working to first order in $G_{N}\hbar/\Lambda^{d-1}$ where $\Lambda$ is a characteristic length scale of the quantum fields in the theory),~\cite{FauLew13} found that the \textit{generalized entropy} of extremal surfaces yields the dual CFT entanglement entropy.
More precisely, the generalized entropy of a spacelike codimension-two surface $X$ is given by~\cite{Bek73}: \be S_{\mathrm{gen}} (X)= \frac{\mathrm{Area}(X)}{4G_{N}\hbar} + S_{\mathrm{ent}} + \mathrm{counterterms}, \ee where $S_{\mathrm{ent}}$ is the von Neumann entropy of the exterior of $X$ on some Cauchy surface. It was later conjectured by~\cite{EngWal14} that, at any finite order in perturbation theory in $G_{N}\hbar/\Lambda^{d-1}$ in the bulk, there exists a quantum analogue of a classical extremal surface, obtained by replacing the area by the generalized entropy in the extremization procedure. The quantum extremal surface is obtained by extremizing $S_{\mathrm{gen}}$ with respect to variations along a null surface fired from $X$. The entanglement entropy of the boundary region enclosed by $\partial X$ is conjectured to be dual to the generalized entropy of $X$~\cite{EngWal14}. The extension of the hole-ographic construction to semiclassical and perturbatively quantum gravity has not been discussed, as it is yet to be well-understood even at the classical level. However, it is very tempting to hope that a similar construction can be made using quantum extremal surfaces. \item Since quantum fields may violate the null energy condition (which was assumed for all of the proofs in this paper), it may \textit{prima facie} appear that our results are applicable exclusively to the classical case, where reconstruction may be undertaken via the bulk equations of motion. However, it should be possible to prove similar statements about bulk reconstruction from ``quantum hole-ography'' by: (1) replacing all surfaces with their quantum analogues; (2) relinquishing the null energy condition in favor of the recent quantum focussing conjecture of~\cite{BouFis15}, which asserts that the variation of the generalized entropy (rather than the area) is nonincreasing, or equivalently the second variation is nonpositive; and (3) imposing an analogous generic condition to be introduced in~\cite{BouEngToAppear}. In other words, quantum extremal surfaces cannot be used to reconstruct surfaces in spacetime regions foliated by leaves with decreasing generalized entropy. \end{itemize} \acknowledgments It is a pleasure to thank Raphael Bousso, Dalit Engelhardt, William Kelly, Matthew Headrick, Gary Horowitz, Don Marolf, Aron Wall, and Jason Wien for helpful discussions. NE is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144085. SF is supported by the National Science Foundation under grant number PHY12-05500 and by funds from the University of California.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{I} Lorentz-symmetry breaking is a promising prospect for the experimental detection of new physics at the Planck scale, and the field theoretic approach turns out to be the best way to describe the physics in this context \cite{LV1,LV2}. The comprehensive realistic effective field theory for Lorentz and CPT violation incorporating both the Standard Model (SM) and General Relativity is the Standard-Model Extension (SME) \cite{LV3,LV4}. The SME is a general framework which supports investigations concerning the breaking of Lorentz and CPT symmetries at attainable energy scales. The photon sector of the SME is composed of a CPT-even and a CPT-odd part. The CPT-odd sector is given by the Carroll-Field-Jackiw model $\sim k_{\mu}\varepsilon^{\mu\nu\rho\sigma}A_{\nu}F_{\rho\sigma}$, whose properties were first investigated in Ref.\cite{CFJ}. The CPT-even sector is given by the term $\sim F^{\alpha\beta}K_{(F)\alpha\beta\sigma\tau}F^{\sigma\tau}$, where the background tensor $K_{(F)\alpha\beta\sigma\tau}$ has nineteen independent components, ten of which are sensitive to birefringence while nine are nonbirefringent \cite{CPT1,CPT2}. The nonbirefringent components can be probed by experimental tests involving Cherenkov radiation \cite{EXP1,EXP2,EXP3,EXP4}. The birefringent components are strongly constrained by astrophysical tests involving data from cosmological sources \cite{EXP5}. The photon sector of the SME has been extensively investigated in the literature. We can mention, for instance, the vacuum emission of Cherenkov radiation \cite{CHR1,CHR2,CHR3}, modifications of the Casimir effect \cite{Casimir,Casimir2}, studies concerning the radiative generation of the CFJ term \cite{GP1,GP2,GP3,GP4,GP5}, modifications in the dispersion relations \cite{Wave,ClaE2,Wave2,CPTe1,CPTe2,CPTe6,CPTe7,CPTe8,CPTe12}, effects on classical electrodynamics \cite{ClaE1,Fontes,CPTe9,CPTe10,CPTe11}, effects induced in the hydrogen atom \cite{HydrLV}, the photon field quantization \cite{GRP1,GRP2,GRP3}, effects of Aharonov-Bohm type \cite{AB1,AB2,AB3} and so on. One of the most fundamental questions one can ask about models with Abelian gauge fields which exhibit explicit Lorentz-symmetry breaking concerns the physical phenomena produced by the presence of external field sources, mainly phenomena with no counterpart in Maxwell electrodynamics. Studies of this kind were recently performed in Ref. \cite{Fontes}, which considered a Lorentz-symmetry breaking from the CPT-even sector of the SME, and in Ref. \cite{Fontes2}, which considered a Lorentz-symmetry breaking model with higher-order derivatives in the field variables. Field sources in Lorentz-symmetry breaking scenarios were also considered for the graviton field \cite{LuizDenis}. In this paper we search for new effects produced by the presence of field sources in the CPT-even photon sector of the SME, where the background tensor $K_{(F)\alpha\beta\sigma\tau}$ is responsible for introducing the Lorentz-symmetry breaking. We perform our analysis in a manner similar to that employed in Refs. \cite{Fontes,Fontes2} and treat the background tensor up to leading order by using standard perturbative methods of Quantum Field Theory. We show that our results are in agreement with the ones of Ref. \cite{Fontes} if we write the background tensor $K_{(F)\alpha\beta\sigma\tau}$ as a function of a single background vector $v^{\mu}$ (as in Ref.
\cite{Fontes}) and if we take the results of Ref.~\cite{Fontes} at lowest order in $v^{\mu}$. We also show that new results, not yet explored in the literature, emerge from the considered model when we take examples where the background tensor cannot be written as a function of a single background vector. Specifically, we show that a spontaneous torque emerges on a classical electromagnetic dipole and that there is an interaction between a steady straight line current and a point-like charge. We investigate some phenomena due to the presence of a Dirac string and show that the string can interact with a point charge as well as with a steady straight line current in the Lorentz-symmetry breaking scenario considered. We compute the electromagnetic field produced by the string. In connection with the results related to Dirac strings, we make a study concerning the Aharonov-Bohm bound states of the $2$-dimensional quantum rigid rotor. We obtain the corrections to the energy levels of this system induced by the presence of the background tensor, for an illustrative and specific example. Some of the results obtained along the paper are compared with experimental atomic data in order to establish upper bounds on the Lorentz-symmetry breaking parameters. We also make numerical estimates in order to investigate the relevance of the obtained results in condensed matter systems.

The paper is structured as follows: in Section (\ref{II}) we describe some general aspects of the model used along the paper and compute the contribution, due to the sources, to the ground state energy of the system. In Section (\ref{III}) we consider effects due to the presence of point-like stationary charges. In Section (\ref{IV}) we obtain the interaction energy between a steady line current and a point-like stationary charge. Section (\ref{V}) is dedicated to the study of physical phenomena due to the presence of one Dirac string. In Section (\ref{Aharonov}) we calculate the corrections to the so-called Aharonov-Bohm bound states of a $2$-dimensional quantum rigid rotor. Section (\ref{conclusoes}) is dedicated to our final remarks and conclusions. Throughout the paper we deal with models in $3+1$ dimensional space-time and use Minkowski coordinates with diagonal metric $\eta^{\mu\nu}=(+,-,-,-)$.

\section{The model} \label{II} The CPT-even part of the SME gauge sector is described by the following Lagrangian \begin{eqnarray} \label{modeloEM} {\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2\xi}\left(\partial_{\mu}A^{\mu}\right)^{2}-\frac{1}{8}K_{(F)\alpha\beta\sigma\tau}F^{\sigma\tau}F^{\alpha\beta} \nonumber\\ +J^{\mu}A_{\mu} \ , \end{eqnarray} where $A^{\mu}$ is the photon field, $F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}$ is the field strength, $J^{\mu}$ is the external source, $\xi$ is a gauge parameter and $K_{(F)\alpha\beta\sigma\tau}$ is a dimensionless background tensor responsible for the Lorentz-symmetry breaking. The tensor $K_{(F)\alpha\beta\sigma\tau}$ has the same symmetries as the Riemann tensor, \begin{eqnarray} \label{simetriasQ} K_{(F)\alpha\beta\sigma\tau}=K_{(F)\sigma\tau\alpha\beta} \ ,\nonumber\\ K_{(F)\alpha\beta\sigma\tau}=-K_{(F)\beta\alpha\sigma\tau}=-K_{(F)\alpha\beta\tau\sigma}=K_{(F)\beta\alpha\tau\sigma}\ ,\nonumber\\ K_{(F)\alpha\beta\sigma\tau}+K_{(F)\alpha\tau\beta\sigma}+K_{(F)\alpha\sigma\tau\beta}=0\ , \end{eqnarray} and a null double trace, \begin{eqnarray} \label{dtrace} K^{\mu\nu}_{(F)\ \mu\nu}=0.
\end{eqnarray} The generating functional for the model (\ref{modeloEM}) can be written in the following way \cite{Zee} \begin{eqnarray} \label{lae} {\cal Z}&=&\int DA\exp\left(\frac{i}{2}\int d^{4}x \ A^{\mu}K_{(F)\mu\alpha\nu\beta}\partial^{\alpha}\partial^{\beta}A^{\nu}\right)\nonumber\\ & &\times\exp\Biggl\{i\int d^{4}x\Biggl[\frac{1}{2}A_{\mu}\Biggl(\eta^{\mu\nu}\partial_{\alpha}\partial^{\alpha}\nonumber\\ & &-\left(1-\frac{1}{\xi}\right)\partial^{\mu}\partial^{\nu}\Biggr)A_{\nu} +J^{\mu}A_{\mu}\Biggr]\Biggr\} \ . \end{eqnarray} Since the components of the background tensor are very small, we treat it perturbatively, up to first order. By using standard methods of quantum field theory, we perform the substitution \cite{Zee} \begin{equation} A^{\mu}\left(x\right)\rightarrow\frac{1}{i}\frac{\delta}{\delta J_{\mu}\left(x\right)} \ , \end{equation} for the Lorentz-violating term (first line in Eq. (\ref{lae})) and expand it up to first order, in such a way that \begin{eqnarray} \label{lah} {\cal Z}=\left[1-\frac{i}{2}\int d^{4}x\left(\frac{\delta}{\delta J^{\mu}}\right)K_{(F)}^{\mu\alpha\nu\beta}\partial_{\alpha}\partial_{\beta}\left(\frac{\delta}{\delta J^{\nu}}\right)\right]{\cal Z}_{0} , \end{eqnarray} where $\delta/\delta J_{\mu}\left(x\right)$ stands for the functional derivative with respect to the external source, ${\cal Z}_{0}$ is the free generating functional for the electromagnetic field \cite{Zee} \begin{eqnarray} \label{laj} {\cal Z}_{0}=\exp\left[-\frac{i}{2}\int\int d^{4}y \ d^{4}z\ J^{\rho}\left(y\right)\Delta_{\rho\gamma}(y,z)J^{\gamma}\left(z\right)\right] \ , \end{eqnarray} and $\Delta_{\rho\gamma}(y,z)$ is the free photon propagator \begin{eqnarray} \label{lao} \Delta_{\rho\gamma}(y,z)=-\int\frac{d^{4}p}{(2\pi)^{4}}\frac{1}{p^{2}}\left[\eta_{\rho\gamma}-(1-\xi)\frac{p_{\rho}p_{\gamma}}{p^{2}}\right]\nonumber\\ \times\exp[-ip\cdot(y-z)] \ . \end{eqnarray} The ground state energy of any quantum system can be written in terms of the generating functional as follows \cite{Zee} \begin{eqnarray} \label{lai} E=\frac{i}{T}\ln{\cal Z} \ , \end{eqnarray} where $T$ is the time variable and the limit $T\rightarrow\infty$ is implicit. Substituting Eq. (\ref{laj}) in (\ref{lah}), acting with the functional derivatives and using Eq. (\ref{lai}), we obtain \begin{eqnarray} \label{IEES} E&=&\frac{i}{T}\ln {\cal Z}_{0}-\frac{i}{2T} \ K_{(F)\mu\alpha\nu\beta}\int d^{4}x \ \partial^{\alpha}\partial^{\beta}\Delta^{\mu\nu}\left(x,x\right)\nonumber\\ & &-\frac{1}{2T} \ K_{(F)\mu\alpha\nu\beta}\int\int d^{4}y \ d^{4}z \ J_{\rho}\left(y\right)J_{\gamma}\left(z\right)\nonumber\\ & &\times\int d^{4}x \ \Bigl(\partial^{\alpha}\partial^{\beta}\Delta^{\nu\rho}\left(x,y\right)\Bigr) \Delta^{\mu\gamma}\left(z,x\right) \ , \end{eqnarray} where we used the approximation $\ln(1+x)\cong x$. The first term on the right-hand side of Eq. (\ref{IEES}) is the energy of the field in the absence of the background tensor, which is well known in the literature \cite{BaroneHidalgo1,BaroneHidalgo2}. We can neglect the second term on the right-hand side of Eq. (\ref{IEES}) since it does not depend on the external sources and, therefore, does not contribute to the interaction energy between field sources. The third term is a contribution to the interaction energy due to the Lorentz-symmetry breaking. Then, the energy of our quantum system due to the presence of the external source is given by the sum of the first and third terms in Eq. (\ref{IEES}). Substituting Eq. (\ref{lao}) in the third term of Eq.
(\ref{IEES}), using the gauge where $\xi=1$, acting with the differential operators and using the Fourier representation for the Dirac delta function, $\delta^{4}\left(p-p'\right)=\int d^{4}x/\left(2\pi\right)^{4}e^{-ix\cdot(p-p')}$, we obtain \begin{eqnarray} \label{JJJJ1} \int d^{4}x \ \Bigl(\partial^{\alpha}\partial^{\beta}\Delta^{\nu\rho}\left(x,y\right)\Bigr) \Delta^{\mu\gamma}\left(z,x\right) \nonumber\\ =-\eta^{\nu}_{\ \rho}\eta^{\mu}_{\ \gamma}\int\frac{d^{4}p}{(2\pi)^{4}}\frac{p^{\alpha}p^{\beta}}{p^{4}}\exp\left[-ip\cdot(z-y)\right] \ . \end{eqnarray} Now, using Eqs. (\ref{laj}) and (\ref{JJJJ1}), the total interaction energy between external sources, up to the leading order in the background tensor, reads \begin{eqnarray} \label{EF} E&=&\frac{1}{2T}\int\int d^{4}y \ d^{4}z \ J^{\gamma}\left(z\right)D_{\gamma\rho}\left(z,y\right)J^{\rho}\left(y\right) \ , \end{eqnarray} where we defined the propagator, \begin{eqnarray} \label{EFD} D_{\gamma\rho}\left(z,y\right)&=&\int\frac{d^{4}p}{(2\pi)^{4}}\Biggl[-\frac{\eta_{\gamma\rho}}{p^{2}} +K_{(F)\gamma\alpha\rho\beta}\frac{p^{\alpha}p^{\beta}}{p^{4}}\Biggr]\nonumber\\ & &\times\exp\left[-ip\cdot(z-y)\right] \ . \end{eqnarray} From expression (\ref{EF}) we can compute the interaction energy between field sources up to first order in the background tensor $K_{(F)\gamma\alpha\rho\beta}$. It includes effects due to the birefringent as well as the nonbirefringent components of the background tensor. The exact propagator due to the CPT-even sector was studied in reference \cite{Fontes} for the very restricted case where the background tensor $K_{(F)\mu\nu\alpha\beta}$ was written as a function of a single background vector $v^{\mu}=(v^{0},{\bf v})$, as follows \begin{equation} \label{param} K_{(F)\mu\nu\alpha\beta}=\eta_{\mu\alpha}v_{\nu}v_{\beta}-\eta_{\nu\alpha}v_{\mu}v_{\beta}+\eta_{\nu\beta}v_{\mu}v_{\alpha}- \eta_{\mu\beta}v_{\nu}v_{\alpha} \ , \end{equation} where it can be easily verified that this specific example satisfies the conditions (\ref{simetriasQ}) and (\ref{dtrace}).
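The algebraic conditions (\ref{simetriasQ}) can indeed be checked explicitly for the ansatz (\ref{param}); a minimal numerical sketch of such a check (with an arbitrary, purely illustrative vector $v^{\mu}$) is the following:

\begin{verbatim}
# Numerical check of the Riemann-type symmetries (simetriasQ) for the
# single-vector ansatz of Eq. (param).  The entries of v are arbitrary.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # metric (+,-,-,-)
v = np.array([0.3, 0.1, -0.2, 0.4])         # illustrative background vector

# K^{mu nu al be} = eta^{mu al} v^nu v^be - eta^{nu al} v^mu v^be
#                 + eta^{nu be} v^mu v^al - eta^{mu be} v^nu v^al
K = (np.einsum('ma,n,b->mnab', eta, v, v)
     - np.einsum('na,m,b->mnab', eta, v, v)
     + np.einsum('nb,m,a->mnab', eta, v, v)
     - np.einsum('mb,n,a->mnab', eta, v, v))

# pair-exchange symmetry and antisymmetry in each index pair
assert np.allclose(K, np.einsum('abmn->mnab', K))
assert np.allclose(K, -np.einsum('nmab->mnab', K))
assert np.allclose(K, -np.einsum('mnba->mnab', K))
# cyclic (first-Bianchi-type) identity on the last three indices
assert np.allclose(K + np.einsum('mbna->mnab', K)
                     + np.einsum('mabn->mnab', K), 0.0)
print("symmetries of Eq. (param) verified")
\end{verbatim}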
\section{Point-like charges} \label{III} In this section we study the interaction energy between two stationary point-like charges, following the same approach employed in Refs.~\cite{BaroneHidalgo1,BaroneHidalgo2,Fontes,Fontes2}. The external source that describes this system is given by \begin{eqnarray} \label{corre1Em} J_{\rho}^{CC}({y})=q_{1}\eta^{0}_{\ \rho}\delta^{3}\left({\bf y}-{\bf a}_ {1}\right)+q_{2}\eta^{0}_{ \ \rho}\delta^{3}\left({\bf y}-{\bf a}_ {2}\right) \ , \end{eqnarray} where we have two spatial Dirac delta functions concentrated at the positions ${\bf a}_{1}$ and ${\bf a}_{2}$. The parameters $q_{1}$ and $q_{2}$ are the electric charges and the super-index $CC$ means that we have the interaction between two point-like charges. Substituting Eq. (\ref{corre1Em}) in (\ref{EF}), discarding the self-interaction contributions (the interactions of a given point charge with itself), computing the integrals in the following order: $d^{3}{\bf y}$, $d^{3}{\bf z}$, $dy^{0}$, $dp^{0}$ and $dz^{0}$, using the fact that $\delta(p^{0})=\int dy^{0}/(2\pi)e^{ip^{0}y^{0}}$ and identifying the time interval as $T=\int dz^{0}$, we can write \begin{eqnarray} \label{EEI} E^{CC}&=&q_{1}q_{2}\int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{\exp(i{\bf p}\cdot{\bf a})}{{\bf p}^2}-q_{1}q_{2}\nonumber\\ & &\times\sum_{i,j=1}^{3} K_{(F)}^{ij}{\bf{\nabla}}_{{\bf a}}^{i}{\bf{\nabla}}_{{\bf a}}^{j}\int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{\exp(i{\bf p}\cdot{\bf a})}{{\bf p}^4}, \end{eqnarray} where $i,j=1,2,3$ are spatial indices and we defined the distance between the two electric charges, ${\bf a}={\bf a}_{2}-{\bf a}_{1}=(a^{1},a^{2},a^{3})$, and the differential operator ${\bf {\nabla}}_{{\bf a}}^{i}=\partial/\partial a^{i}$. The components $K_{(F)}^{ij}$ can be arranged in a $3\times3$ matrix, as follows \begin{equation} \label{matri11em} K_{(F)}=\bordermatrix{& \cr & K_{(F)}^{0101} \ \ & K_{(F)}^{0102} \ \ & K_{(F)}^{0103} \ \cr & K_{(F)}^{0201} \ \ & K_{(F)}^{0202} \ \ & K_{(F)}^{0203} \ \cr & K_{(F)}^{0301} \ \ & K_{(F)}^{0302} \ \ & K_{(F)}^{0303} \ \cr}\ \ . \end{equation} Using the fact that \cite{BaroneHidalgo1} \begin{eqnarray} \label{Ener4EM1} \int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{\exp(i{\bf p}\cdot{\bf a})}{{\bf p}^2}=\frac{1}{4\pi |{\bf{a}}|} \ , \nonumber\\ \int\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{\exp(i{\bf p}\cdot{\bf a})}{{\bf p}^4}=-\frac{|{\bf{a}}|}{8\pi} \ , \end{eqnarray} and acting with the differential operators, we obtain \begin{eqnarray} \label{maq} E^{CC}=\frac{q_{1}q_{2}}{4\pi |{\bf{a}}|}\Bigl[1+\frac{1}{2}\Bigl(\sum_{i=1}^{3} K_{(F)}^{ii}-\sum_{i,j=1}^{3} K_{(F)}^{ij}\frac{a^{i}a^{j}}{{\bf a}^{2}}\Bigr)\Bigr] \ . \end{eqnarray} Eq. (\ref{maq}) gives us the interaction energy between two point-like charges for the model (\ref{modeloEM}). If we take $ K_{(F)}^{ij}=0$, the expression (\ref{maq}) reduces to the well-known Coulomb interaction. In Eq. (\ref{maq}) the summation $\sum_{i=1}^{3} K_{(F)}^{ii}$ is the trace of the matrix (\ref{matri11em}), which can be absorbed into the definition of the electric charges $q_{1}$ and $q_{2}$. The extra factor proportional to $\sum_{i,j=1}^{3} K_{(F)}^{ij}a^{i}a^{j}$ is a contribution which evinces the Lorentz-symmetry breaking, leading to an anisotropic interaction between the charges. Taking $q_{1}=q_{2}=q$ and the limit ${\bf a}\to 0$ in Eq. (\ref{maq}), we can show that the self-energy of a point-like charge diverges, contrary to what happens in the nonminimal model considered in reference \cite{FBB2018}. The interaction force between two charges can be obtained by taking the gradient of the energy (\ref{maq}) with respect to ${\bf a}$, as follows \begin{eqnarray} \label{FI} {\bf F}^{CC}&=&-{\bf {\nabla}}_{{\bf a}} E^{CC}\nonumber\\ &=&\frac{q_{1}q_{2}}{4\pi{\bf a}^{2}}\Biggl[\left(1+\frac{1}{2}\sum_{i=1}^{3} K_{(F)}^{ii}-\frac{3}{2}\sum_{i,j=1}^{3} K_{(F)}^{ij}\frac{a^{i}a^{j}}{{\bf a}^{2}}\right){{\hat a}}\nonumber\\ & &+\frac{1}{|{\bf{a}}|}\sum_{i=1}^{3}a^{i}\left(K_{(F)}^{1i}{\hat{x}}+K_{(F)}^{2i}{\hat{y}}+K_{(F)}^{3i}{\hat{z}}\right)\Biggr] \ , \end{eqnarray} where $\hat a$ is a unit vector pointing in the direction of the vector ${\bf a}$. In this paper $\hat{x},\hat{y},\hat{z}$ stand for the usual unit vectors in cartesian coordinates.
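For concreteness, the content of Eqs. (\ref{maq}) and (\ref{FI}) can be illustrated numerically: the sketch below evaluates the anisotropic energy and obtains the force from a finite-difference gradient, recovering the pure Coulomb result when $K_{(F)}^{ij}=0$ (all numerical inputs are arbitrary and merely illustrative, in units where $q_{1}q_{2}=1$):

\begin{verbatim}
# Illustrative evaluation of Eqs. (maq) and (FI) for an arbitrary
# symmetric matrix K of small coefficients K_(F)^{0i0j}.
import numpy as np

def energy_cc(a, K):
    # Eq. (maq): E = 1/(4 pi |a|) [1 + (tr K - K_ij a^i a^j/a^2)/2]
    r = np.linalg.norm(a)
    aniso = np.trace(K) - a @ K @ a / r**2
    return 1.0 / (4.0 * np.pi * r) * (1.0 + 0.5 * aniso)

def force_cc(a, K, h=1e-7):
    # F = -grad_a E, computed by central finite differences
    F = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        F[i] = -(energy_cc(a + e, K) - energy_cc(a - e, K)) / (2.0 * h)
    return F

k1 = 1e-6
K = np.diag([0.0, -k1, k1])          # the example of Eq. (matri1em) below
a = np.array([0.3, -0.7, 1.1])

print(energy_cc(a, K))               # anisotropic Coulomb-like energy
print(force_cc(a, K))                # small non-radial component, O(k1)
print(force_cc(a, np.zeros((3, 3)))) # K = 0: pure radial Coulomb force
\end{verbatim}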
The energy (\ref{maq}) exhibits an anisotropy due to the presence of the background tensor. As a consequence, we have the emergence of a spontaneous torque on an electric dipole, similarly to what was pointed out in Refs.~\cite{Fontes,Fontes2}. In order to investigate this effect, we consider a typical dipole composed of two opposite electric charges $q_{1}=-q_{2}=q$ placed at a fixed distance apart, at the positions ${\bf a}_{1}={\bf R}+\frac{{\bf d}}{2}$ and ${\bf a}_{2}={\bf R}-\frac{{\bf d}}{2}$, where $\bf d$ is taken to be a fixed vector. Let us also choose a simple illustrative example for the background tensor which satisfies the properties (\ref{simetriasQ}) and (\ref{dtrace}), and cannot be expressed in the form (\ref{param}), as follows \begin{eqnarray} \label{parQ1} K_{(F)}^{0l0m}&=& 0 \ , \ \mbox{for}\ \ \ l,m=1,2,3\ \ \mbox{and}\ l\neq m \ , \end{eqnarray} \begin{eqnarray} \label{parQ2} K_{(F)}^{0101}=0 \ , \ K_{(F)}^{0202}=-k_{1}\ , \ K_{(F)}^{0303}=k_{1} \ , \end{eqnarray} and \begin{eqnarray} \label{dtraceeee} K^{lm}_{(F)\ lm}=0 \ , \ \mbox{for}\ \ \ l,m=1,2,3 \ , \end{eqnarray} where $k_{1}$ is a tiny constant. In this specific case, the tensor (\ref{matri11em}) becomes \begin{equation} \label{matri1em} K_{(F)}=\bordermatrix{& \cr & 0 \ \ &0 \ \ &0 \ \cr & 0 \ \ &-k_{1} \ \ &0 \ \cr & 0 \ \ &0 \ \ &k_{1} \ \cr}\ \ . \end{equation} Using Eq. (\ref{matri1em}), we can rewrite the energy (\ref{maq}) in the following way \begin{eqnarray} \label{maq2} E^{CC}_{dipole}&=&-\frac{q^{2}}{4\pi |{\bf{d}}|}\Bigl[1-\frac{k_{1}}{2{\bf{d}}^{2}} \Bigl(-\left({{\bf d}}\cdot\hat{y}\right)^{2}+\left({{\bf d}}\cdot\hat{z}\right)^{2} \Bigr)\Bigr] \nonumber\\ &=&-\frac{q^{2}}{4\pi |{\bf{d}}|}\Bigl\{1-\frac{k_{1}}{2}\Bigl[1-\sin^{2}\theta\Bigl(1+\sin^{2}\phi\Bigr)\Bigr]\Bigr\} \ , \end{eqnarray} where $0<\theta<\pi$ and $0<\phi<2\pi$ are the polar and azimuthal angles, in spherical coordinates (the $z$-axis is the polar axis), for the vector ${\bf{d}}$. From the energy (\ref{maq2}) we can compute two kinds of spontaneous torques acting on the vector ${\bf d}$, one related to the angle $\theta$ and another to the angle $\phi$, as follows \begin{eqnarray} \label{TorqueEM} \tau^{(\theta)}_{dipole}&=&-\frac{\partial E^{CC}_{dipole}}{\partial\theta}=\frac{q^{2}} {8\pi |{\bf{d}}|}k_{1}\left(1+\sin^{2}\phi\right)\sin(2\theta) \ , \nonumber\\ \tau^{(\phi)}_{dipole}&=&-\frac{\partial E^{CC}_{dipole}}{\partial\phi}=\frac{q^{2}} {8\pi |{\bf{d}}|}k_{1}\sin^{2}\theta\sin(2\phi) \ . \end{eqnarray} From expressions (\ref{TorqueEM}) we can see that $\tau^{(\theta)}_{dipole}=0$ when $\theta=0,\pi/2,\pi$, and that for $\theta=\pi/4,\phi=\pi/2$ and $\theta=\pi/4,\phi=3\pi/2$, $\tau^{(\theta)}_{dipole}$ attains its maximum intensity. For $\theta=0,\pi$ and $\phi=0,\pi/2,\pi,3\pi/2,2\pi$ the torque $\tau^{(\phi)}_{dipole}$ vanishes, while for $\theta=\pi/2,\phi=\pi/4$ it has its maximum intensity.
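The expressions (\ref{TorqueEM}) follow from simple differentiation of the energy (\ref{maq2}); this step can be verified symbolically with a short script, for instance:

\begin{verbatim}
# Symbolic check that the torques (TorqueEM) follow from the dipole
# energy (maq2).
import sympy as sp

q, d, k1, th, ph = sp.symbols('q d k_1 theta phi', positive=True)
E = -q**2/(4*sp.pi*d)*(1 - k1/2*(1 - sp.sin(th)**2*(1 + sp.sin(ph)**2)))

tau_th = -sp.diff(E, th)
tau_ph = -sp.diff(E, ph)

target_th = q**2*k1/(8*sp.pi*d)*(1 + sp.sin(ph)**2)*sp.sin(2*th)
target_ph = q**2*k1/(8*sp.pi*d)*sp.sin(th)**2*sp.sin(2*ph)
assert sp.simplify(sp.expand_trig(tau_th - target_th)) == 0
assert sp.simplify(sp.expand_trig(tau_ph - target_ph)) == 0
print("Eq. (TorqueEM) verified")
\end{verbatim}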
\cite{CPT1,CPT2}, which encode the 19 components of the background tensor $K_{(F)\mu\nu\alpha\beta}$ in parity-even and parity-odd subsectors, represented by the matrices ${\tilde{\kappa}}_{e}$ and ${\tilde{\kappa}}_{o}$, respectively, as follows \begin{eqnarray} \label{componentsK1} \left({\tilde{\kappa}}_{e+}\right)^{jk}&=&\frac{1}{2}\left(\kappa_{DE}+\kappa_{HB}\right)^{jk} \ ,\\ \label{componentsK33} \left({\tilde{\kappa}}_{e-}\right)^{jk}&=&\frac{1}{2}\left(\kappa_{DE}-\kappa_{HB}\right)^{jk}-\delta^{jk}{\tilde{\kappa}}_{{\mathrm{tr}}} \ ,\\ \label{componentsK43} {\tilde{\kappa}}_{{\mathrm{tr}}}&=&\frac{1}{3}{\mathrm{tr}}\left(\kappa_{DE}\right) \ ,\\ \label{componentsK53} \left({\tilde{\kappa}}_{o+}\right)^{jk}&=&\frac{1}{2}\left(\kappa_{DB}+\kappa_{HE}\right)^{jk} \ ,\\ \label{componentsK63} \left({\tilde{\kappa}}_{o-}\right)^{jk}&=&\frac{1}{2}\left(\kappa_{DB}-\kappa_{HE}\right)^{jk} \ , \end{eqnarray} where the $3\times3$ matrices $\kappa_{DE},\kappa_{HB},\kappa_{DB},\kappa_{HE}$ are defined in terms of the $K_{(F)}$-tensor components, as follows, \begin{eqnarray} \label{componentsK2} \left(\kappa_{DE}\right)^{jk}&=&-2K_{(F)}^{0j0k} , \left(\kappa_{HB}\right)^{jk}=\frac{1}{2}\epsilon^{jpq}\epsilon^{klm}K_{(F)}^{pqlm}, \nonumber\\ \left(\kappa_{DB}\right)^{jk}&=&-\left(\kappa_{HE}\right)^{kj}=\epsilon^{kpq}K_{(F)}^{0jpq} \ . \end{eqnarray} The matrices $\kappa_{DE}$ and $\kappa_{HB}$ together contain 11 independent components, while $\kappa_{DB}$ and $\kappa_{HE}$ together contain 8 components, which add up to the 19 independent elements of the background tensor $K_{(F)\mu\nu\alpha\beta}$. The 10 coefficients sensitive to birefringence are contained in the matrices ${\tilde{\kappa}}_{e+}$ and ${\tilde{\kappa}}_{o-}$, and the 9 nonbirefringent coefficients are contained in ${\tilde{\kappa}}_{e-}$ and ${\tilde{\kappa}}_{o+}$. The coefficient $k_{1}$ in Eq. (\ref{matri1em}) can be written as follows \begin{eqnarray} \label{componentsK3} k_{1}=\frac{1}{2}\left[{(\tilde{\kappa}}_{e-})^{22}+({\tilde{\kappa}}_{e+})^{22}\right] \ . \end{eqnarray} Substituting (\ref{componentsK3}) in (\ref{maq2}), we arrive at \begin{eqnarray} \label{enerkec} E^{CC}_{dipole}&=&-\frac{q^{2}}{4\pi |{\bf{d}}|}\Bigl\{1-\frac{1}{4}\left[{(\tilde{\kappa}}_{e-})^{22}+({\tilde{\kappa}}_{e+})^{22}\right]\nonumber\\ & &\times\left[1-\sin^{2}\theta\left(1+\sin^{2}\phi\right)\right]\Bigr\} \ . \end{eqnarray} Substituting (\ref{componentsK3}) in (\ref{TorqueEM}), we obtain \begin{eqnarray} \label{TorqueEMke} \tau^{(\theta)}_{dipole}&=&\frac{q^{2}} {16\pi |{\bf{d}}|}\left[({\tilde{\kappa}}_{e-})^{22}+({\tilde{\kappa}}_{e+})^{22}\right]\cr &\ &\times\left(1+\sin^{2}\phi\right)\sin(2\theta) \ , \nonumber\\ \tau^{(\phi)}_{dipole}&=&\frac{q^{2}} {16\pi |{\bf{d}}|}\left[({\tilde{\kappa}}_{e-})^{22}+({\tilde{\kappa}}_{e+})^{22}\right]\cr &\ &\times\sin^{2}\theta\sin(2\phi) \ . \end{eqnarray} From Eq. (\ref{TorqueEMke}) we can notice that the torques induced on an electric dipole by the background tensor (\ref{matri1em}) are effects due to the parity-even components of the background tensor. These torques have contributions due to the nonbirefringent component, $({\tilde{\kappa}}_{e-})^{22}$, as well as contributions due to the birefringent component, $({\tilde{\kappa}}_{e+})^{22}$.
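The map between the $K_{(F)}$ components and the coefficients (\ref{componentsK1})-(\ref{componentsK63}) is straightforward to implement numerically. The sketch below (index conventions as in Eq. (\ref{componentsK2}); matrix indices $j,k=1,2,3$ correspond to array indices $0,1,2$) reproduces the relation (\ref{componentsK3}) for the example (\ref{matri1em}):

\begin{verbatim}
# Numerical implementation of Eqs. (componentsK1)-(componentsK2),
# checked on the k1-example of Eq. (matri1em).
import numpy as np

k1 = 1e-6
K = np.zeros((4, 4, 4, 4))

def fill(mu, nu, al, be, val):
    # one independent component plus its images under (simetriasQ)
    for (m, n, a, b, s) in [(mu, nu, al, be, +1), (nu, mu, al, be, -1),
                            (mu, nu, be, al, -1), (nu, mu, be, al, +1)]:
        K[m, n, a, b] = s * val
        K[a, b, m, n] = s * val

fill(0, 2, 0, 2, -k1)                 # K^{0202} = -k1
fill(0, 3, 0, 3,  k1)                 # K^{0303} = +k1

eps = np.zeros((3, 3, 3))             # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

kDE = -2.0 * K[0, 1:, 0, 1:]                                  # -2 K^{0j0k}
kHB = 0.5 * np.einsum('jpq,klm,pqlm->jk', eps, eps, K[1:, 1:, 1:, 1:])
kDB = np.einsum('kpq,jpq->jk', eps, K[0, 1:, 1:, 1:])
kHE = -kDB.T

ktr  = np.trace(kDE) / 3.0
ke_p = 0.5 * (kDE + kHB)
ke_m = 0.5 * (kDE - kHB) - np.eye(3) * ktr
ko_p = 0.5 * (kDB + kHE)
ko_m = 0.5 * (kDB - kHE)

# Eq. (componentsK3): k1 = [(ke_-)^{22} + (ke_+)^{22}]/2
assert np.isclose(0.5 * (ke_m[1, 1] + ke_p[1, 1]), k1)
print("Eq. (componentsK3) verified")
\end{verbatim}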
Another illustrative example which satisfies the conditions (\ref{simetriasQ}) and (\ref{dtrace}), and is not a particular case of (\ref{param}), is the following \begin{eqnarray} \label{paraQ3} K_{(F)}^{0103}=k_{2}\ , \end{eqnarray} with all other elements of the background tensor $K_{(F)}$ taken to be equal to zero, and where $k_{2}$ is a very tiny constant. In this case, the matrix $K_{(F)}$ in (\ref{matri11em}) reads \begin{equation} \label{matri120} K_{(F)}=\bordermatrix{& \cr & 0 \ \ &0 \ \ &k_{2} \ \cr & 0 \ \ &0 \ \ &0 \ \cr & k_{2} \ \ &0 \ \ &0 \ \cr}\ \ , \end{equation} with the corresponding energy (\ref{maq}), \begin{eqnarray} \label{PTorque} E^{CC}_{dipole}=-\frac{q^{2}}{4\pi |{\bf{d}}|}\left(1-\frac{k_{2}}{2}\sin(2\theta)\cos \phi\right)\ . \end{eqnarray} In the same way, from the energy (\ref{PTorque}) we can obtain two kinds of spontaneous torques on the vector ${\bf d}$, as follows \begin{eqnarray} \label{TorqueEMP} \tau^{(\theta)}_{dipole}&=&-\frac{\partial E^{CC}_{dipole}}{\partial\theta}=-\frac{q^{2}}{4\pi |{\bf{d}}|}k_{2}{\cos (2\theta)}\cos\phi \ , \nonumber\\ \tau^{(\phi)}_{dipole}&=&-\frac{\partial E^{CC}_{dipole}}{\partial\phi}=\frac{q^{2}}{8\pi |{\bf{d}}|}k_{2}\sin(2\theta)\sin\phi \ . \end{eqnarray} If $\theta=\pi/4,3\pi/4$ or $\phi=\pi/2,3\pi/2$ the torque $\tau^{(\theta)}_{dipole}$ vanishes, and for $\theta=0,\phi=0$ and $\theta=\pi/2,\phi=\pi$ it exhibits its maximum intensity. If $\theta=0,\pi$ or $\phi=0,\pi,2\pi$ we have $\tau^{(\phi)}_{dipole}=0$, and for $\theta=\pi/4,\phi=\pi/2$ we have the maximum intensity for $\tau^{(\phi)}_{dipole}$. Using the Lorentz-breaking coefficients defined in Refs.~\cite{CPT1,CPT2}, we can write \begin{eqnarray} \label{componentsK333} k_{2}=-\frac{1}{2}\left[{(\tilde{\kappa}}_{e-})^{31}+({\tilde{\kappa}}_{e+})^{31}\right] \ . \end{eqnarray} Substituting (\ref{componentsK333}) in (\ref{PTorque}), we obtain \begin{eqnarray} \label{PTorque2} E^{CC}_{dipole}&=&-\frac{q^{2}}{4\pi |{\bf{d}}|}\Bigl\{1+\frac{1}{4}\left[{(\tilde{\kappa}}_{e-})^{31}+({\tilde{\kappa}}_{e+})^{31}\right]\nonumber\\ & &\times\sin(2\theta)\cos \phi\Bigr\}\ , \end{eqnarray} and the torques (\ref{TorqueEMP}) become \begin{eqnarray} \label{TorqueEMPke} \tau^{(\theta)}_{dipole}&=&\frac{q^{2}}{8\pi |{\bf{d}}|}\left[{(\tilde{\kappa}}_{e-})^{31}+({\tilde{\kappa}}_{e+})^{31}\right]{\cos (2\theta)}\cos\phi \ , \nonumber\\ \tau^{(\phi)}_{dipole}&=&-\frac{q^{2}}{16\pi |{\bf{d}}|}\left[{(\tilde{\kappa}}_{e-})^{31}+({\tilde{\kappa}}_{e+})^{31}\right]\sin(2\theta)\sin\phi \ . \end{eqnarray} Once again, we have an example where the spontaneous torques are induced by the parity-even sector of the background tensor, in this case with contributions from its birefringent and nonbirefringent components, $({\tilde{\kappa}}_{e+})^{31}$ and $({\tilde{\kappa}}_{e-})^{31}$, respectively. If we use the tensor (\ref{param}) in the Lagrangian (\ref{modeloEM}), it becomes equivalent to the one considered in Ref.~\cite{Fontes}. Substituting Eq.
(\ref{param}) in (\ref{maq}) and (\ref{FI}), we obtain \begin{eqnarray} \label{ENB} E^{CC}(v)&=&\frac{q_{1}q_{2}}{4\pi |{\bf{a}}|}\Biggl[1-(v^{0})^{2}+\frac{1}{2}\Biggl({\bf v}^{2}-\frac{({\bf v}\cdot{\bf a})^{2}}{{\bf a}^{2}}\Biggr)\Biggr], \\ \label{ENBII5} {\bf F}^{CC}(v)&=&\frac{q_{1}q_{2}}{4\pi{\bf a}^{2}}\Biggl[\Biggl(1-(v^{0})^{2}+\frac{1}{2}{\bf v}^{2}\Biggr){\hat a}\nonumber\\ & &+({\bf v}\cdot{\hat a})\Biggl({\bf v}-\frac{3}{2}({\bf v}\cdot{\hat a}){\hat a}\Biggr)\Biggr]\ , \end{eqnarray} which are in perfect agreement with the results of reference \cite{Fontes} when the latter are expanded up to order $v^{2}$.

To put the possible signals of Lorentz-symmetry breaking obtained in this section in a physical context and obtain order-of-magnitude estimates for the Lorentz-symmetry breaking parameters involved, let us consider experimental data for the hydrogen atom, which is a system whose dynamics is governed by the Coulomb (electrostatic) potential. This approach is justified because any deviation from the Coulomb potential would bring out experimental signals in hydrogen atoms. The ground state energy of the hydrogen atom has a relative uncertainty of $\sim6.1\times10^{-9}$ \cite{dadosQED}. From Eq. (\ref{maq}) we can see that the relative correction to the Coulomb behavior imposed by the Lorentz-symmetry breaking is proportional to the coefficients $K_{(F)}^{ij}$, or equivalently, the ones defined in Eqs. (\ref{componentsK1}), (\ref{componentsK33}), (\ref{componentsK43}), (\ref{componentsK53}), and (\ref{componentsK63}). So, by using atomic data, we can overestimate an upper bound for the coefficients (\ref{componentsK1})-(\ref{componentsK63}) as being of order $\sim6.1\times10^{-9}$. We can search for more restrictive estimates by using the hyperfine corrections to the hydrogen spectrum, which are proportional to the fine structure constant, whose relative uncertainty is $\sim2.3\times10^{-10}$ \cite{dadosQED2}. This more restrictive estimate predicts an upper bound of order $\sim2.3\times10^{-10}$ for the coefficients (\ref{componentsK1})-(\ref{componentsK63}). These estimates are far less restrictive than the ones obtained from optical methods \cite{dados}.

In order to investigate whether the torques (\ref{TorqueEMke}) and (\ref{TorqueEMPke}) can be of some relevance in condensed matter physics, let us consider a typical microscopic system of condensed matter, with distances of the order of angstroms ($|{\bf d}|\sim\mbox{\AA}$), electric charges equal in magnitude to the electron's one ($q^{2}\sim 2.899\times10^{-27}\ {\rm N\,m^{2}}$), and the overestimated values for the Lorentz-symmetry breaking parameters obtained from Ref.~\cite{dados} (${(\tilde{\kappa}}_{e-})^{ij}\sim4\times10^{-18}$, ${(\tilde{\kappa}}_{e+})^{ij}\sim2\times10^{-37}$). In this case, we have for the torques (\ref{TorqueEMke}) and (\ref{TorqueEMPke}), $\tau\sim10^{-36}\ {\rm N\,m}$. This very small result suggests that these kinds of effects are out of reach of present-day measurements in condensed matter systems.
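The order of magnitude just quoted can be reproduced directly from Eq. (\ref{TorqueEMke}), setting the angular factor $(1+\sin^{2}\phi)\sin(2\theta)$ to its maximum value of $2$ and keeping only the dominant coefficient $({\tilde{\kappa}}_{e-})^{22}$:

\begin{verbatim}
# Back-of-envelope estimate of the dipole torque, Eq. (TorqueEMke),
# using the input values quoted in the text.
import math

q2    = 2.899e-27    # q^2 in N m^2 (electron charge)
d     = 1e-10        # dipole size ~ 1 angstrom, in m
kappa = 4e-18        # (kappa_e-)^{22}; (kappa_e+)^{22} ~ 2e-37 is negligible

tau_max = q2 / (16.0 * math.pi * d) * kappa * 2.0
print(f"tau_max ~ {tau_max:.1e} N m")   # ~ 5e-36, i.e. of order 1e-36 N m
\end{verbatim}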
\section{A steady line current and a point-like charge} \label{IV} In this Section we study the interaction energy between a steady line current and a point-like stationary charge. Such an interaction does not exist in Maxwell electrodynamics, as discussed in references \cite{Fontes,Fontes2}. The steady line current shall be taken to flow parallel to the $z$-axis, along the straight line located at ${\bf A}=(A^{1},A^{2},0)$. The electric charge is placed at position ${\bf u}$. The external source for this system is given by \begin{eqnarray} \label{corre3Em} J_{\rho}^{SC}\left(y\right)=I\eta^{3}_{\ \rho}\delta^{2}\left({\bf y}_{\perp}-{\bf A}\right)+q\eta^{0}_{\ \rho}\delta^{3}\left({\bf y}-{\bf u}\right) \ , \end{eqnarray} where ${\bf y}_{\perp}=(y^{1},y^{2},0)$ is the position vector perpendicular to the straight line current. The parameters $I$ and $q$ stand for, respectively, the current intensity and the electric charge strength. The super-index $SC$ means that we have a system composed of a steady line current and a point-like charge. Substituting Eq. (\ref{corre3Em}) in (\ref{EF}), discarding self-interaction terms, performing the integrals in the following order: $d^{2}{\bf y_{\perp}}$, $d^{2}{\bf z_{\perp}}$, $dz^{3}$, $dy^{3}$, $dp^{3}$, $dy^{0}$, $dp^{0}$ and $dz^{0}$, and identifying the time interval $\int dz^{0}=T$, we obtain \begin{eqnarray} \label{EE2} E^{SC}=-qI\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}{\bf{\nabla}}_{{\bf a}_{\perp}}^{i}{\bf{\nabla}}_{{\bf a}_{\perp}}^{j}\int\frac{d^{2}{\bf p}_{\perp}}{(2\pi)^{2}}\frac{e^{i{\bf p}_{\perp}\cdot{\bf a}_{\perp}}}{{\bf p}_{\perp}^4}\ , \end{eqnarray} where ${\bf p}_{\perp}=(p^{1},p^{2},0)$ and ${\bf a}_{\perp}={\bf A}-{\bf u}=(A^{1}-u^{1},A^{2}-u^{2},0)=(a^{1},a^{2},0)$ is the distance between the charge and the line current. We also defined the differential operator ${\bf{\nabla}}_{{\bf a}_{\perp}}^{i}=\partial/\partial a^{i}$ (with $i=1,2$) and the $2\times2$ matrix $K_{(F)\perp}^{ij}$ as follows \begin{equation} \label{matri1em2} K_{(F)\perp}=\bordermatrix{& \cr & K_{(F)}^{0131} \ \ &K_{(F)}^{0132} \ \cr & K_{(F)}^{0231} \ \ &K_{(F)}^{0232} \ \cr}\ \ . \end{equation} The integral in Eq. (\ref{EE2}) is divergent. In order to circumvent this problem we proceed as in references \cite{BaroneHidalgo1,Fontes}, introducing a regulator parameter with mass dimension, as follows \begin{eqnarray} \label{EE3} E^{SC}&=&-qI\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\nonumber\\ & &\times{\bf{\nabla}}_{{\bf a}_{\perp}}^{i}{\bf{\nabla}}_{{\bf a}_{\perp}}^{j}\lim_{m\rightarrow 0}\int\frac{d^{2}{\bf p}_{\perp}}{(2\pi)^{2}}\frac{e^{i{\bf p}_{\perp}\cdot{\bf a}_{\perp}}}{({\bf p}_{\perp}^2+m^{2})^{2}} \nonumber \\ &=&qI\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}{\bf{\nabla}}_{{\bf a}_{\perp}}^{i}{\bf{\nabla}}_{{\bf a}_{\perp}}^{j}\nonumber\\ & &\times\lim_{m\rightarrow 0}\frac{1}{2m}\frac{\partial}{\partial m}\int\frac{d^{2}{\bf p}_{\perp}}{(2\pi)^{2}}\frac{e^{i{\bf p}_{\perp}\cdot{\bf a}_{\perp}}}{{\bf p}_{\perp}^2+m^{2}} \ . \end{eqnarray} Using the fact that \cite{BaroneHidalgo1} \begin{eqnarray} \label{int4EM} \int\frac{d^{2}{\bf p}_{\perp}}{(2\pi)^{2}}\frac{\exp(i{\bf p}_{\perp}\cdot{\bf a}_{\perp})}{{\bf p}_{\perp}^2+m^{2}}=\frac{1}{2\pi}K_{0}(m|{{\bf a}}_{\perp}|) \ , \end{eqnarray} and acting with the differential operators, we arrive at \begin{eqnarray} \label{EE4} E^{SC}&=&-\frac{qI}{4\pi}\Bigl\{\lim_{m\rightarrow 0}[-K_{0}(m|{{\bf a}}_{\perp}|)]\sum_{i=1}^{2}K_{(F)\perp}^{ii}\nonumber\\ & &+\lim_{m\rightarrow 0}mK_{1}(m|{{\bf a}}_{\perp}|)\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{|{{\bf a}}_{\perp}|}\Bigr\} \ , \end{eqnarray} where $K_{0}(m|{{\bf a}}_{\perp}|)$ and $K_{1}(m|{{\bf a}}_{\perp}|)$ stand for modified Bessel functions of the second kind.
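The small-$m$ behavior of these Bessel functions, used below in Eq. (\ref{kbessel}), can be checked numerically with standard routines; a minimal sketch (with $|{\bf a}_{\perp}|$ arbitrary) is:

\begin{verbatim}
# Numerical check of the m -> 0 limits quoted in Eq. (kbessel):
# -K0(m a) -> ln(m a/2) + gamma  and  m K1(m a) -> 1/a.
import numpy as np
from scipy.special import kv

a = 2.37                              # arbitrary |a_perp|
gamma = 0.5772156649015329            # Euler constant

for m in (1e-3, 1e-5, 1e-7):
    err0 = -kv(0, m*a) - (np.log(m*a/2.0) + gamma)
    err1 = m*kv(1, m*a) - 1.0/a
    print(f"m = {m:.0e}: residuals {err0:.2e}, {err1:.2e}")
\end{verbatim}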
Using the fact that \cite{Arfken} \begin{eqnarray} \label{kbessel} -K_{0}(m|{{\bf a}}_{\perp}|)&\stackrel{m\rightarrow0}{\rightarrow}&\ln\left(m|{{\bf a}}_{\perp}|/2\right)+\gamma \nonumber\\ mK_{1}(m|{{\bf a}}_{\perp}|)&\stackrel{m\rightarrow0}{\rightarrow}&1/|{{\bf a}}_{\perp}|, \end{eqnarray} where $\gamma$ is the Euler constant, we rewrite Eq. (\ref{EE4}) in the following form \begin{eqnarray} \label{EE5} E^{SC}&=&-\frac{qI}{4\pi}\Biggl\{\lim_{m\rightarrow 0}\left[\ln\left(\frac{m|{{\bf a}}_{\perp}|}{2}\right)+\gamma\right]\sum_{i=1}^{2}K_ {(F)\perp}^{ii}\nonumber\\ & &+\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{{\bf a}_{\perp}^{2}}\Biggr\}\nonumber\\ &=&-\frac{qI}{4\pi}\Biggl\{\lim_{m\rightarrow 0}\Bigl[\ln\left(\frac{m|{{\bf a}}_{\perp}|}{2}\right)+\gamma+\ln(ma_{0})\nonumber\\ & &-\ln(ma_{0})\Bigr]\sum_{i=1}^{2}K_ {(F)\perp}^{ii}+\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{{\bf a}_{\perp}^{2}}\Biggr\} \nonumber\\ &=&-\frac{qI}{4\pi}\Biggl\{\Bigl[\ln\left(\frac{|{{\bf a}}_{\perp}|}{a_{0}}\right)+\gamma-\ln 2+\lim_{m\rightarrow 0}\ln(ma_{0})\Bigr]\nonumber\\ & &\times\sum_{i=1}^{2}K_ {(F)\perp}^{ii}+\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{{\bf a}_{\perp}^{2}}\Biggr\}\ , \end{eqnarray} where, in the third line of Eq. (\ref{EE5}), we added and subtracted the quantity $\ln(ma_{0})$, with $a_{0}$ an arbitrary constant with dimension of length. Neglecting, in the penultimate line of Eq. (\ref{EE5}), the terms that do not depend on the distance ${\bf a}_{\perp}$, since they do not contribute to the force between the line current and the point charge, we obtain \begin{eqnarray} \label{EE6} E^{SC}&=&-\frac{qI}{4\pi}\Bigg[\ln\left(\frac{|{{\bf a}}_{\perp}|}{a_{0}}\right)\sum_{i=1}^{2}K_{(F)\perp}^{ii}\cr\cr &\ &+\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{{\bf a}_{\perp}^{2}}\Bigg]. \end{eqnarray} The interaction energy (\ref{EE6}) is an effect due solely to the Lorentz-symmetry breaking. In Eq. (\ref{EE6}), the coefficient $\sum_{i=1}^{2}K_{(F)\perp}^{ii}$ is the trace of the matrix (\ref{matri1em2}), which can be absorbed into the definition of the current intensity $I$ and of the electric charge $q$. The term proportional to $\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}a^{i}a^{j}$ is a contribution which evinces the Lorentz-symmetry breaking, leading to an anisotropic interaction between the line current and the point-like charge. The force on the point charge can be obtained from Eq. (\ref{EE6}) as follows, \begin{eqnarray} \label{FI2} {\bf F}^{SC}&=&-{\bf {\nabla}}_{{\bf a}_{\perp}}E^{SC}\nonumber\\ &=&-\frac{qI}{4\pi |{{\bf a}}_{\perp}|}\Biggl[\Biggl(\sum_{i=1}^{2}K_{(F)\perp}^{ii}-2\sum_{i,j=1}^{2}K_{(F)\perp}^{ij}\frac{a^{i}a^{j}}{{\bf a}_{\perp}^{2}}\Biggr){\hat{a}_{\perp}}\nonumber\\ &\ &+\frac{1}{|{{\bf a}}_{\perp}|}\sum_{i=1}^{2}{a_{\perp}^{i}} \Bigl[\Bigl(K_{(F)\perp}^{i1}+K_{(F)\perp}^{1i}\Bigr){\hat{x}}\nonumber\\ &\ &+\left(K_{(F)\perp}^{i2}+K_{(F)\perp}^{2i}\right){\hat{y}}\Bigr]\Biggr] \ , \end{eqnarray} where ${\hat{a}_{\perp}}$ is the unit vector pointing in the direction of ${{\bf a}}_{\perp}$ and we used the fact that \begin{equation} \label{aperp} {\bf {\nabla}}_{{\bf a}_{\perp}}=\left(\frac{\partial}{\partial a^{1}},\frac{\partial}{\partial a^{2}},0\right)\ . \end{equation} The second term inside brackets in the interaction energy (\ref{EE6}) leads to a torque on the steady line current when we fix the point-like charge.
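As an illustration of Eqs. (\ref{EE6}) and (\ref{FI2}), the force can be obtained numerically from the transverse gradient of the energy (all inputs arbitrary, in units where $qI=1$):

\begin{verbatim}
# Illustrative evaluation of Eq. (EE6) and of the force F = -grad E.
import numpy as np

k3 = 1e-6
K  = np.array([[0.0, 0.0],
               [0.0, k3]])            # the example of Eq. (matri1em3) below
a0 = 1.0                              # arbitrary reference length

def energy_sc(a):
    # Eq. (EE6), a = (a1, a2) is the transverse separation
    r = np.linalg.norm(a)
    return -1.0/(4.0*np.pi)*(np.log(r/a0)*np.trace(K) + a @ K @ a / r**2)

def force_sc(a, h=1e-7):
    F = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        F[i] = -(energy_sc(a + e) - energy_sc(a - e))/(2.0*h)
    return F

a = np.array([0.6, 0.8])
print(energy_sc(a), force_sc(a))      # both O(k3): vanish when k3 = 0
\end{verbatim}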
In order to calculate this torque in a specific example (which is in accordance with the properties (\ref{simetriasQ}) and (\ref{dtrace})), let us take \begin{eqnarray} \label{paraQ6} K_{(F)}^{0131}=K_{(F)}^{0132}=K_{(F)}^{0231}=0 \ , \ K_{(F)}^{0232}=k_{3} \ , \end{eqnarray} where $k_{3}$ is a tiny constant. In this case, the matrix (\ref{matri1em2}) reads \begin{equation} \label{matri1em3} K_{(F)\perp}=\bordermatrix{& \cr & 0 \ \ &0 \ \cr & 0 \ \ &k_{3} \ \cr}\ \ . \end{equation} Substituting Eq. (\ref{matri1em3}) in (\ref{EE6}), we obtain \begin{eqnarray} \label{EE65} E^{SC}&=&-\frac{qI}{4\pi}k_{3}\left[\ln\left(\frac{|{{\bf a}}_{\perp}|} {a_{0}}\right)+\frac{\left({{\bf a}}_{\perp}\cdot{\hat{y}}\right)^{2}} {{{\bf a}}_{\perp}^{2}}\right]\nonumber\\ &=&-\frac{qI}{4\pi}k_{3}\left[\ln\left(\frac{|{{\bf a}}_{\perp}|} {a_{0}}\right)+\sin^{2}\alpha\right] \ , \end{eqnarray} where $\alpha$ is the angle between the vector ${{\bf a}}_{\perp}$ and the unit vector ${\hat x}$ (the angle of the vector ${{\bf a}}_{\perp}$ in polar coordinates). If we take a setup where the distance between the current line and the point charge is fixed, the energy (\ref{EE65}) leads to a torque on the whole system, as follows \begin{eqnarray} \label{TorqueEMII} \tau^{SC}=-\frac{\partial E^{SC}}{\partial\alpha}=\frac{qI}{4\pi}k_{3}\sin(2\alpha) \ . \end{eqnarray} Notice that for $\alpha=0,\pi/2,\pi,3\pi/2,2\pi$ the torque vanishes, and for $\alpha=\pi/4$ and $\alpha=3\pi/4$ we have its maximum intensity. The constant $k_{3}$ in (\ref{matri1em3}) can be written in terms of the Lorentz-breaking coefficients defined in Refs.~\cite{CPT1,CPT2}, as follows \begin{equation} \label{k3} k_{3}=-\frac{1}{2}\left[(\tilde{\kappa}_{o+})^{21}+(\tilde{\kappa}_{o-})^{21}\right] \ . \end{equation} In this way, the energy (\ref{EE65}) and the torque (\ref{TorqueEMII}) become \begin{eqnarray} \label{EMIIko} E^{SC}&=&\frac{qI}{8\pi}\left[(\tilde{\kappa}_{o+})^{21}+(\tilde{\kappa}_{o-})^{21}\right]\nonumber\\ & &\times\left[\ln\left(\frac{|{{\bf a}}_{\perp}|} {a_{0}}\right)+\sin^{2}\alpha\right] \ .\\ \label{TorqueEMIIko} \tau^{SC}&=&-\frac{qI}{8\pi}\left[(\tilde{\kappa}_{o+})^{21}+(\tilde{\kappa}_{o-})^{21}\right]\sin(2\alpha) \ . \end{eqnarray} From the result (\ref{TorqueEMIIko}) we can notice that the torque acting on the current line, for the specific example (\ref{matri1em3}), is an effect due to the parity-odd components of the background tensor. This torque exhibits contributions coming from the nonbirefringent component $({\tilde{\kappa}}_{o+})^{21}$ and from the birefringent component $({\tilde{\kappa}}_{o-})^{21}$ of the background tensor. An infinite straight line current is an idealization. In a more realistic situation, one might take into account a finite length for the line current. The results obtained in this section must be considered when edge effects are negligible, which happens when the length of the line current is much larger than the distance between the line current and the point charge.
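Returning to the torque (\ref{TorqueEMII}), its defining relation $\tau^{SC}=-\partial E^{SC}/\partial\alpha$ and the stationary angles quoted above can be checked with a short script (illustrative values only):

\begin{verbatim}
# Quick numerical check of Eq. (TorqueEMII) against Eq. (EE65).
import numpy as np

qI, k3 = 1.0, 1e-6
E   = lambda al: -qI/(4*np.pi)*k3*np.sin(al)**2   # alpha-part of (EE65)
tau = lambda al:  qI/(4*np.pi)*k3*np.sin(2*al)    # Eq. (TorqueEMII)

h = 1e-6
for al in np.linspace(0.0, np.pi, 9):
    assert abs(-(E(al+h) - E(al-h))/(2*h) - tau(al)) < 1e-12
print("extrema:", tau(np.pi/4), tau(3*np.pi/4))   # maximum |tau| there
\end{verbatim}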
If we substitute Eq. (\ref{param}) in (\ref{EE6}), we obtain \begin{eqnarray} \label{ENBII} E^{SC}\left(v\right)=\frac{qI}{2\pi}v^{0}v^{3}\ln\left(\frac{|{{\bf a}}_{\perp}|}{a_{0}}\right)+\frac{qI}{4\pi}v^{0}v^{3} \ . \end{eqnarray} The second term on the right-hand side of Eq. (\ref{ENBII}) does not depend on the distance $|{\bf a}_{\perp}|$, so it can be neglected. Therefore, \begin{eqnarray} \label{ENBIII} E^{SC}\left(v\right)=\frac{qI}{2\pi}({{\bf v}\cdot{\hat z}})\ v^{0}\ln\left(\frac{|{{\bf a}}_{\perp}|}{a_{0}}\right) \ , \end{eqnarray} where $v^{3}={{\bf v}\cdot{\hat z}}$ is the projection of the vector ${\bf v}$ along the straight line current. This result is in accordance with reference \cite{Fontes}, up to order $v^{2}$. Once again, let us investigate whether the above effects can have some relevance in condensed matter systems. Taking the highest electric currents achieved in laboratory, of magnitude $10^{5}\ {\rm A}$, the overestimated values from Ref.~\cite{dados}, $(\tilde{\kappa}_{o+})^{ij}\sim 2\times10^{-18}$ and $(\tilde{\kappa}_{o-})^{ij}\sim2\times10^{-37}$, and the charge of the electron, we obtain for the torque (\ref{TorqueEMIIko}) the estimate $\tau^{SC}\sim10^{-30}\ {\rm N\,m}$. In the opposite direction, we can search for Lorentz-symmetry breaking signals in the current jets produced in galaxies, where the highest electric currents in nature are found \cite{AstrJour}, of magnitude $\sim10^{18}\ {\rm A}$. In this case, the torque is estimated as $\tau^{SC}\sim10^{-17}\ {\rm N\,m}$. These small results suggest that the effects of the torque (\ref{TorqueEMIIko}) are beyond any measurement range available nowadays.

\section{Dirac strings} \label{V} In this section we consider, firstly, a system composed of a point-like charge placed at position ${\bf a}$ and a Dirac string. This system is described by the external source \begin{eqnarray} \label{Dcurrent1} J_{\rho}^{DC}\left(y\right)=J_{(D)\rho}\left(y\right)+q\eta_{\ \rho}^{0}\delta^{3}({\bf y}-{\bf a}) \ , \end{eqnarray} where $J_{(D)}^{\rho}\left(y\right)$ stands for the external field source produced by the Dirac string and the second term on the right-hand side is the source produced by the point-like charge. The super-index $DC$ means that we have a Dirac string and a point-like charge. Now, we choose a coordinate system where the Dirac string lies along the $z$-axis with internal magnetic flux $\Phi$. Its corresponding source is given by \cite{Fontes,Fontes2,MBB,FernandaDissertacao,AndersonDissertacao} \begin{equation} \label{Dircurr2} J_{(D)}^{\rho}({y})=i\Phi(2\pi)^{2}\int\frac{d^{4}p}{(2\pi)^{4}}\delta(p^{0})\delta(p^{3})\varepsilon^{0\rho}_{\ \ \nu3}\ p^{\nu}e^{-ip\cdot y}\ , \end{equation} where $\varepsilon^{\mu\nu\alpha\beta}$ stands for the Levi-Civita tensor with $\varepsilon^{0123}=1$. If $\Phi>0$, we have the internal magnetic field along ${\hat z}$. For $\Phi<0$, the internal magnetic field points in the opposite direction. From now on, in this Section, the sub-index $\perp$ means that we are taking just the components of a given vector perpendicular to the string. For instance, ${\bf p}_{\perp}=(p^{1},p^{2},0)$ is the momentum perpendicular to the string. Substituting Eq. (\ref{Dcurrent1}) in (\ref{EF}), discarding self-interaction terms, which do not contribute to the force between the string and the charge (the self-interaction terms are proportional to $q^{2}$ or $\Phi^{2}$), and following steps similar to the ones employed in the previous sections, we can show that \begin{eqnarray} \label{EE10} &&E^{DC}=-\frac{q\Phi}{4\pi |{{\bf a}}_{\perp}|}\Bigl[\left({{\bf a}_{\perp}}\cdot{\hat{x}}\right)K_{(F)}^{0121}-\left({{\bf a}_{\perp}}\cdot{\hat{y}}\right)K_{(F)}^{0212}\Bigr]\nonumber\\ & &\times\lim_{m\rightarrow 0}\Bigl[2mK_{1}(m|{{\bf a}}_{\perp}|)-|{{\bf a}}_{\perp}|m^{2}K_{0}(m|{{\bf a}}_{\perp}|) \Bigr]\ . \end{eqnarray} Now, taking the limit $m\rightarrow0$, we arrive at \begin{eqnarray} \label{EE100} E^{DC}&=&-\frac{q\Phi}{2\pi{\bf a}_{\perp}^{2}}\Bigl[\left({{\bf a}_{\perp}}\cdot{\hat{x}}\right)K_{(F)}^{0121}-\left({{\bf a}_{\perp}}\cdot{\hat{y}}\right)K_{(F)}^{0212}\Bigr] \ . \end{eqnarray} We notice that Eq. (\ref{EE100}) is also an effect due solely to the Lorentz-symmetry breaking. The energy (\ref{EE100}) leads to a force between the Dirac string and the charge, as well as to a torque on the string if we take a setup where the distance between the charge and the string is fixed. Defining the tiny constants $k_{4}$ and $k_{5}$, \begin{eqnarray} \label{paraQ7} K_{(F)}^{0112}=k_{4} \ , \ \ K_{(F)}^{0212}=k_{5}\ , \end{eqnarray} and substituting Eqs. (\ref{paraQ7}) in (\ref{EE100}), we can write \begin{eqnarray} \label{EE1011} E^{DC}&=&\frac{q\Phi}{2\pi{\bf{a}}_{\perp}^{2}}\left[k_{4} \left({\bf{a}}_{\perp}\cdot{\hat{x}}\right)+k_{5} \left({\bf{a}}_{\perp}\cdot{\hat{y}}\right)\right]\nonumber\\ &=&\frac{q\Phi}{2\pi\mid{\bf{a}}_{\perp}\mid} \left(k_{4}\cos\alpha+ k_{5}\sin\alpha\right)\ , \end{eqnarray} which leads to a torque on the Dirac string, as follows \begin{eqnarray} \label{tostring} \tau^{DC}=-\frac{\partial E^{DC}}{\partial\alpha}= \frac{q\Phi}{2\pi\mid{\bf{a}}_{\perp}\mid}\left(k_{4}\sin\alpha -k_{5}\cos\alpha\right)\ , \end{eqnarray} where $\alpha$ is the angle between ${\bf a}_{\perp}$ and ${\hat x}$. In terms of the Lorentz-breaking coefficients defined in Refs.~\cite{CPT1,CPT2}, we have \begin{eqnarray} \label{k4k5} k_{4}=\frac{1}{2}\left[(\tilde{\kappa}_{o+})^{13}+(\tilde{\kappa}_{o-})^{13}\right] \ ,\nonumber\\ k_{5}=\frac{1}{2}\left[(\tilde{\kappa}_{o+})^{23}+(\tilde{\kappa}_{o-})^{23}\right] \ . \end{eqnarray} From Eqs. (\ref{EE1011}), (\ref{tostring}) and (\ref{k4k5}), we can write \begin{eqnarray} E^{DC}&=&\frac{q\Phi}{4\pi\mid{\bf{a}}_{\perp}\mid} \Bigl\{\left[(\tilde{\kappa}_{o+})^{13}+(\tilde{\kappa}_{o-})^{13}\right]\cos\alpha\nonumber\\ & &+\left[(\tilde{\kappa}_{o+})^{23}+(\tilde{\kappa}_{o-})^{23}\right]\sin\alpha\Bigr\} \ .\\ \tau^{DC}&=& \frac{q\Phi}{4\pi\mid{\bf{a}}_{\perp}\mid}\Bigl\{\left[(\tilde{\kappa}_{o+})^{13}\sin\alpha -(\tilde{\kappa}_{o+})^{23}\cos\alpha\right]\nonumber\\ & &+\left[(\tilde{\kappa}_{o-})^{13}\sin\alpha -(\tilde{\kappa}_{o-})^{23}\cos\alpha\right]\Bigr\}\ . \label{tostringko} \end{eqnarray} The torque in Eq. (\ref{tostringko}) is an effect due to the parity-odd components of the background tensor. The first contribution inside brackets on the right-hand side of Eq. (\ref{tostringko}) comes from the nonbirefringent sector of the background tensor and the second one comes from the birefringent sector.
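The torque (\ref{tostring}) follows from simple differentiation of the energy (\ref{EE1011}); a symbolic sketch of this step is:

\begin{verbatim}
# Symbolic check that the torque (tostring) follows from the
# string-charge energy (EE1011).
import sympy as sp

q, Phi, a, k4, k5, al = sp.symbols('q Phi a k_4 k_5 alpha', positive=True)
E_DC = q*Phi/(2*sp.pi*a)*(k4*sp.cos(al) + k5*sp.sin(al))

tau_DC = -sp.diff(E_DC, al)
target = q*Phi/(2*sp.pi*a)*(k4*sp.sin(al) - k5*sp.cos(al))
assert sp.simplify(tau_DC - target) == 0
print("Eq. (tostring) verified")
\end{verbatim}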
Just for completeness, we consider the interaction between a Dirac string and a steady line current, both parallel to each other. The corresponding external source is given by \begin{equation} J^{DS}_{\rho}({y})=J_{\rho(D)}\left({y}\right)+I\eta^{3}_{\ \rho}\delta^{2}\left({\bf y}_{\perp}-{\bf a}_{\perp}\right) \ , \end{equation} where $J_{(D)}^{\rho}\left({y}\right)$ is given by (\ref{Dircurr2}). The super-index $DS$ means that we have a Dirac string and a steady line current. Following steps similar to the ones employed previously, we obtain the result \begin{eqnarray} \label{EE105} {\cal E}^{DS}&=&\frac{E^{DS}}{L}\cr\cr &=&-\frac{I\Phi}{2\pi{\bf a}_{\perp}^{2}}\Bigl[\left({{\bf a}_{\perp}}\cdot{\hat{x}}\right)K_{(F)}^{3121}-\left({{\bf a}_{\perp}}\cdot{\hat{y}}\right)K_{(F)}^{3212}\Bigr]\ , \end{eqnarray} where we identified the length of the Dirac string, $L=\int dz^{3}$, and defined the energy per unit length of the string, ${\cal E}$. From the energy (\ref{EE105}), which is an effect due solely to the Lorentz-symmetry breaking, we can obtain a force between the Dirac string and the steady line current, as well as a torque between them. Taking $\alpha$ to be the angle between ${\bf a}_{\perp}$ and ${\hat x}$, and defining \begin{equation} \label{k6k7} K_{(F)}^{3121}=k_{6} \ , \ \ \ K_{(F)}^{3212}=k_{7} \ , \end{equation} the energy (\ref{EE105}) reads \begin{equation} \label{eneralpha} {\cal E}^{DS}=-\frac{I\Phi}{2\pi\mid{\bf a}_{\perp}\mid}\left(k_{6}\cos\alpha-k_{7}\sin\alpha\right)\ , \end{equation} where $k_{6}$ and $k_{7}$ are tiny constants. Using the Lorentz-breaking coefficients defined in Refs.~\cite{CPT1,CPT2}, we obtain \begin{eqnarray} \label{k6k7energy67} k_{6}=\frac{1}{2}\left[(\tilde{\kappa}_{e-})^{23}-(\tilde{\kappa}_{e+})^{23}\right] \ ,\nonumber\\ k_{7}=\frac{1}{2}\left[(\tilde{\kappa}_{e-})^{13}-(\tilde{\kappa}_{e+})^{13}\right] \ , \end{eqnarray} and the energy (\ref{eneralpha}) becomes \begin{eqnarray} \label{lllll} {\cal E}^{DS}&=&-\frac{I\Phi}{4\pi\mid{\bf{a}}_{\perp}\mid}\Bigl\{\left[-(\tilde{\kappa}_{e+})^{23}\cos\alpha +(\tilde{\kappa}_{e+})^{13}\sin\alpha\right]\nonumber\\ & &+\left[(\tilde{\kappa}_{e-})^{23}\cos\alpha -(\tilde{\kappa}_{e-})^{13}\sin\alpha\right]\Bigr\} \ . \end{eqnarray} The energy in Eq. (\ref{lllll}) is an effect due to the parity-even components of the background tensor. The first contribution inside brackets on the right-hand side of Eq. (\ref{lllll}) is birefringent and the second one is nonbirefringent. With the specific example (\ref{param}), Eq. (\ref{EE105}) becomes \begin{equation} \label{EIIVNB} {\cal E}^{DS}\left(v\right)=\frac{I\Phi}{2\pi{\bf a}_{\perp}^{2}}({{\bf v}\cdot{\hat z}}) \left[{\hat z}\cdot\left({\bf a}_{\perp}\times{\bf v}_{\perp}\right)\right] \ , \end{equation} which is in accordance with \cite{Fontes}. It is worth mentioning that an infinite Dirac string is an idealization of an infinitely long solenoid with vanishing radius and finite internal magnetic flux. In a more realistic situation, one might take into account a non-vanishing radius and a finite length for the solenoid. The results obtained in this section must be considered when the length of the solenoid is much larger than the distance between the string and the probe (point charge or line current), and when the radius of the solenoid is negligible in comparison with this distance.
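The reduction (\ref{EIIVNB}) can be verified symbolically, by inserting the ansatz (\ref{param}) directly into Eq. (\ref{EE105}); a minimal sketch is:

\begin{verbatim}
# Symbolic check that the ansatz (param) inserted in Eq. (EE105)
# reproduces Eq. (EIIVNB).
import sympy as sp

v0, v1, v2, v3, a1, a2, Ic, Phi = sp.symbols(
    'v0 v1 v2 v3 a1 a2 I_c Phi', real=True)
eta = sp.diag(1, -1, -1, -1)
v = [v0, v1, v2, v3]

def Kf(m, n, a, b):
    # contravariant components of Eq. (param)
    return (eta[m, a]*v[n]*v[b] - eta[n, a]*v[m]*v[b]
            + eta[n, b]*v[m]*v[a] - eta[m, b]*v[n]*v[a])

aperp2 = a1**2 + a2**2
E_105 = -Ic*Phi/(2*sp.pi*aperp2)*(a1*Kf(3, 1, 2, 1) - a2*Kf(3, 2, 1, 2))
E_NB  =  Ic*Phi/(2*sp.pi*aperp2)*v3*(a1*v2 - a2*v1)   # Eq. (EIIVNB)
assert sp.simplify(E_105 - E_NB) == 0
print("Eq. (EIIVNB) verified")
\end{verbatim}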
\section{Aharonov-Bohm bound states} \label{Aharonov} In this section we consider a simplified version of the so-called Aharonov-Bohm bound states in the scenario described by the model (\ref{modeloEM}). For this task we start by considering the propagator (\ref{EFD}) and the field configuration produced by a given external source, \begin{eqnarray} \label{fields} A^{\mu}\left(x\right)&=&\int d^{4}y \ D^{\mu\nu}\left(x,y\right)J_{\nu}\left(y\right)\nonumber\\ &=&\int d^{4}y\Bigl[D_{M}^{\mu\nu}\left(x,y\right)+D_{LV}^{\mu\nu}\left(x,y\right)\Bigr]J_{\nu}\left(y\right) , \end{eqnarray} where we defined the standard Maxwell propagator, \begin{eqnarray} \label{propm} D_{M}^{\mu\nu}\left(x,y\right)=-\int\frac{d^{4}p}{(2\pi)^{4}}\frac{\eta^{\mu\nu}}{p^{2}}e^{-ip\cdot(x-y)} \ , \end{eqnarray} and the correction \begin{eqnarray} \label{proplv} D_{LV}^{\mu\nu}\left(x,y\right)&=&\int\frac{d^{4}p}{(2\pi)^{4}}K_{(F)}^{\mu\alpha\nu\beta} \ \frac{p_{\alpha}p_{\beta}}{p^{4}}e^{-ip\cdot(x-y)} \ . \end{eqnarray} With the aid of Eqs. (\ref{fields}), (\ref{propm}) and (\ref{proplv}) we can write \begin{eqnarray} \label{fields2} A^{\mu}\left(x\right)=A_{M}^{\mu}\left(x\right)+\Delta A_{LV}^{\mu}\left(x\right) \ , \end{eqnarray} where \begin{eqnarray} \label{defAs} A_{M}^{\mu}\left(x\right)&=&\int d^{4}y D_{M}^{\mu\nu}\left(x,y\right)J_{\nu}(y)\cr\cr \Delta A_{LV}^{\mu}\left(x\right)&=&\int d^{4}y D_{LV}^{\mu\nu}\left(x,y\right) J_{\nu}\left(y\right) \ . \end{eqnarray} In a simplified setup, let us calculate the field configuration produced by a Dirac string (\ref{Dircurr2}). In this case, the first of Eqs. (\ref{defAs}) becomes \begin{eqnarray} \label{fieldM} A_{M(D)}^{\mu}\left(x\right)=\frac{\Phi}{2\pi}\Bigl(0,\frac{-x^{2}}{(x^{1})^{2}+(x^{2})^{2}},\frac{x^{1}}{(x^{1})^{2}+(x^{2})^{2}},0\Bigr)\ . \end{eqnarray} The potential (\ref{fieldM}) produces a vanishing electromagnetic field away from the $z$-axis. For the Lorentz-violating correction, we substitute Eq. (\ref{Dircurr2}) in the second of Eqs. (\ref{defAs}) and perform some simple manipulations, \begin{eqnarray} \label{fieldlv} \Delta A_{LV(D)}^{\mu}\left(x\right)=\cr\cr =\Phi \sum_{i,j=1}^{2}{\hat z}\cdot\Bigl[{\bf {\nabla}}_{{\bf x}_{\perp}}\times\Bigl(K_{(F)}^{\mu i1j}\ {\hat{x}}+K_{(F)}^{\mu i2j}\ {\hat{y}}\Bigr)\Bigr]\cr\cr \times{\bf{\nabla}}_{{\bf x}_{\perp}}^{i}{\bf{\nabla}}_{{\bf x}_{\perp}}^{j}\lim_{m\rightarrow 0}\frac{1}{2m}\frac{\partial}{\partial m}\int\frac{d^{2}{\bf p}_{\perp}}{(2\pi)^{2}}\frac{e^{i{\bf p}_{\perp}\cdot{\bf x}_{\perp}}}{{\bf p}_{\perp}^2+m^{2}}\ ,\ \ \ \ \ \ \ \end{eqnarray} where ${\bf x}_{\perp}=\left(x^{1},x^{2},0\right)$. The potential (\ref{fieldlv}) can be computed by following the same procedures employed in the previous section. The result is \begin{eqnarray} \label{fieldlv2} \Delta A_{LV(D)}^{\mu}\left(x\right)=-\frac{\Phi}{2\pi{\bf x}_{\perp}^{2}}\left(x^{1}K_{(F)}^{\mu 121}-x^{2}K_{(F)}^{\mu 212}\right) \ . \end{eqnarray} From Eq. (\ref{fieldlv2}), we have a $0$-component for the field, \begin{eqnarray} \label{fieldlv3} \Delta A_{LV(D)}^{0}\left(x\right)=-\frac{\Phi}{2\pi{\bf x}_{\perp}^{2}}\left(x^{1}K_{(F)}^{0121}-x^{2}K_{(F)}^{0 212}\right) \ , \end{eqnarray} and a corresponding electric field outside the string, \begin{eqnarray} \label{electric} \Delta{\bf{E}}&=&-{\bf{\nabla}}_{{\bf x}_{\perp}}\left(\Delta A_{LV(D)}^{0}\right)\nonumber\\ &=&\frac{\Phi}{2\pi{\bf x}_{\perp}^{2}}\Biggl[-\frac{2}{|{\bf x}_{\perp}|}\left(x^{1}K_{(F)}^{0121}-x^{2}K_{(F)}^{0212}\right){\hat{x}}_{\perp}\nonumber\\ & &+\left(K_{(F)}^{0121} \ {\hat{x}}-K_{(F)}^{0212} \ {\hat{y}}\right)\Biggr] \ , \end{eqnarray} where ${\hat{x}_{\perp}}$ is a unit vector pointing in the direction of ${{\bf x}}_{\perp}$.
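The field (\ref{electric}) can be recovered from the potential (\ref{fieldlv3}) by direct differentiation; a short symbolic check is:

\begin{verbatim}
# Symbolic check that Delta E = -grad(Delta A^0), with A^0 from
# Eq. (fieldlv3), reproduces Eq. (electric).
import sympy as sp

x1, x2, Phi, K1, K2 = sp.symbols('x1 x2 Phi K1 K2', real=True)
rho2 = x1**2 + x2**2                      # |x_perp|^2
rho = sp.sqrt(rho2)
A0 = -Phi/(2*sp.pi*rho2)*(x1*K1 - x2*K2)  # K1 = K^{0121}, K2 = K^{0212}

E = [-sp.diff(A0, x1), -sp.diff(A0, x2)]
E_paper = [Phi/(2*sp.pi*rho2)*(-(2/rho)*(x1*K1 - x2*K2)*x1/rho + K1),
           Phi/(2*sp.pi*rho2)*(-(2/rho)*(x1*K1 - x2*K2)*x2/rho - K2)]
assert all(sp.simplify(E[i] - E_paper[i]) == 0 for i in (0, 1))
print("Eq. (electric) verified")
\end{verbatim}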
From (\ref{fieldM}) and (\ref{fieldlv2}) the vector potential reads \begin{eqnarray} \label{pv1} {\bf{A}}_{(D)}&=&\frac{\Phi}{2\pi{\bf x}_{\perp}^{2}}\Bigl[\left(1-K_{(F)}^{1212}\right)\left(-x^{2}{\hat{x}}+x^{1}{\hat{y}}\right)\nonumber\\ & &-\left(x^{1}K_{(F)}^{3121}-x^{2}K_{(F)}^{3212}\right){\hat{z}}\Bigr] \ . \end{eqnarray} From the vector potential (\ref{pv1}) we have an induced magnetic field outside the string, as follows \begin{eqnarray} \label{magnetic} \Delta{\bf{B}}&=&{\bf{\nabla}}_{{\bf x}_{\perp}}\times{\bf{A}}_{(D)}\nonumber\\ &=&-\frac{\Phi}{2\pi{\bf x}_{\perp}^{2}}\Biggl[-\Bigl(K_{(F)}^{3212} \ {\hat{x}}+K_{(F)}^{3121} \ {\hat{y}}\Bigr)\cr\cr &\ &+\frac{2}{{\bf x}_{\perp}^{2}}\left(x^{1}K_{(F)}^{3121}-x^{2}K_{(F)}^{3212}\right)\left(-x^{2}{\hat{x}}+x^{1}{\hat{y}}\right)\Biggr] \ . \end{eqnarray} The electric and magnetic fields, (\ref{electric}) and (\ref{magnetic}), can induce physical phenomena outside the string. Now, let us take a very simple and illustrative example for the background tensor (which satisfies the conditions (\ref{simetriasQ}) and (\ref{dtrace})), as follows \begin{eqnarray} \label{parak71} K_{(F)}^{0121}&=&K_{(F)}^{0212}=K_{(F)}^{3121}=K_{(F)}^{3212}=0 \ ,\nonumber\\ K_{(F)}^{1212}&=&k_{8} \ , \end{eqnarray} where $k_{8}$ is a tiny constant. In this case, the $0$-component of the field (\ref{fieldlv3}) vanishes and the vector potential (\ref{pv1}) becomes \begin{eqnarray} \label{pv3} {\bf{A}}_{(D)}&=&\frac{\Phi\left(1-k_{8}\right)}{2\pi{\bf x}_{\perp}^{2}}\left(-x^{2}{\hat{x}}+x^{1}{\hat{y}}\right) \cr\cr &=&\frac{\Phi\left(1-k_{8}\right)}{2\pi\rho}{\hat{\phi}} \ , \end{eqnarray} where we used cylindrical coordinates, with the radial coordinate $\rho=|{\bf x}_{\perp}|=\sqrt{(x^{1})^{2}+(x^{2})^{2}}$ and with $\hat{\phi}$ standing for the unit vector of the azimuthal coordinate. Notice that the potential (\ref{pv3}) does not produce any electromagnetic field outside the string. It is well known in the literature \cite{Grifthis,Sakurai} that the energy levels of a two-dimensional quantum rigid rotor are modified when it circumvents an infinite solenoid. In this case we have a very simplified version of the so-called Aharonov-Bohm bound states \cite{Sakurai}. Taking a quantum rigid rotor composed of a non-relativistic particle with mass $M$ and electric charge $q$, restricted to move along a ring of radius $b$, adopting a coordinate system where the ring lies on the plane $z=0$, centered at the origin, considering a Dirac string placed along the $z$-axis with internal magnetic flux $\Phi$, and using Eq. (\ref{pv3}), we can write the Hamiltonian for the charged particle in cylindrical coordinates, \begin{eqnarray} \label{ahab1} H=-\frac{1}{2Mb^{2}}\frac{d^{2}}{d\phi^{2}}+\frac{i q\Phi\left(1-k_{8}\right)}{2\pi Mb^{2}}\frac{d}{d\phi}\cr\cr +\frac{q^{2}\Phi^{2}\left(1-k_{8}\right)^{2}}{8\pi^{2}Mb^{2}} \ , \end{eqnarray} where it is implicit that we must discard terms proportional to $k_{8}^{2}$. The energy eigenfunctions of the Hamiltonian (\ref{ahab1}) are given by \begin{eqnarray} \label{eigenfun} \Psi\left(\phi\right)= Be^{in\phi}\ \ ,\ \ n=0,\pm1,\pm2,\cdots \end{eqnarray} where $B$ is a normalization constant.
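Acting with the Hamiltonian (\ref{ahab1}) on these eigenfunctions and expanding to first order in $k_{8}$ yields the spectrum given below; this step can be checked symbolically:

\begin{verbatim}
# Symbolic derivation of the spectrum of the Hamiltonian (ahab1) on
# exp(i n phi), expanded to first order in k8 (cf. Eq. (eneraha)).
import sympy as sp

n, M, b, q, Phi, k8, phi = sp.symbols('n M b q Phi k_8 phi', real=True)
psi = sp.exp(sp.I*n*phi)

H_psi = (-sp.diff(psi, phi, 2)/(2*M*b**2)
         + sp.I*q*Phi*(1 - k8)/(2*sp.pi*M*b**2)*sp.diff(psi, phi)
         + q**2*Phi**2*(1 - k8)**2/(8*sp.pi**2*M*b**2)*psi)

E_exact = sp.simplify(H_psi/psi)
E_first = sp.series(E_exact, k8, 0, 2).removeO()
target = ((n - q*Phi/(2*sp.pi))**2/(2*M*b**2)
          + q*Phi*k8/(2*sp.pi*M*b**2)*(n - q*Phi/(2*sp.pi)))
assert sp.simplify(sp.expand(E_first - target)) == 0
print("Eq. (eneraha) verified")
\end{verbatim}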
Up to order $k_{8}$, the corresponding energy levels are \begin{eqnarray} \label{eneraha} E_{n}=\frac{1}{2Mb^{2}}\left(n-\frac{q\Phi}{2\pi}\right)^{2}+\frac{q\Phi k_{8}}{2\pi Mb^{2}}\left(n-\frac{q\Phi}{2\pi}\right) \ . \end{eqnarray} The first term on the right-hand side of (\ref{eneraha}) is the well-known Aharonov-Bohm energy \cite{Grifthis} and the second term is a correction due to the Lorentz-symmetry breaking. In terms of the SME coefficients defined in Refs.~\cite{CPT1,CPT2}, we can write \begin{eqnarray} \label{k6} k_{8}=\frac{1}{2}\left[(\tilde{\kappa}_{e+})^{33}-(\tilde{\kappa}_{e-})^{33}-{\tilde{\kappa}}_{{\mathrm{tr}}}\right] \ . \end{eqnarray} Therefore, \begin{eqnarray} \label{enerahaketr} E_{n}&=&\frac{1}{2Mb^{2}}\left(n-\frac{q\Phi}{2\pi}\right)^{2}+\frac{q\Phi}{4\pi Mb^{2}}\left(n-\frac{q\Phi}{2\pi}\right)(\tilde{\kappa}_{e+})^{33}\nonumber\\ & &-\frac{q\Phi}{4\pi Mb^{2}}\left(n-\frac{q\Phi}{2\pi}\right)\left[(\tilde{\kappa}_{e-})^{33}+{\tilde{\kappa}}_{{\mathrm{tr}}}\right]\ . \end{eqnarray} The energy (\ref{enerahaketr}) is due to the parity-even components of the background tensor. The second term on the right-hand side is a birefringent contribution and the third one is nonbirefringent.

\section{Conclusions and perspectives} \label{conclusoes} In this paper we have investigated the interactions between external sources for the gauge field in the CPT-even sector of the SME. We have focused on physical phenomena which have no counterpart in Maxwell electrodynamics. We have obtained our results in $3+1$ dimensions and treated the background tensor $K_{(F)\alpha\beta\sigma\tau}$ perturbatively, up to first order. Specifically, we have shown that a spontaneous torque emerges on a classical electromagnetic dipole and that there is an interaction between a steady straight line current and a point-like charge. We have also investigated some phenomena due to the presence of a Dirac string. We have shown that the string can interact with a point charge as well as with a straight steady line current in the Lorentz-symmetry breaking scenario. We have shown that our results are in agreement with the corresponding ones obtained in reference \cite{Fontes}, up to lowest order in the background vector, for the very restrictive situation where the background tensor can be written in terms of just one single background vector, Eq. (\ref{param}), the only case considered in Ref.~\cite{Fontes}. We have studied the so-called Aharonov-Bohm bound states, for a $2$-dimensional quantum rigid rotor, in the Lorentz-symmetry breaking scenario. We have obtained the energy levels for a specific example of the background tensor. The results concerning deviations in the Coulombian behavior of the interaction energy between electric charges were used to estimate, heuristically, upper bounds on the Lorentz-symmetry breaking parameters involved. The obtained estimates are far less restrictive than the ones calculated with optical experimental data. Some numerical estimates have been made in order to investigate whether some of the obtained effects could be relevant for condensed matter systems. As a final remark, we point out that in this paper all the field sources considered are spinless. An interesting extension of this work would be the investigation of spin effects in the interactions between field sources. \begin{acknowledgments} L.H.C. Borges thanks the S\~ao Paulo Research Foundation (FAPESP), grant 2016/11137-5, for financial support. F.A.
Barone thanks CNPq (Brazilian agency), grants 311514/2015-4 and 313978/2018-2, for financial support. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Photonic crystals (PC) are structures in which the dielectric constant varies periodically in space. If the contrast in dielectric constant is large enough, a frequency gap may occur in which light cannot propagate, analogous to the electronic band gap in semiconductors. \cite{Yablonovitch, John, Soukoulis} The emission of light inside a photonic crystal can be strongly manipulated in the region of the photonic band gap, resulting in intense interest in optoelectronics and photocatalysis applications. However, the fabrication of such crystals at visible wavelengths has posed a formidable challenge. Recently, several groups \cite{Vos,Imhof,Velev,Zakhidov,Holland} have successfully fabricated materials in which pores are arranged in periodic arrays on a length scale comparable to optical wavelengths. Such materials are very good candidates for use as photonic crystals. However, in all the previous publications, a definitive signature of the existence of a photonic gap is still missing. We present a novel ceramic fabrication technique for photonic crystal thin films at visible wavelengths. We show that the reflectance spectra of these crystals shift systematically with the pore size, providing evidence of photonic crystal effects. Colloidal crystals are very attractive candidates for use in fabrication of optical photonic crystals. Monodisperse colloidal suspensions of silica or polystyrene spheres can self-assemble into close-packed structures at optical length scales, with excellent long-range periodicity. However, to observe photonic gaps requires an inverse structure where lower dielectric spheres (refractive index $n_1$) are embedded in an interconnected higher dielectric background (refractive index $n_2$). Optimum photonic effects require a low filling ratio (20-30 $\%$) of the dielectric background.\cite{theory} A major difficulty lies in the introduction of an interconnected dielectric background for the colloidal spheres, and the subsequent removal of the spheres, by calcination or etching, to achieve the desired dielectric contrasts. We use titania as the background dielectric filling medium, which has a refractive index of $\sim 2.6 -2.8$ at optical wavelengths, with negligible absorption above 400 nm. In contrast to previous work \cite{Vos, Zakhidov, Holland} where the colloidal template is first assembled and the titania is introduced afterwards in a sol-gel process, we start with a slurry of nano-crystalline titania suspension and monodisperse polystyrene spheres. A few drops of this slurry are spread on a glass substrate and allowed to dry slowly over a period of $\sim 24$ hours in a humidity chamber. The samples are then pressed in a cold isostatic press to improve the initial green density of the as-dried samples and to reduce stress cracks during subsequent heat treatment. The sample is slowly heated to $520^{\circ}{\rm C}$ for $5$ hours, whereby the polystyrene spheres are burned off, leaving behind air spheres in a titania matrix. Thin films with dimensions $\sim 10mm\times 2-3mm$ can be reproducibly synthesized in this way in much shorter times ($\sim$1 day) than with the infiltration technique.\cite{Vos, Zakhidov, Holland} Optical inspection of our samples reveals shiny regions with characteristic colors that depend on the size of the polystyrene spheres used. This is especially clear when the samples are viewed under the microscope. Samples fabricated with 395 nm spheres exhibit bright green regions.
With larger spheres (479 nm), the color shifts to salmon red. Unlike previous reports,\cite{Vos, Zakhidov, Holland} our films exhibit uniform color over large regions millimeters in size. Wide-view scanning electron microscope (SEM) images (Fig. 1a) show large domains with excellent order extending from $\sim 50\,\mu$m to more than $\sim 100\,\mu$m. Also visible (Fig. 1a) are single-height steps separating large domains of hexagonally ordered regions. The domains are well ordered across drying cracks in the sample (Fig. 1a), indicating that ordering in the samples occurs upon deposition and is not disrupted by the drying and heating process. Our crystals exhibit considerably better short-range and long-range order than the macroporous materials fabricated with sol-gel methods\cite{Imhof}. At still lower magnification, the scan periodicity of the CRT display and the object periodicity interact, producing fringes or a moir\'e pattern.\cite{Goldstein} This pattern (Fig. 1b) illustrates well the domain orientation and strain within the individual domains. Higher-magnification SEM images (Fig. 1c) reveal hollow regions of air spheres that are very well ordered in a triangular lattice. There are three dark regions inside each hollow region corresponding to the air spheres of the underlying layer, indicating that the spheres are indeed close-packed. The SEM images indicate that the crystalline grains in the film are highly oriented with the close-packed planes parallel to the substrate. Preferential orientation also exists in the close-packed plane, probably due to stresses developing during the drying process. This alignment of crystal grains may prove very useful for applications and measurements, especially in cases where a full photonic band gap does not exist in all directions of propagation in the crystal. Determination of the lattice constant indicates a small shrinkage of $\leq 5\%$ in the lateral direction of the film due to the heat treatment and densification of the titania network. Experimental thickness measurements, before and after the pressing and heating process, indicate a larger shrinkage in the direction perpendicular to the film.

Since the ordered films are thick ($\geq 10\,\mu$m), their transmission is small and the major optical signature is found in reflectance measurements. The specular reflectance at near-normal incidence from our nanostructured films is shown (Fig. 2a) for different sizes of polystyrene spheres as templates. The initial sphere sizes (Fig. 2a, legend) were measured directly from SEM images of ordered arrays of polystyrene spheres. The prominent feature is a specular reflectivity peak for each structure that shifts systematically from 1120 nm to 521 nm over the range of photonic crystals. The wavelength of the specular peak corresponds very well to the visual color of the samples. The larger-pore samples have reflectivity peaks in the near-infrared. In addition, there is a gradual but featureless increase in reflectivity at longer wavelengths (above 1000 nm) in several samples (Fig. 3a). This is due to the rough surface of the PCs appearing smoother when probed at longer wavelengths. This increases the specular reflectivity, with an accompanying decrease of the diffuse reflectance at longer wavelengths, which is also observed. The position of the observed reflectivity peak scales remarkably well with the diameter of the spheres (Fig. 2b), indicating that it is an intrinsic feature of the photonic crystals.
This is the first observation of the optical signature of a photonic crystal together with the required scaling with sphere size; no such signature has been seen in any previous work \cite{Vos,Imhof,Velev,Zakhidov,Holland} on such templated PCs. We performed photonic band calculations and calculated reflectivities from transfer matrix simulations\cite{later}, and find that the peak arises from the wide stop band in the stacking direction for close-packed structures. For the fcc structure this corresponds to the stop band between the lowest bands 2 and 3. Our calculations find that the existence and position of the stop band in the stacking direction are insensitive to the stacking sequence of the spheres (fcc (ABC) or hcp (ABAB)) \cite{later}. The stop band corresponds to the known pseudogap in the photonic density of states, and persists even for lower $n \sim 2$ over a large range of filling fractions. Such a refractive index for titania may be expected from the considerably lower density of the solid titania matrix resulting from these processing conditions, as suggested by earlier sintering studies of nanocrystalline titania\cite{Hahn}. Further porosimetry measurements \cite{later} will be performed to estimate the density of the titania matrix. The refractive index of the background skeletal titania may be improved by sintering at higher temperatures\cite{Hahn}. Quantitative calculations of peak wavelengths will be presented later, when accurate interlayer spacings are determined from optical diffraction measurements, but preliminary estimates indicate that the observed peak frequencies are consistent with what we know about the geometry and filling fraction of the films.

The success in the fabrication of large-area optical photonic crystals using rapid, economical, and reproducible ceramic techniques will open the way towards the experimental observation of many interesting effects involving the control of light emission and propagation in these materials.

We thank J. Kavanaugh, G. Tuttle, P. Canfield, A. Panchula, H. Kang, and W. Leung for help with various measurements and C. M. Soukoulis and S. John for helpful discussions. This research was supported by the Office of Basic Energy Sciences, and Ames Laboratory is operated for the U.S. DOE by Iowa State University.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Conclusions} \labelSec{conclusions} We have presented an algorithm to generate higher-order conservative finite volume discretizations for Poisson's equation on cut cell grids. The Poisson operator is written in terms of face fluxes, which we approximate using a polynomial interpolant. The key to the method is to use weighted least squares to generate stable stencils. In particular, the linear system for the face flux stencil is underdetermined, and we can use weights to pick a stable stencil from the space of solutions. By applying the method to a variety of geometries, we have demonstrated that the method achieves second and fourth order accuracy. In each of these examples, we have also shown that the discrete Laplacian operator is stable; that is, its eigenvalues have strictly negative real parts.

We are currently studying the effect of different weighting functions on the operator spectrum for a future theory paper. We are also looking into how other choices, like neighbor selection and centering of the interpolant, modify the spectrum. Our current method produces a Laplacian operator with eigenvalues that depend on the inverse of the smallest volume fractions. The operator spectrum also contains a small cluster of eigenvalues with non-negligible imaginary components. We hope to alleviate both problems in the future. Finally, we plan to study the effect of using standard regular stencils in the interior for the theory paper.

The method described in this paper is applicable to other problems in div-flux form. For example, we are working on a fourth-order method for advection-diffusion. For this problem, we are using an upwind weighting system. Another application for the future is the variable-coefficient Poisson equation with smoothly varying coefficients.

\section{Introduction} There are many numerical approaches to solve Poisson's equation in complex geometries. Green's function approaches \cite{McKenneyFMM1994, Greengard1996, Cheng1999}, such as the fast multipole method, are fast and near-optimal in complexity, but they are not conservative. Also, they cannot be easily extended to variable- and tensor-coefficient Poisson operators, which are important in the earth sciences and in multi-material problems. Another popular approach is to use the finite element method, which has a number of advantages. These advantages include negative-definite discrete operators, higher-order accuracy, and ease of extension to variable coefficients. The conditioning and accuracy of the discrete finite element operator can be strongly mesh-dependent, however \cite{BRENNERFEMBOOK}. Unfortunately, generating meshes with higher-order conforming elements for complex 3D domains is still an expensive, globally-coupled computation, and an open area of research \cite{PirzadehFEMGridGen2010}. This motivates the need for simpler grid generation.

Cut cells are a simple way of addressing this. In a cut cell (or embedded boundary) method, the discrete domain is the intersection of the complex geometry with a regular Cartesian grid. Such intersections are local, and can be calculated very efficiently in parallel, enabling fast computation of solution-dependent moving boundaries \cite{AftosmisBergerMelton, EBGeometryPaper}. The complexity of dealing with complex geometries is shifted back to the discretization approach.
The cut-cell approach has been used successfully to solve Poisson's equation in finite volume \cite{johansenColella:1998, schwartzETAL:2006} and finite difference \cite{GibouFedkiw2005, LevequeLing1994} discretizations. For many problems, such as heat and mass transfer, discrete conservation is important. Finite volume methods are discretely conservative by construction because they are in discrete flux-divergence form \cite{LEVEQUEBOOK}. Previous finite volume methods for Poisson's equation are first order in truncation error near the embedded boundary and second order in solution error \cite{johansenColella:1998, schwartzETAL:2006}. We present a method for generating higher-order finite volume discretizations for Poisson's equation on Cartesian cut cell grids in two and three dimensions. The discretization is in flux-divergence form. We compute stencils for the flux by solving small weighted least squares systems. In principle, the method can produce discretizations for any given order of accuracy. In Section \refSec{results}, we apply the method to solve Poisson's equation on a number of geometries, and we demonstrate that the method achieves both second order and fourth order convergence in both truncation and solution error. This paper is organized as follows. In Section \refSec{method}, we introduce our method. In Section \refSec{results}, we present 2D and 3D examples that demonstrate convergence with grid refinement. We also show that the Laplacian operator has only stable eigenvalues. Finally, we demonstrate that the method is robust under small perturbations in the geometry. \section{Method} \labelSec{method} We design a conservative finite volume method to solve Poisson's equation \begin{equation*} \Delta \phi = \rho \end{equation*} for the potential $\phi$ on a domain $\Omega$ with a charge distribution $\rho$. First, we write the equation in flux-divergence form: \begin{equation*} \nabla \cdot \nabla \phi = \rho . \end{equation*} Integrating over an arbitrary region $V \subseteq \Omega$ and applying the divergence theorem gives \begin{equation} \labelEq{laplacian over V} \int_{V} \nabla \cdot \nabla \phi d\mathcal{V}= \int_{\partial V} \nabla \phi \cdot {\boldsymbol{n}} d\mathcal{A}. \end{equation} Our method is based on using a higher-order interpolant of $\phi$ to approximate the flux $\int_{\partial V} \nabla \phi \cdot {\boldsymbol{n}} d\mathcal{A}$. \subsection{Spatial Notation} Our computational domain is a set of distinct, contiguous volumes, $\{\mathcal{V}_v\}$, each of which is part of an intersection of $\Omega$ with a cell $\mathcal{V}_\ib$ in a regular grid of grid spacing $h$, \begin{equation} \nonumber \mathcal{V}_\ib = [i_1 h, (i_1 + 1)h] \otimes ... \otimes [i_D h, (i_D + 1)h] \equiv [\ib h, (\ib + \ub)h] , \end{equation} where the index $\ib = (i_1, ..., i_D)$, and $\ub = (1, ..., 1)$. Note that we use the index $v$ to uniquely identify a volume; for a given regular cell $\mathcal{V}_\ib$, there may be more than one $\mathcal{V}_v$ such that $\cup \{\mathcal{V}_v \cap \mathcal{V}_\ib \} = \mathcal{V}_\ib \cap \Omega$, especially in the case of very complex geometries. The grid-aligned faces associated with $\mathcal{V}_\ib$ in the $\pm d$ directions are identified by an additional half index, $\ib \pm \half\vec{e}^d$, where $\vec{e}^d$ is the unit vector with components $e^d_i = 1$ if $i = d,\, 0$ otherwise. For example, \begin{equation} \nonumber \mathcal{A}_{\ib + \half\vec{e}^d} \equiv \left[(\ib + \vec{e}^d) h, (\ib + \ub)h \right] \, .
\end{equation} For a given volume $\mathcal{V}_v$, its surface $\partial \mathcal{V}_v$ is discretized into grid-aligned faces, $\mathcal{A}_{v \pm \half\vec{e}^d} = \mathcal{A}_{\ib \pm \half\vec{e}^d} \cap \partial \mathcal{V}_v$, which are shared between neighboring volumes. $\mathcal{V}_v$ may also contain a portion of the domain boundary, which we indicate with $\mathcal{A}^b_v = \partial \Omega \cap \partial \mathcal{V}_v$. In either case, we use the index $f$ to provide a unique global index into the set of all such faces, $\{f(v): \mathcal{A}_f \in \partial \mathcal{V}_v \}$. When discussing volume- or face-average quantities in the sections below, we will use notational shortcuts where $v$ or $f$ are associated with $\mathcal{V}_v$ or $\mathcal{A}_f$, respectively. For example, \begin{align} \labelEq{volavg} \avg{\phi}_v & \equiv \frac{1}{|\mathcal{V}_v|} \int_{\mathcal{V}_v} \phi \, d\mathcal{V} \hbox{ , and } \\ \nonumber \avg{\phi}_f & \equiv \frac{1}{|\mathcal{A}_f|} \int_{\mathcal{A}_f} \phi \, d\mathcal{A} \, . \end{align} Volumes and faces contained within $\Omega$ that do not contain a portion of the domain boundary are called ``full,'' whereas those that do are ``cut'' by the embedded boundary. We will often identify irregular faces and volumes in terms of their ``fraction'' of a regular one, that is: \begin{align} \nonumber |\mathcal{V}_v| & = \kappa_v h^D \hbox{ , and } \\ \nonumber |\mathcal{A}_f| & = \alpha_f h^{D-1} \, , \end{align} where $\kappa_v$ and $\alpha_f$ are called the volume- and area-fraction, respectively. Finally, we define the volume moments and face moments that show up in our discretization. In the paper, we use multi-index notation. In particular, for ${\boldsymbol{p}} = (p_1, ..., p_D)$, ${\boldsymbol{x}} = (x_1, ..., x_D)$, and ${\boldsymbol{y}} = (y_1, ..., y_D)$, we define \begin{equation} \nonumber ({\boldsymbol{x}} + {\boldsymbol{y}})^{\boldsymbol{p}} = \prod_{d=1}^D (x_d + y_d)^{p_d} . \end{equation} The ${\boldsymbol{p}}$th volume moment of $\mathcal{V}_v$ is \begin{equation} \labelEq{volume moment} m^{{\boldsymbol{p}}}_v({\boldsymbol{x}_0}) = \int_{\mathcal{V}_v} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} d\mathcal{V} . \end{equation} The ${\boldsymbol{p}}$th face moment of $\mathcal{A}_{f}$ is \begin{equation} \labelEq{face moment} m^{{\boldsymbol{p}}}_{f}({\boldsymbol{x}_0}) = \int_{\mathcal{A}_{f}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} d\mathcal{A} . \end{equation} For an embedded boundary face, it is useful to define a second face moment that includes the normal to the face: \begin{equation} \labelEq{face moment with normal} m^{{\boldsymbol{p}}}_{d, f}({\boldsymbol{x}_0}) = \int_{\mathcal{A}_{f}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} n_d(\xb) d\mathcal{A} , \end{equation} where $n_d$ is the $d$th component of the outward unit normal to $f$. \subsection{Flux Approximation} Setting $V = \mathcal{V}_v$ in \refEq{laplacian over V} and dividing by the volume of $\mathcal{V}_v$ we get \begin{equation} \nonumber \frac{1}{|\mathcal{V}_v|} \int_{\mathcal{V}_v} \nabla \cdot \nabla \phi \, d\mathcal{V} = \frac{1}{|\mathcal{V}_v|} \sum_{f(v)} \int_{\mathcal{A}_f} \, \nabla \phi \cdot {\boldsymbol{n}} \, d\mathcal{A} \, . \end{equation} We approximate the flux by replacing $\phi$ by a polynomial interpolant. 
Suppose \begin{equation} \nonumber \psi(\xb) = \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} \end{equation} is a polynomial interpolant of $\phi$ such that $\phi(\xb) = \psi(\xb) + O(|\xb - {\boldsymbol{x}_0}|^P)$. Then \begin{equation} \nonumber \int_{\mathcal{A}_f} \nabla \phi \cdot {\boldsymbol{n}} d\mathcal{A} = \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} \int_{\mathcal{A}_f} \nabla (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} \cdot {\boldsymbol{n}} d\mathcal{A} + O(h^{P+D-2}). \end{equation} \subsubsection{Calculating the Polynomial Interpolant} \label{sec: polynomial interpolant} To solve for the coefficients $c_{\boldsymbol{p}}$, we create a system of equations using volume-averaged values of neighboring volumes and face-averaged values of neighboring boundary faces. For a face $f$, we require \begin{equation} \nonumber \avg{\psi}_v = \avg{\phi}_v \end{equation} for all neighboring volumes $\mathcal{V}_v$. Using equations \refEq{volavg} and \refEq{volume moment}, this simplifies to \begin{equation} \labelEq{volume moment eq for cp} \frac{1}{|\mathcal{V}_v|} \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} m^{{\boldsymbol{p}}}_v = \avg{\phi}_v . \end{equation} If we are given the Dirichlet boundary condition $\phi = g$ on a neighboring boundary face $f_b$, then we require \begin{equation*} \frac{1}{|\mathcal{A}_{f_b}|} \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} \int_{\mathcal{A}_{f_b}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} d\mathcal{A} = \frac{1}{|\mathcal{A}_{f_b}|} \int_{\mathcal{A}_{f_b}} g d\mathcal{A} , \end{equation*} or \begin{equation} \labelEq{Dir moment eq for cp} \frac{1}{|\mathcal{A}_{f_b}|} \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} m^{{\boldsymbol{p}}}_{f_b}({\boldsymbol{x}_0}) = \frac{1}{|\mathcal{A}_{f_b}|} \int_{\mathcal{A}_{f_b}} g d\mathcal{A} , \end{equation} using equation \refEq{face moment}. If, instead, the Neumann boundary condition $\nabla \phi \cdot {\boldsymbol{n}} = g$ is specified on $f_b$ then we require \begin{equation*} \frac{1}{|\mathcal{A}_{f_b}|} \sum_{d=1}^D \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} \int_{\mathcal{A}_{f_b}} \partial^{{\boldsymbol{e}^d}}(\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} n_d d\mathcal{A} = \frac{1}{|\mathcal{A}_{f_b}|} \int_{\mathcal{A}_{f_b}} g d\mathcal{A} , \end{equation*} or \begin{equation} \labelEq{Neu moment eq for cp} \frac{1}{|\mathcal{A}_{f_b}|} \sum_{d=1}^{D}\sum_{|{\boldsymbol{p}}| < P} p_d c_{{\boldsymbol{p}}} m^{{\boldsymbol{p}}-{\boldsymbol{e}^d}}_{d, f_b} = \frac{1}{|\mathcal{A}_{f_b}|} \int_{\mathcal{A}_{f_b}} g d\mathcal{A} , \end{equation} using equation \refEq{face moment with normal}. Here, we set $m^{{\boldsymbol{p}}-{\boldsymbol{e}^d}}_{d, f_b} = 0$ if $p_d = 0$ to simplify the notation. Equations \refEq{volume moment eq for cp}, \refEq{Dir moment eq for cp}, and \refEq{Neu moment eq for cp} represent a linear system of equations for the polynomial coefficients $c_{{\boldsymbol{p}}}$. Let $c = [\dots \, c_{{\boldsymbol{p}}} \, \dots]^T$ and \begin{equation} \nonumber A_{v \pb} = \frac{m^{{\boldsymbol{p}}}_v({\boldsymbol{x}_0})}{m^{{\boldsymbol{0}}}_v}.
\end{equation} Let \begin{equation} \nonumber b_{f_b, {\boldsymbol{p}}} = \frac{m^{\boldsymbol{p}}_{f_b}}{m^{{\boldsymbol{0}}}_{f_b}}, \end{equation} if Dirichlet boundary conditions are prescribed on $f_b$, and \begin{equation} \nonumber b_{f_b, {\boldsymbol{p}}} = \frac{1}{m^{{\boldsymbol{0}}}_{f_b}} \sum \limits_{d=1}^D p_d \, m^{{\boldsymbol{p}} - {\boldsymbol{e}^d}}_{d,f_b} \end{equation} if Neumann boundary conditions are prescribed on $f_b$. Recall that $m^{{\boldsymbol{0}}}_v = |\mathcal{V}_v|$ and $m^{{\boldsymbol{0}}}_f = |\mathcal{A}_f|$ (see equations \refEq{volume moment} and \refEq{face moment}). Finally, let $\Phi_{\tvol} = [\dots \, \avg{\phi}_v \, \dots]^T$ and $G = [\dots \, \avg{g}_{f_b} \, \dots]^T$. Then the combined linear system is \begin{equation} \nonumber \begin{bmatrix} \dots \\ \dots \, A_{v \pb} \, \dots \\ \dots \\ \dots \, b_{f_b, \pb} \, \dots \\ \dots \end{bmatrix} c = \begin{bmatrix} \Phi_{\tvol} \\ G \end{bmatrix} , \end{equation} or \begin{equation} \nonumber Ac = \Phi, \end{equation} where \begin{equation} \labelEq{def of A} A = \begin{bmatrix} \dots \\ \dots \, A_{v \pb} \, \dots \\ \dots \\ \dots \, b_{f_b, \pb} \, \dots \\ \dots \end{bmatrix} \end{equation} and \begin{equation} \labelEq{def of phi vec} \Phi = \begin{bmatrix} \Phi_{\tvol} \\ G \end{bmatrix} . \end{equation} If we have sufficiently many neighbors, this is an overdetermined full-rank system. We use weighted least squares to obtain the coefficients $c$: \begin{equation} \nonumber c = (WA)^\dagger W\Phi . \end{equation} In this paper, we only consider an invertible diagonal weighting matrix $W$. The weights are extra degrees of freedom in the system, which we use to generate a stable Laplacian operator with eigenvalues that lie in the left half of the complex plane. Let $F =[\dots \, F_{\boldsymbol{p}} \, \dots]^T$, where \begin{equation} \nonumber F_{{\boldsymbol{p}}} = \int_{\mathcal{A}_f} \nabla (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} \cdot {\boldsymbol{n}} d\mathcal{A} . \end{equation} Then \begin{align} \nonumber \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} \int_{\mathcal{A}_{f}} \nabla (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} \cdot {\boldsymbol{n}} \, d\mathcal{A} &= F^Tc \\ &= F^T(WA)^\dagger W\Phi . \nonumber \end{align} The vector \begin{equation} \labelEq{stencil equation solution} s = W (A^T W)^\dagger F \end{equation} is a stencil for computing the flux through $f$. Note that $s$ satisfies the equation \begin{equation} \labelEq{stencil equation} A^T s = F , \end{equation} which is an underdetermined system for $s$. The case \begin{equation} \nonumber s = (A^T)^\dagger F \end{equation} is the solution with minimum $L_2$ norm. This flux stencil does not decay with distance from the face (see Figure \ref{fig:laplacian stencils}), and it produces an unstable Laplacian operator with large positive eigenvalues (data not shown). Equation \refEq{stencil equation solution} is the solution that minimizes $\| W^{-1} s\|_2$. By choosing weights that decay with distance, we can force the flux stencil to also decay with distance (see Figure \ref{fig:laplacian stencils}). Our particular choice of weights is given in Section \ref{sec: neighs and weights}.
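To make the construction concrete, the following self-contained sketch (our own illustration, not code from this work; the helper names \texttt{volume\_moment} and \texttt{face\_flux\_moment} are ours) assembles the moment matrix $A$ for the neighbors of a grid-aligned face of a regular 2D patch with $P = 2$, builds the flux-moment vector $F$, and evaluates the stencil $s = W (A^T W)^{\dagger} F$ of equation \refEq{stencil equation solution} with a distance-decaying weight of the same $r^{-5}$ form adopted in Section \ref{sec: neighs and weights}.

\begin{verbatim}
import itertools
import numpy as np

# Illustrative sketch (not the authors' code): flux stencil for the
# x-face between cells (0,0) and (1,0) of a regular 2D patch, with
# interpolant order P = 2 and grid spacing h = 1.
D, P, h = 2, 2, 1.0
pows = [p for p in itertools.product(range(P), repeat=D) if sum(p) < P]

x0 = np.array([h, 0.5 * h])      # face center
cells = [(i, j) for i in (-1, 0, 1, 2) for j in (-1, 0, 1)]

def volume_moment(cell, p):
    # m_v^p(x0) = integral over V of (x - x0)^p, exact for a full cell.
    m = 1.0
    for d in range(D):
        lo, hi = cell[d] * h - x0[d], (cell[d] + 1) * h - x0[d]
        m *= (hi ** (p[d] + 1) - lo ** (p[d] + 1)) / (p[d] + 1)
    return m

# Rows of A are volume-averaged monomials, A_{v,p} = m_v^p / m_v^0.
A = np.array([[volume_moment(c, p) / h ** D for p in pows] for c in cells])

def face_flux_moment(p):
    # F_p = integral over the face x = h (normal e_x) of grad (x-x0)^p . n
    if p[0] == 0:
        return 0.0
    dy = ((h - x0[1]) ** (p[1] + 1) - (-x0[1]) ** (p[1] + 1)) / (p[1] + 1)
    return p[0] * (h - x0[0]) ** (p[0] - 1) * dy

F = np.array([face_flux_moment(p) for p in pows])

# Distance-decaying weights of the same r^-5 form used in this paper.
r = np.array([np.linalg.norm((np.array(c) + 0.5) * h - x0)
              for c in cells]) / h
W = np.diag(np.where(r < 0.5, 1.0, (2.0 * r) ** -5.0))

# A^T s = F is underdetermined; lstsq returns the minimum-norm t
# solving (A^T W) t = F, and s = W t is the weighted stencil.
t, *_ = np.linalg.lstsq(A.T @ W, F, rcond=None)
s = W @ t

# The constraints make the flux exact for any polynomial with |p| < P,
# e.g. phi = x, whose exact flux through this unit face is h:
phi_bar = np.array([volume_moment(c, (1, 0)) / h ** D + x0[0]
                    for c in cells])
assert np.isclose(s @ phi_bar, h)
\end{verbatim}

Because $A^T s = F$ holds exactly, the weights do not affect consistency; they only select which of the many stencils satisfying these constraints is returned, and the decaying choice concentrates the stencil on the cells nearest the face.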
\begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{noweightsReg.pdf} \caption{Stencil for a regular cell using standard least squares.} \end{subfigure} \hspace{0.03\textwidth} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{2dstencil.pdf} \caption{Stencil for a regular cell using weighted least squares.} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{noweightsEB.pdf} \caption{Stencil for a cut cell using standard least squares.} \end{subfigure} \hspace{0.03\textwidth} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{ebstencil.pdf} \caption{Stencil for a cut cell using weighted least squares.} \end{subfigure} \caption{Fourth-order Laplacian stencil for the cell highlighted in red. Without weights, the stencil values do not decay with distance.} \label{fig:laplacian stencils} \end{figure} \subsection{Order of Accuracy} In general, we cannot calculate the moments exactly. The geometry and the corresponding moments are constructed using the method in \cite{EBGeometryPaper}. Let $M_{d,f}^{{\boldsymbol{p}}}({\boldsymbol{x}_0})$ denote the approximation to the exact moment $m_{d,f}^{{\boldsymbol{p}}}({\boldsymbol{x}_0})$, so that \begin{equation} \nonumber M_{d,f}^{{\boldsymbol{p}}}({\boldsymbol{x}_0}) = m_{d,f}^{{\boldsymbol{p}}}({\boldsymbol{x}_0}) + O(h^R) . \end{equation} Recall that \begin{equation} \nonumber \phi(\xb) = \sum_{|{\boldsymbol{p}}| < P } c_{{\boldsymbol{p}}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} + O(|\xb - {\boldsymbol{x}_0}|^P) . \end{equation} In this subsection, we show that the truncation error for the kappa-weighted Laplacian operator is $O(h^{P-2})$ if $R = P+D-2$. We assume that for face $f$, ${\boldsymbol{x}_0}$ is some point such that $|\xb - {\boldsymbol{x}_0}| \leq Ch$ for all points $\xb$ on the face. Then \begin{align*} \int_{\mathcal{A}_f} \nabla \phi \cdot {\boldsymbol{n}} d\mathcal{A} &= \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} \int_{\mathcal{A}_f} \nabla (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}} \cdot {\boldsymbol{n}} d\mathcal{A} + O(h^{P+D - 2}) \\ &= \sum_{d=1}^D \sum_{|{\boldsymbol{p}}| < P}c_{{\boldsymbol{p}}} \int_{\mathcal{A}_{f}} p_d (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}-{\boldsymbol{e}^d}} n_d(\xb) d\mathcal{A} + O(h^{P+D - 2}) \\ &= \sum_{d=1}^D \sum_{|{\boldsymbol{p}}| < P}p_d c_{{\boldsymbol{p}}} m_{d,f}^{{\boldsymbol{p}}-{\boldsymbol{e}^d}}({\boldsymbol{x}_0}) + O(h^{P+D - 2}) \\ &= \sum_{d=1}^D\sum_{|{\boldsymbol{p}}| < P}p_d c_{{\boldsymbol{p}}} M_{d,f}^{{\boldsymbol{p}}-{\boldsymbol{e}^d}}({\boldsymbol{x}_0}) + O(h^R) + O(h^{P+D - 2}) . \\ \end{align*} We get \begin{align*} \frac{\kappa_v}{|\mathcal{V}_v|} \int_{\mathcal{V}_v} \nabla \cdot \nabla \phi \, d\mathcal{V} &= \frac{1}{h^D}\sum_{f(v)} \int_{\mathcal{A}_f} \nabla \phi \cdot {\boldsymbol{n}} d\mathcal{A} \\ &= \frac{1}{h^D} \sum_{f(v)} \sum_{d=1}^D\sum_{|{\boldsymbol{p}}| < P}p_d c_{{\boldsymbol{p}}} M_{d,f}^{{\boldsymbol{p}}-{\boldsymbol{e}^d}}({\boldsymbol{x}_0}) + O(h^R) + O(h^{P+D - 2}) . \\ \end{align*} At volumes $\mathcal{V}_v$ that are sufficiently far from the boundary (i.e. they only include regular volumes in their Laplacian stencils), the truncation error is $O(h^{P-1})$ if $R = P+D-1$. The $O(h^{P-2})$ term in the polynomial interpolation error cancels out because of symmetry in the flux stencils. 
The truncation error is not the same for different choices of norms because of this disparity in the order of accuracy between volumes near the boundary and interior volumes. The error in the $L_\infty$ norm is $O(h^{P-2})$. In the $L_1$ norm, however, it is $O(h^{P-1})$. If $N = 1/h$, then the number of volumes near the boundary is $O(N)$ whereas the number of interior volumes is $O(N^2)$. So, the $L_1$ error looks like \begin{equation} \nonumber \sum |e_{ij}|\Delta V \approx N^2(h^{P-1})(h^2) + N(h^{P-2})(h^2) = h^{P-1} . \end{equation} In contrast, the solution error has the same order of accuracy in all the norms. Using potential theory arguments (see \cite{Johansen:1998:EBPoisson} for details), it can be shown that the error is $O(h^{P})$ near the boundary for Dirichlet boundary conditions and $O(h^{P-1})$ for Neumann boundary conditions. In the interior, the solution error is $O(h^{P-1})$. So, the order of accuracy in the solution is $Q$ if $P = Q+1$ and $R = Q+D$. \subsection{Neighbors and Weights} \label{sec: neighs and weights} Recall that the polynomial interpolant $\psi$ is \begin{equation} \nonumber \psi = \sum_{|{\boldsymbol{p}}| < P} c_{{\boldsymbol{p}}} (\xb - {\boldsymbol{x}_0})^{{\boldsymbol{p}}}. \end{equation} In our method, we choose ${\boldsymbol{x}_0}$ to be the face center when $f$ is a grid-aligned face. If $f$ is an embedded boundary face, ${\boldsymbol{x}_0}$ is the center of the cell cut by $f$. We take the neighbors of a volume $V$ to be the volumes in the physical domain that are within $R_n$ cells of $V$. We call $R_n$ the path radius. If a neighboring volume touches a boundary face, then that boundary face is also regarded as a neighbor of $V$. The neighbors of a face $f$ are the neighbors of its adjacent volumes. See Figure \ref{fig:stencil ranges}. For simplicity, our method uses one path radius for all volumes in the domain. To pin down the interpolant for a boundary face, we need a large path radius. In our results, we set $R_n = 2$ for the second-order method, and $R_n=3$ for the fourth-order method. \begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{faceneighbors.pdf} \caption{Neighbors of a grid-aligned face (in blue) and two embedded boundary faces (whose corresponding cut cells are highlighted in red and green). In this diagram, the path radius $R_n$ is 3.} \end{subfigure} \hspace{0.03\textwidth} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{stencilrange.pdf} \caption{Volumes involved in the Laplacian stencils for three volumes (highlighted in blue, green, and red).} \end{subfigure} \caption{Neighbors used to construct stencils (flux stencils for faces and Laplacian stencils for volumes).} \label{fig:stencil ranges} \end{figure} As explained at the end of Section \ref{sec: polynomial interpolant}, we want a weighting that emphasizes nearest neighbors and gives little weight to far-away neighbors. Our results were constructed with the weighting matrix $W$, with entries \begin{equation} \nonumber W_{ii} = \begin{cases} 1 & \frac{|\xb_i - {\boldsymbol{x}_0}|}{h} < \half \\ (2 \frac{|\xb_i - {\boldsymbol{x}_0}|}{h})^{-5} & \frac{|\xb_i - {\boldsymbol{x}_0}|}{h} \ge \half \end{cases} . \end{equation} If the $i$th row of $A$ (equation \refEq{def of A}) corresponds to a volume, then $\xb_i$ is the cell center. If it corresponds to a grid-aligned boundary face, then $\xb_i$ is the face center. Finally, if it corresponds to an embedded boundary face, then $\xb_i$ is the cell center of the cut cell.
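For reference, this weight can be transcribed directly (a sketch; the function name is ours):

\begin{verbatim}
import numpy as np

def weight(x_i, x0, h):
    # Diagonal entry W_ii for the neighbor whose representative point
    # is x_i (a cell center, or a face center for grid-aligned
    # boundary faces).
    r = np.linalg.norm(np.asarray(x_i) - np.asarray(x0)) / h
    return 1.0 if r < 0.5 else (2.0 * r) ** -5.0
\end{verbatim}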
\section{Results} \labelSec{results} In this section, we apply the method to solve Poisson's equation on several different geometries, and demonstrate that the method achieves second and fourth order accuracy and produces a stable Laplacian operator. \subsection{Approximate Moments} The results presented in this paper were all obtained using $O(h^{6+D-1})$ accurate face moments and $O(h^{6+D})$ accurate volume moments. (Note that for $D=2$, the grid-aligned face moments are one-dimensional and are calculated exactly.) \subsection{Solver} We use the algebraic multigrid (AMG) method in the PETSc solver framework for our linear equation solves \cite{petsc-web-page, petsc-user-ref, petsc-efficient}. PETSc provides interfaces to several third-party AMG solvers and has a built-in AMG solver, GAMG. We use a GMRES solver preconditioned with GAMG, which implements a smoothed aggregation AMG method \cite{Adams-03a}. To compute eigenvalues we use SLEPc, an extension of PETSc \cite{Hernandez:2003:SSL,slepc-users-manual}. The eigenvalues are found using Davidson methods \cite{Romero:2014:PID}. \subsection{Computing Convergence Rates} To compute convergence rates, we prescribe an exact solution and compare the computed quantities to the exact quantities. Convergence rates are calculated for the face fluxes, the operator truncation error, and the solution error. For a face $f$, the error in the face flux is \begin{equation} \nonumber e^{\text{flux}}_f = \frac{1}{A_f}s_f^T\Phi - \frac{1}{A_f}\int_{\mathcal{A}_f} \nabla \phi \cdot {\boldsymbol{n}} \, d\mathcal{A} , \end{equation} where $s_f$ is the flux stencil (equation \refEq{stencil equation solution}) and $\Phi$ is defined in equation \refEq{def of phi vec}. Here, we have written $s_f$ instead of $s$ to highlight that $s$ depends on $f$. The flux error is computed for all of the faces, including boundary faces. The ($\kappa$-weighted) truncation error for a volume $\mathcal{V}_v$ is \begin{equation} \nonumber e^{\text{lapl}}_v = \frac{\kappa_v}{|\mathcal{V}_v|}\sum_{f(v)}s^T_{f(v)}\Phi - \frac{\kappa_v}{|\mathcal{V}_v|}\int_{\mathcal{V}_v} \Delta \phi d\mathcal{V} . \end{equation} Finally, the solution error for $\mathcal{V}_v$ is the difference between the computed volume-average of $\phi$ and the exact volume-average. The $L_\infty$ error is the maximum absolute error over the faces or the cells. For the flux, the $L_1$ error is defined as \begin{equation} \nonumber e^{\text{flux}}_{L_1} = \frac{1}{\sum_{f}\alpha_f}\sum_{f} |e^{\text{flux}}_f| \alpha_f. \end{equation} The $L_1$ truncation error is \begin{equation} \nonumber e^{\text{lapl}}_{L_1} = \sum_{\mathcal{V}_v} |e^{\text{lapl}}_v| |\mathcal{V}_v|. \end{equation} The $L_1$ solution error is defined similarly. In each of the 2D tests, the exact solution is \begin{equation} \nonumber \phi(x,y) = \sin(2\pi(x-\sqrt{2}/2))\sin(2\pi(y-\sqrt{3}/2)) . \end{equation} The exact solution for the 3D example is given in Section \ref{sec: 3D example}. \subsection{Circle} Our first test geometry is the circle \begin{equation} \nonumber (x-0.5)^2 + (y-0.5)^2 = 0.25^2. \end{equation} We apply the method to the region outside of this circle. For this geometry, we simulate three cases: \begin{enumerate} \item 2nd order method with Dirichlet boundary conditions on the boundaries, \item 4th order method with Dirichlet boundary conditions on the boundaries, \item 4th order method with Dirichlet boundary conditions on the square and Neumann boundary conditions on the circle.
\end{enumerate} Figure \ref{fig: circle solution error} shows the solution error for test cases 2 and 3. Convergence rates are given in Table \ref{table: circle convergence rates} for the three cases. Figure \ref{fig: circle spectrum} shows the spectra for the Laplacian operator in test cases 2 and 3. We have not weighted the operator by the volume fraction. In both cases the eigenvalues of the operator lie in the left half plane. There are a few eigenvalues with large negative real part. The eigenvectors corresponding to these eigenvalues are concentrated in the small cut cells. There is also a cluster of eigenvalues whose imaginary parts are relatively large. If this operator is combined with a temporal discretization like the trapezoidal rule to solve the heat equation, these eigenvalues would introduce oscillations in the solution; however, these oscillations are quickly damped out because the eigenvalues also have large negative real parts. Without weights, the operator has large positive eigenvalues and is unstable (data not shown). The eigenvectors for the positive eigenvalues are supported on the small cells. The operator also has a pair of eigenvalues with large imaginary parts, with eigenvectors that couple the smallest cells. As a point of reference, we have plotted the spectrum for the second-order operator generated using the method in \cite{schwartzETAL:2006} (see Figure \ref{fig: ebamrpoisson spectrum}). This operator is weighted by the volume fraction. Like our fourth-order operator, this operator also has eigenvalues with non-negligible imaginary components. \begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[width = \textwidth]{circleDirN64.png} \caption{Dirichlet boundary conditions on the circle.} \end{subfigure} \hspace{0.03\textwidth} \begin{subfigure}{0.5\textwidth} \includegraphics[width = \textwidth]{circleNeuN64.png} \caption{Neumann boundary conditions on the circle.} \end{subfigure} \caption{Solution error for the fourth order method applied to the area outside of the circle with center $(0.5, 0.5)$ and radius $0.25$. Dirichlet boundary conditions are prescribed on the square.
The meshwidth of the Cartesian grid is $h = 1/64$.} \label{fig: circle solution error} \end{figure} \begin{table} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline 2nd order, Dirichlet EB & 5.01 & 1.03 & 2.44 & 1.72 & 7.43e-01 \\ \hline 4th order, Dirichlet EB & 9.36e-02 & 3.96 & 6.00e-03 & 2.62 & 9.72e-04 \\ \hline 4th order, Neumann EB & 5.30e-02 & 2.00 & 1.33e-02 & 2.83 & 1.87e-03 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for $\kappa$-weighted truncation error.} \label{table: circle Linf kappa trunc} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline 2nd order, Dirichlet EB & 2.39e-01 & 1.91 & 6.39e-02 & 1.96 & 1.64e-02 \\ \hline 4th order, Dirichlet EB & 4.89e-03 & 3.97 & 3.11e-04 & 3.89 & 2.10e-05 \\ \hline 4th order, Neumann EB & 5.37e-03 & 3.87 & 3.68e-04 & 3.80 & 2.64e-05 \\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for $\kappa$-weighted truncation error.} \label{table: circle L1 kappa trunc} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline 2nd order, Dirichlet EB & 1.36e-03 & 1.90 & 3.65e-04 & 1.92 & 9.64e-05 \\ \hline 4th order, Dirichlet EB & 2.56e-05 & 4.17 & 1.43e-06 & 3.86 & 9.80e-08 \\ \hline 4th order, Neumann EB & 5.24e-05 & 3.89 & 3.52e-06 & 3.94 & 2.29e-07 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for solution error.} \label{table: circle Linf soln error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline 2nd order, Dirichlet EB & 4.06e-04 & 2.04 & 9.90e-05 & 1.97 & 2.52e-05 \\ \hline 4th order, Dirichlet EB & 5.70e-06 & 3.91 & 3.78e-07 & 3.91 & 2.52e-08 \\ \hline 4th order, Neumann EB & 1.07e-05 & 4.05 & 6.48e-07 & 3.96 & 4.16e-08\\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for solution error.} \label{table: circle L1 soln error} \end{subtable} \caption{Convergence rates for the method applied to the area outside of the circle with center $(0.5, 0.5)$ and radius $0.25$. $e_N$ is the error for a grid with meshwidth $h=1/N$. Dirichlet boundary conditions are prescribed on the square.} \label{table: circle convergence rates} \end{table} \begin{figure} \begin{subfigure}{\textwidth} \includegraphics[width = \textwidth]{circleExEigs.png} \caption{Spectrum for the fourth order method applied to the domain outside of the circle with center $(0.5, 0.5)$ and radius $0.25$. Dirichlet boundary conditions are prescribed on the square. The operator is not weighted by volume fraction. Only the eigenvalues with positive imaginary components are plotted because eigenvalues come in complex conjugate pairs. Independent of the circle boundary condition type, the operator has a few eigenvalues with large imaginary component (indicated by the green circle). It also has large negative eigenvalues (indicated by the red circle for Dirichlet boundary conditions and the blue circle for Neumann boundary conditions) because small cells are present.
The Cartesian grid is $64\times 64$.} \label{fig: circle spectrum} \end{subfigure} \hspace{0.01\textwidth} \begin{subfigure}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c |} \hline Test Case & $\lambda^{*}$ & $\lambda_{\text{max}}$ & $\lambda_{\text{min}}$ \\ \hline $x_0=(0.5,0.5)$, $r = 0.255$, Dirichlet BCs on EB &-3.6e4 + 28$i$ & -106 & -1.0e6\\ \hline $x_0=(0.501,0.501)$, $r = 0.25$, Dirichlet BCs on EB & -3.1e4 + 5.0$i$ & -103 & -2.5e6 \\ \hline $x_0=(0.5,0.5)$, $r = 0.255$, Neumann BCs on EB & -3.0e4 + 24$i$ &-39 &-9.1e4 \\ \hline $x_0=(0.501,0.501)$, $r = 0.25$, Neumann BCs on EB & -3.5e4 + 17$i$& -38 &-1.3e5 \\ \hline $x_0=(0.51,0.5)$, $r = 0.25$, Neumann BCs on EB & -2.0e4 + 10$i$ & -38 & -4.0e5\\ \hline \end{tabular} \end{center} \caption{Summary of spectra information for the fourth order method applied to the circle geometries. $x_0$ is the center of the circle, $r$ is the radius. The three columns are: the eigenvalue with the largest imaginary component ($\lambda^{*}$), the eigenvalue with the largest real component ($\lambda_{\text{max}}$), and the eigenvalue with the smallest real component ($\lambda_{\text{min}}$). Note that the eigenvalues come in complex conjugate pairs, and only the eigenvalue with a positive imaginary component is given in the first column.} \end{subfigure} \caption{} \label{fig: geometric perturbation spectra summary} \end{figure} \begin{figure} \includegraphics[width = \textwidth]{ebamrpoissonEigs.png} \caption{Spectrum for the operator generated using the method in \cite{schwartzETAL:2006}. The operator has been weighted by volume fraction. The domain is the region outside of the circle with center $(0.5, 0.5)$ and radius $0.25$. Dirichlet boundary conditions are prescribed on the boundaries. Like the operator for the weighted least squares method, this operator has a cluster of eigenvalues with non-negligible imaginary component (indicated by the green circle). The Cartesian grid is $64\times 64$.} \label{fig: ebamrpoisson spectrum} \end{figure} \subsubsection{Geometric Perturbations} Perturbations in the geometry can produce small cells. Our next set of tests demonstrates that the method is robust under small changes in the geometry. In particular, we perturb the radius and center of the circle in the last example, and apply the fourth order method to solve Poisson's equation on the perturbed domains. Table \ref{table: smallest volume fractions for geometric perturbations} lists the smallest volume fractions for the original circle example and three perturbations of this circle on the three grids used in the study. Note that perturbing the circle's center from $x_0 = (0.5, 0.5)$ to $x_0 = (0.51, 0.5)$ changes the smallest volume fraction by an order of magnitude. Despite these differences in the geometry, the $\kappa$-weighted truncation error and solution error hardly change. Tables \ref{table: geometric perturbation dirichlet convergence rates} and \ref{table: geometric perturbation neumann convergence rates} display the convergence rates in the case of Dirichlet boundary conditions and Neumann boundary conditions on the embedded boundary, respectively. The spectrum for each perturbation is similar to the spectrum in the previous example. The eigenvalues lie in the left half-plane for each of the perturbations. Also, there are a few eigenvalues with large negative real part corresponding to the small cells and a cluster of eigenvalues with non-negligible imaginary part.
We summarize the main features of each spectrum in Figure \ref{fig: geometric perturbation spectra summary}. \begin{table} \begin{center} \begin{tabular}{|c | c | c | c |} \hline Geometry & N=32 & N=64 & N=128 \\ \hline $x_0 = (0.5, 0.5)$, $r= 0.25$ & 4.5e-3 & 1.0e-2 & 2.5e-4\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 1.7e-2 & 6.8e-3 & 9.5e-5 \\ \hline $x_0 = (0.501, 0.501)$, $r= 0.25$ & 4.0e-4 & 1.6e-3 & 7.2e-6 \\ \hline $x_0 = (0.51, 0.5)$, $r= 0.25$ & 3.6e-4 & 2.3e-4 & 4.5e-5 \\ \hline \end{tabular} \end{center} \caption{Smallest volume fraction on an $N \times N$ Cartesian grid for the domain outside of the circle with center $x_0$ and radius $r$.} \label{table: smallest volume fractions for geometric perturbations} \end{table} \begin{table} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 7.24e-02 & 3.59 & 6.00e-03 & 3.10 & 7.01e-04 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 5.20e-02 & 3.12 & 6.00e-03 & 3.10 & 7.01e-04 \\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 7.41e-02 & 3.46 & 6.71e-03 & 2.39 & 1.28e-03 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for $\kappa$-weighted truncation error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 4.93e-03 & 3.92 & 3.26e-04 & 3.91 & 2.17e-05 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 4.87e-03 & 3.95 & 3.14e-04 & 3.87 & 2.14e-05\\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 5.06e-03 & 3.98 & 3.21e-04 & 3.91 & 2.14e-05 \\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for $\kappa$-weighted truncation error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 2.01e-05 & 3.85 & 1.40e-06 & 3.88 & 9.48e-08 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 3.38e-05 & 4.55 & 1.45e-06 & 3.88 & 9.84e-08 \\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 3.02e-05 & 3.34 & 2.97e-06 & 4.91 & 9.88e-08 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for solution error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r=0.255$ & 5.56e-06 & 3.90 & 3.73e-07 & 3.92 & 2.47e-08 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 6.06e-06 & 4.00 & 3.79e-07 & 3.91 & 2.51e-08\\ \hline $x_0 = (0.51, 0.5)$, $r=0.25$ & 5.94e-06 & 3.93 & 3.90e-07 & 3.96 & 2.50e-08\\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for solution error} \end{subtable} \caption{Convergence rates for the 4th order method applied to the domain outside of the circle $(x-x_0)^2 + (y-y_0)^2 = r^2$.
Dirichlet boundary conditions are prescribed on the boundaries.} \label{table: geometric perturbation dirichlet convergence rates} \end{table} \begin{table} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 5.19e-02 & 2.29 & 1.06e-02 & 2.48 & 1.90e-03 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 5.17e-02 & 2.03 & 1.27e-02 & 2.69 & 1.96e-03 \\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 5.53e-02 & 2.09 & 1.30e-02 & 2.63 & 2.10e-03 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for $\kappa$-weighted truncation error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 5.41e-03 & 3.77 & 3.95e-04 & 3.90 & 2.64e-05 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 5.42e-03 & 3.86 & 3.72e-04 & 3.85 & 2.581e-05\\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 5.40e-03 & 3.82 & 3.83e-04 & 3.91 & 2.542e-05 \\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for $\kappa$-weighted truncation error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5)$, $r= 0.255$ & 5.41e-05 & 3.95 & 3.49e-06 & 3.94 & 2.27e-07 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 5.32e-05 & 3.91 & 3.53e-06 & 3.95 & 2.29e-07 \\ \hline $x_0 = (0.51, 0.5) $, $r= 0.25$ & 5.52e-05 & 3.95 & 3.58e-06 & 3.94 & 2.33e-07 \\ \hline \end{tabular} \end{center} \caption{$L_\infty$ convergence rates for solution error} \end{subtable} \\ \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $x_0 = (0.5, 0.5) $, $r=0.255$ & 1.09e-05 & 4.08 & 6.45e-07 & 3.97 & 4.12e-08 \\ \hline $x_0 = (0.501, 0.501)$, $r=0.25$ & 1.08e-05 & 4.06 & 6.48e-07 & 3.96 & 4.17e-08 \\ \hline $x_0 = (0.51, 0.5)$, $r=0.25$ & 1.11e-05 & 4.08 & 6.55e-07 & 3.96 & 4.21e-08 \\ \hline \end{tabular} \end{center} \caption{$L_1$ convergence rates for solution error} \end{subtable} \caption{Convergence rates for the 4th order method applied to the domain outside of the circle $(x-x_0)^2 + (y-y_0)^2 = r^2$. Dirichlet boundary conditions are prescribed on the square, and Neumann boundary conditions are prescribed on the circle.} \label{table: geometric perturbation neumann convergence rates} \end{table} \subsection{Other Geometries} In the next few sections, we apply the fourth order method to different geometries. In each case, the operator spectrum looks similar to the circle example: there are a few eigenvalues with large negative real part that correspond to small cells and a cluster of eigenvalues with non-negligible imaginary part. As a result, we summarize the main features of the spectra in Table \ref{table: spectra summary}. \clearpage \subsubsection{Trigonometric Curve} Our next domain is the region underneath the curve \begin{equation} y = 0.25 + \frac{\sqrt{2}}{2}(1-\cos(2\pi x)) . \end{equation} We apply the fourth order method to solve Poisson's equation on this geometry. Dirichlet boundary conditions are prescribed on the boundaries.
Figure \ref{fig: sine curve solution error} shows the solution error on the grid with meshwidth $h = 1/128$, and Table \ref{table: sine curve convergence rates} lists the convergence rates. It is clear that the method converges at fourth order for this example. \begin{figure} \includegraphics[width = \textwidth]{sineSolnErrorN128.png} \caption{Solution error from the fourth order method applied to the area underneath the curve $y = 0.25 + \frac{\sqrt{2}}{2}(1-\cos(2\pi x))$. Dirichlet boundary conditions are prescribed on the boundaries. The Cartesian grid is $128\times128$.} \label{fig: sine curve solution error} \end{figure} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{32}$ & Order & $e_{64}$ & Order & $e_{128}$\\ \hline $\kappa$-weighted truncation error, $L_{\infty} $ & 7.82e-02 & 3.38 & 7.53e-03 & 3.06 & 9.03e-04 \\ \hline $\kappa$-weighted truncation error, $L_1$ & 3.99e-03 & 3.75 & 2.97e-04 & 3.86 & 2.05e-05 \\ \hline Solution error, $L_{\infty} $ & 4.03e-05 & 3.99 & 2.53e-06 & 4.00 & 1.58e-07 \\ \hline Solution error, $L_1$ & 1.10e-05 & 3.91 & 7.32e-07 & 3.93 & 4.82e-08 \\ \hline \end{tabular} \end{center} \caption{Convergence rates for the fourth order method applied to the area underneath the curve $y = 0.25 + \frac{\sqrt{2}}{2}(1-\cos(2\pi x))$. Dirichlet boundary conditions are prescribed on the boundaries.} \label{table: sine curve convergence rates} \end{table} \subsubsection{Four circles} Next, we consider the domain outside of the four circles with centers $(0.25, 0.25)$, $(0.75, 0.25)$, $(0.25, 0.75)$, $(0.75, 0.75)$ and all with radius $0.215$. We apply the fourth order method; Dirichlet boundary conditions are prescribed on the boundaries. Figure \ref{fig: four circles solution error} shows the solution error on the grid with meshwidth $1/128$. Unlike the previous examples, we simulate this example on grids with $N = 64$, 128, and 256 because the $N = 32$ grid is too coarse to compute a fourth order flux. On the $64\times 64$ grid, each of the circles is only $3$ grid cells away from the square boundary. Thus, a face close to the boundary has the minimum information to construct a fourth order flux: 3 cells, 1 embedded boundary face, and 1 domain boundary face. Despite this limited information, the $\kappa$-weighted truncation error and solution error converge at fourth order. Also, the operator is stable (see Table \ref{table: spectra summary}). \begin{figure} \includegraphics[width = \textwidth]{fourCirclesSolnErrorN128.png} \caption{Solution error from the 4th order method applied to the area outside of four circles that are close to the square boundary. Dirichlet boundary conditions are prescribed on the boundaries.
The Cartesian grid is $128\times 128$.} \label{fig: four circles solution error} \end{figure} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c | c | c |} \hline Test & $e_{64}$ & Order & $e_{128}$ & Order & $e_{256}$\\ \hline Truncation error with kappa, $L_{\infty}$ & 3.45e-02 & 5.24 & 9.13e-04 & 3.19 & 1.00e-04\\ \hline Truncation error with kappa, $L_1$ & 1.12e-03 & 4.61 & 4.62e-05 & 3.90 & 3.10e-06 \\ \hline Solution error, $L _{\infty}$ & 1.04e-06 & 4.04 & 6.30e-08 & 3.94 & 4.11e-09 \\ \hline Solution error, $L_1$ & 1.73e-07 & 4.12 & 9.98e-09 & 3.88 & 6.77e-10 \\ \hline \end{tabular} \end{center} \caption{Convergence rates for fourth order method where the domain is the outside of four circles that are close to the square boundary.} \label{table: four circle convergence rates} \end{table} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c |} \hline Test Case & $\lambda^{*}$ & $\lambda_{\text{max}}$ & $\lambda_{\text{min}}$ \\ \hline $x_0=(0.5,0.5)$, $r = 0.25$, Dirichlet BCs on EB & -2.5e4 + 11$i$ & -104 & -9.5e6\\ \hline $x_0=(0.5,0.5)$, $r = 0.255$, Dirichlet BCs on EB &-3.6e4 + 28$i$ & -106 & -1.0e6\\ \hline $x_0=(0.501,0.501)$, $r = 0.25$, Dirichlet BCs on EB & -3.1e4 + 5.0$i$ & -103 & -2.5e6 \\ \hline $x_0=(0.5,0.5)$, $r = 0.25$, Neumann BCs on EB & -3.5e4 + 12$i$ & -38 &-6.9e4 \\ \hline $x_0=(0.5,0.5)$, $r = 0.255$, Neumann BCs on EB & -3.0e4 + 24$i$ &-39 &-9.1e4 \\ \hline $x_0=(0.501,0.501)$, $r = 0.25$, Neumann BCs on EB & -3.5e4 + 17$i$& -38 &-1.3e5 \\ \hline $x_0=(0.51,0.5)$, $r = 0.25$, Neumann BCs on EB & -2.0e4 + 10$i$ & -38 & -4.0e5\\ \hline Other geometries & & & \\ \hline Four circles, Dirichlet BCs on EB & -4.4e4 + 1.6e3$i$ & -240 & -6.4e5\\ \hline Sine curve, Dirichlet BCs on EB & -3.3e4 + 600$i$& -51 & -1.8e6 \\ \hline \end{tabular} \end{center} \caption{Summary of spectra information for the 4th order method applied to various geometries. The three columns are: the eigenvalue with the largest imaginary component ($\lambda^{*}$), the eigenvalue with the largest real component ($\lambda_{\text{max}}$), and the eigenvalue with the smallest real component ($\lambda_{\text{min}}$). Note that the eigenvalues come in complex conjugate pairs, and only the eigenvalue with a positive imaginary component is given in the first column. The meshwidth of the Cartesian grid is $1/64$. Dirichlet boundary conditions are prescribed on the square boundary. } \label{table: spectra summary} \end{table} \subsection{Sphere} \label{sec: 3D example} In this section, we apply a fourth order method to solve Poisson's equation on the inside of a sphere, with center $(0.5, 0.5, 0.5)$ and radius 0.45. The exact solution for this test problem is \begin{equation} \phi(x,y,z) = \sin(2\pi(x-x_0))\sin(2\pi(y-y_0))\sin(2\pi(z-z_0)), \end{equation} where $x_0 = \sqrt{2}/2$, $y_0 = \sqrt{3}/2$, and $z_0 = 0.54321$. Table \ref{table: 3D soln error for sphere} shows the solution error convergence rates. It is clear that the method is fourth order. Figure \ref{fig: 3D soln error for sphere} shows the solution error on a grid with meshwidth $h = 1/128$. 
\begin{table}[p] \begin{center} \begin{tabular}{| c | c | c | c |} \hline Norm & $e_{32}$ & Order & $e_{64}$ \\ \hline $L_\infty$ & 9.16e-05 & 3.87 & 6.27e-06 \\ \hline $L_1$ & 1.38e-05 & 3.81 & 9.81e-07\\ \hline $L_2$ & 2.17e-05 & 3.81 & 1.54e-06 \\ \hline \end{tabular} \end{center} \caption{Solution error convergence rates in the $L_\infty$, $L_1$, and $L_2$ norms for 4th order method applied to inside of sphere with center $(0.5, 0.5, 0.5)$ and radius $0.45$.} \label{table: 3D soln error for sphere} \end{table} \begin{figure} \includegraphics[width = 1.25\textwidth]{3DFineError4Views.png} \caption{Solution error for 4th order method applied to inside of sphere with center $(0.5, 0.5, 0.5)$ and radius $0.45$. Each image shows three cross-sections through the center of the sphere. The triad next to each image shows the orientation of the sphere in the image.} \label{fig: 3D soln error for sphere} \end{figure}
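The rates quoted in the convergence tables are the standard two-grid estimates, $\log_2(e_h/e_{h/2})$; as a quick check, the following sketch (ours, with the $L_\infty$ entries of Table \ref{table: 3D soln error for sphere} hard-coded) reproduces the reported order:

\begin{verbatim}
import math

# Observed order between grids h and h/2: log2(e_h / e_{h/2}).
e32, e64 = 9.16e-05, 6.27e-06        # L_inf entries from the table above
print(f"{math.log2(e32 / e64):.2f}")   # -> 3.87, as reported
\end{verbatim}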
{ "redpajama_set_name": "RedPajamaArXiv" }
\journal{Chaos, Solitons $\&$ Fractals } \begin{document} \begin{frontmatter} \title{$q$-Fibonacci bicomplex quaternions} \author[rvt]{F.~Torunbalc{\i} Ayd{\i}n\corref{cor1}} \ead{ftorunay@gmail.com} \cortext[cor1]{Corresponding author} \address[rvt]{Yildiz Technical University, Faculty of Chemical and Metallurgical Engineering, Department of Mathematical Engineering, Davutpasa Campus, 34220, Esenler, \.{I}stanbul, TURKEY} \begin{abstract} In this paper, we define the $q$-Fibonacci bicomplex quaternions and the $q$-Lucas bicomplex quaternions. Then, we give some algebraic properties of the $q$-Fibonacci bicomplex quaternions and the $q$-Lucas bicomplex quaternions. \end{abstract} \begin{keyword} Bicomplex number; $q$-integer; Fibonacci number; Bicomplex Fibonacci quaternion; $q$-quaternion; $q$-Fibonacci quaternion. \end{keyword} \end{frontmatter} \section{Introduction} The real quaternions were first described by the Irish mathematician William Rowan Hamilton in 1843. The real quaternions constitute an extension of complex numbers into a four-dimensional space and can be considered as four-dimensional vectors, in the same way that complex numbers are considered as two-dimensional vectors. Hamilton \cite{hamilton1866elements} introduced the set of quaternions, which can be represented as \begin{equation}\label{1} H=\left\{ \,q={{q}_{0}}+i\,{{q}_{1}}+j\,{{q}_{2}}+k\,{{q}_{3}}\,\left. {} \right|\ \ {{q}_{0}},\,{{q}_{1}},\,{{q}_{2}},\,{q}_{3}\in \mathbb R\, \right\} \end{equation} where \begin{equation}\label{2} {{i}^{2}}={{j}^{2}}={{k}^{2}}=-1\,,\ \ i\ j=-j\ i=k\,,\quad j\ k=-k \ j=i\,,\quad k\ i=-i\ k=j\,. \end{equation} Horadam \cite{horadam1963complex,horadam1993quaternion} defined the complex Fibonacci and Lucas quaternions as follows \begin{equation}\label{3} Q_n=F_n+F_{n+1}\,i+F_{n+2}\,j+F_{n+3}\,k \end{equation} and \begin{equation}\label{4} K_n=L_n+L_{n+1}\,i+L_{n+2}\,j+L_{n+3}\,k \end{equation} where $F_n$ and $L_n$ denote the $n$-th Fibonacci and Lucas numbers, respectively. Also, the imaginary quaternion units $i$, $j$, $k$ obey the rules \begin{equation*} i^2=j^2=k^2=-1\,,\,\, i\,j=-j\,i=k\,,\quad j\,k=-k \,j=i\,,\quad k\,i=-i\,k=j \end{equation*} There are several studies on quaternions, such as the Fibonacci and Lucas quaternions and their generalizations; see, for example, \cite{iyer1969some, iyer1969note, koshy2019fibonacci, vajda1989fibonacci, swamy1973generalized, halici2012fibonacci, halici2013complex, nurkan2015dual,clifford1882preliminary,yuce2016new, akkus2019quaternions}.\\ Bicomplex numbers were introduced by Corrado Segre in 1892 \cite{fulton2006corrado}. In 1991, G. Baley Price presented the bicomplex numbers in his book on multicomplex spaces and functions \cite{price1991introduction}. In recent years, fractal structures of these numbers have also been studied \cite{rochon2004algebraic, nurkan2015note}. The set of bicomplex numbers can be expressed by the basis $\{1\,,i\,,j\,,i\,j\,\}$ (Table 1) as \begin{equation}\label{5} \begin{aligned} \mathbb{C}_2=\{\, q=q_1+i\,q_2+j\,q_3+i\,j\,q_4 \ | \, q_1,q_2,q_3,q_4\in \mathbb R\} \end{aligned} \end{equation} where $i$, $j$, and $i\,j$ satisfy the conditions \begin{equation}\label{6} i^2=-1,\,\,\,j^2=-1,\,\,\,i\,j=j\,i.
\end{equation}\\ In 2018, the bicomplex Fibonacci quaternions were defined by Torunbalc{\i} Ayd{\i}n \cite{aydin2018bicomplex} as follows \begin{equation}\label{7} \begin{aligned} \mathbb{BF}_{n} & = {F}_{n}+i\,{F}_{n+1}+j\,{F}_{n+2}+i\,j\,{F}_{n+3} \\ & = ({F}_{n}+i\,{F}_{n+1})+({F}_{n+2}+i\,{F}_{n+3})\,j \end{aligned} \end{equation} where $i$, $j$ and $i\,j$ satisfy the conditions (\ref{6}).\\ The theory of the quantum $q$-calculus has been studied extensively in many branches of mathematics, as well as in other areas such as biology, physics, electrochemistry, economics, probability theory, and statistics \cite{arfken1999mathematical,adler1995quaternionic}. For $n\in\mathbb{N}_{0}$, the $q$-integer ${\lbrack{n}\rbrack}_{q}$ is defined as follows \begin{equation}\label{8} \begin{aligned} {\lbrack{n}\rbrack}_{q} &= \frac{1-q^n}{1-q} = 1 + q + q^2 +\,\ldots\,+ q^{n-1}. \end{aligned} \end{equation} By (\ref{8}), for all $m,n\in{\mathbb{Z}}$, one easily obtains $\lbrack{m + n}\rbrack_{q} = \lbrack{m}\rbrack_{q} + q^m \,\lbrack{n}\rbrack_{q}$. For more details related to the quantum $q$-calculus, we refer to \cite{andrews1999special,kac2002quantum}.\\ In 2019, the $q$-Fibonacci hybrid and $q$-Lucas hybrid numbers were defined by K{\i}z{\i}late\c{s} \cite{kizilatecs2020new} as follows \begin{equation}\label{9} \begin{aligned} \mathbb{HF}_n(\alpha;q) =& \alpha^{n-1}\lbrack{n}\rbrack_{q}+\alpha^{n}\lbrack{n+1}\rbrack_{q}\,\bold{i}+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,\,\boldsymbol{\varepsilon}\\ &+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,\bold{h}\, \\ \end{aligned} \end{equation} and \begin{equation}\label{10} \begin{aligned} \mathbb{HL}_n(\alpha;q) =& \alpha^{n}\frac{{\lbrack{2\,n}\rbrack}_{q}}{{\lbrack{n}\rbrack}_{q}}+\alpha^{n+1}\,\frac{{\lbrack{2\,n+2}\rbrack}_{q}}{{\lbrack{n+1}\rbrack}_{q}}\,\bold{i}+\alpha^{n+2}\,\frac{{\lbrack{2\,n+4}\rbrack}_{q}}{{\lbrack{n+2}\rbrack}_{q}}\,\,\boldsymbol{\varepsilon}\\ &+\alpha^{n+3}\frac{{\lbrack{2\,n+6}\rbrack}_{q}}{{\lbrack{n+3}\rbrack}_{q}}\,\bold{h} \\ \end{aligned} \end{equation} where $\bold{i}$, $\boldsymbol{\varepsilon}$ and $\bold{h}$ satisfy the conditions \begin{equation}\label{11} \bold{i^2}=-1,\,\,\,\boldsymbol{\varepsilon^2}=0,\,\,\,\bold{h^2}=1,\,\,\bold{i}\,\bold{h}=\bold{h}\,\bold{i}=\boldsymbol{\varepsilon}+\bold{i}.
\end{equation} \, Also, K{\i}z{\i}late\c{s} derived several interesting properties of these numbers, such as Binet-like formulas, exponential generating functions, summation formulas, Cassini-like identities, Catalan-like identities and d'Ocagne-like identities \cite{kizilatecs2020new}.\\ \section{$q$-Fibonacci bicomplex quaternions} In this section, we define the $q$-Fibonacci bicomplex quaternions and the $q$-Lucas bicomplex quaternions by using the basis $\{1,\,i\,,j\,,i\,j\}$, where $i$, $j$ and $i\,j$ satisfy the conditions (\ref{6}), as follows \begin{equation}\label{12} \begin{array}{rl} \mathbb{BF}_n(\alpha;q) =&\alpha^{n-1}\lbrack{n}\rbrack_{q}+\alpha^{n}\lbrack{n+1}\rbrack_{q}\,\,i+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,\,j+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,i\,j \\ \\ =&\alpha^{n}\,(\frac{1-q^n}{\alpha-\alpha\,q})+\alpha^{n+1}\,(\frac{1-q^{n+1}}{\alpha-\alpha\,q})\,i \\ \\ &+\alpha^{n+2}\,(\frac{1-q^{n+2}}{\alpha-\alpha\,q})\,j+\alpha^{n+3}\,(\frac{1-q^{n+3}}{\alpha-\alpha\,q})\,\,i\,j\\ \\ =&\frac{\alpha^{n}}{\alpha -(\alpha\,q)}\,[\,1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j\,] \\ \\ &-\frac{(\alpha\,q)^{n}}{\alpha -(\alpha\,q)}\,\,[\,1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j\,] \end{array} \end{equation} and \begin{equation}\label{13} \begin{array}{rl} \mathbb{BL}_n(\alpha;q) =&\alpha^{n}\frac{{\lbrack{2\,n}\rbrack}_{q}}{{\lbrack{n}\rbrack}_{q}}+\alpha^{n+1}\,\frac{{\lbrack{2\,n+2}\rbrack}_{q}}{{\lbrack{n+1}\rbrack}_{q}}\,i+\alpha^{n+2}\,\frac{{\lbrack{2\,n+4}\rbrack}_{q}}{{\lbrack{n+2}\rbrack}_{q}}\,j +\alpha^{n+3}\frac{{\lbrack{2\,n+6}\rbrack}_{q}}{{\lbrack{n+3}\rbrack}_{q}}\,i\,j\\ \\ =&\alpha^{2n}\,(\frac{1-q^{2n}}{\alpha^n-(\alpha\,q)^n})+\alpha^{2n+2}\,(\frac{1-q^{2n+2}}{\alpha^{n+1}-(\alpha\,q)^{n+1}})\,i \\ \\ &+\alpha^{2n+4}\,(\frac{1-q^{2n+4}}{\alpha^{n+2}-(\alpha\,q)^{n+2}})\,j+\alpha^{2n+6}\,(\frac{1-q^{2n+6}}{\alpha^{n+3}-(\alpha\,q)^{n+3}})\,i\,j \\ \\ =&{\alpha^{n}}\,(1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j\,)\\ \\ &+(\alpha\,q)^{n}\,(1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j\,) \end{array} \end{equation} For $\alpha=\frac{1+\sqrt{5}}{2}$ and \,$(\alpha\,q)=\frac{-1}{\alpha}$,\,\,\,the $q$-Fibonacci bicomplex quaternion \,$\mathbb{BF}_n(\alpha;q)$\, becomes the bicomplex Fibonacci quaternion $\mathbb{BF}_n$.\\ \\ The addition, subtraction and multiplication by real scalars of two $q$-Fibonacci bicomplex quaternions give a $q$-Fibonacci bicomplex quaternion.\\ Then, the addition, subtraction and multiplication by a scalar of $q$-Fibonacci bicomplex quaternions are defined by \begin{equation}\label{14} \begin{array}{rl} \mathbb{BF}_n(\alpha;q)\pm\mathbb{BF}_m(\alpha;q)=&(\alpha^{n-1}\lbrack{n}\rbrack_{q}+\alpha^{n}\lbrack{n+1}\rbrack_{q}\,i+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,j\\ &+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,i\,j) \\ &\pm(\alpha^{m-1}\lbrack{m}\rbrack_{q}+\alpha^{m}\lbrack{m+1}\rbrack_{q}\,i+\alpha^{m+1}\lbrack{m+2}\rbrack_{q}\,j\\ &+\alpha^{m+2}\lbrack{m+3}\rbrack_{q}\,i\,j) \\ =&[\alpha^{n}(\frac{1-q^n}{\alpha-\alpha\,q})\pm\alpha^{m}(\frac{1-q^m}{\alpha-\alpha\,q})] \\ &+[\alpha^{n+1}(\frac{1-q^{n+1}}{\alpha-\alpha\,q})\pm\alpha^{m+1}(\frac{1-q^{m+1}}{\alpha-\alpha\,q})]\,i \\ &+[\alpha^{n+2}(\frac{1-q^{n+2}}{\alpha-\alpha\,q})\pm\alpha^{m+2}(\frac{1-q^{m+2}}{\alpha-\alpha\,q})]\,j \\ &+[\alpha^{n+3}(\frac{1-q^{n+3}}{\alpha-\alpha\,q})\pm\alpha^{m+3}(\frac{1-q^{m+3}}{\alpha-\alpha\,q})]\,i\,j \\ =&\frac{1}{\alpha-\alpha\,q}\,\{\,(\alpha^{n}\pm\alpha^{m})(1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j) \\
&-((\alpha\,q)^n\pm(\alpha\,q)^m\,)(1+(\alpha\,q)\,i+(\alpha\,q)^2\,j\\ &+(\alpha\,q)^3\,i\,j\,)\,\}. \end{array} \end{equation} The multiplication of a $q$-Fibonacci bicomplex quaternion by the real scalar $\lambda$ is defined as \begin{equation}\label{15} \begin{array}{rl} {\lambda}\,\mathbb{BF}_n(\alpha;q)&=\frac{\lambda\,\alpha^{n}}{\alpha-\alpha\,q}\,(1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j\,)\\ &-\frac{\lambda\,(\alpha\,q)^n}{\alpha-\alpha\,q}\,(\,1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j\,).\\ \end{array} \end{equation} \\ The scalar and the vector part of $\mathbb{BF}_n(\alpha;q)$, which is the $n$-th term of the $q$-Fibonacci bicomplex quaternion sequence, are denoted by \begin{equation}\label{17} {S}_{\mathbb{BF}_n(\alpha;q)}=\alpha^{n-1}\lbrack{n}\rbrack_{q},\,\, {V}_{\mathbb{BF}_n(\alpha;q)}=\alpha^{n}\lbrack{n+1}\rbrack_{q}\,i+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,j+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,i\,j. \end{equation} \\ Thus, the $q$-Fibonacci bicomplex quaternion $\mathbb{BF}_n(\alpha;q)$ is given by \\ $$\mathbb{BF}_n(\alpha;q)={S}_{\mathbb{BF}_n(\alpha;q)}+{V}_{\mathbb{BF}_n(\alpha;q)}.$$ \\ The multiplication of two $q$-Fibonacci bicomplex quaternions is defined by \begin{equation}\label{16} \begin{array}{rl} \mathbb{BF}_n(\alpha;q)\times\,\mathbb{BF}_m(\alpha;q)=&(\alpha^{n-1}\lbrack{n}\rbrack_{q}+\alpha^{n}\lbrack{n+1}\rbrack_{q}\,i+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,j\\ &+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,i\,j)\\ &\times\,(\alpha^{m-1}\lbrack{m}\rbrack_{q}+\alpha^{m}\lbrack{m+1}\rbrack_{q}\,i+\alpha^{m+1}\lbrack{m+2}\rbrack_{q}\,j\\ &+\alpha^{m+2}\lbrack{m+3}\rbrack_{q}\,i\,j) \\ \\ =&\frac{\alpha^{n+m}}{(\alpha-\alpha\,q)^2}\,\Big[\,\{(1-\alpha^2-\alpha^4+\alpha^6)\\ &+2\,i\,(\alpha-\alpha^5)+2\,j\,(\alpha^2-\alpha^4)+4\,i\,j\,(\alpha^3)\}\\ &-q^m\,\{(1-\alpha(\alpha\,q)-\alpha^2(\alpha\,q)^2+\alpha^3(\alpha\,q)^3)\\ &+i\,(\alpha+(\alpha\,q)-\alpha^2(\alpha\,q)^3-\alpha^3(\alpha\,q)^2)\\ &+j\,(\alpha^2+(\alpha\,q)^2-\alpha(\alpha\,q)^3-\alpha^3(\alpha\,q))\\ &+i\,j\,(\alpha^3+(\alpha\,q)^3+\alpha(\alpha\,q)^2+\alpha^2(\alpha\,q))\}\\ &-q^n\,\{(1-\alpha(\alpha\,q)-\alpha^2(\alpha\,q)^2+\alpha^3(\alpha\,q)^3)\\ &+i\,(\alpha+(\alpha\,q)-\alpha^2(\alpha\,q)^3-\alpha^3(\alpha\,q)^2)\\ &+j\,(\alpha^2+(\alpha\,q)^2-\alpha(\alpha\,q)^3-\alpha^3(\alpha\,q))\\ &+i\,j\,(\alpha^3+(\alpha\,q)^3+\alpha^2(\alpha\,q)+\alpha(\alpha\,q)^2)\}\\ &+q^{n+m}\,\{(1-(\alpha\,q)^2-(\alpha\,q)^4+(\alpha\,q)^6)\\ &+2\,i\,((\alpha\,q)-(\alpha\,q)^5)+2\,j\,((\alpha\,q)^2-(\alpha\,q)^4)\\ &+4\,i\,j\,((\alpha\,q)^3\,)\}\,\Big]\\ =&\mathbb{BF}_m(\alpha;q)\times\,\mathbb{BF}_n(\alpha;q) \end{array} \end{equation} Here, quaternion multiplication is carried out using the bicomplex quaternionic units (Table 1), and this product is commutative.
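The commutativity claimed in (\ref{16}) is easy to check numerically. The sketch below implements the bicomplex product of Table 1 and verifies both the commutativity of the product and the closed form appearing in (\ref{12}) for sample parameters (the helper names and the chosen values of $\alpha$, $q$ are our own, for illustration only):
\begin{verbatim}
import numpy as np

def bc_mul(a, b):
    # Bicomplex product in the basis (1, i, j, ij) of Table 1:
    # i^2 = j^2 = -1, (ij)^2 = +1, and all units commute.
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([a0*b0 - a1*b1 - a2*b2 + a3*b3,
                     a0*b1 + a1*b0 - a2*b3 - a3*b2,
                     a0*b2 + a2*b0 - a1*b3 - a3*b1,
                     a0*b3 + a3*b0 + a1*b2 + a2*b1])

def q_int(n, q):            # q-integer [n]_q, q != 1
    return (1 - q**n) / (1 - q)

def BF(n, alpha, q):        # coefficients of (12) in the basis (1, i, j, ij)
    return np.array([alpha**(n - 1 + k) * q_int(n + k, q) for k in range(4)])

alpha, q, n, m = 1.3, 0.4, 5, 3
assert np.allclose(bc_mul(BF(n, alpha, q), BF(m, alpha, q)),
                   bc_mul(BF(m, alpha, q), BF(n, alpha, q)))  # commutativity

gamma = np.array([1, alpha, alpha**2, alpha**3])              # gamma-hat
delta = np.array([1, alpha*q, (alpha*q)**2, (alpha*q)**3])    # delta-hat
assert np.allclose(BF(n, alpha, q),
                   (alpha**n*gamma - (alpha*q)**n*delta) / (alpha - alpha*q))
print("commutativity and closed form check out")
\end{verbatim}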
\\ \begin{table}[h] \centering \caption{Multiplication scheme of bicomplex units} \begin{tabular}{c rrrr} \hline $\times$& $1$& $i$& $j$& $i\,j$\\ \hline $1$& $1$& $i$& $j$& $i\,j$\\ $i$& $i$& $-1$& $i\,j$& $-j$\\ $j$& $j$& $i\,j$& $-1$& $-i$\\ $i\,j$& $i\,j$& $-j$& $-i$& $1$\\ \hline \end{tabular} \end{table} \\ Also, the $q$-Fibonacci bicomplex quaternion product may be obtained as follows \\ \begin{equation}\label{18} \begin{array}{lr} \mathbb{BF}_n(\alpha;q)\times\,\mathbb{BF}_m(\alpha;q)=\\ \\ \scriptsize{ \begin{matrix} &\left(\begin{array}{cccc} \alpha^{n-1}\lbrack{n}\rbrack_{q} & -\alpha^{n}\lbrack{n+1}\rbrack_{q} & -\alpha^{n+1}\lbrack{n+2}\rbrack_{q} & \alpha^{n+2}\lbrack{n+3}\rbrack_{q} \\ \alpha^{n}\lbrack{n+1}\rbrack_{q} & \alpha^{n-1}\lbrack{n}\rbrack_{q} & -\alpha^{n+2}\lbrack{n+3}\rbrack_{q} & -\alpha^{n+1}\lbrack{n+2}\rbrack_{q} \\ \alpha^{n+1}\lbrack{n+2}\rbrack_{q} & -\alpha^{n+2}\lbrack{n+3}\rbrack_{q} & \alpha^{n-1}\lbrack{n}\rbrack_{q} & -\alpha^{n}\lbrack{n+1}\rbrack_{q} \\ \alpha^{n+2}\lbrack{n+3}\rbrack_{q} & \alpha^{n+1}\lbrack{n+2}\rbrack_{q} & \alpha^{n}\lbrack{n+1}\rbrack_{q} & \alpha^{n-1}\lbrack{n}\rbrack_{q} \end{array} \right) \end{matrix}} . \scriptsize{ \begin{matrix} \left(\begin{array}{c} \alpha^{m-1}\lbrack{m}\rbrack_{q} \\ \alpha^{m}\lbrack{m+1}\rbrack_{q} \\ \alpha^{m+1}\lbrack{m+2}\rbrack_{q} \\ \alpha^{m+2}\lbrack{m+3}\rbrack_{q} \end{array} \right) \end{matrix}} \end{array} \end{equation} \medskip Three kinds of conjugation can be defined for bicomplex numbers \cite{rochon2004algebraic, nurkan2015note}. Therefore, the conjugation of the $q$-Fibonacci bicomplex quaternion is defined in three different ways as follows \begin{equation} \label{19} \begin{aligned} (\mathbb{BF}_n(\alpha;q))^{*_1}=&(\alpha^{n-1}\lbrack{n}\rbrack_{q}-i\,\alpha^{n}\lbrack{n+1}\rbrack_{q}+j\,\alpha^{n+1}\lbrack{n+2}\rbrack_{q}-i\,j\,\alpha^{n+2}\lbrack{n+3}\rbrack_{q}), \\ \end{aligned} \end{equation} \begin{equation} \label{20} \begin{aligned} (\mathbb{BF}_n(\alpha;q))^{*_2}=&(\alpha^{n-1}\lbrack{n}\rbrack_{q}+i\,\alpha^{n}\lbrack{n+1}\rbrack_{q}-j\,\alpha^{n+1}\lbrack{n+2}\rbrack_{q}-i\,j\,\alpha^{n+2}\lbrack{n+3}\rbrack_{q}), \\ \end{aligned} \end{equation} \begin{equation} \label{21} \begin{aligned} (\mathbb{BF}_n(\alpha;q))^{*_3}=&(\alpha^{n-1}\lbrack{n}\rbrack_{q}-i\,\alpha^{n}\lbrack{n+1}\rbrack_{q}-j\,\alpha^{n+1}\lbrack{n+2}\rbrack_{q}+i\,j\,\alpha^{n+2}\lbrack{n+3}\rbrack_{q}). \end{aligned} \end{equation} \\ Therefore, the norm of the $q$-Fibonacci bicomplex quaternion ${\,\mathbb{BF}_n(\alpha;q)}$ is defined in three different ways as follows \begin{equation}\label{22} \begin{array}{rl} {N}_{*_1}(\mathbb{BF}_n(\alpha;q))=&\|(\mathbb{BF}_n(\alpha;q))\times\,(\mathbb{BF}_n(\alpha;q))^{*_1}\|^2, \end{array} \end{equation} \begin{equation}\label{23} \begin{array}{rl} {N}_{*_2}(\mathbb{BF}_n(\alpha;q))=&\|(\mathbb{BF}_n(\alpha;q))\times\,(\mathbb{BF}_n(\alpha;q))^{*_2}\|^2, \end{array} \end{equation} \begin{equation}\label{24} \begin{array}{rl} {N}_{*_3}(\mathbb{BF}_n(\alpha;q))=&\|(\mathbb{BF}_n(\alpha;q))\times\,(\mathbb{BF}_n(\alpha;q))^{*_3}\|^2. \end{array} \end{equation} \\ \begin{thm} \textbf{(Binet's Formula)}. Let ${\mathbb{BF}_n(\alpha;q)}$ and ${\mathbb{BL}_n(\alpha;q)}$ be the $q$-Fibonacci bicomplex quaternion and the $q$-Lucas bicomplex quaternion.
For $n\ge 1$, Binet's formulas for these quaternions are, respectively, as follows: \begin{equation}\label{25} \mathbb{BF}_n(\alpha;q)=\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{\alpha-\alpha\,q}, \end{equation} and \begin{equation}\label{26} \mathbb{BL}_n(\alpha;q)={\alpha^n\,\widehat{\gamma}+(\alpha\,q)^n\,\widehat{\delta}} \end{equation} where \begin{equation*} \begin{array}{l} \widehat{\gamma }=1+{\alpha}\,i+{\alpha}^2\,j+{\alpha}^3\,i\,j,\,\,\,\,\, \alpha=\frac{1+\sqrt{5}}{2} \end{array} \end{equation*} and \begin{equation*} \begin{array}{l} \widehat{\delta }=1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j,\,\,\,\,\, \alpha\,q=\frac{-1}{\alpha}. \end{array} \end{equation*} \end{thm} \begin{proof} (\ref{25}): Using (\ref{8}) and (\ref{12}), we find that \begin{equation*} \begin{array}{rl} \mathbb{BF}_n(\alpha;q)=&\alpha^{n-1}\lbrack{n}\rbrack_{q}+\alpha^{n}\lbrack{n+1}\rbrack_{q}\,i+\alpha^{n+1}\lbrack{n+2}\rbrack_{q}\,j+\alpha^{n+2}\lbrack{n+3}\rbrack_{q}\,i\,j\, \\ \\ =&\alpha^n\,\frac{1-q^n}{\alpha -\alpha\,q}+\alpha^{n+1}\,\frac{1-q^{n+1}}{\alpha -\alpha\,q}\,i+\alpha^{n+2}\,\frac{1-q^{n+2}}{\alpha -\alpha\,q}\,j+\alpha^{n+3}\,\frac{1-q^{n+3}}{\alpha -\alpha\,q}\,i\,j \\ \\ =&\frac{\alpha^{n}\,[\,1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j\,]-(\alpha\,q)^{n}\,[\,1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j\,]}{\alpha -(\alpha\,q)} \\ \\ =&\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{\alpha-\alpha\,q} \end{array} \end{equation*} \\ In a similar way, equality (\ref{26}) can be derived as follows \begin{equation*} \begin{array}{rl} \mathbb{BL}_n(\alpha;q) =&\alpha^{n}\frac{{\lbrack{2\,n}\rbrack}_{q}}{{\lbrack{n}\rbrack}_{q}}+\alpha^{n+1}\,\frac{{\lbrack{2\,n+2}\rbrack}_{q}}{{\lbrack{n+1}\rbrack}_{q}}\,i+\alpha^{n+2}\,\frac{{\lbrack{2\,n+4}\rbrack}_{q}}{{\lbrack{n+2}\rbrack}_{q}}\,j +\alpha^{n+3}\frac{{\lbrack{2\,n+6}\rbrack}_{q}}{{\lbrack{n+3}\rbrack}_{q}}\,i\,j\\ =&\alpha^{2n}\,(\frac{1-q^{2n}}{\alpha^n-(\alpha\,q)^n})+\alpha^{2n+2}\,(\frac{1-q^{2n+2}}{\alpha^{n+1}-(\alpha\,q)^{n+1}})\,i \\ \\ &+\alpha^{2n+4}\,(\frac{1-q^{2n+4}}{\alpha^{n+2}-(\alpha\,q)^{n+2}})\,j+\alpha^{2n+6}\,(\frac{1-q^{2n+6}}{\alpha^{n+3}-(\alpha\,q)^{n+3}})\,i\,j \\ =&\frac{\alpha^{2n}}{\alpha^n}\,(1+\alpha\,i+\alpha^2\,j+\alpha^3\,i\,j\,)\\ \\ &+\frac{(\alpha\,q)^{2n}}{(\alpha\,q)^n}\,(1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j\,)\\ \\ =&\alpha^n\,\widehat{\gamma}+(\alpha\,q)^n\,\widehat{\delta}. \end{array} \end{equation*} where \, $\widehat{\gamma }=1+{\alpha}\,i+{\alpha}^2\,j+{\alpha}^3\,i\,j$, \, \, $\widehat{\delta }=1+(\alpha\,q)\,i+(\alpha\,q)^2\,j+(\alpha\,q)^3\,i\,j$ and $\widehat{\gamma }\,\widehat{\delta}=\widehat{\delta}\,\widehat{\gamma}$.\\ \end{proof} \medskip \begin{thm} \textbf{(Exponential generating function)} \\ Let $\mathbb{BF}_n(\alpha;q)$ be the $q$-Fibonacci bicomplex quaternion.
Then the exponential generating function of these quaternions is as follows: \begin{equation}\label{27} \begin{aligned} g_{\mathbb{BF}_n(\alpha;q)}\,(\frac{t^n}{n!})=&\sum\limits_{n=0}^{\infty}\,{\mathbb{BF}_n(\alpha;q)}\,\frac{t^n}{n!}=\frac{\widehat{\gamma}\,e^{\alpha\,t}\,-\widehat{\delta}\,e^{(\alpha\,q)\,t}}{\alpha-\alpha\,q}\, \end{aligned} \end{equation} \end{thm} \begin{proof} Using the definition of the exponential generating function, we obtain \begin{equation}\label{28} \begin{array}{rl} \sum\limits_{n=0}^{\infty}\,{\mathbb{BF}_n(\alpha;q)}\,\frac{t^{n}}{n!}&=\sum\limits_{n=0}^{\infty}\,(\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{\alpha-\alpha\,q})\,\frac{t^n}{n!}\\ &=\frac{\widehat{\gamma}}{\alpha-\alpha\,q}\,\sum\limits_{n=0}^{\infty}\,\frac{(\alpha\,t)^{n}}{n!}-\frac{\widehat{\delta}}{\alpha-\alpha\,q}\,\sum\limits_{n=0}^{\infty}\,\frac{(\alpha\,q\,t)^{n}}{n!}\\ &=\frac{\widehat{\gamma}\,e^{\alpha\,t}\,-\widehat{\delta}\,e^{(\alpha\,q)\,t}}{\alpha-\alpha\,q}. \end{array} \end{equation} Thus, the proof is completed. \end{proof} \medskip \begin{thm} \textbf{(Honsberger identity)} \\ For \,$n,m\ge 0$ the Honsberger identity for the $q$-Fibonacci bicomplex quaternions ${\mathbb{BF}_n(\alpha;q)}$ and ${\mathbb{BF}_m(\alpha;q)}$ \, is given by \begin{equation}\label{29} \begin{array}{lr} \mathbb{BF}_n(\alpha;q)\,\mathbb{BF}_m(\alpha;q)+\mathbb{BF}_{n+1}(\alpha;q)\,\mathbb{BF}_{m+1}(\alpha;q)\\ \\ =\frac{\alpha^{n+m}}{(\alpha-\alpha\,q)^2}\,\{\,(1+\alpha^2)\,{\widehat{\gamma}}^2-\widehat{\gamma}\,\widehat{\delta}\,(1+\alpha(\alpha\,q))\,(q^n+q^m)+(1+(\alpha\,q)^2)\,q^{n+m}\,{\widehat{\delta}}^2\,\}. \end{array} \end{equation} \end{thm} \begin{proof} (\ref{29}): By using (\ref{12}) and (\ref{25}) we get, \begin{equation*} \begin{array}{lr} \mathbb{BF}_n(\alpha;q)\,\mathbb{BF}_m(\alpha;q)+\mathbb{BF}_{n+1}(\alpha;q)\,\mathbb{BF}_{m+1}(\alpha;q)\\ \\ =(\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^m\,\widehat{\gamma}-(\alpha\,q)^m\,\widehat{\delta}}{\alpha-\alpha\,q})+(\frac{\alpha^{n+1}\,\widehat{\gamma}-(\alpha\,q)^{n+1}\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^{m+1}\,\widehat{\gamma}-(\alpha\,q)^{m+1}\,\widehat{\delta}}{\alpha-\alpha\,q})\\ \\ =\frac{\alpha^{n+m}}{(\alpha-\alpha\,q)^2}\,\{\,(\widehat{\gamma}-q^n\,\widehat{\delta})(\widehat{\gamma}-q^m\,\widehat{\delta})\}+\frac{\alpha^{n+m+2}}{(\alpha-\alpha\,q)^2}\,\{\,(\widehat{\gamma}-q^{n+1}\,\widehat{\delta})(\widehat{\gamma}-q^{m+1}\,\widehat{\delta})\}\\ \\ =\frac{\alpha^{n+m}}{(\alpha-\alpha\,q)^2}\,\{\,(1+\alpha^2)\,{\widehat{\gamma}}^2-\widehat{\gamma}\,\widehat{\delta}\,(1+\alpha(\alpha\,q))\,(q^n+q^m)+(1+(\alpha\,q)^2)\,q^{n+m}\,{\widehat{\delta}}^2\}. \end{array} \end{equation*} where $\widehat{\gamma }\,\widehat{\delta}=\widehat{\delta}\,\widehat{\gamma}$. \end{proof} \medskip \begin{thm} \textbf{(d'Ocagne's identity)} \\ For $n,m\ge 0$ the d'Ocagne's identity for the $q$-Fibonacci bicomplex quaternions $\mathbb{BF}_n(\alpha;q)$ and $\mathbb{BF}_m(\alpha;q)$ is given by \begin{equation}\label{30} \begin{array}{lr} \mathbb{BF}_m(\alpha;q)\,\mathbb{BF}_{n+1}(\alpha;q)-\mathbb{BF}_{m+1}(\alpha;q)\,\mathbb{BF}_n(\alpha;q)=&\frac{\alpha^{n+m-1}(q^n-q^m)\,\widehat{\gamma}\,\widehat{\delta}}{(1-q)}.
\end{array} \end{equation} \end{thm} \begin{proof} (\ref{30}): By using (\ref{12}) and (\ref{25}) we get, \begin{equation*} \begin{array}{lr} \mathbb{BF}_m(\alpha;q)\,\mathbb{BF}_{n+1}(\alpha;q)-\mathbb{BF}_{m+1}(\alpha;q)\,\mathbb{BF}_n(\alpha;q)\\ \\ =(\frac{\alpha^m\,\widehat{\gamma}-(\alpha\,q)^m\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^{n+1}\,\widehat{\gamma}-(\alpha\,q)^{n+1}\,\widehat{\delta}}{\alpha-\alpha\,q})-(\frac{\alpha^{m+1}\,\widehat{\gamma}-(\alpha\,q)^{m+1}\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^{n}\,\widehat{\gamma}-(\alpha\,q)^{n}\,\widehat{\delta}}{\alpha-\alpha\,q})\\ \\ =\frac{\alpha^{n+m+1}}{(\alpha-\alpha\,q)^2}\,\{(1-q)\,(q^n-q^m)\,\widehat{\gamma}\,\widehat{\delta}\,\}\\ \\ =\frac{\alpha^{n+m-1}(q^n-q^m)\,\widehat{\gamma}\,\widehat{\delta}}{(1-q)}. \end{array} \end{equation*} Here, $\widehat{\gamma}\,\widehat{\delta}=\widehat{\delta}\,\widehat{\gamma}$ is used. \end{proof} \medskip \begin{thm} \textbf{(Cassini Identity)} \\ Let $\mathbb{BF}_n(\alpha;q)$ be the $q$-Fibonacci bicomplex quaternion. For $n\ge 1$, Cassini's identity for $\mathbb{BF}_n(\alpha;q)$ is as follows: \begin{equation}\label{31} \mathbb{BF}_{n+1}(\alpha;q)\,\mathbb{BF}_{n-1}(\alpha;q)-\mathbb{BF}_n(\alpha;q)^2=\frac{\alpha^{2n-2}\,q^n\,(1-q^{-1})\,\widehat{\gamma}\,\widehat{\delta}}{(1-q)} . \end{equation} \end{thm} \begin{proof} (\ref{31}): By using (\ref{12}) and (\ref{25}) we get \begin{equation*} \begin{array}{rl} \mathbb{BF}_{n+1}(\alpha;q)\,\mathbb{BF}_{n-1}(\alpha;q)-\mathbb{BF}_n(\alpha;q)^2=&(\frac{\alpha^{n+1}\,\widehat{\gamma}-(\alpha\,q)^{n+1}\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^{n-1}\,\widehat{\gamma}-(\alpha\,q)^{n-1}\,\widehat{\delta}}{\alpha-\alpha\,q})\\ &-(\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{(\alpha-\alpha\,q)})^2 \\ =&\frac{\alpha^{2n}\,q^n\,(1-q)(1-q^{-1})\,\widehat{\gamma}\,\widehat{\delta}}{(\alpha-\alpha\,q)^2} \\ =&\frac{\alpha^{2n-2}\,q^n\,(1-q^{-1})\,\widehat{\gamma}\,\widehat{\delta}}{(1-q)}\, . \end{array} \end{equation*} Here, $\widehat{\gamma}\,\widehat{\delta}=\widehat{\delta}\,\widehat{\gamma}$ is used. \end{proof} \medskip \begin{thm} \textbf{(Catalan's Identity)} \\ Let $\mathbb{BF}_n(\alpha;q)$ be the $q$-Fibonacci bicomplex quaternion. For $n\ge 1$, Catalan's identity for $\mathbb{BF}_n(\alpha;q)$ is as follows: \begin{equation}\label{32} \mathbb{BF}_{n+r}(\alpha;q)\,\mathbb{BF}_{n-r}(\alpha;q)-\mathbb{BF}_n(\alpha;q)^2=\frac{\alpha^{2n-2}\,q^n\,(1-q^r)(1-q^{-r})\,\widehat{\gamma}\,\widehat{\delta}}{(1-q)^2}\, . 
\end{equation} \end{thm} \begin{proof} (\ref{32}): By using (\ref{12}) and (\ref{25}) we get \begin{equation*} \begin{array}{rl} \mathbb{BF}_{n+r}(\alpha;q)\,\mathbb{BF}_{n-r}(\alpha;q)-\mathbb{BF}_n(\alpha;q)^2=&(\frac{\alpha^{n+r}\,\widehat{\gamma}-(\alpha\,q)^{n+r}\,\widehat{\delta}}{\alpha-\alpha\,q})\,(\frac{\alpha^{n-r}\,\widehat{\gamma}-(\alpha\,q)^{n-r}\,\widehat{\delta}}{\alpha-\alpha\,q})\\ &-\Big(\frac{\alpha^n\,\widehat{\gamma}-(\alpha\,q)^n\,\widehat{\delta}}{\alpha-\alpha\,q}\Big)^2 \\ =&\frac{-\alpha^{2n}\,q^{n-r}\,\widehat{\gamma}\,\widehat{\delta}-\alpha^{2n}\,q^{n+r}\,\widehat{\gamma}\,\widehat{\delta}+2\,\alpha^{2n}\,q^{n}\,\widehat{\gamma}\,\widehat{\delta}}{(\alpha-\alpha\,q)^2} \\ =&-\frac{\alpha^{2n}\,q^{n}\,\widehat{\gamma}\,\widehat{\delta}\,[\,(q^{-r}-1)+(q^r-1)\,]}{(\alpha-\alpha\,q)^2 } \\ =&\frac{\alpha^{2n-2}\,q^{n}\,\widehat{\gamma}\,\widehat{\delta}\,(1-q^{-r})(1-q^r)}{(1-q)^2} \end{array} \end{equation*} Here, $\widehat{\gamma}\,\widehat{\delta}=\widehat{\delta}\,\widehat{\gamma}$ is used. \end{proof} \section{Conclusion} In this paper, algebraic and analytic properties of the $q$-Fibonacci bicomplex quaternions and the $q$-Lucas bicomplex quaternions are investigated.\\ \bibliographystyle{elsarticle-num}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction, Problem Statement and Main Results} \subsection{Wyner \cite{wyner1978} and Wyner and Ziv \cite{wyner-ziv1976} Lossy Compression Problem and Generalizations} Wyner and Ziv \cite{wyner-ziv1976} derived an operational information definition for the lossy compression problem of Fig.~\ref{fg:blockdiagram} with respect to a single-letter fidelity of reconstruction, when the joint sequence of random variables (RVs) $\{(X_{t}, Y_{t}): t=1,2, \dots\}$ takes values in sets of finite cardinality, $\{\cal{X},\cal{Y}\}$, and is generated independently according to the joint probability distribution function ${\bf P}_{X, Y}$. Wyner \cite{wyner1978} generalized \cite{wyner-ziv1976} to RVs $\{(X_{t}, Y_{t}): t=1,2, \dots\}$ that take values in abstract alphabet spaces $\{\cal{X},\cal{Y}\}$, and hence include continuous-valued RVs. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{blockdiagram1} \caption{The Wyner and Ziv \cite{wyner-ziv1976} block diagram of lossy compression. If switch A is closed then the side information is available at both the encoder and the decoder; if switch A is open the side information is only available at the decoder.} \label{fg:blockdiagram} \end{figure} {\it (A) Switch A Closed.} When the side information $\{Y_{t}: t=1,2, \dots\}$ is available, noncausally, at both the encoder and the decoder, Wyner \cite{wyner1978} (see also Berger \cite{berger:1971}) characterized the infimum of all achievable operational rates (denoted by $\overline{R}_1(\Delta_X)$ in \cite{wyner1978}), subject to a single-letter fidelity with average distortion less than or equal to $\Delta_X \in [0,\infty)$, by the single-letter operational information theoretic conditional rate distortion function (RDF): \begin{align} {R}_{X|Y}(\Delta_X) \sr{\triangle}{=}& \inf_{{\cal M}_0(\Delta_X)} I(X; \widehat{X}|Y), \hspace{.1in} \Delta_X \in [0,\infty) \label{eq:OP1ck1}\\ =& \inf_{{\bf P}_{\widehat{X}|X,Y}:{\bf E}\big\{d_X(X, \widehat{X})\big\}\leq \Delta_X} I(X; \widehat{X}|Y) \label{eq:OP1ck1_new} \end{align} where ${\cal M}_0(\Delta_X)$ is specified by the set \begin{align} {\cal M}_0(\Delta_X)\sr{\triangle}{=} & \Big\{ \widehat{X}: \Omega \rightarrow \widehat{\cal X} : \hspace{.1in} {\bf E}\Big\{d_X(X, \widehat{X})\Big\}\leq \Delta_X \Big\} \label{eq:OP2ck1} \end{align} and where $\widehat{X}$ is the reproduction of $X$, $I(X; \widehat{X}|Y)$ is the conditional mutual information between $X$ and $\widehat{X}$ conditioned on $Y$, and $d_X(\cdot, \cdot)$ is the fidelity criterion between $x$ and $\widehat{x}$. The infimum in (\ref{eq:OP1ck1}) is over all joint distributions ${\bf P}_{X,Y, \widehat{X}}$ such that the marginal distribution ${\bf P}_{X,Y}$ is the joint distribution of the source $(X, Y)$.
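For orientation, in the scalar jointly Gaussian case, $X \in N(0, \sigma_X^2)$, $Y \in N(0, \sigma_Y^2)$, with correlation coefficient $\rho$, one has $Q_{X|Y}=\sigma_X^2(1-\rho^2)$, and the infimum in (\ref{eq:OP1ck1_new}) evaluates to the familiar expression (recalled here only as an illustration of the definition)
\begin{align*}
R_{X|Y}(\Delta_X)=\max\Big\{0,\; \frac{1}{2}\log \frac{\sigma_X^2\,(1-\rho^2)}{\Delta_X}\Big\}.
\end{align*}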
{\it (B) Switch A Open.} When the side information $\{Y_{t}: t=1,2, \dots\}$ is available, noncausally, only at the decoder, Wyner \cite{wyner1978} characterized the infimum of all achievable operational rates (denoted by $R^*(\Delta_X)$ in \cite{wyner1978}), subject to a single-letter fidelity with average distortion less than or equal to $\Delta_X$, by the single-letter operational information theoretic RDF, as a function of an auxiliary RV $Z: \Omega \rightarrow {\cal Z}$: \begin{align} \overline{R}(\Delta_X) \sr{\triangle}{=}& \inf_{ {\cal M}(\Delta_X)} \Big\{I(X;Z)-I(Y;Z)\Big\}, \; \Delta_X \in [0,\infty) \label{rdf_d1_a}\\ =&\inf_{{\cal M}(\Delta_X)} I(X; Z|Y) \label{rdf_d1} \end{align} where ${\cal M}(\Delta_X)$ is specified by the set of auxiliary RVs $Z$, \begin{align} {\cal M}(\Delta_X)\sr{\triangle}{=}& \Big\{ Z: \Omega \rightarrow {\cal Z} : \hspace{.1in} {\bf P}_{Z|X,Y}={\bf P}_{Z|X}, \hspace{.1in} \exists \: \mbox{measurable function $f: {\cal Y}\times {\cal Z} \rightarrow \widehat{\cal X}$}, \; \widehat{X}=f(Y,Z), \nonumber \\ &{\bf E}\big\{d_X(X, \widehat{X})\big\}\leq \Delta_X \Big\}. \label{rdf_d2} \end{align} Wyner's realization of the joint measure ${\bf P}_{X, Y, Z, \widehat{X}}$ induced by the RVs $(X, Y, Z, \widehat{X})$, is illustrated in Fig.~\ref{fg:onlydecoder}, where $Z$ is the output of the ``test channel'', ${\bf P}_{Z|X}$. {\it Special Case of Switch A Open with Causal Side Information.} When the side information $\{Y_{t}: t=1,2, \dots\}$ is causally available, only at the decoder, it follows from \cite{wyner1978} that the infimum of all achievable operational rates, denoted by $R^{*,CSI}(\Delta_X)$, is characterized by a degenerate version of $\overline{R}(\Delta_X)$, given by \begin{align} \overline{R}^{CSI}(\Delta_X) \sr{\triangle}{=}& \inf_{ {\cal M}(\Delta_X)}I(X;Z), \; \Delta_X \in [0,\infty). \label{rdf_d1_a_csi} \end{align} Throughout \cite{wyner1978} the following assumption is imposed. \begin{assumption} \label{ass_1} $ I(X;Y)<\infty $ (see \cite{wyner1978}). \end{assumption} For scalar-valued RVs $(X, Y, \widehat{X},Z)$ with square-error distortion, Wyner \cite{wyner1978} constructed the optimal realizations $\widehat{X}$ and $\widehat{X}=f(Y, Z)$ that achieve the characterizations of the RDFs $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$, respectively, and showed $\overline{R}(\Delta_X)= R_{X|Y}(\Delta_X)$. The main objective of this paper is to generalize Wyner's \cite{wyner1978} optimal realizations $\widehat{X}$ and $\widehat{X}=f(Y, Z)$ that achieve the RDFs $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$, to multivariate-valued RVs $(X, Y, \widehat{X},Z)$, and to show $\overline{R}(\Delta_X)= R_{X|Y}(\Delta_X)$. Our main contribution lies in the derivation of structural properties of the optimal test channels that achieve the RDFs, and their realizations. Further, these structural properties are indispensable in other problems of rate distortion theory.
In particular, it is verified (see Remark~\ref{comment}) that the optimal realization of the test channel that achieves the RDF of the remote source coding problem\footnote{The RDF of the remote sensor problem is a generalization of Wyner's RDF $\overline{R}(\Delta_X)$, with the encoder observing a noisy version of the RVs generated by the source.}, given in \cite[Theorem~4 and Abstract]{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014}, when specialized to scalar RVs, and Wyner's RDF $\overline{R}(\Delta_X)$, does not generate Wyner's value of the RDF $\overline{R}(\Delta_X)$ and the optimal test channel realization that achieves it, contrary to the current belief, i.e., \cite{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014}. The remote sensor problem\footnote{Remark~\ref{comment} implies that, for multivariate-valued Gaussian RVs, the characterization of the RDF of the remote sensor problem and the test channel that achieves it are currently not known.} was introduced by Draper and Wornell in \cite{draper-wornell2004}. {\it (C) Marginal RDF.} If there is no side information $\{Y_t: t=1, \ldots\}$, or the side information is independent of the source $\{X_t: t=1, \ldots\}$, the RDFs ${R}_{X|Y}(\Delta_X), \overline{R}(\Delta_X)$ degenerate to the marginal RDF $R_{X}(\Delta_{X})$, defined by \begin{align} {R}_{X}(\Delta_X) \sr{\triangle}{=} \inf_{{\bf P}_{\widehat{X}|X}:{\bf E}\big\{d_X(X, \widehat{X})\big\}\leq \Delta_X } I(X; \widehat{X}), \hspace{.1in} \Delta_X \in [0,\infty). \label{eq:cl} \end{align} {\it (D) Gray's Lower Bounds.} Related to the RDF $R_{X|Y}(\Delta_X)$ is Gray's \cite[Theorem~3.1]{gray1973} lower bound, \begin{eqnarray} R_{X|Y}(\Delta_X)\geq R_{X}(\Delta_X)-I(X;Y). \label{gray_lb_1} \end{eqnarray} {\it (E) The Draper and Wornell \cite{draper-wornell2004} Distributed Remote Source Coding Problem.} Draper and Wornell \cite{draper-wornell2004} generalized the RDF $\overline{R}(\Delta_X)$, when the source to be estimated at the decoder is $S: \Omega \rightarrow {\cal S}$, and it is not directly observed at the encoder. Rather, the encoder observes a RV $X:\Omega \rightarrow {\cal X} $ (which is correlated with $S$), while the decoder observes another RV, as side information, $Y:\Omega \rightarrow {\cal Y}$, which provides information on $(S,X)$. The aim is to reconstruct $S$ at the decoder by $\widehat{S}: \Omega \rightarrow \widehat{\cal S} $, subject to an average distortion ${\bf E}\{d_S(S,\widehat{S})\}\leq \Delta_S$, by a function $\widehat{S}=f(Y,Z)$. \\ The RDF for this problem, called the distributed remote source coding problem, is defined by \cite{draper-wornell2004} \begin{align} \overline{R}^{PO}(\Delta_S) =&\inf_{{\cal M}^{PO}(\Delta_S)} I(X; Z|Y) \label{rdf_po_1} \end{align} where ${\cal M}^{PO}(\Delta_S)$ is specified by the set of auxiliary RVs $Z$, \begin{align} {\cal M}^{PO}(\Delta_S)\sr{\triangle}{=}& \Big\{ Z: \Omega \rightarrow {\cal Z} : \hspace{.1in} {\bf P}_{Z|S, X,Y}={\bf P}_{Z|X}, \hspace{.1in} \exists \: \mbox{measurable function $f^{PO}: {\cal Y}\times {\cal Z} \rightarrow \widehat{\cal S}$}, \nonumber \\ &\widehat{S}=f^{PO}(Y,Z), \hspace{.1in} {\bf E}\big\{d_S(S, \widehat{S})\big\}\leq \Delta_S \Big\}.
\label{rdf_po_2} \end{align} It should be mentioned that if $S=X-$a.s (almost surely) then $\overline{R}^{PO}(\Delta_S)$ degenerates\footnote{This implies the optimal test channel that achieves the characterization of the RDF $\overline{R}^{PO}(\Delta_S)$ should degenerate to the optimal test channel that achieves the characterization of the RDF $\overline{R}(\Delta_X)$.} to $\overline{R}(\Delta_X)$. \\ For scalar-valued jointly Gaussian RVs $(S, X, Y, Z, \widehat{X})$ with square-error distortion, Draper and Wornell \cite[eqn(3) and Appendix~A]{draper-wornell2004} derived the characterization of the RDF $\overline{R}^{PO}(\Delta_S)$, and constructed the optimal realization $\widehat{S}=f^{PO}(Y,Z)$ that achieves this characterization. In \cite{tian-chen2009} the authors investigated the RDF $\overline{R}^{PO}(\Delta_S)$ for the multivariate jointly Gaussian RVs $(S, X, Y, Z, \widehat{X})$, with square-error distortion, and derived a characterization for the RDF $\overline{R}^{PO}(\Delta_S)$ in \cite[Theorem~4]{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014} (see \cite[eqn(26)]{Zahedi-Ostegraard-2014}). However, as is apparent in Remark~\ref{comment}, when $S=X-$almost surely, and hence $\overline{R}^{PO}(\Delta_S)=\overline{R}(\Delta_X)$, the optimal test channel realizations that are used to derive \cite[Theorem~4]{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014}, when substituted into the RDF $\overline{R}(\Delta_X)$, i.e., (\ref{rdf_d1_a}), do not produce Wyner's \cite{wyner1978} value of the RDF and test channel realization (and also do not produce the known RDF and test channel realization of memoryless sources). This observation is sufficient to raise concerns regarding the validity of the water-filling solution given in \cite[Theorem~4]{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014}. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{onlydecoder} \caption{Test channel when side information is only available to the decoder} \label{fg:onlydecoder} \end{figure} \subsection{Problem Statement and Main Contributions} \par In this paper we consider a tuple of jointly independent and identically distributed multivariate Gaussian random variables (RVs) $(X^n, Y^n)= \{(X_{t}, Y_{t}): t=1,2, \ldots,n\}$, with respect to the square-error fidelity, as defined below.
\begin{align} &X_{t} : \Omega \rightarrow {\mathbb R}^{n_x}= {\cal X}, \; Y_{t} : \Omega \rightarrow {\mathbb R}^{n_y}={\cal Y},\; t=1,2, \ldots, n,\label{prob_1} \\ & X_t \in N(0, Q_X), \hspace{.2in} Y_t \in N(0, Q_Y), \label{prob_2}\\ &Q_{(X_t, Y_t)} = {\mathbb E} \Big\{ \left[ \begin{array}{c} X_t \\ Y_t \end{array} \right] \left[ \begin{array}{c} X_t \\ Y_t \end{array} \right]^T \Big\}= \left[ \begin{array}{cc} Q_X & Q_{X,Y} \\ Q_{X,Y}^T & Q_Y \end{array} \right], \\ &{\bf P}_{X_t, Y_t}={\bf P}_{X,Y} \hspace{.1in} \mbox{multivariate Gaussian distribution}, \label{prob_7}\\ & \widehat{X}_t: \Omega \rightarrow {\mathbb R}^{n_x}= {\cal X}, \hspace{.1in} t = 1,2 \ldots, n,\label{prob_8} \\ & D_{X} (x^n, \widehat{x}^n)= \frac{1}{n} \sum_{t=1}^n ||x_{t}-\widehat{x}_{t}||_{{\mathbb R}^{n_x}}^2\label{prob_4} \end{align} where $n_x, n_y$ are arbitrary positive integers, $I_{n_y}$ is the $n_y \times n_y$ identity matrix, $X \in N(0,Q_X)$ means $X$ is a Gaussian RV, with zero mean and covariance matrix $Q_X$, and $||\cdot||_{{\mathbb R}^{n_x}}^2$ is the squared Euclidean distance on ${\mathbb R}^{n_x}$.\\ To give additional insight we often consider the following realization of the side information\footnote{The condition $DD^T \succ 0$ ensures $I(X;Y)<\infty$, and hence Assumption~\ref{ass_1} is respected.}. \begin{align} &Y_t = CX_t + DV_t, \label{eq:sideInfo}\\ &V_t \in N(0, Q_V),\label{prob_3} \\ &C\in \mathbb{R}^{n_y\times n_x}, \hspace{.1in} D \in \mathbb{R}^{n_y\times n_y}, \hspace{.1in} D D^T \succ 0, \hspace{.1in} Q_V=I_{n_y} \label{prob_9_a}\\ &V^n \hspace{.1in} \mbox{independent of $X^n$}.\label{prob_9} \end{align} For the above specification of the source and distortion criterion, we derive the following results. \begin{enumerate} \item {\bf Theorem~\ref{thm_rw}, Fig.~\ref{fg:realization}.} Structural properties of the optimal realization of $\widehat{X}$ that achieves the RDF $R_{X|Y}(\Delta_X)$, and a closed form expression for $R_{X|Y}(\Delta_X)$. \item {\bf Theorem~\ref{thm:dec}.} Structural properties of the optimal realizations of $\widehat{X}$ and $\widehat{X}=f(Y,Z)$ that achieve the RDF $\overline{R}(\Delta_X)$, and closed form expressions for $\overline{R}(\Delta_X)$. \item A proof that $\overline{R}(\Delta_X)$ and $R_{X|Y}(\Delta_X)$ coincide, a calculation of the positive surface such that Gray's lower bound (\ref{gray_lb_1}) holds with equality, and a proof that the optimal test channel realization for the remote sensor problem that is used to derive \cite[Theorem~4]{tian-chen2009} is incorrect (Remark~\ref{comment}). \end{enumerate} In Remark~\ref{rem-wyner}, we consider the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function, and verify that our optimal realizations of $\widehat{X}$ and closed form expressions for $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$ are identical to Wyner's \cite{wyner1978} realizations and RDFs. We emphasize that past literature often deals with the calculation of RDFs using optimization techniques, without much emphasis on the structural properties of the realizations of the test channels that achieve the characterizations of the RDFs. Because of this, the optimization problems often appear intractable, while closed form solutions are rare. It would be indefensible to claim that solving an optimization problem of a RDF, without specifying the realization of the optimal test channel that achieves the value of the RDF, fully characterizes the RDF.
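A minimal simulation sketch of the side-information model (\ref{eq:sideInfo})-(\ref{prob_9}) may also be helpful at this point; it checks the conditional covariance $Q_{X|Y}=Q_X-Q_{X,Y}Q_Y^{-1}Q_{X,Y}^{\mbox{\tiny T}}$ used throughout the paper (all matrix values and helper names below are our own illustrative choices, not from the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n = 2, 2, 200000

# X ~ N(0, Q_X),  Y = C X + D V,  V ~ N(0, I_{n_y})
Q_X = np.array([[2.0, 0.4],
                [0.4, 1.0]])
C = np.array([[1.0, 0.0],
              [0.5, 1.0]])
D = np.eye(n_y)

X = rng.multivariate_normal(np.zeros(n_x), Q_X, size=n)
Y = X @ C.T + rng.standard_normal((n, n_y)) @ D.T

# Theoretical second-order quantities
Q_XY = Q_X @ C.T
Q_Y = C @ Q_X @ C.T + D @ D.T
Q_cond = Q_X - Q_XY @ np.linalg.solve(Q_Y, Q_XY.T)

# Empirical conditional covariance via the MMSE residual
# (for jointly Gaussian RVs, E{X|Y} = Q_XY Q_Y^{-1} Y)
K = Q_XY @ np.linalg.inv(Q_Y)
resid = X - Y @ K.T
print(np.cov(resid.T))  # ~ Q_cond
print(Q_cond)
\end{verbatim}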
As demonstrated by Wyner \cite{wyner1978} for a tuple of scalar jointly Gaussian RVs $(X,Y)$ with square-error distortion criterion, the identity $\overline{R}(\Delta_X)={R}_{X|Y}(\Delta_X)$ holds, because the realizations that achieve these RDFs are explicitly constructed. Although in the current paper the emphasis is on 1) and 2) above, our derivations are generic and bring new insight into the construction of optimal test channels for other distributed source coding problems. \subsection{Additional Literature Review} The formulation of Fig.~\ref{fg:blockdiagram} is generalized to other multiterminal or distributed lossy compression problems, such as relay networks, sensor networks, etc., under various code formulations and assumptions. Oohama \cite{Oohama1997} analyzed the lossy compression problems for a tuple of scalar correlated Gaussian memoryless sources with square error distortion criterion, and determined the rate-distortion region, in the special case when one source provides partial side information to the other source. Oohama \cite{Oohama2005} analyzed the separate lossy compression problem for $L+1$ scalar correlated Gaussian memoryless sources, when $L$ of them act as partial side information at the decoder for the reconstruction of the remaining source, and gave a partial answer to the rate distortion region. Oohama \cite{Oohama2005} also proved that his problem gives, as a special case, the additive white Gaussian CEO problem analyzed by Viswanathan and Berger \cite{ViswanathanCEO1997}. In addition, Ekrem and Ulukus \cite{SUlukus2012} and Wang and Chen \cite{JunChen2014} expanded Oohama's \cite{Oohama2005} main results by deriving an outer bound on the rate region of the vector Gaussian multiterminal source. The vast literature on multiterminal or distributed lossy compression of jointly Gaussian sources with square-error distortion (mentioned above), is often confined to a tuple of correlated RVs $X: \Omega \rightarrow {\mathbb R}, Y: \Omega \rightarrow {\mathbb R}$. The above literature treats the optimization problems of RDFs without much emphasis on the structural properties of the optimal test channels that achieve the characterizations of the RDFs. \subsection{Main Theorems of the Paper} The characterizations of the RDFs $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$ are encapsulated in Theorem~\ref{thm_rw} and Theorem~\ref{thm:dec}, stated below. These theorems include structural properties of the optimal test channels or realizations of $\widehat{X}$ that induce joint distributions which achieve the RDFs, and closed form expressions of the RDFs based on water-filling. The realization of the optimal test channel of $R_{X|Y}(\Delta_X)$ is shown in Fig.~\ref{fg:realization}. First, we introduce some notation. We denote the covariance of $X$ and $Y$ by \begin{align} Q_{X,Y} \sr{\triangle}{=}\mathrm{cov}\Big(X,Y\Big). \end{align} We denote the covariance of $X$ conditioned on $Y$ by, \begin{align} Q_{X|Y} & \sr{\triangle}{=} \nonumber \mathrm{cov}(X,X |Y)\\ & = {\bf E} \Big\{ \Big(X - {\bf E}\big(X\Big|Y\big) \Big) \Big(X - {\bf E}\big(X\Big|Y\big) \Big)^{\mbox{\tiny T}} \Big\} \hspace{.1in} \mbox{if $(X, Y)$ is jointly Gaussian.} \end{align} where the second equality is due to a property of jointly Gaussian RVs. The first theorem gives the optimal test channel that achieves the characterization of the RDF $R_{X|Y}(\Delta_X)$, and its water-filling representation.
\begin{theorem} Characterization and water-filling solution of $R_{X|Y}(\Delta_X)$\\ \label{thm_rw} Consider the RDF $R_{X|Y}(\Delta_X)$ defined by (\ref{eq:OP1ck1}), for the multivariate Gaussian source with mean-square error distortion defined by (\ref{prob_1})-(\ref{prob_9}).\\ Then the following hold. (a) The optimal realization $\widehat{X}$ that achieves $R_{X|Y}(\Delta_X)$ is represented by \begin{align} \widehat{X} &= H X + \Big(I_{n_x}- H\Big)Q_{X,Y}Q_Y^{-1} Y + W \label{eq:realization_sp} \\ &= H X + \Big(I_{n_x}- H\Big)Q_{X,Y}Q_Y^{-1} Y + H \Psi \label{eq:realization_sp_new} \end{align} where \begin{align} &H \sr{\triangle}{=} I_{n_x} - \Sigma_{\Delta} Q_{X|Y}^{-1}=I_{n_x} - Q_{X|Y}^{-1} \Sigma_{\Delta}=H^{\mbox{\tiny T}} \succeq 0, \label{eq:realization_nn_1_sp} \\ &W = H \Psi, \hspace{.1in} \Psi \in N(0, Q_\Psi), \hspace{.1in} Q_\Psi \sr{\triangle}{=} \Sigma_\Delta H^{-1}=H^{-1} \Sigma_\Delta, \\ & Q_W \sr{\triangle}{=} H \Sigma_\Delta= \Sigma_\Delta -\Sigma_\Delta Q_{X|Y}^{-1} \Sigma_\Delta= \Sigma_{\Delta} H \succeq 0,\\ & \Sigma_{\Delta} \sr{\triangle}{=} {\bf E} \Big\{ \Big( X - \widehat{X} \Big) \Big(X - \widehat{X} \Big)^{\mbox{\tiny T}} \Big\}, \\ &Q_{X|Y}= Q_X -Q_{X,Y} Q_Y^{-1} Q_{X,Y}^{\mbox{\tiny T}}, \ \ Q_{X,Y}= Q_X C^{\mbox{\tiny T}}, \ \ Q_Y= C Q_X C^{\mbox{\tiny T}} + D D^{\mbox{\tiny T}} \label{eq:realization_nn_1_sp_new} \end{align} Moreover, the following structural properties hold:\\ (1) The optimal test channel satisfies \begin{align} &(i) \hspace{.1in} {\bf P}_{X|\widehat{X}, Y}={\bf P}_{X|\widehat{X}}, \label{pr_1} \\ &(ii) \hspace{.1in} {\bf E}\Big\{X\Big|\widehat{X}, Y\Big\}={\bf E}\Big\{X\Big|\widehat{X}\Big\}=\widehat{X} \hspace{.1in} \Longrightarrow \hspace{.1in} {\bf E}\Big\{X\Big|Y\Big\}= {\bf E}\Big\{\widehat{X}\Big|Y\Big\}. \label{pr_1_a} \end{align} (2) The matrices \begin{align} \big\{\Sigma_\Delta,& Q_{X|Y}, H, Q_W\big\} \hspace{.1in} \mbox{have spectral} \nonumber \\ & \mbox{decompositions w.r.t the same unitary matrix $U U^{\mbox{\tiny T}}=I_{n_x}, U^{\mbox{\tiny T}} U=I_{n_x}$.} \label{spe_d} \end{align} \par (b) The RDF $ R_{X|Y}(\Delta_X)$ is given by the water-filling solution: \begin{equation} R_{X|Y}(\Delta_X) =\frac{1}{2} \log \max\big\{1, \det(Q_{X|Y}\Sigma_\Delta^{-1})\big\}=\frac{1}{2} \sum_{i=1}^{n_x} \log \frac{\lambda_{i}}{\delta_{i}} \label{thm_wf_1} \end{equation} where \begin{equation} {\bf E}\big\{||X-\widehat{X}||_{{\mathbb R}^{n_x}}^2\big\}=\trace\big(\Sigma_\Delta\big)= \sum_{i=1}^{n_x} \delta_{i} = \Delta_X, \hspace{.2in} \delta_{i}= \left\{ \begin{array}{lll} \mu, & \mbox{if} & \mu < \lambda_i \\ \lambda_i, & \mbox{if} & \mu \geq \lambda_i \end{array} \right. \label{thm_wf_2} \end{equation} and where $\mu \in [0,\infty)$ is a Lagrange multiplier (obtained from the Karush-Kuhn-Tucker conditions), and \begin{align} Q_{X|Y}&= U\Lambda U^{\mbox{\tiny T}}, \ \ \Lambda =\diag{\{\lambda_{1},\dots, \lambda_{n_x}\}}, \ \ \lambda_{1} \geq \lambda_2 \geq \dots \geq \lambda_{n_x} \\ \Sigma_{\Delta} &= U \Delta U^{\mbox{\tiny T}},\ \ \Delta = \diag{\{\delta_{1},\ldots, \delta_{{n_x}}\}}, \ \ \delta_{1}\geq \delta_{2} \geq \ldots \geq \delta_{{n_x}} . \end{align} \par (c) The optimal $\widehat{X}$ of part (a) that achieves $R_{X|Y}(\Delta_X)$ is realized by the parallel channel scheme depicted in Fig.~\ref{fg:realization}.
\par (d) If $X$ and $Y$ are independent or $Y$ is replaced by a RV that generates trivial information, i.e., the $\sigma$-algebra of $Y$ is $\sigma\{Y\}=\{\Omega, \emptyset\}$ (or $C=0$ in (\ref{eq:sideInfo})), then (a)-(c) hold with $Q_{X|Y}=Q_X, Q_{X,Y}=0$, and $R_{X|Y}(\Delta_X)=R_X(\Delta_X)$, i.e., $R_{X|Y}(\Delta_X)$ becomes the marginal RDF of $X$. \end{theorem} \begin{figure} \centering \includegraphics[scale=0.5]{realization1} \caption{$R_{X|Y}(\Delta_X)$: A realization of the optimal reproduction $\widehat{X}$ over parallel additive Gaussian noise channels, where $h_{i}\sr{\triangle}{=} 1-\frac{\delta_{i}}{\lambda_i}\geq 0, i=1, \ldots, n_x$ are the diagonal elements of the spectral decomposition of the matrix $H=U \diag\{h_1, \ldots, h_{n_x}\} U^{\mbox{\tiny T}}$, and $W_i\in N(0,h_i\delta_{i}), i=1, \ldots, n_x$.} \label{fg:realization} \end{figure} The proof of Theorem \ref{thm_rw} is given in Section \ref{mainres}, and it is based on the derivation of the structural properties. Some of the implications are briefly described below. \\ {\it Conclusion 1.} The construction and the structural properties of the optimal test channel ${\bf P}_{X|\widehat{X}, Y}$ that achieves the water-filling characterization of the RDF $R_{X|Y}(\Delta_X)$ of Theorem \ref{thm_rw}, are not documented elsewhere in the literature. (i) Structural property (\ref{pr_1}) strengthens Gray's \cite[Theorem~3.1]{gray1973} inequality (see proof of (\ref{gray_lb_1})), \begin{eqnarray} I(X; \widehat{X}|Y)\geq I(X; \widehat{X})-I(X;Y), \end{eqnarray} to the equality \begin{align} I(X; \widehat{X}|Y)=& I(X; \widehat{X})-I(X;Y)\in [0,\infty) \hspace{.2in} \mbox{if} \hspace{.1in} {\bf P}_{X|\widehat{X}, Y}={\bf P}_{X|\widehat{X}} \\ =&\frac{1}{2} \log\big\{ \det(Q_{X|Y}\Sigma_\Delta^{-1})\big\},\hspace{.1in} Q_{X|Y}-\Sigma_{\Delta}\succeq 0, \hspace{.1in} {\bf E}\big\{||X-\widehat{X}||_{{\mathbb R}^{n_x}}^2\big\}=\trace\big(\Sigma_\Delta\big)\leq \Delta_X \end{align} Structural property (\ref{pr_1_a}) means the subtraction of the equal quantities ${\bf E}\Big\{X\Big|Y\Big\}$ and ${\bf E}\Big\{\widehat{X}\Big|Y\Big\}$ at the encoder and decoder, respectively, without affecting the information measure, see Fig.~\ref{fg:realization}. \\ Theorem~\ref{thm_rw}.(b), (c) are obtained with the aid of part (a) and Hadamard's inequality, which shows that $Q_{X|Y}$ and $\Sigma_\Delta$ have the same eigenvectors.\\ Structural property (\ref{pr_1}) implies that Gray's \cite[Theorem~3.1]{gray1973} lower bound (\ref{gray_lb_1}) holds with equality for a strictly positive surface\footnote{See Gray \cite{gray1973} for definition.} $\Delta_X \leq {\cal D}_C(X|Y)\subseteq [0,\infty)$, i.e., \begin{eqnarray} R_{X|Y}(\Delta_X)= R_{X}(\Delta_X)-I(X;Y),\hspace{.2in} \Delta_X \leq {\cal D}_C(X|Y)\sr{\triangle}{=} \big\{\Delta_X\in [0,\infty): \Delta_X \leq n_x \lambda_{n_x}\big\}. \end{eqnarray} That is, the set ${\cal D}_C(X|Y)$ excludes values of $\Delta_X\in [0,\infty)$ for which water-filling is active in (\ref{thm_wf_1}), (\ref{thm_wf_2}). (ii) Structural property (2), i.e., the matrices $\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\}$ are nonnegative symmetric and have a spectral decomposition with respect to the same unitary matrix $U U^{\mbox{\tiny T}}=I_{n_x}$ \cite{Horn:2013}, implies that the test channel is equivalently represented by parallel additive Gaussian noise channels (subject to pre-processing and post-processing at the encoder and decoder).
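To make the reverse water-filling of (\ref{thm_wf_1})-(\ref{thm_wf_2}) concrete, the following sketch computes $R_{X|Y}(\Delta_X)$ (in nats) for a toy source, assuming $0 < \Delta_X \leq \trace(Q_{X|Y})$; the bisection on the water level $\mu$ and all names are our own illustrative choices:
\begin{verbatim}
import numpy as np

def conditional_rdf(Q_x, Q_xy, Q_y, Delta_X):
    # Q_{X|Y} = Q_X - Q_{XY} Q_Y^{-1} Q_{XY}^T and its eigenvalues
    Q_cond = Q_x - Q_xy @ np.linalg.solve(Q_y, Q_xy.T)
    lam = np.sort(np.linalg.eigvalsh(Q_cond))[::-1]   # lambda_1 >= ...
    # Bisect for the water level mu: sum_i min(mu, lambda_i) = Delta_X
    lo, hi = 0.0, lam[0]
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if np.minimum(mu, lam).sum() < Delta_X:
            lo = mu
        else:
            hi = mu
    delta = np.minimum(mu, lam)                        # allocation delta_i
    return 0.5 * np.sum(np.log(lam / delta)), delta   # rate in nats

Q_x = np.array([[2.0, 0.4], [0.4, 1.0]])
C = np.array([[1.0, 0.0], [0.5, 1.0]])
Q_xy = Q_x @ C.T
Q_y = C @ Q_x @ C.T + np.eye(2)
rate, delta = conditional_rdf(Q_x, Q_xy, Q_y, Delta_X=0.5)
print(rate, delta)
\end{verbatim}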
(iii) Remark~\ref{rem-wyner} shows that the realization of the optimal $\widehat{X}$ of Fig.~\ref{fg:realization} that achieves the RDF of Theorem~\ref{thm_rw}, degenerates to Wyner's \cite{wyner1978} optimal realization that achieves the RDF $R_{X|Y}(\Delta_X)$, for the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function. The second theorem gives the optimal test channel that achieves the characterization of the RDF $\overline{R}(\Delta_X)$, and further states that there is no loss of compression rate if side information is only available at the decoder. That is, although in general $\overline{R}(\Delta_X)\geq R_{X|Y}(\Delta_X)$, an optimal reproduction $\widehat{X}=f(Y,Z)$ of $X$, where $f(\cdot, \cdot)$ is linear, is constructed such that the inequality holds with equality. \begin{theorem} Characterization and water-filling solution of $\overline{R}(\Delta_X)$\\ \label{thm:dec} Consider the RDF $\overline{R}(\Delta_X)$ defined by (\ref{rdf_d1}), for the multivariate Gaussian source with mean-square error distortion, defined by (\ref{prob_1})-(\ref{prob_9}).\\ Then the following hold. (a) The characterization of the RDF, $\overline{R}(\Delta_X)$, satisfies \begin{align} \overline{R}(\Delta_X) \geq R_{X|Y}(\Delta_X) \label{lower_b_d_1} \end{align} where $R_{X|Y}(\Delta_X)$ is given in Theorem~\ref{thm_rw}.(b). (b) The optimal realization $\widehat{X}=f(Y, Z)$ that achieves the lower bound in (\ref{lower_b_d_1}), i.e., $\overline{R}(\Delta_X)=R_{X|Y}(\Delta_X)$, is represented by \begin{align} \widehat{X}=&f(Y,Z) \label{real_d1}\\ =&\Big(I-H\Big)Q_{X,Y}Q_Y^{-1} Y +Z, \\ Z=& H \Big(X + H^{-1} W\Big), \\ (H,& Q_W) \hspace{.1in} \mbox{given by (\ref{eq:realization_nn_1_sp})-(\ref{eq:realization_nn_1_sp_new}), and (\ref{spe_d}) holds} \label{real_d2}. \end{align} Moreover, the following structural properties hold:\\ (1) The optimal test channel satisfies \begin{align} &(i) \hspace{.1in} {\bf P}_{X|\widehat{X},Y,Z}={\bf P}_{X|\widehat{X},Y}={\bf P}_{X|\widehat{X}}, \label{str_p_1} \\ &(ii) \hspace{.1in} {\bf E}\Big\{X\Big|\widehat{X}, Y,Z\Big\}={\bf E}\Big\{X\Big|\widehat{X}\Big\}=\widehat{X} \hspace{.1in} \Longrightarrow \hspace{.1in} {\bf E}\Big\{X\Big|Y\Big\}= {\bf E}\Big\{\widehat{X}\Big|Y\Big\}.\label{str_p_2} \end{align} (2) Structural property (2) of Theorem~\ref{thm_rw}.(a) holds. \end{theorem} The proof of Theorem \ref{thm:dec} is given in Section \ref{mainres}, and it is based on the derivation of the structural properties and Theorem~\ref{thm_rw}. Some implications are discussed below.\\ {\it Conclusion 2.} The optimal reproduction $\widehat{X}=f(Y,Z)$ or test channel distribution ${\bf P}_{X|\widehat{X},Y,Z}$ that achieve the RDF $\overline{R}(\Delta_X)$ of Theorem \ref{thm:dec}, are not reported in the literature. (i) From structural property (1) of Theorem~\ref{thm:dec}, i.e., (\ref{str_p_1}), it follows that the lower bound $\overline{R}(\Delta_X) \geq R_{X|Y}(\Delta_X)$ is achieved by the realization $\widehat{X}=f(Y,Z)$ of Theorem \ref{thm:dec}.(b); i.e., for a given $Y=y$, $\widehat{X}$ uniquely defines $Z$. (ii) If $X$ is independent of $Y$ or $Y$ generates trivial information, then the RDFs $\overline{R}(\Delta_X)={R}_{X|Y}(\Delta_X)$ degenerate to the classical RDF of the source $X$, i.e., $R_{X}(\Delta_X)$, as expected. This is easily verified from (\ref{real_d1}), (\ref{real_d2}), i.e., $Q_{X,Y}=0$ which implies $\widehat{X}=Z$.
\\ For scalar-valued RVs, $X : \Omega \rightarrow {\mathbb R}, Y : \Omega \rightarrow {\mathbb R}$, $X \in N(0, \sigma_X^2)$, and $X$ independent of $Y$, the optimal realization reduces to \begin{align} &\widehat{X}=Z= \Big(1-\frac{\Delta_X}{\sigma_X^2}\Big) X + \sqrt{\Big(1-\frac{\Delta_X}{\sigma_X^2}\Big)\Delta_X} \overline{W}, \hspace{.1in} \overline{W}\in N(0,1), \hspace{.1in} \Delta_X \leq \sigma_X^2, \label{mem_scal_1} \\ &Q_{\widehat{X}}=Q_{Z}=\sigma_{\widehat{X}}^2=\sigma_X^2 -\Delta_X.\label{mem_scal_2} \end{align} as expected. (iii) Remark~\ref{rem-wyner} shows that the realization of the optimal $\widehat{X}=f(Y,Z)$ that achieves the RDF $\overline{R}(\Delta_X)$ of Theorem \ref{thm:dec}, degenerates to Wyner's \cite{wyner1978} realization that achieves the RDF $\overline{R}(\Delta_X)$, of the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function. (iv) Remark~\ref{comment} shows that, when specialized to Wyner's RDF $\overline{R}(\Delta_X)$, the optimal test channel realizations that achieve the RDFs of the distributed remote source coding problems in \cite[Theorem~4]{tian-chen2009} do not degenerate to Wyner's optimal test channel realization, $(\widehat{X}, Z)$, that achieves the RDF $\overline{R}(\Delta_X)$, contrary to what is expected \cite[Abstract]{tian-chen2009}. The next corollary follows from the above two theorems. \begin{corollary} Characterization of $\overline{R}^{CSI}(\Delta_X)$\\ \label{cor:c-dec} Consider the RDF $\overline{R}^{CSI}(\Delta_X)$ defined by (\ref{rdf_d1_a_csi}), for the multivariate Gaussian source with mean-square error distortion, defined by (\ref{prob_1})-(\ref{prob_9}).\\ The optimal test channel of the RDF $\overline{R}^{CSI}(\Delta_X)$ is induced by the realization $Z= H \Big(X + H^{-1} W\Big)$, where $(H,Q_W)$ are given by (\ref{eq:realization_nn_1_sp})-(\ref{eq:realization_nn_1_sp_new}) of Theorem \ref{thm:dec}, and \begin{align} \overline{R}^{CSI}(\Delta_X)=&\inf_{{\cal Q}(\Delta_X) } \Big\{H(X)- H(X|Z)\Big\}\\ =& \inf_{{\cal Q}(\Delta_X) } \frac{1}{2}\log\Big\{ \det (Q_XQ_{X|Z}^{-1})\Big\} \end{align} where \begin{align} &{\cal Q}(\Delta_X)\sr{\triangle}{=} \bigg\{\Sigma_{\Delta} \succeq 0: \trace \big( \Sigma_{\Delta} \big)\leq{\Delta_X}, \hspace{.2in} HQ_X H^{\mbox{\tiny T}} \Big(HQ_XH^{\mbox{\tiny T}} +Q_W\Big)^{-1}HQ_X H^{\mbox{\tiny T}} \preceq Q_X \bigg\}\\ &Q_{X|Z}=Q_X-HQ_X H^{\mbox{\tiny T}}\Big(HQ_X H^{\mbox{\tiny T}} +Q_W\Big)^{-1} HQ_X H^{\mbox{\tiny T}}. \end{align} \end{corollary} The rest of the paper is organized as follows. In Section~\ref{sect:wyner} we review Wyner's \cite{wyner1978} operational definition of lossy compression and state a fundamental theorem on mean-square estimation that we use throughout the paper. In Section~\ref{sect:proofs} we prove the structural properties and the two main theorems. \section{Preliminaries} \label{sect:wyner} \par In this section we review the Wyner \cite{wyner1978} source coding problems with fidelity of Fig.~\ref{fg:blockdiagram}. We begin with the notation, which follows closely \cite{wyner1978}. \subsection{Notation} Let ${\mathbb Z} \sr{\triangle}{=} \{\ldots, -1,0,1,\ldots \}$ denote the set of all integers, ${\mathbb N} \sr{\triangle}{=} \{0, 1,2, \ldots \}$ the set of natural numbers, and ${\mathbb Z}_+ \sr{\triangle}{=} \{1,2, \ldots \}$.
For $n \in {\mathbb Z}_+$ denote the following finite subset of the above defined set, ${\mathbb Z}_n\sr{\triangle}{=} \{1,2,\ldots, n\}$. Denote the real numbers by ${\mathbb R}$ and the set of positive and of strictly positive real numbers, respectively, by ${\mathbb R}_+ = [0,\infty)$ and ${\mathbb R}_{++}=(0,\infty)$. For any matrix $A\in \mathbb{R}^{p\times m}, (p,m)\in {\mathbb Z}_+\times {\mathbb Z}_+$, we denote its transpose by $A^{\mbox{\tiny T}}$, and for $m=p$, we denote its trace by $\trace(A)$, and by $\diag\{A\}$, the matrix with diagonal entries $A_{ii},~i=1,\ldots,p$, and zero elsewhere. The identity matrix with dimensions $p\times p$ is designated as $I_p$. Denote an arbitrary set or space by ${\cal U}$ and the product space formed by $n$ copies of it by ${\cal U}^n \sr{\triangle}{=} \times_{t=1}^n {\cal U}$. $u^n \in {\cal U}^n$ denotes an $n$-tuple $u^n \sr{\triangle}{=} (u_1,u_2, \ldots, u_n)$, where $u_k \in {\cal U}, k=1, \ldots, n$ are its coordinates. Denote a probability space by $(\Omega, {\cal F}, {\mathbb P})$. For a sub-sigma-field ${\cal G} \subseteq {\cal F}$, and $A \in {\cal F}$, denote by ${\mathbb P}(A|{\cal G})$ the conditional probability of $A$ given ${\cal G}$, i.e., ${\mathbb P}(A|{\cal G})={\mathbb P}(A|{\cal G})(\omega), \omega \in \Omega$ is a measurable function on $\Omega$. On the above probability space, consider two real-valued random variables (RVs) $X: \Omega \rightarrow {\cal X}$, $Y: \Omega \rightarrow {\cal Y}$, where $({\cal X}, {\cal B}({\cal X})), ({\cal Y}, {\cal B}({\cal Y}))$ are arbitrary measurable spaces. The measure (or joint distribution if ${\cal X}, {\cal Y}$ are Euclidean spaces) induced by $(X, Y)$ on ${\cal X} \times {\cal Y}$ is denoted by ${\bf P}_{X,Y}$ or ${\bf P}(dx,dy)$ and their marginals on ${\cal X}$ and ${\cal Y}$ by ${\bf P}_X$ and ${\bf P}_Y$, respectively. The conditional measure of the RV $X$ conditioned on $Y$ is denoted by ${\bf P}_{X|Y}$ or ${\bf P}(dx|y)$, when $Y=y$ is fixed. On the above probability space, consider three real-valued RVs $X: \Omega \rightarrow {\cal X}$, $Y: \Omega \rightarrow {\cal Y}$, $Z: \Omega \rightarrow {\cal Z}$. We say that the RVs $(Y, Z)$ are conditionally independent given the RV $X$ if ${\bf P}_{Y, Z|X}={\bf P}_{Y|X} {\bf P}_{Z|X}-$a.s (almost surely) or equivalently ${\bf P}_{Z|X, Y}={\bf P}_{Z|X}-$a.s; the specification a.s is often omitted. We often denote the above conditional independence by the Markov chain (MC) $Y \leftrightarrow X \leftrightarrow Z$. Finally, for RVs $X, Y$, etc., $H(X)$ denotes the differential entropy of $X$, $H(X|Y)$ the conditional differential entropy of $X$ given $Y$, and $I(X;Y)$ the mutual information between $X$ and $Y$, as defined in standard books on information theory \cite{Gallager:1968}, \cite{pinsker:1964}. The notation $X \in N(0, Q_X)$ means $X$ is a Gaussian distributed RV with zero mean and covariance $Q_X\succeq 0$, where $Q_X \succeq 0$ (resp. $Q_X \succ 0 $) means $Q_X$ is positive semidefinite (resp. positive definite). \subsection{Wyner's Coding Theorems with Side Information at the Decoder} \label{pr:2} \par For the sake of completeness, we introduce certain results from Wyner's paper \cite{wyner1978}, that we use in this paper.
On a probability space $(\Omega, {\cal F}, {\mathbb P})$, consider a tuple of jointly independent and identically distributed RVs $(X^n, Y^n)= \{(X_{t}, Y_{t}): t=1,2, \ldots,n\}$, \begin{align} X_{t} : \Omega \rightarrow {\cal X}, \hspace{.1in} Y_{t} : \Omega \rightarrow {\cal Y}, \ \ t = 1,2, \ldots, n \label{jgrv} \end{align} with induced distribution ${\bf P}_{X_t, Y_t}={\bf P}_{X,Y}, \forall t$. Consider also the measurable function $d_{X}: {\cal X} \times \widehat{\cal X} \rightarrow [0,\infty)$, for a measurable space $\widehat{\cal X}$. Let \begin{eqnarray} {\cal I}_M \sr{\triangle}{=} \big\{0,1, \ldots, M-1\big\}, \hspace{.1in} M \in {\mathbb Z}_+ \end{eqnarray} be a finite set. \par A code $(n, M,D_X)$, when switch $A$ is open in Fig.~\ref{fg:blockdiagram}, is defined by two measurable functions, the encoder $F_E$ and the decoder $F_D$, with average distortion, as follows. \begin{align*} &F_{E} : {\cal X}^n \longrightarrow {\cal I}_M, \hspace{.2in} F_D: {\cal I}_M \times {\cal Y}^n \longrightarrow \widehat{\cal X}^n, \\ &\frac{1}{n} \mathbb{\bf E} \Big\{\sum_{t=1}^n d_X(X_t,\widehat{X}_t) \Big\} = D_X \end{align*} where $\widehat{X}^n$ is again a sequence of RVs, $\widehat{X}^n= F_D(Y^n,F_E(X^n))\in \widehat{\cal X}^n$. A non-negative rate distortion pair $(R, \Delta_X)$ is said to be {\it achievable} if for every $\epsilon >0$, and $n$ sufficiently large, there exists a code $(n, M,D_X)$ such that \begin{equation*} M \leq 2^{n(R + \epsilon)}, \hspace{0.4cm} D_X \leq\Delta_X+ \epsilon. \end{equation*} Let ${\cal R}$ denote the set of all achievable pairs $(R,\Delta_X)$, and define, for $ \Delta_X\geq 0$, the infimum of all achievable rates by \begin{equation} R^*(\Delta_X) = \inf_{(R,\Delta_X) \in{\cal R}}R. \end{equation} If for some $\Delta_X$ there is no $R <\infty$ such that $(R,\Delta_X) \in{\cal R}$, then set $R^*(\Delta_X) =\infty.$ For arbitrary abstract spaces, Wyner \cite{wyner1978} characterized the infimum of all achievable rates $R^*(\Delta_X)$ by the single-letter RDF $\overline{R}(\Delta_X)$ given by (\ref{rdf_d1}), (\ref{rdf_d2}), in terms of an auxiliary RV $Z: \Omega \rightarrow {\cal Z}$. Wyner's realization of the joint measure ${\bf P}_{X, Y, Z, \widehat{X}}$ induced by the RVs $(X, Y, Z, \widehat{X})$ is illustrated in Fig.~\ref{fg:onlydecoder}, where $Z$ is the output of the ``test channel'', ${\bf P}_{Z|X}$. Wyner proved the following coding theorems. \begin{theorem} \cite{wyner1978}\\ \label{the_1_dec} Suppose Assumption~\ref{ass_1} holds. \\ (a) Converse Theorem. For any $\Delta_X\geq 0$, $R^*(\Delta_X) \geq \overline{R}(\Delta_X)$.\\ (b) Direct Theorem. If the conditions stated in \cite[pages~64-65, (i), (ii)]{wyner1978} hold, then $R^*(\Delta_X) \leq \overline{R}(\Delta_X)$, $0 \leq \Delta_X <\infty$. \end{theorem} \par When switch $A$ is closed in Fig.~\ref{fg:blockdiagram}, and the tuple of jointly independent and identically distributed RVs $(X^n, Y^n)$ is defined as in Section~\ref{pr:2}, Wyner \cite{wyner1978} generalized Berger's \cite{berger:1971} characterization of all achievable pairs $(R,\Delta_X)$ from finite alphabet spaces to abstract alphabet spaces. \par A code $(n, M,D_X)$, when switch $A$ is closed in Fig.~\ref{fg:blockdiagram}, is defined as in Section~\ref{pr:2}, with the encoder $F_E$ replaced by \begin{align} F_{E} : {\cal X}^n \times {\cal Y}^n \longrightarrow {\cal I}_M . \end{align} Let ${\cal R}_1$ denote the set of all achievable pairs $(R,\Delta_X)$, again as defined in Section~\ref{pr:2}.
For $\Delta_X\geq 0$, define the infimum of all achievable rates by \begin{equation} \overline{R}_1(\Delta_X) = \inf_{(R,\Delta_X) \in{\cal R}_1}R. \end{equation} \par Wyner \cite{wyner1978} characterized the infimum of all achievable rates $\overline{R}_1(\Delta_X)$ by the single-letter RDF $R_{X|Y}(\Delta_X)$ given by (\ref{eq:OP1ck1}), (\ref{eq:OP2ck1}). The coding theorems are given by Theorem~\ref{the_1_dec} with $R^*(\Delta_X)$ and $\overline{R}(\Delta_X)$ replaced by $\overline{R}_1(\Delta_X)$ and $R_{X|Y}(\Delta_X)$, respectively. That is, $\overline{R}_1(\Delta_X)=R_{X|Y}(\Delta_X)$ (using Wyner's notation \cite[Appendix~A]{wyner1978}). These coding theorems generalized earlier work of Berger \cite{berger:1971} for finite alphabet spaces. Wyner also derived a fundamental lower bound on $R^*(\Delta_X)$ in terms of $\overline{R}_1(\Delta_X)$, as stated in the next remark. \begin{remark} Wyner \cite[Remarks, page 65]{wyner1978}\\ \label{rem_lb} (A) For $Z \in {\cal M}(\Delta_X)$ and $\widehat{X}=f(Y,Z)$, so that ${\bf P}_{Z|X,Y}={\bf P}_{Z|X}$, by a property of conditional mutual information and the data processing inequality: \begin{align} I(X;Z|Y)=I(X;Z, f(Y,Z)|Y) \geq I(X; \widehat{X}|Y) \geq R_{X|Y}(\Delta_X) \label{in_11} \end{align} where the last inequality holds since $\widehat{X} \in {\cal M}_0(\Delta_X)$ (see \cite[Remarks, page 65]{wyner1978}). Moreover, \begin{align} {R}^*(\Delta_X) \geq R_{X|Y}(\Delta_X). \label{in_1} \end{align} (B) Inequality (\ref{in_1}) holds with equality, i.e., $R^*(\Delta_X) = R_{X|Y}(\Delta_X)$, if the $\widehat{X} \in {\cal M}_0(\Delta_X)$ that achieves $I(X;\widehat{X}|Y)=R_{X|Y}(\Delta_X)$ can be generated as in Fig.~\ref{fg:onlydecoder} with $I(X;Z|Y)= I(X;\widehat{X}|Y)$. This occurs if and only if $I(X;Z|\widehat{X},Y)=0$, and follows from the identity and lower bound \begin{align} I(X;Z|Y)=&I(X;Z,\widehat{X}|Y)=I(X;Z|Y, \widehat{X})+ I(X;\widehat{X}|Y)\\ \geq & I(X;\widehat{X}|Y) \end{align} where the inequality holds with equality if and only if $I(X;Z|\widehat{X},Y)=0$. \end{remark} \subsection{Mean-Square Estimation of Conditionally Gaussian RVs} Below, we state a well-known property of conditionally Gaussian RVs, which we use in our derivations. \begin{proposition} Conditionally Gaussian RVs\\ \label{prop_cg} Consider a pair of multivariate RVs $X=(X_1, \ldots, X_{n_x})^{\mbox{\tiny T}}: \Omega \rightarrow \mathbb{R}^{n_x}$ and $Y=(Y_1, \ldots, Y_{n_y})^{\mbox{\tiny T}}: \Omega \rightarrow \mathbb{R}^{n_y}$, $(n_x,n_y) \in {\mathbb Z}_+ \times {\mathbb Z}_+$, defined on some probability space $\Big(\Omega, {\cal F}, {\mathbb P}\Big)$. Let ${\cal G}\subseteq {\cal F}$ be a sub$-\sigma-$algebra. Assume the conditional distribution of $(X, Y)$ conditioned on ${\cal G}$, i.e., ${\bf P}(dx, dy |{\cal G})$, is ${\mathbb P}-$a.s. (almost surely) Gaussian, with conditional means \begin{align} \mu_{X|{\cal G}}\sr{\triangle}{=}{\bf E}\Big(X\Big|{\cal G}\Big), \hspace{.2in} \mu_{Y|{\cal G}}\sr{\triangle}{=} {\bf E}\Big(Y\Big|{\cal G}\Big) \end{align} and conditional covariances \begin{align} Q_{X|{\cal G}} \sr{\triangle}{=}\mathrm{cov}\Big(X,X\Big|{\cal G}\Big), \hspace{.2in} Q_{Y|{\cal G}} \sr{\triangle}{=}\mathrm{cov}\Big(Y,Y\Big|{\cal G}\Big), \end{align} \begin{align} Q_{X,Y|{\cal G}} \sr{\triangle}{=}\mathrm{cov}\Big(X,Y\Big|{\cal G}\Big).
\end{align} Then, the vectors of conditional expectations $\mu_{X|Y,{\cal G}}\sr{\triangle}{=}{\bf E}\Big(X\Big|Y,{\cal G}\Big)$ and matrices of conditional covariances $Q_{X|Y, {\cal G}}\sr{\triangle}{=}\mathrm{cov}\Big(X, X\Big|Y,{\cal G}\Big)$ are given, ${\mathbb P}-$a.s., by the following expressions\footnote{If the inverse $Q_{Y|{\cal G}}^{-1}$ does not exist, it is replaced by the pseudoinverse $Q_{Y|{\cal G}}^\dagger$.}: \begin{align} &\mu_{X|Y,{\cal G}}=\mu_{X|{\cal G}} + Q_{X,Y|{\cal G}}Q_{Y|{\cal G}}^{-1}\Big( Y- \mu_{Y|{\cal G}}\Big)\label{eq:mean22}, \\ \label{eq:mean} &Q_{X|Y, {\cal G}}\sr{\triangle}{=} Q_{X|{\cal G}}-Q_{X,Y|{\cal G}}Q_{Y|{\cal G}}^{-1}Q_{X,Y|{\cal G}}^{\mbox{\tiny T}} . \end{align} If ${\cal G}$ is the trivial information, i.e., ${\cal G}=\{\Omega, \emptyset\}$, then ${\cal G}$ is removed from the above expressions. \end{proposition} Note that if ${\cal G}=\{\Omega, \emptyset\}$, then (\ref{eq:mean22}) and (\ref{eq:mean}) reduce to the well-known conditional mean and conditional covariance of $X$ conditioned on $Y$. \section{Proofs of Theorem~\ref{thm_rw} and Theorem~\ref{thm:dec}} \label{mainres} \label{sect:proofs} \par In this section we derive the statements of Theorem~\ref{thm_rw} and Theorem~\ref{thm:dec}. The proofs are based on several intermediate results, some of which also hold for general abstract alphabet spaces. \subsection{Side Information at Encoder and Decoder} \par We start our analysis with the following achievable lower bound on the conditional mutual information $I(X;\widehat{X}|Y)$, which appears in the definition of $R_{X|Y}(\Delta_X)$, given by (\ref{eq:OP1ck1}), and which strengthens Gray's lower bound (\ref{gray_lb_1}), given in \cite[Theorem~3.1]{gray1973}. \begin{lemma}Achievable lower bound on conditional mutual information\\ \label{lem:proof1} Let $(X, Y, \widehat{X})$ be a triple of arbitrary RVs on the abstract spaces ${\cal X} \times {\cal Y}\times \widehat{\cal X}$, with distribution ${\bf P}_{X,Y, \widehat{X}}$ and joint marginal the fixed distribution ${\bf P}_{X,Y}$ of $(X, Y)$. \\ Then the following hold.\\ (a) The following inequality holds: \begin{align} I(X;\widehat{X}|Y) \geq I(X;\widehat{X}) - I(X;Y). \label{ineq_1} \end{align} Moreover, if \begin{align} {\bf P}_{X|\widehat{X},Y}={\bf P}_{X|\widehat{X}}- a.s. \label{mc_1} \end{align} or, equivalently, $Y \leftrightarrow \widehat{X} \leftrightarrow X$ is a MC, then equality holds: \begin{align} I(X;\widehat{X}|Y) =I(X;\widehat{X}) - I(X;Y). \label{ineq_1_neq} \end{align} (b) If $Y \leftrightarrow \widehat{X} \leftrightarrow X$ is a Markov chain, then the equality holds \begin{align} R_{X|Y}(\Delta_X)= R_X(\Delta_X) - I(X;Y), \hspace{.1in} \Delta_X \leq {\cal D}_C(X|Y) \label{ineq_1_neq_G} \end{align} for some strictly positive ${\cal D}_C(X|Y)$. \end{lemma} \begin{proof} See Appendix~\ref{app_A}. \end{proof} The next theorem is used to derive the characterization of $R_{X|Y}(\Delta_X)$. \begin{theorem} Achievable lower bound on conditional mutual information and mean-square error estimation\\ \label{them_lb} (a) Let $(X, Y, \widehat{X})$ be a triple of arbitrary RVs on the abstract spaces ${\cal X} \times {\cal Y}\times \widehat{\cal X}$, with distribution ${\bf P}_{X,Y, \widehat{X}}$ and joint marginal the fixed distribution ${\bf P}_{X,Y}$ of $(X, Y)$.\\ Define the conditional mean of $X$ conditioned on $(\widehat{X},Y)$ by \begin{align} \overline{X}^{cm} \sr{\triangle}{=} {\bf E}\Big(X\Big|\widehat{X},Y\Big).
\end{align} Then the following inequality holds: \begin{equation} I(X;\widehat{X}|Y) \geq I(X;\overline{X}^{cm}|Y). \label{eq:LB} \end{equation} Moreover, \begin{equation} \mbox{if} \hspace{.2in} \overline{X}^{cm}=\widehat{X}-a.s \hspace{.2in} \mbox{then} \hspace{.2in} I(X;\widehat{X}|Y) = I(X;\overline{X}^{cm}|Y). \label{as_eq} \end{equation} (b) In part (a) let $(X, Y, \widehat{X})$ be a triple of arbitrary RVs on ${\cal X} \times {\cal Y}\times \widehat{\cal X}={\mathbb R}^{n_x} \times {\mathbb R}^{n_y}\times {\mathbb R}^{n_x}$, $(n_x,n_y) \in {\mathbb Z}_+ \times {\mathbb Z}_+$. \\ For all measurable functions $(y, \widehat{x})\longmapsto g(y, \widehat{x})\in {\mathbb R}^{n_x}$ the mean-square error satisfies \begin{align} {\bf E}\Big\{||X-&g(Y, \widehat{X})||_{{\mathbb R}^{n_x}}^2\Big\} \nonumber \\ &\geq {\bf E}\Big\{||X-{\bf E}\Big(X\Big|Y, \widehat{X}\Big)||_{{\mathbb R}^{n_x}}^2\Big\}, \hspace{.1in} \forall g(\cdot). \label{mse_1} \end{align} \end{theorem} \begin{proof} (\textit{a}) By properties of conditional mutual information \cite{pinsker:1964}, \begin{align} I(X;\widehat{X}| Y)\overset{(1)}=& I(X;\widehat{X},\overline{X}^{cm}| Y) \\ \overset{(2)}=& I(X;\widehat{X}| \overline{X}^{cm}, Y)+ I(X;\overline{X}^{cm}| Y) \\ \overset{(3)}\geq& I(X;\overline{X}^{cm}|Y) \end{align} where \((1)\) is due to the fact that $\overline{X}^{cm}$ is a function of $(Y,\widehat{X})$ and a well-known property of mutual information \cite{pinsker:1964}, \((2)\) is due to the chain rule of mutual information \cite{pinsker:1964}, and \((3)\) is due to $I(X;\widehat{X}| \overline{X}^{cm}, Y)\geq 0$. If $\widehat{X} = \overline{X}^{cm}-$a.s., then $I(X;\widehat{X}| \overline{X}^{cm}, Y)=0$, and hence the inequality (\ref{eq:LB}) becomes an equality. \\ (\textit{b}) The inequality (\ref{mse_1}) is well-known; it follows from the orthogonal projection theorem. \end{proof} \ \ For jointly Gaussian RVs $(X, Y, \widehat{X})$, in the next theorem we identify simple sufficient conditions for the lower bound of Theorem~\ref{them_lb} to be achievable. \begin{theorem} Sufficient conditions for the lower bounds of Theorem~\ref{them_lb} to be achievable\\ \label{them:lb_g} Consider the statement of Theorem~\ref{them_lb} for a triple of jointly Gaussian RVs $(X, Y, \widehat{X})$ on ${\mathbb R}^{n_x} \times {\mathbb R}^{n_y}\times {\mathbb R}^{n_x}$, $(n_x,n_y) \in {\mathbb Z}_+ \times {\mathbb Z}_+$, i.e., ${\bf P}_{X,Y, \widehat{X}}={\bf P}_{X,Y, \widehat{X}}^G$ and joint marginal the fixed Gaussian distribution ${\bf P}_{X,Y}={\bf P}_{X,Y}^G$ of $(X, Y)$.\\ Suppose Conditions 1 and 2 hold: \begin{align}\label{eq:condA} &\mbox{Condition 1.}\hspace{.1in} {\bf E}\Big(X\Big|Y\Big) ={\bf E}\Big(\widehat{X}\Big|Y\Big)\\ \label{eq:condB} &\mbox{Condition 2.}\hspace{.1in} \mathrm{cov}(X,\widehat{X}|Y) \mathrm{cov}(\widehat{X},\widehat{X}|Y)^{-1} = I_{n_x} \end{align} Then \begin{eqnarray} \overline{X}^{cm}=\widehat{X}-a.s \end{eqnarray} and the inequality (\ref{eq:LB}) holds with equality, i.e., $I(X;\widehat{X}|Y) = I(X;\overline{X}^{cm}|Y)$. \end{theorem} \begin{proof} Applying Proposition~\ref{prop_cg}, (\ref{eq:mean22}), with $\widehat{X}$ in the role of $Y$ and with ${\cal G}$ the $\sigma$-algebra generated by $Y$, we obtain \begin{align} \overline{X}^{cm} \sr{\triangle}{=} & {\bf E}\Big(X\Big|\widehat{X},Y\Big)\\ =& {\bf E}\Big(X\Big|Y\Big) \\ &+\mathrm{cov}(X,\widehat{X}|Y) \mathrm{cov}(\widehat{X},\widehat{X}|Y)^{-1}\Big(\widehat{X} - {\bf E}\Big(\widehat{X}\Big|Y\Big)\Big) \\ \overset{(a)}=& \widehat{X} - a.s.
\end{align} where $(a)$ is due to Conditions 1 and 2. \end{proof} \ \ \par Now, we turn our attention to the optimization problem $R_{X|Y}(\Delta_X)$ defined by (\ref{eq:OP1ck1}), for the multivariate Gaussian source with mean-square error distortion defined by (\ref{prob_1})-(\ref{prob_9}). In the next lemma we derive a {\it preliminary parametrization} of the optimal reproduction distribution ${\bf P}_{\widehat{X}|X, Y}$ of the RDF ${R}_{X|Y}(\Delta_X)$. \\ \begin{lemma} Preliminary parametrization of optimal reproduction distribution of $R_{X|Y}(\Delta_X)$\\ \label{lemma:par} Consider the RDF $R_{X|Y}(\Delta_X)$ defined by (\ref{eq:OP1ck1}) for the multivariate Gaussian source, i.e., ${\bf P}_{X,Y}={\bf P}_{X,Y}^G$, with mean-square error distortion defined by (\ref{prob_1})-(\ref{prob_9}). (a) For every joint distribution ${\bf P}_{X, Y, \widehat{X}}$ there exists a jointly Gaussian distribution, denoted by ${\bf P}_{X,Y, \widehat{X}}^G$, with marginal the fixed distribution ${\bf P}_{X,Y}^G$, which minimizes $I(X; \widehat{X}|Y)$ and satisfies the average distortion constraint, i.e., with distortion function $d_X(x,\widehat{x})=||x-\widehat{x}||_{{\mathbb R}^{n_x}}^2$. (b) The conditional reproduction distribution ${\bf P}_{\widehat{X}|X,Y}={\bf P}_{\widehat{X}|X,Y}^G$ is induced by the parametric realization of $\widehat{X}$ (in terms of $H, G, Q_W$), \begin{align} &\widehat{X} = H X + G Y + W, \label{eq:real}\\ &H \in \mathbb{R}^{n_x\times n_x}, \hspace{.1in} G \in \mathbb{R}^{n_x\times n_y}, \\ &W \in N(0, Q_W), \; Q_W \succeq 0, \\ &W \hspace{.1in} \mbox{independent of $(X, V)$},\label{eq:real_4} \end{align} and $\widehat{X}$ is a Gaussian RV. (c) $R_{X|Y}(\Delta_X)$ is characterized by the following optimization problem: \begin{align} {R}_{X|Y}(\Delta_X) \sr{\triangle}{=}& \inf_{{\cal M}_0^G(\Delta_X)} I(X; \widehat{X}|Y), \hspace{.1in} \Delta_X \in [0,\infty) \label{eq:OP1_char_1} \end{align} where ${\cal M}_0^G(\Delta_X)$ is specified by the set \begin{align} {\cal M}_0^G(\Delta_X)\sr{\triangle}{=} & \Big\{ \widehat{X}: \Omega \rightarrow \widehat{\cal X} : \hspace{.1in} (\ref{eq:real})-(\ref{eq:real_4}) \; \mbox{hold, and} \hspace{.1in} {\bf E}\big\{||X-\widehat{X}||_{{\mathbb R}^{n_x}}^2\big\}\leq \Delta_X \Big\}. \end{align} (d) If there exists $(H, G, Q_W)$ such that $\overline{X}^{cm}=\widehat{X}-a.s$, then a further lower bound on ${R}_{X|Y}(\Delta_X)$ is achieved in the subset ${\cal M}_0^{G,o}(\Delta_X)\subseteq {\cal M}_0^G(\Delta_X) $ defined by \begin{align} {\cal M}_0^{G,o}(\Delta_X)\sr{\triangle}{=}& \Big\{ \widehat{X}: \Omega \rightarrow \widehat{\cal X} : \hspace{.1in} (\ref{eq:real})-(\ref{eq:real_4}) \; \mbox{hold,}\hspace{.1in} \widehat{X} = \overline{X}^{cm} - a.s, \hspace{.1in} {\bf E}\big\{||X-\widehat{X}||_{{\mathbb R}^{n_x}}^2\big\}\leq \Delta_X\Big\} \end{align} and the corresponding characterization of the RDF is \begin{align} {R}_{X|Y}(\Delta_X) \sr{\triangle}{=}& \inf_{{\cal M}_0^{G,o}(\Delta_X)} I(X; \widehat{X}|Y), \hspace{.1in} \Delta_X \in [0,\infty). \label{eq:OP1_char_1_new} \end{align} \end{lemma} \begin{proof} (a) This is omitted, since it is similar to the classical unconditional RDF $R_X(\Delta_X)$ of a Gaussian message $X \in N(0,Q_X)$. (b) By (a), the conditional distribution ${\bf P}_{\widehat{X}|X,Y}^G$ is such that its conditional mean is linear in $(X,Y)$, its conditional covariance is nonrandom, i.e., constant, and for fixed $(X, Y)=(x,y)$, ${\bf P}_{\widehat{X}|X,Y}^G$ is Gaussian.
Such a distribution is induced by the parametric realization (\ref{eq:real})-(\ref{eq:real_4}). (c) Follows from parts (a) and (b). (d) Follows from Theorem~\ref{them:lb_g} and (\ref{mse_1}), by letting $g(y,\widehat{x})=\widehat{x}$. \end{proof} \ \ In the next theorem we identify the optimal triple $(H,G,Q_W)$ such that $ \overline{X}^{cm}=\widehat{X}-a.s$, and thus establish its existence. We also characterize the RDF by ${R}_{X|Y}(\Delta_X) \sr{\triangle}{=} \inf_{{\cal M}_0^{G,o}(\Delta_X)} I(X; \widehat{X}|Y)$, and construct a realization $\widehat{X}$ that achieves it. \begin{theorem}Characterization of RDF ${R}_{X|Y}(\Delta_X)$\\ \label{thm:proof2} Consider the RDF $R_{X|Y}(\Delta_X)$, defined by (\ref{eq:OP1ck1}) for the multivariate Gaussian source with mean-square error distortion, defined by (\ref{prob_1})-(\ref{prob_9}). The characterization of the RDF $R_{X|Y}(\Delta_X)$ is \begin{align} {R}_{X|Y}(\Delta_X) \sr{\triangle}{=} & \inf_{{\cal Q}(\Delta_X)} I(X; \widehat{X}|Y) \label{eq:optiProbl}\\ =& \inf_{{\cal Q}(\Delta_X)} \frac{1}{2}\log\Big\{ \det(Q_{X|Y}\Sigma_{\Delta} ^{-1})\Big\}\label{eq:optiProbl_nn} \end{align} where \begin{align} {\cal Q}(\Delta_X)\sr{\triangle}{=}& \bigg\{\Sigma_{\Delta}: \trace \big( \Sigma_{\Delta} \big)\leq{\Delta_X} \bigg\}, \\ \Sigma_{\Delta} \sr{\triangle}{=}& {\bf E} \Big\{ \Big( X - \widehat{X} \Big) \Big(X - \widehat{X} \Big)^{\mbox{\tiny T}} \Big\},\\ Q_{X|Y} =& Q_X - Q_{X,Y} Q_Y^{-1}Q_{X,Y}^{\mbox{\tiny T}}, \hspace{.1in} Q_{X|Y}-\Sigma_{\Delta}\succeq 0, \label{eq:optiProbl_n}\\ Q_{X,Y} =& Q_X C^{\mbox{\tiny T}}, \hspace{.1in} Q_Y=C Q_X C^{\mbox{\tiny T}} + D D^{\mbox{\tiny T}} \end{align} and the optimal reproduction $\widehat{X}$ which achieves ${R}_{X|Y}(\Delta_X)$ is \begin{align} \widehat{X} =&H X + \Big(I_{n_x}- H\Big)Q_{X,Y}Q_Y^{-1} Y + W \label{eq:realization} \\ H \sr{\triangle}{=}&I_{n_x} - \Sigma_{\Delta} Q_{X|Y}^{-1}\succeq 0, \hspace{.1in} G \sr{\triangle}{=}\Big(I_{n_x}-H\Big)Q_{X,Y} Q_Y^{-1}, \label{eq:realization_nn_1} \\ Q_W \sr{\triangle}{=}& \Sigma_{\Delta} H^{\mbox{\tiny T}} = \Sigma_\Delta -\Sigma_\Delta Q_{X|Y}^{-1} \Sigma_\Delta= H \Sigma_\Delta \succeq 0. \label{eq:realization_nn} \end{align} Moreover, the realization (\ref{eq:realization}) satisfies, almost surely, \begin{align} &{\bf P}_{X|\widehat{X},Y}={\bf P}_{X|\widehat{X}}, \\ & {\bf E}\Big(X\Big|\widehat{X},Y\Big)= \widehat{X},\\ &{\bf E}\Big(X\Big|Y\Big)={\bf E}\Big(\widehat{X}\Big|Y\Big)= Q_{X,Y} Q_Y^{-1} Y, \label{prop_1} \\ & \mathrm{cov}(X,\widehat{X}|Y)= \mathrm{cov}(\widehat{X},\widehat{X}|Y). \end{align} \end{theorem} \begin{proof} See Appendix~\ref{app_B}. \end{proof} \ \ \begin{remark} Structural properties of realization of Theorem~\ref{thm:proof2} \label{rk:1_ed} \par For the characterization of the RDF $R_{X|Y}(\Delta_X)$ of Theorem~\ref{thm:proof2}, for the tuple of multivariate jointly Gaussian RVs $(X,Y)$, we can proceed one step further to show that the optimal $\widehat{X}$ defined by (\ref{eq:realization})-(\ref{eq:realization_nn}) in terms of the matrices $\big\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\big\}$ is such that \begin{align} &\mbox{i)} \hspace{.1in} H=H^{\mbox{\tiny T}}\succeq 0, \\ &\mbox{ii)} \hspace{.1in} \big\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\big\} \hspace{.1in} \mbox{have spectral decompositions w.r.t.\ the same unitary matrix $U U^{\mbox{\tiny T}}=I_{n_x}$}. \end{align} We show this in Corollary~\ref{cor:equivalent}.
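As an illustration of i) and ii), the following minimal numerical sketch (a hypothetical example in Python/NumPy; it is not part of the derivations) constructs commuting $Q_{X|Y}\succeq 0$ and $\Sigma_\Delta \succeq 0$ from a common eigenbasis and verifies that $(H, Q_W)$ of (\ref{eq:realization_nn_1})-(\ref{eq:realization_nn}) are symmetric, positive semidefinite, and diagonalized by the same unitary matrix:
\begin{verbatim}
import numpy as np

# Hypothetical example: commuting Q_{X|Y}, Sigma_Delta from one eigenbasis U.
rng = np.random.default_rng(0)
n = 3
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # common orthogonal matrix
lam = np.array([3.0, 2.0, 1.0])                   # eigenvalues of Q_{X|Y}
delta = np.array([1.5, 1.0, 0.5])                 # distortions, delta_i <= lam_i
Q_xy = U @ np.diag(lam) @ U.T                     # Q_{X|Y}
Sigma = U @ np.diag(delta) @ U.T                  # Sigma_Delta, commutes with Q_{X|Y}

H = np.eye(n) - Sigma @ np.linalg.inv(Q_xy)       # H = I - Sigma_Delta Q_{X|Y}^{-1}
Q_w = H @ Sigma                                   # Q_W = H Sigma_Delta

assert np.allclose(H, H.T)                        # i)  H is symmetric
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)    #     H >= 0
assert np.all(np.linalg.eigvalsh(Q_w) >= -1e-12)  #     Q_W >= 0
for M in (Q_xy, Sigma, H, Q_w):                   # ii) common spectral decomposition
    D = U.T @ M @ U
    assert np.allclose(D, np.diag(np.diag(D)))
print("structural properties verified")
\end{verbatim}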
\end{remark} To prove the structural property of Remark~\ref{rk:1_ed} we use the next corollary, which is a degenerate case of \cite[Lemma~2]{charalambous-charalambous-kourtellaris-vanschuppen-2020} (i.e., the structural properties of the test channel of the Gorbunov-Pinsker \cite{gorbunov-pinsker1974} nonanticipatory RDF of Markov sources). \begin{corollary} Structural properties of realization of optimal $\widehat{X}$ of characterization of $R_{X|Y}(\Delta_X)$\\ \label{cor:sp_rep} Consider the characterization of the RDF $ R_{X|Y}(\Delta_X)$ of Theorem~\ref{thm:proof2}. Suppose $Q_{X|Y} \succeq 0$ and $\Sigma_\Delta\succeq 0$ commute, that is, \begin{align} Q_{X|Y} \Sigma_\Delta =\Sigma_\Delta Q_{X|Y} . \label{suff_c} \end{align} Then \begin{align} &\mbox{(1)} \hspace{.1in} H= I_{n_x}-\Sigma_\Delta Q_{X|Y}^{-1}= H^{\mbox{\tiny T}}, \hspace{.1in} Q_W = \Sigma_\Delta H^{\mbox{\tiny T}} = \Sigma_\Delta H =H\Sigma_\Delta= Q_W^{\mbox{\tiny T}} \label{suff_c1} \\ & \mbox{(2)} \hspace{.1in} \big\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\big\} \hspace{.1in} \mbox{have spectral} \nonumber \\ & \hspace{.2in} \mbox{decompositions w.r.t.\ the same unitary matrix $U U^{\mbox{\tiny T}}=I_{n_x}, \;U^{\mbox{\tiny T}} U=I_{n_x} $}, \end{align} that is, the following hold. \begin{align} &Q_{X|Y} =U \diag{\{\lambda_{1},\dots, \lambda_{n_x}\}}U^{\mbox{\tiny T}}, \hspace{.1in} \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{n_x}, \\ &\Sigma_\Delta =U \diag{\{\delta_{1},\dots, \delta_{n_x}\}}U^{\mbox{\tiny T}},\hspace{.1in} \delta_1 \geq \delta_2 \geq \ldots \geq \delta_{n_x}, \\ &H =U \diag\{1-\frac{\delta_{1}}{\lambda_1},\dots, 1-\frac{\delta_{n_x}}{\lambda_{n_x}}\}U^{\mbox{\tiny T}}, \\ &Q_W =U \diag\{\big(1-\frac{\delta_{1}}{\lambda_1}\big)\delta_1,\dots, \big(1-\frac{\delta_{n_x}}{\lambda_{n_x}}\big)\delta_{n_x}\}U^{\mbox{\tiny T}}. \label{deco_1} \end{align} \end{corollary} \begin{proof} See Appendix~\ref{app_C}. \end{proof} In the next corollary we re-express the realization of $\widehat{X}$ which characterizes the RDF of Theorem~\ref{thm:proof2} using a translation of $X$ and $\widehat{X}$, by subtracting their conditional means with respect to $Y$, making use of (\ref{prop_1}). Then we apply Corollary~\ref{cor:sp_rep} to establish that the optimal matrices of the RDF $ R_{X|Y}(\Delta_X)$ of Theorem~\ref{thm:proof2} are such that $\big\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\big\}$ have a spectral decomposition w.r.t.\ the same unitary matrix $U U^{\mbox{\tiny T}}=I_{n_x}$. \begin{corollary} \label{cor:equivalent} Equivalent characterization of $ R_{X|Y}(\Delta_X)$\\ Consider the characterization of the RDF $ R_{X|Y}(\Delta_X)$ of Theorem~\ref{thm:proof2}. Define the translated RVs \begin{equation} \mathbf{X} \sr{\triangle}{=} X- {\bf E}\Big\{X\Big|Y\Big\}= X- Q_{X,Y}Q_Y^{-1}Y,\hspace{.2in} \mathbf{\widehat{X}} \sr{\triangle}{=} \widehat{X} - {\bf E}\Big\{\widehat{X}\Big|Y\Big\}= \widehat{X}- Q_{X,Y}Q_Y^{-1} Y \label{trans_1} \end{equation} where the equalities are due to (\ref{prop_1}). Let \begin{align} & Q_{X|Y} =U \diag{\{\lambda_{1},\dots, \lambda_{n_x}\}}U^{\mbox{\tiny T}}, \hspace{.1in} U U^{\mbox{\tiny T}} =I_{n_x}, U^{\mbox{\tiny T}} U=I_{n_x}, \hspace{.1in} \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{n_x}, \\ & \overline{\mathbf{X}} \sr{\triangle}{=} U^{\mbox{\tiny T}} \mathbf{X} , \hspace{.1in} \widehat{\overline{\mathbf{X}}} \sr{\triangle}{=} U^{\mbox{\tiny T}} \widehat{\mathbf{X}}.
\end{align} Then \begin{align} &\mathbf{\widehat{X}} = H\mathbf{X} +W,\label{eq:equivalent_m}\\ &I(X;\widehat{X}|Y) = I(\mathbf{X} ;\mathbf{\widehat{X}} )= I(U^{\mbox{\tiny T}}\mathbf{X} ;U^{\mbox{\tiny T}}\mathbf{\widehat{X}} ), \label{eq:equivalent}\\ &{\bf E}\big\|X-\widehat{X}\big\|_{{\mathbb R}^{n_x}}^2 = {\bf E}\big\|\mathbf{X} - \mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2 = {\bf E}\big\|U^{\mbox{\tiny T}}{\mathbf{X}} -U^{\mbox{\tiny T}}\widehat{{\mathbf{X}}} \big\|_{{\mathbb R}^{n_x}}^2 =\trace \big( \Sigma_{\Delta} \big),\label{eq:equivalent_d} \end{align} where $(H, Q_W)$ are given by (\ref{eq:realization_nn_1}) and (\ref{eq:realization_nn}).\\ Further, the characterization of the RDF $R_{X|Y}(\Delta_X)$ (\ref{eq:optiProbl_nn}) satisfies the following equalities and inequality: \begin{align} {R}_{X|Y}(\Delta_X) \sr{\triangle}{=} & \inf_{{\cal Q}(\Delta_X)} I(X; \widehat{X}|Y) = \inf_{{\cal Q}(\Delta_X)} \frac{1}{2}\log \max\Big\{1, \det(Q_{X|Y}\Sigma_{\Delta}^{-1})\Big\} \label{equiv_100} \\ =& \inf_{ {\bf E}\big\|\mathbf{X} - \mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2\leq \Delta_X } I(\mathbf{X} ;\mathbf{\widehat{X}} ) \label{equiv_101}\\ =& \inf_{ {\bf E}\big\|U^{\mbox{\tiny T}}\mathbf{X} - U^{\mbox{\tiny T}}\mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2\leq \Delta_X } I(U^{\mbox{\tiny T}}\mathbf{X} ;U^{\mbox{\tiny T}}\mathbf{\widehat{X}} ) \label{equiv_1010}\\ \geq & \inf_{ {\bf E}\big\|U^{\mbox{\tiny T}}\mathbf{X} - U^{\mbox{\tiny T}}\mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2\leq \Delta_X } \sum_{t=1}^{n_x} I(\overline{\mathbf{X}}_t ; \widehat{\mathbf{\overline{X}}}_t ). \label{equiv_1011} \end{align} Moreover, the inequality (\ref{equiv_1011}) is achieved if $Q_{X|Y} \succeq 0$ and $\Sigma_\Delta\succeq 0$ commute, that is, if (\ref{suff_c}) holds, and \begin{equation} R_{X|Y}(\Delta_X) =\inf_{ \sum_{i=1}^{n_x} \delta_i \leq \Delta_X} \frac{1}{2} \sum_{i=1}^{n_x} \log \max\Big\{1, \frac{\lambda_i}{\delta_i}\Big\} \label{fchar_1} \end{equation} where \begin{align} \diag\{{\bf E}\Big(U^{\mbox{\tiny T}}{\mathbf{X}} -U^{\mbox{\tiny T}}\widehat{{\mathbf{X}}}\Big) \Big(U^{\mbox{\tiny T}}{\mathbf{X}} -U^{\mbox{\tiny T}}\widehat{{\mathbf{X}}}\Big)^{\mbox{\tiny T}}\}=\diag\{\delta_1,\delta_2, \ldots, \delta_{n_x}\}. \label{fchar_2} \end{align} \end{corollary} \begin{proof} By Theorem~\ref{thm:proof2}, \begin{align} \widehat{X} =& HX + GY + W\\ =& HX + \Big(I-H\Big)Q_{X,Y}Q_Y^{-1}Y + W\\ =& H\Big(X - Q_{X,Y} Q_Y^{-1}Y\Big) + Q_{X,Y}Q_Y^{-1}Y + W \\ \Longrightarrow & \hspace{.1in} \widehat{X} - Q_{X,Y} Q_Y^{-1} Y = H\Big(X - Q_{X,Y}Q_Y^{-1}Y\Big) + W\\ \Longrightarrow & \hspace{.1in} \mathbf{\widehat{X}} = H\mathbf{X} +W. \label{mem} \end{align} The last equation establishes (\ref{eq:equivalent_m}). By properties of conditional mutual information and the properties of the optimal realization $\widehat{X}$, the following equalities hold. \begin{align} I(X;\widehat{X}|Y)= & I(X-Q_{X,Y}Q_Y^{-1}Y;\widehat{X}-Q_{X,Y}Q_Y^{-1}Y|Y)\\ = & I(\mathbf{X} ;\mathbf{\widehat{X}} |Y), \hspace{.2in} \mbox{by (\ref{trans_1}), i.e., (\ref{prop_1})} \\ =& H(\mathbf{\widehat{X}} |Y) - H(\mathbf{\widehat{X}} |Y,\mathbf{X} )\\ = & H(\mathbf{\widehat{X}}) - H(\mathbf{\widehat{X}} |Y, \mathbf{X} ), \hspace{.1in} \mbox{by indep. of $\mathbf{X}$ and $Y$}\\ = & H(\mathbf{\widehat{X}}) - H(\mathbf{\widehat{X}} |\mathbf{X}), \hspace{.1in} \mbox{by indep.
of $W$ and $Y$ for fixed $X$}\\ = & I(\mathbf{X} ;\mathbf{\widehat{X}} )\\ =&I(U^{\mbox{\tiny T}}\mathbf{X} ; U^{\mbox{\tiny T}}\mathbf{\widehat{X}} )\\ =&I(\overline{\mathbf{X}}_1, \overline{\mathbf{X}}_2, \ldots, \overline{\mathbf{X}}_{n_x} ; \widehat{\overline{\mathbf{X}}}_1, \widehat{\overline{\mathbf{X}}}_2, \ldots, \widehat{\overline{\mathbf{X}}}_{n_x} )\\ \geq &\sum_{t=1}^{n_x} I(\overline{\mathbf{X}}_t; \widehat{\overline{\mathbf{X}}}_t ), \hspace{.1in} \mbox{by mutual independence of $\overline{\mathbf{X}}_t, t=1,2, \ldots, n_x$} \label{eq:equiv_lb} \end{align} Moreover, inequality (\ref{eq:equiv_lb}) holds with equality if $(\overline{\mathbf{X}}_t, \widehat{\overline{\mathbf{X}}}_t), t=1,2, \ldots, n_x$ are mutually independent. \\ The average distortion function is then given by \begin{align} {\bf E}\big\|X-\widehat{X}\big\|_{{\mathbb R}^{n_x}}^2 &= {\bf E}\big\|X-\widehat{X}- Q_{X,Y} Q_Y^{-1}Y+Q_{X,Y}Q_Y^{-1}Y\big\|_{{\mathbb R}^{n_x}}^2 \\ &={\bf E}\big\|\mathbf{X} - \mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2, \hspace{.1in} \mbox{by (\ref{trans_1}), i.e., (\ref{prop_1})} \\ &={\bf E}\big\|U^{\mbox{\tiny T}}\mathbf{X} -U^{\mbox{\tiny T}} \mathbf{\widehat{X}} \big\|_{{\mathbb R}^{n_x}}^2 = \trace \big( \Sigma_{\Delta}\big), \hspace{.1in} \mbox{by $UU^{\mbox{\tiny T}}=I_{n_x}$}. \end{align} By Corollary~\ref{cor:sp_rep}, if (\ref{suff_c}) holds, that is, $Q_{X|Y} \succeq 0$ and $\Sigma_\Delta\succeq 0$ satisfy $Q_{X|Y} \Sigma_\Delta =\Sigma_\Delta Q_{X|Y}$ (i.e., commute), then (\ref{suff_c1})-(\ref{deco_1}) hold, and \begin{align} \widehat{\overline{\mathbf{X}}} \sr{\triangle}{=}& U^{\mbox{\tiny T}} \widehat{\mathbf{X}}=U^{\mbox{\tiny T}} H {\mathbf{X}}+U^{\mbox{\tiny T}} W=U^{\mbox{\tiny T}} H U U^{\mbox{\tiny T}} {\mathbf{X}}+U^{\mbox{\tiny T}} W \\ =& U^{\mbox{\tiny T}} H U \overline{\mathbf{X}}+U^{\mbox{\tiny T}} W, \hspace{.1in} \mbox{$U^{\mbox{\tiny T}} HU$ is diagonal and $U^{\mbox{\tiny T}} W$ has indep. components}. \end{align} Hence, if (\ref{suff_c}) holds, then the lower bound in (\ref{eq:equiv_lb}) holds with equality, because $(\overline{\mathbf{X}}_t, \widehat{\overline{\mathbf{X}}}_t), t=1,2, \ldots, n_x$ are mutually independent. Moreover, if (\ref{suff_c}) holds, then from, say, (\ref{equiv_100}), the expressions (\ref{fchar_1}), (\ref{fchar_2}) are obtained. The above equations establish all claims. \end{proof} \ \ \begin{proposition} Theorem~\ref{thm_rw} is correct. \end{proposition} \begin{proof} By invoking Corollary~\ref{cor:equivalent}, Theorem~\ref{thm:proof2} and the convexity of $R_{X|Y}(\Delta_X)$ given by (\ref{fchar_1}), we arrive at the statements of Theorem~\ref{thm_rw}, which completely characterize the RDF $R_{X|Y}(\Delta_X)$ and construct a realization of the optimal $\widehat{X}$ that achieves it. \end{proof} \ \ Next, we discuss the degenerate case, when the statements of Theorem~\ref{thm:proof2} and Theorem~\ref{thm_rw} reduce to the RDF ${R}_{X}(\Delta_X)$ of a Gaussian RV $X$ with square-error distortion function. We illustrate that the identified structural property of the realization matrices $\big\{\Sigma_\Delta, Q_{X|Y}, H, Q_W\big\}$ leads to the well-known water-filling solution.
\begin{remark} Degenerate case of Theorem~\ref{thm:proof2}\\ \label{rem_crdf_1} Consider the characterization of the RDF ${R}_{X|Y}(\Delta_X)$ of Theorem~\ref{thm:proof2}, and assume $X$ and $Y$ are independent or $Y$ generates the trivial information, i.e., the $\sigma$-algebra of $Y$ is $\sigma\{Y\}=\{\Omega, \emptyset\}$, or $C=0$ in (\ref{eq:sideInfo})-(\ref{prob_9}). (a) By the definitions of $Q_{X,Y}$ and $Q_{X|Y}$, \begin{align} Q_{X,Y}=0, \hspace{.1in} Q_{X|Y}= Q_X. \label{deg_1} \end{align} Substituting (\ref{deg_1}) into the expressions of Theorem~\ref{thm:proof2}, the RDF $R_{X|Y}(\Delta_X)$ reduces to \begin{align} {R}_{X|Y}(\Delta_X)=& {R}_{X}(\Delta_X) \sr{\triangle}{=} \inf_{{\cal Q}^m(\Delta_X)} I(X; \widehat{X}) \label{eq:optiProbl_deg}\\ =& \inf_{{\cal Q}^{m}(\Delta_X)} \frac{1}{2}\log \Big\{\det( Q_{X}\Sigma_{\Delta}^{-1}) \Big\} \end{align} where \begin{align} {\cal Q}^m(\Delta_X)\sr{\triangle}{=}& \bigg\{\Sigma_{\Delta}: \trace \big( \Sigma_{\Delta} \big)\leq{\Delta_X} \bigg\} \end{align} and the optimal reproduction $\widehat{X}$ reduces to \begin{align} \widehat{X} =& \Big(I_{n_x} - \Sigma_{\Delta} Q_X^{-1}\Big) X + W, \hspace{.1in} Q_X \succeq \Sigma_\Delta, \label{eq:realizatio_x}\\ Q_W=&\Big( I_{{n_x}} - \Sigma_{\Delta} Q_X^{-1}\Big) \Sigma_{\Delta} \succeq 0. \label{eq:realization_n_deg} \end{align} Thus, $R_X(\Delta_X)$ is the well-known RDF of a multivariate memoryless Gaussian RV $X$ with square-error distortion. (b) For the RDF ${R}_{X}(\Delta_X)$ of part (a), it is known \cite{Ihara:1993} that $\Sigma_\Delta$ and $Q_X$ have a spectral decomposition with respect to the same unitary matrix, that is, \begin{align} & Q_X = U\Lambda_X U^{\mbox{\tiny T}}, \hspace{.1in} \Sigma_\Delta = U\Delta U^{\mbox{\tiny T}}, \hspace{.1in} UU^{\mbox{\tiny T}} = I \\ & \Lambda_X = \diag{\{ \lambda_{X,1},\dots, \lambda_{X,n_x}\}} , \hspace{.1in} \Delta = \diag{\{\delta_{1},\dots, \delta_{n_x}\}} \end{align} where the entries of $(\Lambda_X, \Delta)$ are in decreasing order. \\ Define \begin{align} \mathsf{X}^p \sr{\triangle}{=} U^{\mbox{\tiny T}} X, \hspace{.1in} \mathsf{\widehat{X}}^p \sr{\triangle}{=} U^{\mbox{\tiny T}} \widehat{X}, \hspace{.1in} \mathsf{W}^p \sr{\triangle}{=} U^{\mbox{\tiny T}} W. \end{align} Then a parallel channel realization of the optimal reproduction $\mathsf{\widehat{X}}^p$ is obtained, given by \begin{align} &\mathsf{\widehat{X}}^p = \mathsf{H}\mathsf{X}^p + \mathsf{W}^p , \\ & \mathsf{H} = I_{n_x} - \Delta\Lambda_X^{-1}=\diag{\{1-\frac{\delta_1}{\lambda_{X,1}},\dots,1-\frac{\delta_{n_x}}{\lambda_{X,n_x}}\}} , \\ & Q_{\mathsf{W}^p} = \mathsf{H} \Delta = \diag{\{\big(1-\frac{\delta_1}{\lambda_{X,1}}\big)\delta_1,\dots,\big(1-\frac{\delta_{n_x}}{\lambda_{X,n_x}}\big)\delta_{n_x}\}} . \end{align} The RDF $R_{X}(\Delta_X)$ is then computed from the reverse water-filling equations, as follows. \begin{equation} R_{X}(\Delta_X) =\frac{1}{2} \sum_{i=1}^{n_x} \log \frac{\lambda_{X,i}}{\delta_i} \end{equation} where \begin{equation} \sum_{i=1}^{n_x} \delta_i = \Delta_X, \hspace{.2in} \delta_i= \left\{ \begin{array}{lll} \mu, & \mbox{if} & \mu < \lambda_{X,i} \\ \lambda_{X,i}, & \mbox{if} & \mu \geq \lambda_{X,i} \end{array} \right. \end{equation} and where $\mu \in [0,\infty)$ is a Lagrange multiplier (obtained from the Kuhn-Tucker conditions).
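The reverse water-filling allocation can be sketched numerically as follows (a minimal Python illustration with a hypothetical spectrum of $Q_X$; the water level $\mu$ is located by bisection on $\sum_i \min(\mu, \lambda_{X,i}) = \Delta_X$, assuming $0 < \Delta_X \leq \sum_i \lambda_{X,i}$):
\begin{verbatim}
import numpy as np

def reverse_water_filling(lam, Delta_X, tol=1e-12):
    # lam: eigenvalues of Q_X; assumes 0 < Delta_X <= sum(lam).
    # Returns delta with delta_i = min(mu, lam_i), sum(delta) = Delta_X,
    # and the rate R_X(Delta_X) = 0.5 * sum(log(lam_i / delta_i)) in nats.
    lam = np.asarray(lam, dtype=float)
    lo, hi = 0.0, Delta_X              # the water level satisfies mu <= Delta_X
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.minimum(mu, lam).sum() < Delta_X:
            lo = mu
        else:
            hi = mu
    delta = np.minimum(0.5 * (lo + hi), lam)
    rate = 0.5 * np.sum(np.log(lam / delta))
    return delta, rate

# hypothetical spectrum of Q_X and distortion budget
delta, rate = reverse_water_filling([3.0, 2.0, 0.5], Delta_X=1.2)
print(delta, rate)   # allocation [0.4 0.4 0.4] and R_X(1.2) in nats
\end{verbatim}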
\end{remark} \subsection{Side Information only at Decoder} In general, when the side information is available only at the decoder, the achievable operational rate $R^*(\Delta_X)$ is no smaller than the achievable operational rate $\overline{R}_1(\Delta_X)$ obtained when the side information is available to both the encoder and the decoder \cite{wyner1978}. By Remark~\ref{rem_lb}, $\overline{R}(\Delta_X) \geq R_{X|Y}(\Delta_X)$, and equality holds if $I(X;Z|\widehat{X}, Y)=0$. \par In view of the characterization of $R_{X|Y}(\Delta_X)$ and the realization of the optimal reproduction $\widehat{X}$ of Theorem~\ref{thm_rw}, which is presented in Fig.~\ref{fg:realization}, we observe that we can re-write (\ref{eq:realization_sp}) as follows. \begin{align} \widehat{X} =& \Big(I_{n_x} - \Sigma_{\Delta} Q_{X|Y}^{-1}\Big) X+ \Sigma_\Delta Q_{X|Y}^{-1} Q_{X,Y} Q_Y^{-1} Y + W, \label{eq:realization-d}\\ =&\Sigma_\Delta Q_{X|Y}^{-1} Q_{X,Y}Q_Y^{-1}Y +Z \\ =&f(Y,Z)\\ Z=&\Big(I_{n_x} - \Sigma_{\Delta} Q_{X|Y}^{-1}\Big) \Big(X + \Big(I_{n_x} - \Sigma_{\Delta} Q_{X|Y}^{-1}\Big)^{-1} W\Big), \\ H=&I_{n_x}-\Sigma_\Delta Q_{X|Y}^{-1}, \; Q_W=H\Sigma_\Delta, \hspace{.1in} \mbox{defined by (\ref{eq:realization_nn_1_sp})-(\ref{spe_d}),} \label{eq:realization_n}\\ {\bf P}_{Z|X,Y}=&{\bf P}_{Z|X}, \hspace{.1in} \mbox{$(\widehat{X}, Y)$ uniquely define $Z$, which implies $I(X;Z|\widehat{X}, Y)=0$.} \end{align} The realization $\widehat{X}=f(Y,Z)$ is shown in Fig.~\ref{fg:realization}. \\ \begin{proposition} Theorem~\ref{thm:dec} is correct. \end{proposition} \begin{proof} From the above realization of $\widehat{X}=f(Y,Z)$, we have the following. (a) By Wyner (see Remark~\ref{rem_lb}), the inequalities (\ref{in_11}) and (\ref{in_1}) hold, and equality holds if $I(X;Z| \widehat{X}, Y)=0$. That is, for any $\widehat{X}= f(Y,Z)$, by properties of conditional mutual information, \begin{align} I(X;Z|Y) &\overset{(\alpha)}= I(X;Z,\widehat{X}|Y)\\ &\overset{(\beta)}= I(X;Z|\widehat{X},Y) + I(X;\widehat{X}|Y) \\ &\overset{(\gamma)}\geq I(X;\widehat{X}|Y ) \label{lb_p} \end{align} where $(\alpha)$ is due to $\widehat{X}= f(Y,Z)$, $(\beta)$ is due to the chain rule of mutual information, and $(\gamma)$ is due to $I(X;Z|\widehat{X},Y)\geq 0$. Hence, (\ref{lower_b_d_1}) is obtained (as in Wyner \cite{wyner1978} for a tuple of scalar jointly Gaussian RVs). (b) Equality holds in (\ref{lb_p}) if there exists an $\widehat{X}= f(Y,Z)$ such that $I(X;Z|\widehat{X},Y)=0$, and the average distortion is satisfied. Taking $\widehat{X}= f(Y,Z) = (I_{n_x}-H)Q_{X,Y} Q_Y^{-1} Y + Z$, where $Z= g(X,W)$ is specified by (\ref{eq:realization-d})-(\ref{eq:realization_n}), then $I(X;Z|\widehat{X},Y)=0$ and the average distortion is satisfied. Since the realization (\ref{eq:realization-d})-(\ref{eq:realization_n}) is identical to the realization (\ref{real_d1})-(\ref{real_d2}), part (b) is also shown. (c) This follows directly from the optimal realization.
\end{proof} \ \ \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{realizationWynerZiv} \caption{RDF $R_{X|Y}(\Delta_X)$: Wyner's \cite{wyner1978} optimal realization of $\widehat{X}$ for RDF $R_{X|Y}(\Delta_X)$ of (\ref{sc_1})-(\ref{sc_4}).} \label{real_ed} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{realization2WynerZiv} \caption{RDF $\overline{R}(\Delta_X)$: Wyner's \cite{wyner1978} optimal realization $\widehat{X}=f(Y, Z)$ for RDF $\overline{R}(\Delta_X)$ of (\ref{sc_1})-(\ref{sc_4}).} \label{real_d} \end{subfigure} \caption{Wyner's realizations of optimal reproductions for RDFs $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$} \label{fig:three graphs} \end{figure} \begin{remark} Relation to Wyner's \cite{wyner1978} optimal test channel realizations\\ \label{rem-wyner} Now, we verify that our optimal realizations of $\widehat{X}$ and closed form expressions for $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$ are identical to Wyner's \cite{wyner1978} realizations and RDFs (see Fig.~\ref{fig:three graphs}), for the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function, \begin{align} &X: \Omega \rightarrow {\cal X} \sr{\triangle}{=} {\mathbb R}, \hspace{.1in} Y: \Omega \rightarrow {\cal Y} \sr{\triangle}{=} {\mathbb R},\hspace{.1in} \widehat{X}: \Omega \rightarrow \widehat{\cal X} \sr{\triangle}{=} {\mathbb R}, \label{sc_1}\\ &d_X(x,\widehat{x})=\big(x-\widehat{x}\big)^2, \label{sc_2} \\ &X \in N(0, \sigma_X^2), \; \sigma_X^2>0, \hspace{.1in} Y=\alpha \Big(X+U\Big), \label{sc_3} \\ & U \in N(0,\sigma_U^2), \hspace{.1in} \sigma_U^2>0, \hspace{.1in} \alpha >0. \label{sc_4} \end{align} (a) RDF $R_{X|Y}(\Delta_X)$. First, we show that our realization of the optimal $\widehat{X}$ of Fig.~\ref{fg:realization}, which achieves the RDF of Theorem~\ref{thm_rw}, degenerates to Wyner's \cite{wyner1978} optimal realization that achieves the RDF $R_{X|Y}(\Delta_X)$, for the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function, given by (\ref{sc_1})-(\ref{sc_4}). This is verified below. \\ By Theorem~\ref{thm_rw}.(a) applied to (\ref{sc_1})-(\ref{sc_4}), we obtain \begin{align} &Q_X=\sigma_X^2, \hspace{.1in} Q_{X,Y}=\alpha \sigma_X^2,\hspace{.1in} Q_{Y}=\sigma_Y^2= \alpha^2 \sigma_X^2 + \alpha^2 \sigma_U^2, \hspace{.1in} Q_{X|Y}= c \sigma_U^2, \hspace{.1in} c\sr{\triangle}{=} \frac{\sigma_X^2}{\sigma_X^2 +\sigma_U^2}, \label{deg_W1} \\ &H=1-\Delta_X Q_{X|Y}^{-1}=\frac{c \sigma_U^2 - \Delta_X}{c\sigma_U^2} \equiv a, \hspace{.1in} Q_{X,Y}Q_Y^{-1}=\frac{c}{\alpha}, \hspace{.1in} HQ_{X,Y}Q_Y^{-1}=\frac{a c}{\alpha},\label{deg_W2} \\ & W= H\Psi = a \Psi, \hspace{.2in} Q_\Psi= H^{-1} \Delta_X= \frac{\Delta_X}{a}= \frac{c\sigma_U^2\Delta_X}{c\sigma_U^2 -\Delta_X}, \hspace{.1in} c\sigma_U^2 -\Delta_X>0 . \label{deg_W3} \end{align} Moreover, by Theorem~\ref{thm_rw}.(b) the optimal reproduction $\widehat{X}\in {\cal M}_0(\Delta_X)$ and $R_{X|Y}(\Delta_X)$ are \begin{align} &\widehat{X} = a(X - \frac{c}{\alpha}Y) + \frac{c}{\alpha}Y+ a\Psi,\hspace{.1in} c\sigma_U^2 -\Delta_X>0 \label{realization_ED_1}\\ \label{eq:rateWynerZivscalar} &R_{X|Y}(\Delta_X) = \left\{ \begin{array}{ll} \frac{1}{2}\log \frac{c \sigma_U^2}{\Delta_X}, & 0 < \Delta_X <c\sigma_U^2\\ 0, & \Delta_X \geq c\sigma_U^2 . \end{array} \right. \end{align} This shows our realization of Fig.~\ref{fg:realization} degenerates to Wyner's \cite{wyner1978} realization of Fig.~\ref{real_ed}.
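As a sanity check on part (a), the following minimal Monte Carlo sketch (Python, with hypothetical values of $\sigma_X^2, \sigma_U^2, \alpha, \Delta_X$; not part of the derivations) simulates the realization (\ref{realization_ED_1}) and confirms that the average distortion equals $\Delta_X$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sx2, su2, alpha, Delta = 1.0, 0.5, 2.0, 0.2    # hypothetical parameters
c = sx2 / (sx2 + su2)                          # eq. (deg_W1)
Qx_cond = c * su2                              # Q_{X|Y} = c * sigma_U^2
assert 0 < Delta < Qx_cond                     # nonzero-rate region
a = (Qx_cond - Delta) / Qx_cond                # H = a, eq. (deg_W2)
Qpsi = Delta / a                               # eq. (deg_W3)

n = 10**6
X = rng.normal(0.0, np.sqrt(sx2), n)
U = rng.normal(0.0, np.sqrt(su2), n)
Psi = rng.normal(0.0, np.sqrt(Qpsi), n)
Y = alpha * (X + U)                            # eq. (sc_3)
Xhat = a * (X - (c / alpha) * Y) + (c / alpha) * Y + a * Psi  # (realization_ED_1)

print(np.mean((X - Xhat) ** 2))                # ~ Delta = 0.2
print(0.5 * np.log(Qx_cond / Delta))           # R_{X|Y}(Delta) in nats
\end{verbatim}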
\noindent (b) RDF $\overline{R}(\Delta_X)$. Now, we show that our realization of the optimal $\widehat{X}=f(Y,Z)$ that achieves the RDF $\overline{R}(\Delta_X)$ of Theorem \ref{thm:dec} degenerates to Wyner's \cite{wyner1978} realization that achieves the RDF $\overline{R}(\Delta_X)$ of the tuple of scalar-valued, jointly Gaussian RVs $(X, Y)$, with square error distortion function given by (\ref{sc_1})-(\ref{sc_4}). This is verified below. \\ By Theorem \ref{thm:dec}.(b) applied to (\ref{sc_1})-(\ref{sc_4}), and using the calculations (\ref{deg_W1})-(\ref{realization_ED_1}), then \begin{align} &\widehat{X} =f(Y,Z)= \frac{c}{\alpha}(1-a)Y +Z \hspace{.2in} \mbox{by (\ref{realization_ED_1}), (\ref{realization_D_2})}, \label{realization_D_1} \\ &Z= a \Big(X +\Psi\Big), \hspace{.1in} (a, \Psi) \hspace{.2in} \mbox{defined in (\ref{deg_W2}), (\ref{deg_W3})} \label{realization_D_2}\\ & \overline{R}(\Delta_X)= R_{X|Y}(\Delta_X) =\mbox{ (\ref{eq:rateWynerZivscalar}) } \hspace{.2in} \mbox{by evaluating $I(X;Z)-I(Y;Z)$, i.e., using (\ref{rdf_d1_a}) and (\ref{realization_D_2}).} \label{realization_D_2_n} \end{align} This shows that our value of $\overline{R}(\Delta_X)$ and optimal realization $\widehat{X}=f(Y,Z)$ reproduce Wyner's optimal realization and value of $\overline{R}(\Delta_X)$ given in \cite{wyner1978} (i.e., Fig.~\ref{real_d}). \end{remark} \begin{remark} On the optimal test channel realization of distributed source coding problem \cite{tian-chen2009} and \cite{Zahedi-Ostegraard-2014} \\ \label{comment} We show that, contrary to the claims in \cite[Abstract]{tian-chen2009} and \cite[Theorem~3A]{Zahedi-Ostegraard-2014}, the optimal test channels used in the derivations of the RDFs for the distributed remote source coding problems are incorrect, in the sense that they do not produce Wyner's value of the RDF $\overline{R}(\Delta_X)$ and the optimal test channel that achieves it (i.e., the solution presented in Remark~\ref{rem-wyner}). (a) Tian and Chen \cite{tian-chen2009} considered the following formulation\footnote{In the notation of \cite{tian-chen2009} the RVs $(S, X, Y, Z, \widehat{S})$ are represented by $(X, Y, Z,W, \widehat{X})$.} of (\ref{rdf_po_1}), (\ref{rdf_po_2}): \begin{align} \overline{R}^{PO,1}(\Delta_S) \sr{\triangle}{=}\inf_{Z: \: {\bf P}_{Z|X, Y, S}= {\bf P}_{Z|X}, \; \widehat{S}={\bf E}\big\{S\big|Z, Y\big\}, \: {\bf E}\big\{||S-\widehat{S}||^2\big\}\leq \Delta_S} I(X; Z|Y). \label{tian-chen} \end{align} For multivariate correlated jointly Gaussian RVs $(S,X, Y, Z, \widehat{S})$, with square-error distortion function $d_S(s,\widehat{s})=||s-\widehat{s}||^2$, the RDF $\overline{R}^{PO,1}(\Delta_S)$ is given in \cite[Theorem~4]{tian-chen2009}.\\ Clearly, \\ (i) if $S=X-$a.s (almost surely) then the RDF $\overline{R}^{PO,1}(\Delta_S)$ degenerates to Wyner's RDF $\overline{R}(\Delta_X)$, and \\ (ii) if $S=X-$a.s and the RV $X$ is independent of the RV $Y$, or $Y$ generates trivial information, then the RDF $\overline{R}^{PO,1}(\Delta_S)$ degenerates to the classical RDF of the source $X$, i.e., $R_{X}(\Delta_X)$, as verified from (\ref{real_d1})-(\ref{real_d2}), i.e., $Q_{X,Y}=0$, which implies $\widehat{X}=Z$. \\ We examine (i), i.e., under the restriction $S=X-$a.s., by recalling the optimal realization of RVs $(Z, \widehat{X})$ used in the derivation of \cite[Theorem~4]{tian-chen2009}.
\\ The derivation of \cite[Theorem~4]{tian-chen2009} uses the following RVs (see \cite[eqn(4)]{tian-chen2009} adapted to our notation): \begin{align} X &= K_{xy}Y + N_1, \\ S &= K_{sx}X + K_{sy}Y +N_2, \\ S &= K_{sx}\Big(K_{xy}Y + N_1\Big) + K_{sy} Y + N_2, \\ &= \Big(K_{sx}K_{xy}+K_{sy}\Big) Y + K_{sx}N_1+ N_2 \end{align} where $N_1$ and $N_2$ are independent Gaussian RVs with zero mean, $N_1$ is independent of $Y$, and $N_2$ is independent of $(X, Y)$. \\ The condition $X =S-$a.s. implies \begin{align} &K_{sx}K_{xy}+K_{sy} =K_{xy} , \hspace{.2in} K_{sx}=I, \hspace{.1in} N_2=0-a.s. \label{tian_chen_1}\\ &\Longrightarrow \hspace{.1in} K_{xy}+K_{sy} =K_{xy} \hspace{.1in} \Longrightarrow \hspace{.1in} K_{sy}=0. \end{align} The optimal realization of the auxiliary random variable $Z$ used to achieve the RDF in the derivation of \cite[Theorem~4]{tian-chen2009} (see \cite[3 lines above eqn(32)]{tian-chen2009} using our notation) is \begin{align} Z&= UK_{sx}X + N_3\\ &= UX + N_3, \hspace{.1in} \mbox{by (\ref{tian_chen_1}) } \label{tian_chen_2} \end{align} where $U$ is a unitary matrix and $N_3\in N(0, Q_{N_3})$, i.e., Gaussian, such that $Q_{{N_3}}$ is a diagonal covariance matrix, with diagonal elements given by \begin{align} \sigma_{3,i}^2 = \frac{\min(\lambda_i,\delta_i)}{\lambda_i-\min(\lambda_i,\delta_i)}. \label{tian_chen_3} \end{align} For scalar-valued RVs, (\ref{tian_chen_2}) and (\ref{tian_chen_3}) reduce to \begin{align} Z&= X + N_3, \hspace{.2in} N_3 \in N \Big(0,\frac{\Delta_X}{\sigma_{X|Y}^2-\Delta_X}\Big).\label{eq:wynerTC} \end{align} It is easy to verify, by letting $(X,Y)$ be as in (\ref{sc_3}), (\ref{sc_4}), that the realization of the auxiliary RV $Z$ given by (\ref{eq:wynerTC}) is different from Wyner's auxiliary RV $Z$ given by (\ref{realization_D_2}), and gives a value of $I(X;Z)-I(Y;Z)$ which is also different from Wyner's value of the RDF $\overline{R}(\Delta_X)$, i.e., (\ref{eq:rateWynerZivscalar}). In particular, if $\sigma_{X|Y}^2 =\Delta_X$ then $Z=0-$almost surely should hold (as verified from the realization of $Z$ given by (\ref{realization_D_2}), which reduces to $Z=0-$almost surely if $Q_{X|Y}=\Delta_X$, i.e., the value of the parameter $H$ is $H=a=0$); instead, the variance of $Z$ takes the value $+\infty$. \\ We also examine (ii) (above), i.e., setting $S=X-$a.s, taking $X$ to be independent of $Y$ or $Y$ generating trivial information. Clearly, the RDF $\overline{R}^{PO,1}(\Delta_S)$ degenerates to the classical RDF of the source $X$, i.e., $R_{X}(\Delta_X)$, as verified from (\ref{real_d1})-(\ref{real_d2}), i.e., $Q_{X,Y}=0$, which implies $\widehat{X}=Z$. For scalar-valued RVs the optimal reproduction $\widehat{X}=Z$ degenerates to (\ref{mem_scal_1}), (\ref{mem_scal_2}). On the other hand, (\ref{eq:wynerTC}) does not reduce to (\ref{mem_scal_1}), (\ref{mem_scal_2}); moreover, the variance of $Z$ defined by (\ref{eq:wynerTC}) is $\sigma_Z^2=\sigma_X^2 + \frac{\Delta_X}{\sigma_{X}^2-\Delta_X}$, which is fundamentally different from the variance $Q_{\widehat{X}}=\sigma_{\widehat{X}}^2=\sigma_Z^2=\sigma_X^2 -\Delta_X$ of (\ref{mem_scal_2}). (b) Similarly to part (a) above, if we repeat the above steps under the condition $S=X-$a.s., the RDF of the remote sensor problem analyzed in \cite{Zahedi-Ostegraard-2014} reduces to Wyner's RDF $\overline{R}(\Delta_X)$.
Moreover, the optimal realization of the auxiliary RV $Z$, which is used to achieve the RDF in the derivation of \cite[Theorem~3A]{Zahedi-Ostegraard-2014} (see \cite[eqn(26)]{Zahedi-Ostegraard-2014} using our notation), reduces to \begin{align} Z = U X + \nu \label{og_1} \end{align} where $U$ is a unitary matrix and $\nu\in N(0,Q_{\nu})$ is a zero mean Gaussian vector with independent components, with variances across the diagonal of $Q_{\nu}$ given by \begin{align} \sigma_{\nu_i}^2 = \frac{\lambda_i \min{(\lambda_i ,\delta_i)} }{\lambda_i - \min{(\lambda_i ,\delta_i)}}. \label{og_2} \end{align} For scalar-valued RVs, (\ref{og_1}) and (\ref{og_2}) reduce to \begin{align} Z = X + \nu, \hspace{.1in} \nu \in N \bigg(0, \frac{\sigma_{X|Y}^2}{\sigma_{X|Y}^2-\Delta_X}\Delta_X\bigg).\label{og_3} \end{align} Clearly, by letting $(X,Y)$ be as in (\ref{sc_3}), (\ref{sc_4}), the auxiliary RV $Z$ given by (\ref{og_3}) is different from Wyner's auxiliary RV $Z$ given by (\ref{realization_D_2}), and does not produce Wyner's value $\overline{R}(\Delta_X)=I(X;Z)-I(Y;Z)$ given by (\ref{realization_D_2_n}). In particular, if $\sigma_{X|Y}^2 =\Delta_X$ then $Z=0-$almost surely should hold (as verified from the realization of $Z$ given by (\ref{realization_D_2}), which reduces to $Z=0-$almost surely if $Q_{X|Y}=\Delta_X$, i.e., the value of the parameter $H$ is $H=a=0$); instead, the variance of $Z$ takes the value $+\infty$. It is also noted that the variance of the auxiliary RV $Z$ of \cite{tian-chen2009} given by (\ref{eq:wynerTC}) is different from the variance of the auxiliary RV of \cite{Zahedi-Ostegraard-2014} given by (\ref{og_3}), although both are designed to achieve the same value of $I(X;Z)-I(Y;Z)$. \end{remark} \begin{remark} On the realization of test channels \label{rk:1_n} \par (a) It should be mentioned that unless a realization of $\widehat{X}$ is identified that achieves the RDFs $R_{X|Y}(\Delta_X)$ and $\overline{R}(\Delta_X)$, such that the joint distribution ${\bf P}_{X,Y, \widehat{X}}$ has as marginal the fixed source distribution ${\bf P}_{X,Y}$, the characterization of the RDFs is incomplete. (b) Corollary~\ref{cor:c-dec} follows from the two main theorems, and its complete solution is generated similarly to the two main theorems. \end{remark} \section{Conclusion} We derived structural properties of the optimal test channel realizations that achieve the characterizations of RDFs for a tuple of multivariate jointly independent and identically distributed Gaussian random variables with mean-square error fidelity, when side information is available to the decoder and not to the encoder, and when side information is available to both. We derived achievable lower bounds on conditional mutual information, and applied properties of mean-square error estimation to identify structural properties of optimal test channels that achieve these bounds. We also applied the structural properties of optimal test channels to construct realizations of optimal reproductions. \section{Appendix} \subsection{Proof of Lemma~\ref{lem:proof1}} \label{app_A} (a) By the chain rule of mutual information, \begin{align} I(X;\widehat{X},Y)=&I(X;Y|\widehat{X}) + I(X;\widehat{X})\\ =& I(X;\widehat{X}|Y) + I(X;Y). \end{align} Since $I(X;Y|\widehat{X})\geq 0$, it follows from the above that \begin{align} I(X;\widehat{X}) \leq & I(X;\widehat{X}|Y) + I(X;Y)\\ I(X;\widehat{X}|Y) \geq & I(X;\widehat{X}) - I(X;Y). \label{ineq_1_new} \end{align} The above shows (\ref{ineq_1}).
To show equality, we note the following: \begin{align*} I(X;\widehat{X}|Y) &= {\bf E}\Big[\log \frac{{\bf P}_{X|\widehat{X},Y}}{{\bf P}_{X|Y}}\Big]\\ &= {\bf E}\Big[\log \frac{{\bf P}_{X|\widehat{X},Y}}{{\bf P}_{X|Y}}\frac{{\bf P}_{X}}{{\bf P}_{X}}\Big]\\ &= {\bf E}\Big[\log \frac{{\bf P}_{X|\widehat{X},Y}}{{\bf P}_{X}} - \log\frac{{\bf P}_{X|Y}}{{\bf P}_{X}}\Big]\\ &= {\bf E}\Big[\log \frac{{\bf P}_{X|\widehat{X}}}{{\bf P}_{X}} - \log\frac{{\bf P}_{X|Y}}{{\bf P}_{X}}\Big], \hspace{.1in} \mbox{if ${\bf P}_{X|\widehat{X}, Y}={\bf P}_{X|\widehat{X}}$}. \end{align*} This establishes the equality (\ref{ineq_1_neq}). (b) Consider a test channel ${\bf P}_{X|\widehat{X},Y}$ such that ${\bf E}\big\{||X-\widehat{X}||_{{\mathbb R}^{n_x}}^2\big\}\leq \Delta_X$, i.e., $\widehat{X} \in {\cal M}_0(\Delta_X)$, and such that ${\bf P}_{X|\widehat{X}, Y}={\bf P}_{X|\widehat{X}}$, for $\Delta_X \leq {\cal D}_C(X|Y)$, where ${\cal D}_C(X|Y) \in (0,\infty)$. Taking the infimum of both sides of (\ref{ineq_1_neq}) over $\widehat{X} \in {\cal M}_0(\Delta_X)$ such that ${\bf P}_{X|\widehat{X}, Y}={\bf P}_{X|\widehat{X}}$, (\ref{ineq_1_neq_G}) is obtained, for a nontrivial region $\Delta_X \leq {\cal D}_C(X|Y)$, which exists due to the continuity and convexity of $R_{X}(\Delta_X)$ for $\Delta_X \in (0,\infty)$. This completes the proof. \subsection{Proof of Theorem~\ref{thm:proof2}} \label{app_B} We identify the triple $(H,G, Q_W)$ that satisfies Conditions 1 and 2 of Theorem~\ref{them:lb_g}, which then implies $\widehat{X}=\overline{X}^{cm}$, from which the claimed statements follow. Consider the realization given by $(\ref{eq:real})$. \\ {\it Condition 1, i.e., (\ref{eq:condA}).} The left-hand side of (\ref{eq:condA}) is given by (this follows from mean-square estimation theory, or an application of (\ref{eq:mean22}) with ${\cal G}=\{\Omega, \emptyset\}$) \begin{align} {\bf E}\Big(X\Big|Y\Big)=& {\bf E}\Big(X\Big) + \mathrm{cov}(X,Y) \mathrm{cov}(Y,Y)^{-1}\Big(Y - {\bf E}\Big(Y\Big)\Big) \\ \nonumber =&\mathrm{cov}(X,Y) \mathrm{cov}(Y,Y)^{-1}Y\\ =&Q_{X,Y}Q_Y^{-1} Y\\ =&Q_XC^{\mbox{\tiny T}} Q_Y^{-1}Y \hspace{.2in} \mbox{by model (\ref{eq:sideInfo})-(\ref{prob_9})}.
\label{eq:condAL} \end{align} Similarly, the right-hand side of (\ref{eq:condA}) is given by \begin{align} {\bf E}\Big(\widehat{X}\Big|Y\Big) =& {\bf E}\Big(\widehat{X}\Big) + \mathrm{cov}(\widehat{X},Y) \mathrm{cov}(Y,Y)^{-1}\Big(Y - {\bf E}\Big(Y\Big)\Big) \\ \nonumber =&\mathrm{cov}(\widehat{X},Y) \mathrm{cov}(Y,Y)^{-1}Y\\ =& \Big(HQ_{X,Y} +GQ_Y\Big) Q_Y^{-1}Y \label{condx_1} \\ =& \Big(HQ_XC^{\mbox{\tiny T}} +GQ_Y\Big) Q_Y^{-1}Y \hspace{.2in} \mbox{by (\ref{eq:sideInfo})-(\ref{prob_9})}. \label{eq:condAR} \end{align} Equating (\ref{eq:condAL}) with (\ref{condx_1}), equivalently (\ref{eq:condAR}), yields \begin{align} &{\bf E}\Big({X}\Big|Y\Big) ={\bf E}\Big(\widehat{X}\Big|Y\Big)\\ &\Longrightarrow \hspace{.1in} Q_{X,Y}Q_Y^{-1}Y= \Big(HQ_{X,Y} +GQ_Y\Big) Q_Y^{-1}Y \hspace{.2in} \mbox{by (\ref{condx_1})} \\ &\Longrightarrow \hspace{.1in} Q_XC^{\mbox{\tiny T}} Q_Y^{-1}Y=\Big(HQ_XC^{\mbox{\tiny T}} +GQ_Y\Big) Q_Y^{-1}Y \hspace{.2in} \mbox{by (\ref{eq:sideInfo})-(\ref{prob_9}), (\ref{eq:condAR})}\\ &\Longrightarrow \hspace{.1in} G=\Big(I-H\Big)Q_XC^{\mbox{\tiny T}} Q_Y^{-1}\\ &\Longrightarrow \hspace{.1in} G=\Big(I-H\Big)Q_{X,Y} Q_Y^{-1}. \end{align} Hence, $G$ is obtained, and the reproduction is represented by \begin{align} &\widehat{X}=H X +\Big(I-H\Big) Q_{X,Y} Q_Y^{-1}Y +W, \label{step_1}\\ &\mathrm{cov}(\widehat{X},Y)=Q_{X,Y}, \hspace{.1in} {\bf E}\Big(\widehat{X}\Big|Y\Big)=Q_{X,Y}Q_Y^{-1}Y={\bf E}\Big(X\Big|Y\Big), \label{step_2}\\ &\widehat{X}- {\bf E}\Big(\widehat{X}\Big|Y\Big)=HX -HQ_{X,Y}Q_Y^{-1} Y +W.\label{step_3} \end{align} {\it Condition 2, i.e., (\ref{eq:condB}).} To apply (\ref{eq:condB}), the following calculations are needed. \begin{align} Q_{X|Y} &\sr{\triangle}{=}\mathrm{cov}(X,X|Y)\\ & = {\bf E} \Big\{ \Big(X - {\bf E}\Big(X\Big|Y\Big)\Big)\Big(X - {\bf E}\Big(X\Big|Y\Big)\Big)^{\mbox{\tiny T}} \Big\} \nonumber \\ &= Q_X - Q_{X,Y} Q_Y^{-1}Q_{X,Y}^{\mbox{\tiny T}}\\ &= Q_X - Q_XC^{\mbox{\tiny T}} Q_Y^{-1}CQ_X \hspace{.2in} \mbox{by (\ref{eq:sideInfo})-(\ref{prob_9})} \end{align} \begin{align} \mathrm{cov}(X,\widehat{X}|Y) & \sr{\triangle}{=}{\bf E} \Big\{ \Big(X - {\bf E}\Big(X\Big|Y\Big)\Big)\Big(\widehat{X} - {\bf E}\Big(\widehat{X}\Big|Y\Big)\Big)^{\mbox{\tiny T}} \Big\} \nonumber \\ &={\bf E} \Big\{ \Big(X - {\bf E}\Big(X\Big|Y\Big)\Big)\Big(\widehat{X} - {\bf E}\Big(X\Big|Y\Big)\Big)^{\mbox{\tiny T}} \Big\} \hspace{.2in} \mbox{by (\ref{step_2}) }\\ &={\bf E} \Big\{ \Big(X - {\bf E}\Big(X\Big|Y\Big)\Big)\Big(\widehat{X} \Big)^{\mbox{\tiny T}} \Big\} \hspace{.2in} \mbox{by orthogonality }\\ &= Q_XH^{\mbox{\tiny T}} - Q_{X,Y}Q_Y^{-1}Q_{Y,X}H^{\mbox{\tiny T}} \hspace{.2in} \mbox{by (\ref{step_1}), (\ref{step_2}) }\\ &= Q_XH^{\mbox{\tiny T}} - Q_XC^{\mbox{\tiny T}} Q_Y^{-1}CQ_XH^{\mbox{\tiny T}} \hspace{.2in} \mbox{by (\ref{eq:sideInfo})-(\ref{prob_9})} \\ \nonumber &= \Big(Q_X - Q_XC^{\mbox{\tiny T}} Q_Y^{-1}CQ_X\Big)H^{\mbox{\tiny T}} \\ &= Q_{X|Y}H^{\mbox{\tiny T}} .
\label{eq:condBL} \end{align} \begin{align} \mathrm{cov}(\widehat{X},\widehat{X}|Y) & \sr{\triangle}{=}{\bf E} \Big\{ \Big(\widehat{X} - {\bf E}\Big(\widehat{X}\Big|Y\Big)\Big)\Big(\widehat{X} -{\bf E}\Big(\widehat{X}\Big|Y\Big)\Big)^{\mbox{\tiny T}} \Big\}\\ \nonumber &= HQ_X H^{\mbox{\tiny T}} +Q_W -H Q_{X,Y} Q_Y^{-1}Q_{Y,X}H^{\mbox{\tiny T}} \hspace{.2in} \mbox{by (\ref{step_3})}\\ &= HQ_XH^{\mbox{\tiny T}} +Q_W - HQ_XC^{\mbox{\tiny T}} Q_Y^{-1}CQ_XH^{\mbox{\tiny T}} \hspace{.2in} \mbox{by (\ref{eq:sideInfo})-(\ref{prob_9})} \\ \nonumber &= H\Big(Q_X-Q_XC^{\mbox{\tiny T}} Q_Y^{-1}CQ_X\Big)H^{\mbox{\tiny T}} + Q_W \\ &= \label{eq:condBR}H Q_{X|Y}H^{\mbox{\tiny T}} +Q_W . \end{align} By Condition 2, (\ref{eq:condBL}) and (\ref{eq:condBR}), \begin{align} &\mathrm{cov}(X,\widehat{X}|Y) \mathrm{cov}(\widehat{X},\widehat{X}|Y)^{-1} = I_{n_x}\\ \nonumber &\Longrightarrow \hspace{.1in} Q_{X|Y}H^{\mbox{\tiny T}}\Big(H Q_{X|Y}H^{\mbox{\tiny T}} +Q_W \Big)^{-1} = I_{n_x}\\ &\Longrightarrow \hspace{.1in} Q_W = Q_{X|Y}H^{\mbox{\tiny T}} - HQ_{X|Y}H^{\mbox{\tiny T}}\\ &\Longrightarrow\hspace{.1in} Q_W = \Big(I_{n_x} - H\Big)Q_{X|Y} H^{\mbox{\tiny T}} . \label{eq:KW} \end{align} Now, we determine $H$ as follows. \begin{align} \Sigma_{\Delta} \triangleq& \mathrm{cov}(X,X|Y,\widehat{X})\\ = & \mathrm{cov}(X,X|Y)-\mathrm{cov}(X,\widehat{X}|Y)\mathrm{cov}(\widehat{X},\widehat{X}|Y)^{-1} \mathrm{cov}(X,\widehat{X}|Y)^{\mbox{\tiny T}}, \ \ \ \ \mbox{by Prop.~\ref{prop_cg}, (\ref{eq:mean})} \\ =& \mathrm{cov}(X,X|Y)- \mathrm{cov}(X,\widehat{X}|Y)^{\mbox{\tiny T}}, \hspace{.1in} \mbox{by (\ref{eq:condB})}\\ =&Q_{X|Y} - HQ_{X|Y}, \hspace{.1in} \mbox{by (\ref{eq:condBL})} \label{eq:sigmas} \\ &\Longrightarrow \hspace{.1in} H Q_{X|Y} = Q_{X|Y} - \Sigma_{\Delta} \\ & \Longrightarrow \hspace{.1in} H = I_{n_x}-\Sigma_{\Delta} Q_{X|Y}^{-1}. \label{H} \end{align} Hence, $H$ is obtained. Moreover, $Q_W$ is obtained by substituting (\ref{H}) into (\ref{eq:KW}). From the above specification of the parameters $(H,G,Q_W)$, the realization (\ref{eq:realization})-(\ref{eq:realization_nn}) follows. From the realization (\ref{eq:realization})-(\ref{eq:realization_nn}), the property ${\bf P}_{X|\widehat{X},Y}={\bf P}_{X|\widehat{X}}-$a.s.\ then follows. Moreover, (\ref{eq:optiProbl})-(\ref{eq:optiProbl_n}) are obtained from the realization. \subsection{Proof of Corollary~\ref{cor:sp_rep}} \label{app_C} (a) This part is a special case of a related statement in \cite{charalambous-charalambous-kourtellaris-vanschuppen-2020}. However, we include it for completeness. By linear algebra \cite{Horn:2013}, given two matrices $A \in {\cal S}_{+}^{k \times k} , B \in {\cal S}_{+}^{k \times k}$, the following statements are equivalent: (1) $AB$ is normal, (2) $AB\succeq 0$, where $AB$ normal means $(AB) (AB)^{\mbox{\tiny T}}= (AB)^{\mbox{\tiny T}} (AB)$. Note that $AB$ is normal if and only if $AB=BA$, i.e., $A$ and $B$ commute. Let $A= U_A D_A U_A^{\mbox{\tiny T}}, B=U_B D_B U_B^{\mbox{\tiny T}}, U_A U_A^{\mbox{\tiny T}}=I_k, U_B U_B^{\mbox{\tiny T}}=I_k$, i.e., there exists a spectral representation of $A, B$ in terms of unitary matrices $U_A, U_B$ and diagonal matrices $D_A, D_B$. Then, $AB \succeq 0$ if and only if the matrices $A$ and $B$ commute, i.e., $AB =BA$, and $A$ and $B$ commute if and only if $U_A=U_B$. \\ Suppose (\ref{suff_c}) holds.
Letting $A=Q_{X|Y}, B=\Sigma_\Delta$, then $A = U_{A} D_{A} U_{A}^{\mbox{\tiny T}}, B = U_{B} D_{B} U_{B}^{\mbox{\tiny T}}, U_{A}U_{A}^{\mbox{\tiny T}}=I_{n_x}, U_{B}U_{B}^{\mbox{\tiny T}}=I_{n_x}, U_{A}=U_{B}$. Since $Q_{X|Y}^{-1}=A^{-1}=U_{A} D_{A}^{-1} U_{A}^{\mbox{\tiny T}},$ then $\Sigma_{\Delta} Q_{X|Y}^{-1}= Q_{X|Y}^{-1}\Sigma_{\Delta}$, i.e., they commute. Hence, \begin{align} H^{\mbox{\tiny T}}=&\Big(I_{n_x}- \Sigma_{\Delta} Q_{X|Y}^{-1}\Big)^{\mbox{\tiny T}}= I_{n_x}- (Q_{X|Y}^{-1})^{\mbox{\tiny T}} \Sigma_{\Delta}^{\mbox{\tiny T}}= I_{n_x}- Q_{X|Y}^{-1} \Sigma_{\Delta} \nonumber \\ =& I_{n_x}- \Sigma_{\Delta}Q_{X|Y}^{-1}=H \ \ \ \ \mbox{since $Q_{X|Y}$ and $\Sigma_{\Delta}$ commute.} \label{com_1} \end{align} By the definition of $Q_W$ given by (\ref{eq:realization_nn}) we have \begin{align} Q_{W} =\Sigma_{\Delta}H^{\mbox{\tiny T}} =Q_{W}^{\mbox{\tiny T}}= H \Sigma_{\Delta}. \label{inv_com} \end{align} Substituting (\ref{com_1}) into (\ref{inv_com}) then gives \begin{align} Q_{W} =\Sigma_{\Delta} H. \end{align} Hence, $\{\Sigma_{\Delta}, Q_{X|Y}, H, Q_{W}\}$ are all elements of ${\cal S}_{+}^{n_x \times n_x}$ having a spectral decomposition with respect to the same unitary matrix $U$, $UU^{\mbox{\tiny T}} =I_{n_x}$. \section{Acknowledgements} \par This work was supported in part by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation (Project: EXCELLENCE/1216/0365). \label{Bibliography} \bibliographystyle{IEEEtran}
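As an illustrative numerical sanity check of the construction above (our addition; the toy model, variable names and sample sizes are our own and not part of the original derivation), one can draw samples from the realization and confirm that the reproduction meets the distortion matrix and the conditional-mean constraint. A minimal sketch, assuming zero-mean jointly Gaussian $(X,Y)$ with $Y=CX+V$:
\begin{verbatim}
# Monte Carlo check (illustrative): build (H, G, Q_W) as derived above
# for a Sigma_Delta that commutes with Q_{X|Y}, then verify empirically
# that cov(X - Xhat) = Sigma_Delta and cov(Xhat, Y) = Q_{X,Y}.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_samp = 3, 3, 200_000

A = rng.standard_normal((n_x, n_x)); Q_X = A @ A.T + n_x * np.eye(n_x)
C = rng.standard_normal((n_y, n_x))
B = rng.standard_normal((n_y, n_y)); Q_V = B @ B.T + n_y * np.eye(n_y)

Q_Y   = C @ Q_X @ C.T + Q_V                          # cov(Y)
Q_XY  = Q_X @ C.T                                    # cov(X, Y)
Q_XgY = Q_X - Q_XY @ np.linalg.solve(Q_Y, Q_XY.T)    # Q_{X|Y}

# Distortion with the same eigenvectors as Q_{X|Y} (the sufficient
# condition of the corollary): eigenvalues d_i = 0.4 q_i, 0 < d_i < q_i.
q, U = np.linalg.eigh(Q_XgY)
Sigma_D = U @ np.diag(0.4 * q) @ U.T

H   = np.eye(n_x) - Sigma_D @ np.linalg.inv(Q_XgY)
G   = (np.eye(n_x) - H) @ Q_XY @ np.linalg.inv(Q_Y)
Q_W = (np.eye(n_x) - H) @ Q_XgY @ H.T
Q_W = (Q_W + Q_W.T) / 2          # symmetrize against round-off

X = rng.multivariate_normal(np.zeros(n_x), Q_X, size=n_samp)
V = rng.multivariate_normal(np.zeros(n_y), Q_V, size=n_samp)
W = rng.multivariate_normal(np.zeros(n_x), Q_W, size=n_samp)
Y = X @ C.T + V
Xhat = X @ H.T + Y @ G.T + W

err = X - Xhat
print(np.round(np.cov(err, rowvar=False) - Sigma_D, 2))           # ~ 0
cross = np.cov(np.hstack([Xhat, Y]), rowvar=False)[:n_x, n_x:]
print(np.round(cross - Q_XY, 2))                                  # ~ 0
\end{verbatim}
With this choice $H$ is symmetric, $Q_W=\Sigma_{\Delta}H^{\mbox{\tiny T}}\succeq 0$, and the sample covariances match $\Sigma_{\Delta}$ and $Q_{X,Y}$ up to Monte Carlo error, as the proof requires.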
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{document} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \newpage \setcounter{page}{0} \pagestyle{empty} \rightline{DFTT-25/93} \rightline{hep-th/9306019} \rightline{May 1993} \vs{15} \begin{center} { \LARGE {Explicit Construction of the BRST Charge for {\sf W}$_4$ } }\\[1cm] {\large K.\ Hornfeck}\footnote{e-mail: HORNFECK@TO.INFN.IT; \hspace{0.5cm} 31890::HORNFECK}\\[0.5cm] {\em INFN, Sezione di Torino, Via Pietro Giuria 1, I-10125 Torino, Italy} \\[1cm] \end{center} \vs{15} \centerline{ \bf{Abstract}} We give the explicit form of the BRST charge ${\mbox{$Q$}}$ for the algebra {\sf W}$_4 = $ {\sf WA}$_3$ in the basis where the spin-3 and the spin-4 field are primary as well as for a basis where the algebra closes quadratically. \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \newpage \pagestyle{plain} For the construction of {\sf W}-strings~\cite{BG89}, especially of their physical states, it is important to know the cohomology of the BRST charge~\cite{DDR91,Ram91,PRS91,PSSW92,LNP92,Wes92,LPSW93,FW93}. Unfortunately, for the {\sf W}-strings based on the algebras {\sf W}$_n$ = {\sf WA}$_{n-1}$ only the one for the simplest algebra, the Zamolodchikov {\sf W}$_3$-algebra~\cite{Zam85}, has been constructed previously~\cite{ThM87}. Since so far there is no simplifying ansatz for the terms appearing in {\mbox{$Q$}} for a given (general) {\sf W}-algebra~\footnote{as exists for a certain sub-class of {\sf W}-algebras that close quadratically~\cite{SSvN89}.}, we constructed the BRST charge for the {\sf W}$_4$-algebra by taking into account all possible terms of even parity (where the matter field $W_3$ and the ghost fields $c_3$ and $b_3$ have parity odd, all other fields have parity even) and determining the coefficients by demanding that {\mbox{$Q$}}\ is nilpotent. However, contrary to the case of the {\sf W}$_3$-algebra, one has to be careful about two points: We have the freedom of redefining the spin-4 field $W_4$ (that is usually taken to be quasi-primary) by transforming $W_4 \rightarrow W_4 + \kappa\, \Lambda$, $\Lambda$ being the spin-4 quasi-primary $T T - 3/10\, T''$ (all products of fields are considered to be normal ordered in the standard way whenever necessary). There are two obvious choices: Either $W_4$ is such that the algebra closes quadratically or else $W_4$ is taken to be primary, leading also to cubic terms (in the Virasoro operator $T$) in the algebra. Whereas in the latter case the operator product expansions (OPEs) (and therefore the commutators) are unique (apart from a sign-ambiguity due to the transformation $W_4 \rightarrow - W_4$), there exist two (equivalent) possibilities for the algebra that closes quadratically. The structure constants of~{\sf W}$_4$ can be found in refs.~\cite{BFK91,KW91}. To the matter part ($T, W_3, W_4$) we have to introduce ghost and anti-ghost fields $(\{c_2, b_2\}, \{c_3, b_3\},$ $\{c_4, b_4\})$, obeying the OPEs \begin{eqnarray} c_i \star c_j & = & b_i \star b_j \, = \, 0 \nonumber \\ c_i \star b_j & = & \delta_{i j} \,\,1 \label{ghostope} \end{eqnarray} and the BRST charge will be of the form \begin{eqnarray} {\mbox{$Q$}} & = & \oint dw \, \left(T \,c_2 + W_3 \,c_3 + (W_4 + \kappa_1 \,TT + \kappa_2 \, T'')\, c_4 \right) (w) \,\, + \\ &&\hspace{5mm} \mbox{contributions containing anti-ghost fields $b_i$} \nonumber \end{eqnarray} We introduced $\kappa_1$ and $\kappa_2$ to include different bases for the field $W_4$.
However, the OPEs (\ref{ghostope}) do not fix the ghosts in a unique way. Indeed one might think of changing the ghost and anti-ghost fields by a ``canonical transformation'' to \begin{eqnarray} c_i \,\rightarrow \,\tilde{c}_i & = & c_i + f_i(\{c,b\}) \nonumber \\ b_i \, \rightarrow \, \tilde{b}_i & = & b_i + g_i(\{c,b\}) \label{ghosttrans} \end{eqnarray} where the functions $f_i$ and $g_i$ obey the following rules: \newline 1) they have ghost-number $+1$ or $-1$, respectively;\newline 2) they have the right conformal dimension;\newline 3) they have the correct parity;\newline 4) the new ghosts $\tilde{c}$ and $\tilde{b}$ obey exactly the same OPEs as (\ref{ghostope}). These transformations are different from the change of basis of~\cite{LPSW93}, since they involve only the ghost sector and do not mix ghost and matter fields. In the {\sf W}$_3$-algebra one realizes very soon that no such transformation exists. For {\sf W}$_4$, however, we can indeed perform such a change of basis by an 8-parameter transformation. The simplest one is \begin{equation} \tilde{c}_2 = c_2 + \alpha \,c_4''; \hspace{2cm} \tilde{b}_4 = b_4 - \alpha \, b_2'' \label{simtrans} \end{equation} leaving all other ghosts unchanged. This special transformation would lead in {\mbox{$Q$}}\ to a term of the kind $c_4 \, T''$ and can therefore be absorbed into $\kappa_2$, leaving seven free parameters (apart from $\kappa_1$ and $\kappa_2$) that should show up in~{\mbox{$Q$}}, after having imposed that {\mbox{$Q$}}$^2 = 0$. This is indeed the case. The Virasoro operator $T_{\mbox{\small gh}}$ for the ghost-sector is given in the form \begin{equation} T_{\mbox{\small gh}} = (c_2 \,b_2' + 2 c_2' \,b_2) + (2 c_3 \, b_3' + 3 c_3' \, b_3) + (3 c_4 \, b_4' + 4 c_4' \, b_4) \label{ghostVir} \end{equation} with a central charge $c_{\mbox{\small gh}} = - 246$ such that the total Virasoro generator $T_{\mbox{\small total}} = T + T_{\mbox{\small gh}}$ is anomaly free if the matter Virasoro operator $T$ has central charge $c_{\mbox{\small matter}} = 246$. It is clear that in general the transformations (\ref{ghosttrans}) will change the Virasoro operator (\ref{ghostVir}) to $\widetilde{T}_{\mbox{\small gh}}$, even though $\widetilde{T}_{\mbox{\small gh}}$ is again a Virasoro operator with the correct central charge $-246$. There is a four-dimensional subset, however, that leaves $T_{\mbox{\small gh}}$ invariant under (\ref{ghosttrans}), and once we demand that the commutator of {\mbox{$Q$}}\ with $b_2$ reproduces $T_{\mbox{\small total}}$ and absorb one more parameter into $\kappa_2$, we are finally left with two open parameters. One of them can conveniently be fixed by demanding that setting the fields $W_4$, $c_4$ and $b_4$ formally to zero reproduces the BRST operator of the {\sf W}$_3$-algebra. Since we did not find such a convenient choice for the second free parameter, we leave it open in the result and denote it by $\beta$.
This parameter is connected to the transformation \begin{eqnarray} \tilde{c}_4 & = & { c_4} \nonumber \\ \tilde{b}_4 & = & { b_4} - 4\,\beta \,\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}\,{ }{ b_3}\,{ c_4}' - 2\,\beta \,\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}\,{ }{ b_3}'\,{ c_4} - 2\,\beta \,\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}'\,{ }{ b_3}\,{ c_4} - \nonumber \\ && 2\,\beta \,\,{ }{ b_2}''\, { }{ b_2}\,{ }{ c_3}\,{ }{ b_3}\,{ c_4} - 2\,\beta \,\,{ }{ b_2}''\,{ }{ b_2}\,{ c_4}' - \beta \,\,{ }{ b_2}''\,{ }{ b_2}'\,{ c_4} - \beta \,\,{ }{ b_2}^{(3)}\,{ }{ b_2}\,{ c_4} \nonumber \\ \tilde{c}_3 & = & { c_3} - 2\,\beta \,\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}\,{ }{ c_4}'\,{ c_4} \nonumber \\ \tilde{b}_3 & = & { b_3} + 2\,\beta \,\,{ }{ b_2}'\, { }{ b_2}\,{ }{ b_3}\,{ }{ c_4}'\,{ c_4} \\ \tilde{c}_2 & = & { c_2} - 2\,\beta \,\,{ }{ b_2}\, { }{ c_3}\,{ }{ b_3}\,{ }{ c_4}''\,{ c_4} - 2\,\beta \,\,{ }{ b_2}\,{ }{ c_3}\, { }{ b_3}'\,{ }{ c_4}'\,{ c_4} - 2\,\beta \,\,{ }{ b_2}\,{ }{ c_3}'\, { }{ b_3}\,{ }{ c_4}'\,{ c_4} + \nonumber \\ && \beta \,\,{ }{ b_2}\,{ }{ c_4}''\,{ c_4}' + \beta \,{ }{ b_2}\,{ }{ c_4}^{(3)}\,{ c_4} - 4\,\beta \,\,{ }{ b_2}'\, { }{ c_3}\,{ }{ b_3}\,{ }{ c_4}'\,{ c_4} + 2\,\beta \,\,{ }{ b_2}'\,{ }{ c_4}''\,{ c_4} \nonumber \\ \tilde{b}_2 & = & { b_2} \nonumber \end{eqnarray} Of course, one could also set all seven parameters to different values (working with a different~$\widetilde{T}_{\mbox{\small gh}}$ if necessary), but the final result written in terms of all open parameters is too complicated to be presented here. The table lists the various fields $j_n$ contributing to the BRST charge ${\mbox{$Q$}} = \oint dw \sum j_n(w) \, a_n$, together with the coefficients $a_n$, both for the primary basis and for one of the bases in which the algebra closes quadratically; for the latter the structure constant $(C_{44}^4)^2$ is given by $\left[ 54/5 \,(c+47)^2 /((c+22) \,(33+14\,c)) \right]$. In both cases we took the algebra with the positive root for $C_{44}^4$. We did not try to find a transformation similar to the one proposed in~\cite{LPSW93} for {\sf W}$_3$, which splits~{\mbox{$Q$}}\ into various anticommuting and nilpotent parts of different ($c_2,c_3,c_4$)-ghost-number, but on the basis of the result presented here such a transformation should not be too difficult to obtain. The calculations have been performed using the {\sc Mathematica} package for computing OPEs by K.~Thielemans~\cite{Thi91}. I am grateful to M.~Freeman for discussions.
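As an elementary consistency check of the ghost sector (our addition; it uses only the standard formula $c_{bc}(\lambda)=-2\,(6\lambda^2-6\lambda+1)$ for a fermionic $bc$ system of weights $(\lambda,1-\lambda)$), the quoted value $c_{\mbox{\small gh}}=-246$ follows from summing over the three ghost pairs:
\begin{verbatim}
# Central charges of the (b,c) ghost pairs of spins 2, 3, 4 must add up
# to c_gh = -246, so that c_matter = 246 renders T_total anomaly free.
def c_bc(lam):
    return -2 * (6 * lam**2 - 6 * lam + 1)

for lam in (2, 3, 4):
    print(lam, c_bc(lam))                  # -26, -74, -146
print(sum(c_bc(l) for l in (2, 3, 4)))     # -246
\end{verbatim}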
\vs{15} \large{Table of contributions to the BRST charge} \begin{center} \begin{tabular}{l|l|l} $j_n$ & $a_n$, primary basis & $a_n$, quadratic basis \\ \hline $ { }T\,{ c_2} $& $ 1 $ & $ 1 $ \\[3mm] $ { }{ W_3}\,{ c_3}$& $ 1 $ & $ 1 $ \\[3mm] $ { }{ W_4}\,{ c_4}$& $ 1 $ & $ 1 $ \\[3mm] $ { }{ c_2}'\,{ }{ c_2}\,{ b_2}$& $ -1 $ & $ -1 $ \\[3mm] $ { }{ b_2}\,{ }{ c_3}''\,{ c_3}'$& $ {{49}\over {626}} $ & $ -{{11706}\over {16445}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}^{(3)}\,{ c_3}$& $ -{{49}\over {939}} $ & $ {{7804}\over {16445}} $ \\[3mm] $ { }T\,{ }{ b_2}\,{ }{ c_3}'\,{ c_3}$& $ -{4\over {313}} $ & $ -{5\over {253}} $ \\[3mm] $ { }{ W_3}\,{ }{ b_2}'\,{ }{ c_3}\,{ c_4}$& $ -9\,{\sqrt{{{17}\over {19638872}}}} $ & $ {{-251\,{\sqrt{{5\over {155306}}}}}\over {13}} $ \\[1mm] $ { }{ W_3}\,{ }{ b_2}\,{ }{ c_3}'\,{ c_4}$& $ {{-353}\over {{\sqrt{83465206}}}} $ & $ {{-5521}\over {13\,{\sqrt{776530}}}} $ \\[1mm] $ { }{ W_3}\,{ }{ b_2}\,{ }{ c_3}\,{ c_4}'$& $ {{553}\over {3\,{\sqrt{83465206}}}} $ & $ {{711\,{\sqrt{{8\over {388265}}}}}\over {13}} $ \\[3mm] $ { }{ b_2}\,{ }{ c_4}^{(5)}\,{ c_4}$& $ {{14\,\beta }\over {15}} $ & $ {{14\,\beta }\over {15}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_4}^{(4)}\,{ c_4}'$& $ {{13\,\left( 15861457 + 5320659456\,\beta \right) }\over {63847913472}} $ & $ {{-39373278251 + 152860862336\,\beta }\over {141102334464}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_4}^{(3)}\,{ c_4}''$& $ {{-\left( 818537 + 591184384\,\beta \right) }\over {506729472}} $ & $ {{6488840259 - 58792639360\,\beta }\over {50393690880}} $ \\[3mm] $ { }T\,{ }{ b_2}''\,{ }{ c_4}'\,{ c_4}$& $ -{{19577}\over {31670592}} $ & $ {{90912053}\over {1049868560}} $ \\[1mm] $ { }T\,{ }{ b_2}\,{ }{ c_4}''\,{ c_4}'$& $ {{4055111 - 5320659456\,\beta }\over {5320659456}} $ & $ {{-\left( 5204269653 + 58792639360\,\beta \right) }\over {58792639360}} $ \\[1mm] $ { }T\,{ }{ b_2}\,{ }{ c_4}^{(3)}\,{ c_4}$& $ {{-\left( 7350521 + 5320659456\,\beta \right) }\over {5320659456}} $ & $ {{18757441753 - 176377918080\,\beta }\over {176377918080}} $ \\[1mm] $ { }T\,{ }{ b_2}'\,{ }{ c_4}''\,{ c_4}$& $ {{-\left( 1709695 + 1773553152\,\beta \right) }\over {886776576}} $ & $ {{6316984623 - 58792639360\,\beta }\over {29396319680}} $ \\[3mm] \end{tabular} \end{center} \newpage \begin{center} \begin{tabular}{l|l|l} $ { }{ W_4}\,{ }{ b_2}\,{ }{ c_4}'\,{ c_4}$& $ 9\, {\sqrt{{{17}\over {4909718}}}} $ & $ {{251\,{\sqrt{{{10}\over {77653}}}}}\over {13}} $ \\[3mm] $ { }T\,{ }T\,{ }{ b_2}\, { }{ c_4}'\,{ c_4}$& $ -{{553}\over {2969118}} $ & $ 0 $ \\[3mm] $ { }{ c_2}'\,{ }{ c_3}\,{ b_3}$& $ -2 $ & $ -2 $ \\[1mm] $ { }{ c_2}\,{ }{ c_3}'\,{ b_3}$& $ 1 $ & $ 1 $ \\[3mm] $ { }{ c_3}^{(3)}\,{ }{ b_3}\,{ c_4}$& $ {{3297}\over {{\sqrt{5341773184}}}} $ & $ {{-19693}\over {13\,{\sqrt{12424480}}}} $ \\[1mm] $ { }{ c_3}''\,{ }{ b_3}\,{ c_4}'$& $ {{-13\, {\sqrt{{{2048}\over {41732603}}}}}\over 3} $ & $ {{16979}\over {13\,{\sqrt{3106120}}}} $ \\[1mm] $ { }{ c_3}'\,{ }{ b_3}\,{ c_4}''$& $ -91\,{\sqrt{{{17}\over {314221952}}}} $ & $ {{-67533}\over {13\,{\sqrt{12424480}}}} $ \\[1mm] $ { }{ c_3}\,{ }{ b_3}\,{ c_4}^{(3)}$& $ {{-7\, {\sqrt{{{391}\over {13661824}}}}}\over 3} $ & $ {{-3949}\over {13\,{\sqrt{12424480}}}} $ \\[3mm] $ { }T\,{ }{ c_3}\,{ }{ b_3}'\,{ c_4}$& $ 0 $ & $ {{79\,{\sqrt{{{40}\over {77653}}}}}\over {13}} $ \\[1mm] $ { }T\,{ }{ c_3}'\,{ }{ b_3}\,{ c_4}$& $ 0 $ & $ {{-2133\,{\sqrt{{2\over {388265}}}}}\over {13}} $ \\[1mm] $ { }T\,{ }{ c_3}\,{ }{ b_3}\,{ c_4}'$& $ 0 $ & $ {{553\,{\sqrt{{{32}\over {388265}}}}}\over {13}} $ \\[3mm] $ { }{ W_3}\,{ }{ b_3}\,{ }{ c_4}'\,{ c_4}$& $ 
-{{1565}\over {50592}} $ & $ -{{18975}\over {621224}} $ \\[3mm] $ { }{ c_2}'\,{ }{ c_4}\,{ b_4}$& $ -3 $ & $ -3 $ \\[1mm] $ { }{ c_2}\,{ }{ c_4}'\,{ b_4}$& $ 1 $ & $ 1 $ \\[3mm] $ { }{ c_3}'\,{ }{ c_3}\,{ b_4}$& $ -3\,{\sqrt{{{16864}\over {79189}}}} $ & $ {{{\sqrt{{{621224}\over 5}}}}\over {253}} $ \\[3mm] $ { }{ c_4}^{(3)}\,{ }{ c_4}\,{ b_4}$& $ {{1703}\over {3\, {\sqrt{1335443296}}}} $ & $ {{47}\over {{\sqrt{3106120}}}} $ \\[1mm] $ { }{ c_4}''\,{ }{ c_4}'\,{ b_4}$& $ {{1013}\over {3\, {\sqrt{5341773184}}}} $ & $ {{40171}\over {13\,{\sqrt{12424480}}}} $ \\[3mm] $ { }T\,{ }{ c_4}'\,{ }{ c_4}\,{ b_4}$& $ 0 $ & $ {{-79\,{\sqrt{{{160}\over {77653}}}}}\over {13}} $ \\[3mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}^{(3)}\,{ }{ c_3}\,{ c_4}$& $ {{5947 - 84994560\,\beta }\over {45\,{\sqrt{341873483776}}}} $ & $ {{-1967633671 + 11758527872\,\beta }\over {128271\,{\sqrt{795166720}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}''\,{ }{ c_3}'\,{ c_4}$& $ {{-\left( 102617195 + 15961978368\,\beta \right) }\over {63\,{\sqrt{33493003332050944}}}} $ & $ {{144013865183 + 176377918080\,\beta }\over {897897\,{\sqrt{19879168000}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}''\,{ }{ c_3}\,{ c_4}'$& $ {{421962613 + 239429675520\,\beta }\over {315\,{\sqrt{33493003332050944}}}} $ & $ {{-\left( 57786879077 + 529133754240\,\beta \right) }\over {897897\,{\sqrt{19879168000}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}'\,{ }{ c_3}\,{ c_4}''$& $ {{-16580597 + 15961978368\, \beta }\over {63\,{\sqrt{33493003332050944}}}} $ & $ {{7500539549 - 13567532160\,\beta }\over {69069\,{\sqrt{19879168000}}}} $ \\[1mm] $ { }{ b_2}''\,{ }{ b_2}'\, { }{ c_3}'\,{ }{ c_3}\,{ c_4}$& $ {{-497867}\over {9\, {\sqrt{523328177063296}}}} $ & $ {{164168647}\over {128271\,{\sqrt{12424480}}}} $ \\[3mm] \end{tabular} \end{center} \newpage \begin{center} \begin{tabular}{l|l|l} $ { }T\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}'\,{ }{ c_3}\,{ c_4}$& $ {{{\sqrt{{{32384}\over {16160084519}}}}}\over 3} $ & $ {{2089\,{\sqrt{{{40}\over {77653}}}}}\over {3289}} $ \\[3mm] $ { }{ W_3}\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}\,{ }{ c_4}'\,{ c_4}$& $ {{622159409 + 747848245760\,\beta }\over {373924122880}} $ & $ {{-2487841011 + 58792639360\,\beta }\over {29396319680}} $ \\[3mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_4}^{(4)}\,{ }{ c_4}'\,{ c_4}$& $ {{1942860007649 + 833561113674240\,\beta }\over {567\,{2^{{{27}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{-\left( 85903480307899 + 363044548048000\,\beta \right) }\over {3549\,{{49697920}^{{3\over 2}}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_4}^{(3)}\,{ }{ c_4}''\,{ c_4}$& $ {{-51906446611 + 25140115929600\,\beta }\over {81\,{2^{{{25}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{-84802144919311 + 10195878310450560\,\beta }\over {46137\,{2^{{{19}\over 2}}}\,{{388265}^{{3\over 2}}}}} $ \\[1mm] $ { }{ b_2}''\,{ }{ b_2}'\, { }{ c_4}''\,{ }{ c_4}'\,{ c_4}$& $ {{-17078068387}\over {81\,{2^{{{19}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{9160909287863}\over {6591\,{2^{{{13}\over 2}}}\,{{388265}^{{3\over 2}}}}} $ \\[3mm] $ { }T\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_4}''\,{ }{ c_4}'\,{ c_4}$& $ {{4143551}\over {27\,{\sqrt{506}}\,{{164951}^{{3\over 2}}}}} $ & $ {{-\left( 6276667282831 + 67811430237824\,\beta \right) }\over {15379\,{2^{{{13}\over 2}}}\,{\sqrt{5}}\,{{77653}^{{3\over 2}}}}} $ \\[3mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}^{(3)}\,{ }{ c_4}'\,{ c_4}$& $ {{\left( 2291791 + 8867765760\,\beta \right) }\over {5320659456}} $ & $ {{100359111 + 3094349440\,\beta }\over {1856609664}} $ \\[1mm] $ 
{ }{ b_2}\,{ }{ c_3}'\, { }{ b_3}''\,{ }{ c_4}'\,{ c_4}$& $ {{5289}\over {659804}} $ & $ {{34701759}\over {52493428}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}''\,{ }{ c_4}''\,{ c_4}$& $ {{-\left( 31075 - 57211392\,\beta\right) }\over {14302848}} $ & $ {{-1979773899 + 58792639360\,\beta }\over {14698159840}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}''\, { }{ b_3}'\,{ }{ c_4}'\,{ c_4}$& $ {{-\left( 8701813 + 26603297280\, \beta \right)}\over {5320659456}} $ & $ {{5\,\left( 1141980071 - 11758527872\,\beta \right) }\over {11758527872}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}'\, { }{ b_3}'\,{ }{ c_4}''\,{ c_4}$& $ {{\left( 43797 + 23838080\,\beta \right) }\over {2979760}} $ & $ {{-1146814353 + 29396319680\,\beta }\over {3674539960}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}'\,{ }{ c_4}^{(3)}\,{ c_4}$& $ {{-\left(135509501 + 26603297280\,\beta \right)}\over {26603297280}} $ & $ {{2676877295 - 11758527872\,\beta }\over {11758527872}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}'\,{ }{ c_4}''\,{ c_4}'$& $ {{-\left(62263121 - 79809891840\,\beta \right)}\over {13301648640}} $ & $ {{-1269768317 + 13567532160\,\beta }\over {2261255360}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}^{(3)}\, { }{ b_3}\,{ }{ c_4}'\,{ c_4}$& $ {{-\left(2060917 + 8867765760\, \beta \right)}\over {1330164864}} $ & $ {{177222537 - 4522510720\,\beta }\over {678376608}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}''\, { }{ b_3}\,{ }{ c_4}''\,{ c_4}$& $ {{-\left( 30543 - 591184384\,\beta\right) }\over {591184384}} $ & $ {{7914129141 + 58792639360\,\beta }\over {58792639360}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}'\, { }{ b_3}\,{ }{ c_4}^{(3)}\,{ c_4}$& $ {{689443}\over {158352960}} $ & $ -{{4322106}\over {13123357}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}'\, { }{ b_3}\,{ }{ c_4}''\,{ c_4}'$& $ {{\left( 7654343 + 8867765760\,\beta \right) }\over {1266823680}} $ & $ {{1780415853 + 58792639360\,\beta }\over {8398948480}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}\,{ }{ c_4}^{(4)}\,{ c_4}$& $ {{-\left(206198941 + 168487549440\,\beta \right)}\over {53206594560}} $ & $ {{118119834753 - 1117060147840\,\beta }\over {352755836160}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}\, { }{ b_3}\,{ }{ c_4}^{(3)}\,{ c_4}'$& $ {{-\left(408011 - 8867765760\,\beta\right) }\over {1662706080}} $ & $ {{-306567231 + 7349079920\,\beta }\over {1377952485}} $ \\[3mm] $ { }T\,{ }{ b_2}'\, { }{ c_3}\,{ }{ b_3}\,{ }{ c_4}'\,{ c_4}$& $ {{\left( 6055439 + 8867765760\,\beta \right) }\over {2216941440}} $ & $ {{-3771447139 + 58792639360\,\beta }\over {14698159840}} $ \\[1mm] $ { }T\,{ }{ b_2}\, { }{ c_3}\,{ }{ b_3}'\,{ }{ c_4}'\,{ c_4}$& $ {{\left( 17862797 + 26603297280\,\beta \right) }\over {13301648640}} $ & $ {{-5064153547 + 58792639360\,\beta }\over {29396319680}} $ \\[1mm] $ { }T\,{ }{ b_2}\, { }{ c_3}'\,{ }{ b_3}\,{ }{ c_4}'\,{ c_4}$& $ {{\left( 10285679 + 8867765760\,\beta \right) }\over {4433882880}} $ & $ {{-16822056571 + 58792639360\,\beta }\over {29396319680}} $ \\[1mm] $ { }T\,{ }{ b_2}\, { }{ c_3}\,{ }{ b_3}\,{ }{ c_4}''\,{ c_4}$& $ {{\left( 1434213 + 2955921920\,\beta \right) }\over {1477960960}} $ & $ {{673172789 + 58792639360\,\beta }\over {29396319680}} $ \\ \end{tabular} \end{center} \newpage \begin{center} \begin{tabular}{l|l|l} $ { }{ b_2}\,{ }{ c_3}'\, { }{ c_3}\,{ }{ c_4}\,{ b_4}'$& $ {{12}\over {313}} $ & $ -{{17037}\over {16445}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}''\, { }{ c_3}\,{ }{ c_4}\,{ b_4}$& $ 0 $ & $ 0 $ \\[1mm] $ { }{ b_2}\,{ }{ c_3}'\, { }{ c_3}\,{ }{ c_4}'\,{ b_4}$& $ {{16}\over {313}} $ & $ -{{22716}\over {16445}} $ \\[3mm] $ { }{ b_2}\,{ }{ c_4}''\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}'$& 
$ {{-\left( 15931409 + 19509084672\, \beta \right)}\over {1773553152}} $ & $ {{6371092053 - 129343806592\,\beta }\over {11758527872}} $ \\[1mm] $ { }{ b_2}\,{ }{ c_4}^{(3)}\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}$& $ {{-\left( 4061585 + 5320659456\, \beta \right)}\over {1330164864}} $ & $ {{1332601887 - 11758527872\,\beta }\over {2939631968}} $ \\[3mm] $ { }{ c_3}'\,{ }{ c_3}\, { }{ b_3}'\,{ }{ b_3}\,{ c_4}$& $ {{-647}\over {{\sqrt{1335443296}}}} $ & $ {{-8559}\over {13\,{\sqrt{3106120}}}} $ \\[3mm] $ { }{ b_3}'\,{ }{ b_3}\, { }{ c_4}''\,{ }{ c_4}'\,{ c_4}$& $ {{-25\,{{{{313}\over {527}}}^{{3\over 2}}}\, {\sqrt{{{23}\over {11}}}}}\over {9\,{2^{{{17}\over 2}}}}} $ & $ {{17457\,{5^{{7\over 2}}}}\over {{2^{{{11}\over 2}}}\,{{77653}^{{3\over 2}}}}} $ \\[3mm] $ { }{ c_3}\,{ }{ b_3}\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}'$& $ {{2789}\over {{\sqrt{1335443296}}}} $ & $ {{48249}\over {13\,{\sqrt{3106120}}}} $ \\[1mm] $ { }{ c_3}'\,{ }{ b_3}\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}$& $ {{1259}\over {{\sqrt{333860824}}}} $ & $ {{297\,{\sqrt{{{67}\over {11590}}}}}\over {13}} $ \\[1mm] $ { }{ c_3}\,{ }{ b_3}\, { }{ c_4}''\,{ }{ c_4}\,{ b_4}$& $ 9\,{\sqrt{{{34}\over {2454859}}}} $ & $ {{567\,{\sqrt{{{40}\over {77653}}}}}\over {13}} $ \\[3mm] $ { }{ b_2}''\,{ }{ b_2}'\, { }{ b_2}\,{ }{ c_3}'\, { }{ c_3}\,{ }{ c_4}'\,{ c_4}$& $ {{-\left( 994248158327 + 1393241281850880\,\beta \right) }\over {2106688508305920}} $ & $ {{3\,\left( -5374570365239 + 87024864780672\,\beta \right) }\over {27624141550720}} $ \\[3mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}\,{ }{ b_3}'\, { }{ c_4}''\,{ }{ c_4}'\,{ c_4}$& $ {{-\left(160494387499 + 176832117020160\,\beta \right)}\over {135\,{2^{{{21}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{-\left( 175846788774911 + 2804350104832640\,\beta \right) }\over {15379\,{{12424480}^{{3\over 2}}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}'\,{ }{ b_3}\, { }{ c_4}''\,{ }{ c_4}'\,{ c_4}$& $ {{-\left(423757135087 + 110962352954880\,\beta \right)}\over {315\,{2^{{{23}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{-3\,\left( 71812352639835 + 69927965254784\,\beta \right) }\over {15379\,{2^{{{17}\over 2}}}\,{\sqrt{5}}\,{{77653}^{{3\over 2}}}}} $ \\[1mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}\,{ }{ b_3}\, { }{ c_4}^{(3)}\,{ }{ c_4}'\,{ c_4}$& $ {{-\left( 38872420967 + 62603470343680\,\beta\right) }\over {105\,{2^{{{21}\over 2}}}\,{\sqrt{253}}\,{{164951}^{{3\over 2}}}}} $ & $ {{22901871803033 - 306838784819840\,\beta }\over {15379\,{2^{{{11}\over 2}}}\,{{388265}^{{3\over 2}}}}} $ \\[3mm] $ { }{ b_2}'\,{ }{ b_2}\, { }{ c_3}'\,{ }{ c_3}\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}$& $ {{-\left( 13632557 + 26603297280\,\beta \right) }\over {105\,{\sqrt{2093312708253184}}}} $ & $ {{-9669258947 + 58792639360\,\beta }\over {299299\,{\sqrt{1242448000}}}} $ \\[3mm] $ { }{ b_2}\,{ }{ c_3}'\, { }{ c_3}\,{ }{ b_3}'\, { }{ b_3}\,{ }{ c_4}'\,{ c_4}$& $ {{4424047 + 8867765760\, \beta }\over {886776576}} $ & $ {{-855872103 + 4522510720\,\beta }\over {452251072}} $ \\[3mm] $ { }{ b_2},{ }{ c_3}\, { }{ b_3}\,{ }{ c_4}''\, { }{ c_4}'\,{ }{ c_4}\,{ b_4}$& $ {{-\left( 7808239 + 8867765760\, \beta\right) }\over {1108470720}} $ & $ {{-\left( 1624519269 + 58792639360\,\beta \right) }\over {7349079920}} $ \\[3mm] \end{tabular} \end{center} \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{APPENDIX: NP-completeness of Hartree-Fock} One of the precursors of DFT which is still widely used is the Hartree-Fock method. It is similar to DFT in that it reduces the $N$-electron equation to a problem of individual electrons moving in an external field which depends only on the average electron distribution. Unlike DFT, Hartree-Fock is based on a particular ansatz for the wave function and is therefore not guaranteed to give the true ground state energy; on the other hand, it is an iterative method which can be applied without prior assumptions, whereas in DFT some \emph{a priori} guess on the form of the universal functional has to be made. The starting point is a Hamiltonian \begin{equation} \label{hf-ham} \mathcal H=\sum H^{(1)}_{i,j}a^\dagger_ia_j+ \sum H^{(2)}_{ij,kl}a^\dagger_ia^\dagger_ja_ka_l+\dots\ , \end{equation} which is local, meaning the series terminates at some fixed $H^{(r)}$, and with a number of fermionic modes $M\ge N$; if it is derived from two-particle interactions as in the Schr\"odinger equation (\ref{gen-ham}), $r=2$. The Hartree-Fock method approximates the ground state of this Hamiltonian using the ansatz $b^\dagger_N\cdots b^\dagger_1\ket\Omega$ with $b_i=\sum u_{ij} a_j$ (where $\ket\Omega$ is the vacuum). Note that this corresponds to an antisymmetrized product of single-particle wave functions, which is how Hartree-Fock is usually presented. In the following, we show that approximating the ground state energy using the Hartree-Fock method is an \textsf{NP}-complete problem. More precisely, we consider the problem of deciding whether the lowest energy of (\ref{hf-ham}) within the Hartree-Fock ansatz is below some $a$ or above some $b>a$. We show that the problem is inside \textsf{NP}\ for up to an exponential accuracy $b-a$ and for any $r$, and that for \textsf{NP}-completeness a polynomial accuracy $b-a<1/\mathrm{poly}(N)$ and $r=2$ are sufficient. To see that the problem is in \textsf{NP}, note that a Hartree-Fock state is fully characterized by the $u_{ij}$'s, and that from there its energy can be computed efficiently (cf.\ the sketch at the end of this appendix). Conversely, the problem is shown to be \textsf{NP}-hard via a reduction from the ground state problem for Ising spin glasses, which is known to be \textsf{NP}-hard: Given an $L\times L\times 2$ lattice of two-level spins $S_i=\pm 1$ with a nearest neighbor Ising coupling $\mathcal H=\sum J_{ij}S_iS_j$, $J_{ij}\in\{0,-1,1\}$, determine whether the ground state energy is the minimum one allowed by the individual $J_{ij}$'s or not. To this end, embed the $N=2L^2$ classical spins into a fermionic system with $2N$ modes occupied by $N$ fermions. The modes come in pairs $(a_{2i},a_{2i+1})$, and a Hamiltonian term $\lambda n_{2i}n_{2i+1}$, $\lambda=O(N^2)$, penalizes double occupancy, so that in the ground state exactly one mode per pair is occupied, giving an effective spin degree of freedom~\cite{liu}; the coupling $J_{ij}S_iS_j$ of these spins is realized as $J_{ij}\sum_{p,q=0,1}(-1)^{p+q} n_{2i+p}n_{2j+q}$. As the ground state of the system is a classical spin state, it can be expressed as a Hartree-Fock state where $b_i=a_{2i}$ or $b_i=a_{2i+1}$, respectively, and since the classical Hamiltonian has a constant gap while perturbations from the penalized subspace are at most $O(1/\lambda^2)$, a polynomial accuracy is sufficient to make the problem \textsf{NP}-hard.
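To make the membership argument concrete, the following sketch (our own illustration; the Hamiltonian data and helper names are not from the original text) evaluates the energy of a Hartree-Fock state directly from the coefficients $u_{ij}$ via Wick's theorem, in time polynomial in the number of modes:
\begin{verbatim}
# Energy of b_N^+ ... b_1^+ |Omega>, b_i = sum_j u_ij a_j, for r = 2.
# With the one-body density matrix P_ij = <a_i^+ a_j> = (u^+ u)_ij
# (u has orthonormal rows), Wick's theorem gives
#   E = sum_ij H1_ij P_ij + sum_ijkl H2_ijkl (P_il P_jk - P_ik P_jl),
# which is efficiently computable -- hence u serves as an NP witness.
import numpy as np

def hartree_fock_energy(u, H1, H2):
    P = u.conj().T @ u
    e1 = np.einsum('ij,ij->', H1, P)
    e2 = (np.einsum('ijkl,il,jk->', H2, P, P)
          - np.einsum('ijkl,ik,jl->', H2, P, P))
    return (e1 + e2).real

rng = np.random.default_rng(1)
M, N = 6, 3                                   # modes, fermions (toy sizes)
H1 = rng.standard_normal((M, M)); H1 = (H1 + H1.T) / 2
H2 = 0.1 * rng.standard_normal((M, M, M, M))
u = np.linalg.qr(rng.standard_normal((M, N)))[0].T   # orthonormal rows
print(hartree_fock_energy(u, H1, H2))
\end{verbatim}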
\vspace{1cm} \begin{center} \textbf{SUPPLEMENTARY MATERIAL} \end{center} \vspace{0.6cm} \noindent\textbf{1.\ Second order perturbation theory} \vspace{0.3cm} We start with a Hamiltonian $H=H_0\oplus H_1$ and a perturbation \[ V=\left(\begin{array}{cc}V_0&V_{01}\\ V_{10}&V_1\end{array}\right) \] with $\|H_0\|,\|V\|\le v$ and $H_1\ge\Delta\gg v$, and want to show that the low-energy spectrum of $H_\mathrm{tot}=H+V$ is well approximated by $H_\mathrm{eff}=H_0+V_0-V_{01}H_1^{-1}V_{10}$. To this end, rotate $H_\mathrm{tot}$ by a unitary $U=e^{S}$, \[ S=\left(\begin{array}{cc}0&X\\ -X^\dagger&0\end{array}\right)\ , \] where $X=-V_{01}H_1^{-1}+V_{01}H_1^{-1}V_1H_1^{-1}- V_0V_{01}H_1^{-2}-H_0V_{01}H_1^{-2}$ is chosen such as to make the Hamiltonian as diagonal as possible. For systematic constructions of $S$ for any order of perturbation theory, see Ref.~\cite{schrieffer-wolf}. By expanding $S$ to second order in $v/\Delta$, we obtain a new Hamiltonian \[ \tilde H_\mathrm{tot}= UH_\mathrm{tot}U^\dagger= \left(\begin{array}{cc}H_\mathrm{eff}+O(\tfrac{v^3}{\Delta^2}) & O(\tfrac{v^3}{\Delta^2}) \\ O(\tfrac{v^3}{\Delta^2}) & H_1+V_1+O(\tfrac{v^2}\Delta) \end{array}\right), \] where all $O(\cdot)$ symbols are bounds in operator norm. Now compare this Hamiltonian with the block-diagonal Hamiltonian $\tilde H_\mathrm{diag}$ obtained from $\tilde H_\mathrm{tot}$ by setting the off-diagonal blocks to zero: The low-energy spectrum of $\tilde H_\mathrm{diag}$ is given by $H_\mathrm{eff}+O(\tfrac{v^3}{\Delta^2})$, and since $\|\tilde H_\mathrm{diag}-\tilde H_\mathrm{tot}\|_\mathrm{op}=O(\tfrac{v^3}{\Delta^2})$, it follows that all eigenvalues are $O(\tfrac{v^3}{\Delta^2})$\,-\,close to each other, and thus the low-energy spectrum of $H+V$ is given by \begin{equation} \label{eq:2nd-order-pert} H_\mathrm{eff}=H_0+V_0-V_{01} H_1^{-1} V_{10}+O(\tfrac{v^3}{\Delta^2})\ . \end{equation} Note that when applying the gadgets to an $N$-qubit system with an extensive number of local perturbations, the error bound will depend on $N$ since $v\propto N$. \vspace{0.6cm} \noindent\textbf{2.\ Gadgets} \vspace{0.3cm} Let us now turn towards the gadget constructions. Starting from (\ref{eq:h_2d}), we want to show how its low-energy subsector can be obtained as an effective theory from (\ref{hubb-ham}). Let us first note that all perturbation gadgets of one level can be applied simultaneously, since they are second order gadgets and the transition term $V_{10}$ is a sum of two-body terms which excite \emph{one} qubit only; to return to the ground state subspace in the next step, this excitation has to hop to one of the adjacent sites. Thus, there will be no cross-gadget terms and the action of each layer of gadgets can be investigated on the level of a single gadget. We aim to approximate (\ref{eq:h_2d}) with strength $\lambda_{ij}\le 1$ up to a precision $O(1/q)$, with $q\equiv q(N)$ a polynomial in $N$ (we will in the following omit the parameter $N$ for most polynomials). For $A$ and $B$ Pauli matrices, we can obtain a tunable Pauli coupling $\lambda_T A\otimes B$ from \begin{equation} V=\lambda_{P}A\otimes X\otimes\openone +\lambda_P\openone\otimes Y\otimes B\ , \label{eq:diff-pos-pauli-cpl} \end{equation} by acting with a Hamiltonian $H=B_P\ket{e_\phi}\bra{e_\phi}$, $B_P\gg\lambda_P$, and the excited state $\ket{e_\phi}=\ket0-e^{i\phi}\ket1$.
Then, to second order, the system is described by the effective Hamiltonian \[ H_T=+2\lambda_P^2/B_P\sin\phi\cos\phi\; A\otimes B+O(\lambda_P^3/B_P^2) \] (up to a constant, and times the ground state projector on the middle qubit); note that when combining the gadgets, the total error grows with the third power of the total strength of $V$, and thus as $N^3$. As we aim to implement any $|\lambda_T|\le1$, we set $\lambda_P^2=B_P$ and tune the actual value using $\phi$. Choosing $\lambda_P=N^4q$, $B_P=N^8q^2$ [$N^4q\equiv N^4q(N)$], we find that the \emph{total} error is at most $N^3O(\lambda_P^3/B_P^2)=O(1/Nq)$ and thus much below the targeted precision $O(1/q)$. Note that, in particular, this allows one to split any Pauli interaction into two interactions of the form $X\otimes Y$, i.e.\ with two different Pauli matrices and positive sign. Let us now show how such an $X\otimes Y$ coupling as in Eq.~(\ref{eq:diff-pos-pauli-cpl}) can be reduced to Ising interactions. To this end, consider the Hamiltonian \begin{equation} \label{eq:ising-cpl-ham} V=-\lambda_IX\otimes X\otimes \openone-\lambda_I\openone\otimes Y\otimes Y +H_\mathrm{loc} \end{equation} and apply a field $H=B_I\ket{e_{\pi/4}}\bra{e_{\pi/4}}$. Here, $H_\mathrm{loc}$ represents the local fields of the preceding gadget layers. They act on the qubits remaining after the present gadget, i.e.\ do not induce transitions to excited states, and are thus first-order terms which are left untouched in (\ref{eq:2nd-order-pert}). We choose $B_I=N^{20}q^5\gg N\lambda_I$, and $\lambda_I=N^{12}q^3\gg \|H_\mathrm{loc}\|= N B_P=N^9q^2$, which results in an effective Hamiltonian \[ H_P=+\lambda_I^2/B_I X\otimes Y+O( \lambda_I^3/B_I^2)+H_\mathrm{loc}\ , \] for which $\lambda_I^2/B_I=\lambda_P$, and $N^3O(\lambda_I^3/B_I^2)=1/Nq\ll O(1/q)$. Note that due to rotational invariance, this construction holds for any type of Pauli coupling. Ising interactions Eq.~(\ref{eq:ising-cpl-ham}) can in turn be reduced to $XX$-type interactions, \begin{equation} \label{eq:XX-cpl-ham} V=-\lambda_{XX}(X\otimes X+Y\otimes Y)\otimes\openone- \lambda_{XX}\,\openone\otimes(X\otimes X+Y\otimes Y)+H_\mathrm{loc} \end{equation} by putting a field in the $Y$ direction, $H=B_{XX}(\openone-Y)/2$. This cancels all $Y$ contributions in the Hamiltonian, since $\bra{0_y}Y\ket{1_y}=0$, and one is left with the $X$ part of $V$, \[ H_{XX}=-2\lambda_{XX}^2/B_{XX}X\otimes X+H_\mathrm{loc}+O(\lambda_{XX}^3/B_{XX}^2)\ . \] (The factor $2$ is due to the fact that either an $X$ on the left can excite the middle qubit, which then decays towards the right, or vice versa.) We choose $\lambda_{XX}=N^{28}q^7/4$ and $B_{XX}=N^{44}q^{11}/8$, which ensures that $2\lambda_{XX}^2/B_{XX}=\lambda_I$, $B_{XX}\gg\lambda_{XX}\gg B_I$, and the total error is $1/Nq\ll O(1/q)$, as required. In a last step, we reduce the Hamiltonian with $XX$ type couplings to an antiferromagnetic Heisenberg Hamiltonian with local fields. To this end, consider \begin{align} V=&\lambda_H\sum_{S=X,Y,Z} (S\otimes S\otimes\openone+ \openone\otimes S\otimes S)+H_\mathrm{loc}-\dots\nonumber\\ &-\lambda_H^2/B_H (Z\otimes\openone\otimes\openone+\openone\otimes \openone\otimes Z) \label{eq:v-for-xx-from-heis} \end{align} and place a strong field in $Z$ direction, $H=B_H(\openone-\sigma_Z)/2$, on the central qubit.
Intuitively, the $X\otimes X+Y\otimes Y$ part describes the hopping of an excitation from one side through the central qubit to the other side; since the excitation can also hop back to the original site, however, it also induces an additional local field, which is compensated by the extra term in Eq.~(\ref{eq:v-for-xx-from-heis}). The effective Hamiltonian obtained is then \[ H_H=-2\lambda_H^2/B_H(X\otimes X+Y\otimes Y)\ , \] and by choosing $B_H=N^{92}q^{23}/512$, $\lambda_H=N^{60}q^{15}/64$, we find that $2\lambda_H^2/B_H=\lambda_{XX}$, $B_H\gg\lambda_H\gg B_{XX}$, and the total error is again $O(1/Nq)$. By combining these gadgets, we find that each Pauli coupling can be reduced to a line of $16$ Heisenberg couplings with variable local fields. Note that it should be possible to significantly reduce the order of magnitude of the fields by going to higher order perturbation theory: Each second order gadget couples the two outer qubits by an excitation hopping through the middle qubit. Therefore, it should be possible to choose all fields of equal magnitude and go to $16$th order perturbation theory, which is the lowest non-vanishing order, and to which solely hopping terms contribute. Note further that one can decrease the length of the chain to $12$ couplings (and thus to $12$th order perturbation theory), as one can equally combine one $XY$ Pauli and one Ising interaction to obtain an arbitrary Pauli coupling, including antiferromagnetic Ising couplings. \vspace{0.6cm} \noindent\textbf{3.\ Erasure gadget} \vspace{0.3cm} The sparse Heisenberg lattice of Fig.~\ref{figsparse}a can be straightforwardly reduced to a full 2D Heisenberg lattice with local fields. To this end, add fields $H=B_e(1-\sigma_z)/2$ on all qubits to be erased, while \[ V=\lambda_H\sum_{<ij>}\vec\sigma_i\cdot\vec\sigma_j +\sum_i\vec B_i\cdot\vec\sigma_i \] is the 2D Heisenberg lattice. Then, according to Eq.~(\ref{eq:2nd-order-pert}), $H_\mathrm{eff}=H_0+V_0+O(\|V\|^2/B_e)$, which yields the Heisenberg Hamiltonian on the sparse lattice. In particular, given that $\|V\|\le N\lambda_H$, by choosing $B_e=N^3\lambda_H^2q$ we find that $B_e\gg \|V\|$, and the total error is $O(1/Nq)$. \vspace{0.6cm} \noindent\textbf{4.\ Reduction from Heisenberg to Hubbard model} \vspace{0.3cm} The final reduction step shows how the Heisenberg model can be reduced to the Hubbard model Eq.~(\ref{dft-min}) (see, e.g., Ref.~\onlinecite{auerbach}). To this end, choose a one-dimensional ordering of the Hubbard lattice, e.g.\ row-wise from left to right, and always place the spin-up mode before the spin-down mode. This results in a one-dimensional ordering of the modes of the Hubbard model, $(a_{1,\uparrow},a_{1,\downarrow},a_{2,\uparrow},a_{2,\downarrow},\dots)$. Now apply a Jordan-Wigner transform, \[ a_{i,s}\rightarrow\left( \prod \sigma^z_{i',s'}\right) \sigma^{-}_{i,s} \] where the product runs over all $(i',s')$ left of $(i,s)$. This transforms (\ref{hubb-ham}) to a two-level system with a Hamiltonian $H+V$, \begin{align*} H=&U\sum_{i}n_{i,\uparrow}n_{i,\downarrow}\\ V=&-t\sum_{<i,j>,s}\sigma_{i,s}^{+}\big[\Pi\,\sigma_{k,s'}^z\big]\sigma_{j,s}^-\\ &+\sum_j\left[(B^x_j-iB^y_j)\sigma_{j,\uparrow}^+ \sigma_{j,\downarrow}^-+\mathrm{h.c.}\right] + B^z_j(n_{j,\uparrow}-n_{j,\downarrow}) \end{align*} with $n=\sigma^+\sigma^-=\ket1\bra1$. We consider $V$ as a perturbation to $H$, i.e.\ $U\gg t,\vec B$, and do a second-order expansion.
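Before carrying out this expansion (done in the next paragraph), it is instructive to check the mechanism by exact diagonalization on two sites (our own illustration; basis ordering and hopping sign conventions are ours and do not affect the spectrum):
\begin{verbatim}
# Two-site Hubbard model at half filling, S_z = 0 sector, basis
# {|ud,0>, |0,ud>, |u,d>, |d,u>}. For U >> t the singlet-triplet
# splitting approaches 4 t^2 / U, the second-order exchange scale
# (the prefactor of the sigma.sigma form depends on conventions).
import numpy as np

def singlet_triplet_splitting(t, U):
    H = np.array([[U,  0., -t, -t],
                  [0., U,  -t, -t],
                  [-t, -t, 0., 0.],
                  [-t, -t, 0., 0.]])
    return -np.linalg.eigvalsh(H)[0]   # triplet sector sits at E = 0

t, U = 0.05, 10.0
print(singlet_triplet_splitting(t, U), 4 * t**2 / U)  # both ~ 1.0e-3
\end{verbatim}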
Since we operate the system in the half-occupancy regime, the ground state of $H$ satisfies $n_{i,\uparrow}+n_{i,\downarrow}=1$, which makes the $\sigma^z$ string in the tunneling term vanish on all but the sites $i$ and $j$. The half-occupancy allows one to interpret the ground-state subspace as a system of spin $\tfrac12$ particles by grouping modes $(i,\uparrow)$ and $(i,\downarrow)$. The magnetic term in $V$ contributes only to first order (and yields the magnetic field operator on the resulting two-level system), so that the second-order term is found by considering the four modes $((l,\uparrow),(l,\downarrow),(r,\uparrow),(r,\downarrow))$, with a Hamiltonian $H+V$, \begin{align*} H=&U(\ket{11}\bra{11}_l\otimes\openone_r+\openone_l\otimes\ket{11}\bra{11}_r)\ , \\ V=&-t(\sigma^+\otimes\sigma^z\otimes\sigma^-\otimes\openone+ \openone\otimes\sigma^+\otimes\sigma^z\otimes\sigma^-+\mathrm{h.c.})\ . \end{align*} A straightforward calculation on the subspace $\{\ket{1001},\ket{0110}\}$ -- the only one with non-vanishing second order contributions -- shows that this leads to a term \[ H=-\frac{4t^2}{U}(\ket{01}-\ket{10})(\bra{01}-\bra{10}) \] expressed in the effective spin $\tfrac12$'s described above, which up to a constant equals the antiferromagnetic Heisenberg Hamiltonian $(2t^2/U) \vec\sigma\cdot\vec\sigma$. Choosing $U=N^8\lambda_H^3q^2/8$ and $t=N^4\lambda_H^2q/4$, we have that $U\gg t,\|B\|=O(NB_e)$, $2t^2/U=\lambda_H$, and the error $N^3t^3/U^2=1/Nq\ll O(1/q)$ as desired. Let us note that as with the gadgets before, no cross-terms appear when applying the gadgets together, as the only way to return to the ground state subspace in second order is via processes within a single gadget. \vspace{0.6cm} \noindent\textbf{5.\ Reduction of the Hubbard model to the Schr\"odinger equation} \vspace{0.3cm} In the following, we show how finding the ground state energy of the Hubbard model with a local field up to $1/\mathrm{poly}(N)$ precision can be reduced to answering the same question for the Schr\"odinger equation (where $N$ is both the number of sites of the Hubbard model and the number of electrons). \emph{Structure of the proof.---}% Let us first give an overview of the proof, highlighting the crucial steps. In the first part, we use the kinetic term together with an appropriate external electrostatic potential in the Schr\"odinger equation [terms $T$ and $V$ in (\ref{gen-ham})] to construct an exactly solvable model with the following property: The Hamiltonian can be decomposed as \begin{equation} \label{eq:sm:HisH0plusH1} H=H_0 \oplus H_1\ , \end{equation} where \begin{equation} \label{eq:sm:H0H1Ham} H_0=-t\sum_{\langle i,j\rangle,s} a_{i,s}^\dagger a_{j,s} + O(N^{-2\tau+1}) +\mathrm{const.} \end{equation} and the constant is chosen such that $H_0\le -\Delta$, and with $H_1\ge0$. Note that beyond the gap above the $H_0$ band, we do not care about the properties of $H_1$. In the second part of the proof, we show how to incorporate the magnetic field and the Coulomb interaction, which will yield the on-site repulsion term. Loosely speaking, we will treat the Coulomb interaction as a perturbation to the original Hamiltonian, and obtain the on-site repulsion in first order perturbation theory. However, this cannot be done using the tools for perturbative expansions used for spin systems (cf.~Sec.~1 of the Supplementary Material) due to the unbounded nature of the Coulomb interaction.
Instead, we will use a direct estimate to bound the effect on the ground state energy which stems from off-diagonal elements of the Coulomb interaction (i.e.\ those coupling the $H_0$ and the $H_1$ subspace). We then find that the ground state energy of the Hubbard model with local magnetic fields equals the ground state energy of the Schr\"odinger equation with an appropriately chosen external potential up to $1/\mathrm{poly}(N)$, as claimed. (Note that the result obtained by some perturbation expansions is stronger since the whole low-energy spectrum is reproduced; however, this is not necessary for the reduction.) Before we start with the derivation, let us fix the desired scaling of the variables: We aim to obtain a Hubbard model (\ref{hubb-ham}) with arbitrary local fields on an $N:=L_x\times L_y$ lattice, and with the following scaling of the parameters: $t=N^{-\tau}$, $B_\mathrm{max}=\max |B_i|=O(N^{-\tau})$, $U=\mathrm{const.}\times N^{-\zeta}$, and a precision in energy of $O(N^{-2\zeta+2})$, where we have that $0<\zeta<\tau-3$. Note that the \emph{relative} accuracy increases as $\zeta$ and $\tau$ are scaled up, which allows us to obtain the polynomial accuracies needed for the perturbation gadgets discussed above. \emph{The exactly solvable hopping model.---}% We start by constructing the 2D hopping model. We first consider a 1D exactly solvable model, the Kronig-Penney model, from which we then construct an exactly solvable model in 3D. (We set up a 3D lattice since we consider the Schr\"odinger equation in three-dimensional space; the same reduction would also work in 2D right away.) The 1D Kronig-Penney model on $[0,L]$ with periodic boundary conditions is defined by \begin{equation} \label{eq:sm:V1Ddef} V(r)=-\mathcal{V}\sum_{n=0}^{L-1}\delta(r-n)\ , \end{equation} where we choose $\mathcal{V}=\tau\log N$. This model is exactly solvable: The eigenfunctions are Bloch waves \begin{equation} \label{eq:sm:blochwaves} \psi_k(r+n)=\tfrac{1}{\mathcal N}e^{ikn} \left[e^{-\kappa r}+Ye^{-\kappa(1-r)}\right] \end{equation} (where $r\in[0,1]$, $n=0,\dots,L-1$), with \[ Y=\frac{e^{ik+\kappa}-1}{e^\kappa-e^{ik}}=e^{ik}+O(e^{-\kappa}) \] and normalization $\mathcal N^2=L/\mathcal{V}+O(e^{-\mathcal{V}})$. The dispersion relation for the lowest Bloch band of the Kronig-Penney model can be approximately solved as \[ E_k=-\kappa^2=-\mathcal{V}^2-4 \mathcal{V} e^{-\mathcal{V}}\cos(k)+O(\mathcal{V}^2 e^{-2\mathcal{V}})\ . \] This band is the only one with bound states, with a gap of $\Delta=\mathcal{V}^2-O(\mathcal{V} N^{-\tau})$ above. Expressing the Hamiltonian in the lowest Bloch band in terms of the creation/annihilation operators $a_l$ corresponding to the Wannier functions $w_l=\sum e^{i k l}\psi_k/\sqrt{L}$, one finds \begin{equation} \label{eq:hopping-1d} H_0^{\mathrm{1D}}=-\mathcal{V}^2\sum_l{a_l^\dagger a_l}- t\sum_{\langle i,j\rangle} a_i^\dagger a_j+O(L\mathcal{V}^2N^{-2\tau})\ , \end{equation} where $t\equiv e^{-\mathcal{V}}=N^{-\tau}$. In order to obtain a three-dimensional solvable model, we use a potential $V(r_1,r_2,r_3)=V(r_1)+V(r_2)+V(r_3)$ with the one-dimensional potentials of Eq.~(\ref{eq:sm:V1Ddef}). This choice of the potential leads to a product ansatz for the wavefunction, where the behavior of the lowest band is still described by the hopping Hamiltonian (\ref{eq:hopping-1d}), but on a three-dimensional lattice; the energy gap to the next band is still given by $\Delta$, the gap of the 1D model.
Using this potential, we can set up an $L_x\times L_y\times 1$ lattice, $N:=L_xL_y$. Using (\ref{eq:sm:blochwaves}), we find that the Wannier functions of the model are of the form \begin{equation} \label{eq:sm:wannierfunc} w_0(r)=\mathcal{V}^{3/2}e^{-\mathcal{V} |r|_1}+O(\sqrt\mathcal{V} e^{-\mathcal{V}})\ , \end{equation} and $w_i(r)=w_0(r-i)$, where $i=(i_1,i_2)\in\{0,\dots,L_x-1\} \times \{0,\dots,L_y-1\}$ is the site index in the 2D lattice. Thus, we obtain the system described by (\ref{eq:sm:H0H1Ham}). Clearly, we can include the spin degree of freedom without affecting the model at the current stage, as the Hamiltonian currently does not include any magnetic field. As a result, the Wannier functions get an additional spin index, $w_{n,s}(r)\equiv w_n(r)\otimes\ket{s}$. \emph{Treating magnetic field and Coulomb repulsion.---}% Let us now show how to account for the effect of the magnetic field and the Coulomb repulsion. We obtain the magnetic field of the Hubbard model by putting a magnetic potential \[ V_\mathrm{mag}(r)=\sum_n \vec B_n \chi(r+n) \] in (\ref{gen-ham}). Here, $\chi(r)=(1-\exp(-\mathcal{V}))^{-3}$ for $-\tfrac{1}{2}\le r_i\le\tfrac{1}{2}$ and zero otherwise. This choice ensures the following: \\ i) $\bra{w_{n,s}}V_\mathrm{mag}(r)\ket{w_{n,s'}}= \bra{s}\vec B_n\cdot\vec\sigma\ket{s'}$ yields the effect of the field $B_n$ on the spin degree of freedom. \\ ii) $\bra{w_{n,s}}V_\mathrm{mag}(r)\ket{w_{m,s}}=O(\mathcal{V} N^{-2\tau+1})$ is sufficiently small for $n\ne m$, using (\ref{eq:sm:wannierfunc}); the unwanted contribution from the magnetic field in any state is thus $O(\mathcal{V} N^{-2\tau+3})$. \\ iii) For any state $\ket\chi$, $\bra{\chi}V_\mathrm{mag}\ket{\chi}\ge - N^2 B_{\max} = O(N^{-\tau+2})$. (This bound can e.g.\ be obtained by neglecting the antisymmetry of the wave function.) Before incorporating the Coulomb term $I$, note that the strength $\gamma$ of the Coulomb interaction can be tuned relative to the other terms by rescaling the spatial coordinates of the system; we choose $\gamma=N^{-\zeta}/2\mathcal{V}$. The Coulomb term $I$ has properties very analogous to those of $V_\mathrm{mag}$: \\ i) The on-site repulsion is $\bra{w_{n,0}\otimes w_{n,1}}I\ket{w_{n,0}\otimes w_{n,1}}= 0.8984(\dots)N^{-\zeta}+O(\mathcal{V}^4N^{-\tau-\zeta+2})$ [we explain the calculation of the integral, including the evaluation of the prefactor, later; the error term is from (\ref{eq:sm:wannierfunc})]; again, by neglecting antisymmetry, this yields a bound $O(N^{-\zeta+2})$ for the total on-site repulsion in the lowest band. \\ ii) $\bra{w_{n,s}\otimes w_{n',s'}}I\ket{w_{m,t}\otimes w_{m',t'}}= O(\mathcal{V}^4N^{-\tau-\zeta+2})$ unless $n=n'=m=m'$, using (\ref{eq:sm:wannierfunc}); the unwanted cross-terms from the Coulomb repulsion are thus $O(\mathcal{V}^4N^{-\tau-\zeta+4})$. \\ iii) For any state $\ket\chi$ (in particular for any state in the excited band), $\bra{\chi}I\ket{\chi}\ge 0$. \emph{Dealing with unbounded perturbations.---}By using the properties i)-iii) above, we will now be able to show that the ground state energy of the total Hamiltonian $H_\mathrm{tot}=H_\mathrm{ex}+V_\mathrm{mag}+I$ is well approximated by the energy of $\Pi_0 H_\mathrm{tot}\Pi_0$, where $\Pi_0$ projects onto the lowest band of $H_\mathrm{ex}$ (which gives the first order perturbation expansion).
Let $\ket\psi\equiv\sqrt{1-p}\ket\phi+\sqrt{p}\ket\chi$ be a ground state of $H_\mathrm{tot}$, where $\ket\phi$ is supported in $\Pi_0$ and $\ket\chi$ in the orthogonal subspace $\Pi_1=1-\Pi_0$ of high-energy states. We claim that $p$ is then very small, and thus $\bra\phi H_\mathrm{tot}\ket\phi$ is almost the same as the ground state energy, i.e., the ground state energy of $\Pi_0 H_\mathrm{tot}\Pi_0$ is a good approximation to the ground state energy of $H_\mathrm{tot}$. (Since we will find that $p$ is very small, this actually also implies that the ground state of the projected Hamiltonian is close to the true ground state.) The error made in the energy by replacing $\ket\psi$ by $\ket\phi$ is \begin{align} \nonumber \Delta E &= \bra\phi H_\mathrm{ex} + V_\mathrm{mag} + I \ket\phi -\bra\psi H_\mathrm{ex} + V_\mathrm{mag} + I \ket\psi \\ \nonumber &=p\big[\bra\phi H_\mathrm{ex}+V_\mathrm{mag}+I\ket\phi- \bra\chi H_\mathrm{ex}+V_\mathrm{mag}+I\ket\chi\big] \\ \nonumber &\qquad +2\sqrt{(1-p)p} \big[\mathrm{Re}\bra\phi V_\mathrm{mag}\ket\chi +\mathrm{Re}\bra\phi I\ket\chi\big] \end{align} To bound $\Delta E$, we use the following facts (obtained by combining the statements about $V_\mathrm{mag}$ and $I$ made before):\\ i) $\bra{\phi}H_\mathrm{ex}+V_\mathrm{mag}+ I\ket{\phi}\le-\Delta+O(N^{-\zeta+2})$; \\ ii) $\bra{\chi}H_\mathrm{ex}+ V_\mathrm{mag}+I\ket{\chi}\ge -O(N^{-\tau+2})$; \\ iii) From the Cauchy-Schwarz inequality, \begin{align*} \mathrm{Re}\bra\phi M\ket\chi & \le |\bra\phi M\ket\chi| \\ &\le \sqrt{\bra\phi M M^\dagger \ket\phi \,\langle\chi|\chi\rangle } = \sqrt{\bra\phi M M^\dagger \ket\phi} \end{align*} Combining i)-iii), this yields a bound \[ \Delta E\le p(- \Delta+\alpha) +2\sqrt{(1-p)p}\,\beta \] with $\alpha=O(N^{-\zeta+2})$ [from i) and ii)], and $\beta^2=\bra\phi I^2+V_\mathrm{mag}^2\ket\phi= O(N^{-2\zeta+2})$ (the dominating $I^2$ term can be derived solely from scaling arguments, see later). Using $\Delta\gg\alpha,\beta$, it is straightforward to show that the maximum of the above expression [found at $p=O(\beta^2/\Delta^2)$] is $O(\beta^2/\Delta)=O(N^{-2\zeta+2})$, which bounds the error in the ground state energy we make by replacing $H_\mathrm{tot}$ by $\Pi_0 H_\mathrm{tot} \Pi_0$. \emph{Evaluation of Coulomb energies.---}% Let us now show how to compute the strength of the on-site repulsion from the Coulomb interaction. Following (\ref{eq:sm:wannierfunc}), we have to evaluate the integral \[ \mathcal{V}^6\gamma \int \mathrm{d}^3r \mathrm{d}^3s \frac{e^{-2\mathcal{V}(|r|_1+|s|_1)}}{|r-s|_2} = \frac{\mathcal{V}\gamma}{32} \underbrace{\int \mathrm{d}^3r \mathrm{d}^3s \frac{e^{-(|r|_1+|s|_1)}}{|r-s|_2}}_{c_U} \] The latter integral is a constant, $c_U=28.7496(\ldots)$. Moreover, it is possible to compute $c_U$ to any accuracy $\epsilon$ in time $1/\mathrm{poly}(\epsilon)$, which is sufficient to obtain an efficient reduction.
To this end, first rewrite the integral as \begin{equation} \label{eq:sm:greens-int} c_U=\int \mathrm{d}^3 q \frac{1}{|q|_2} G(q)\ , \end{equation} where $G(q)$ is the Green's function \begin{align*} G(q)&=\int \mathrm{d}^3 r \mathrm{d}^3 s\; e^{-(|r|_1+|s|_1)}\delta(r-s-q) \\ & = \prod_i(1+|q_i|)\,e^{-|q_i|} \end{align*} Rewriting (\ref{eq:sm:greens-int}) in spherical coordinates and integrating over $r$, we are left with \[ c_U=8 \int_0^{\pi/2}\!\!\!\!\!\mathrm{d}\phi \int_0^{\pi/2}\!\!\!\!\!\mathrm{d}\theta\,\, \frac{n(\phi,\theta)}{d(\phi,\theta)} \] where $d(\phi,\theta)=((\cos\theta+\sin\theta)\sin\phi+\cos\phi)^5\ge1$ and $n(\phi,\theta)$ are trigonometric polynomials. Since the integrand and its derivatives are bounded, the integral can be evaluated numerically to precision $\epsilon$ using a grid of size $1/\mathrm{poly}(\epsilon)$. \emph{The effective Hamiltonian.---}% Putting all steps together, we obtain the Hubbard model (\ref{hubb-ham}) with tunneling $t=N^{-\tau}$, on-site repulsion $U=0.8984(\dots)\,N^{-\zeta}$, and the desired magnetic fields. Collecting all error terms, one finds that the total error in the ground state energy is given by $O(N^{-2\zeta+2})$, as desired. \emph{Remarks.---}A few notes: First, the fact that we are using a $\delta$-potential for our model does not affect our claims about DFT, since only the electron density, which is free of singularities, is passed to the functional; in particular, all these densities arise from $N$-electron states. Second, in the Schr\"odinger equation (\ref{gen-ham}) we have omitted the coupling of the magnetic field to the orbit of the electrons: the variant of DFT arising from this approximation is known as ``spin-density functional theory''~\cite{DFT-book-1,DFT-book-2}, and our hardness result holds for exactly this variant. Note also that a coupling to the orbit would result in a so-called Peierls phase $e^{i\phi_{kl}}a_k^\dagger a_l$ in the tunneling term of the Hubbard model, which gives non-vanishing terms only for non-trivial loops, i.e.\ only from fourth order perturbation theory on, and can therefore be neglected. \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Conclusions} \label{sec:conclusions} In this paper we have systematically compared lossy compression algorithms for constrained sensor networking by investigating whether energy savings are possible depending on signal statistics, compression performance and hardware characteristics. Our results revealed that, for wireless transmission scenarios, the energy required by compression algorithms is of the same order of magnitude as that spent in the transmission at the physical layer. In this case, the only class of algorithms that provides some energy savings is that based on piecewise linear approximations, as these algorithms have the smallest computational cost. In addition, we have also considered underwater channels which, due to the nature of the transmission medium, require more energy-demanding acoustic modems. In this scenario, techniques based on Fourier transforms are the algorithms of choice, as these provide the highest compression performance. Finally, we have obtained fitting formulas for the best compression methods to relate their computational complexity, approximation accuracy and compression ratio performance. These have been validated against real datasets and can be used to assess the effectiveness of the selected compression schemes for other hardware architectures. \section{Introduction} \label{sec:introduction} In recent years, wireless sensors and mobile technologies have experienced a tremendous upsurge. Advances in hardware design and micro-fabrication have made it possible to potentially embed sensing and communication devices in every object, from banknotes to bicycles, leading to the vision of the Internet of Things (IoT)~\cite{IoT-Magazine-2010}. It is expected that physical objects in the near future will create an unprecedented network of interconnected physical things able to communicate information about themselves and/or their surroundings and also capable of interacting with the physical environment where they operate. Wireless Sensor Network (WSN) technology has now reached a good level of maturity and is one of the main enablers for the IoT vision: notable WSN application examples include environmental monitoring~\cite{Szewczyk-2004}, geology~\cite{Werner-2006}, structural monitoring~\cite{Xu-2004}, smart grid and household energy metering~\cite{Kappler-2004,SmartMetering-2011}. The basic differences between WSN and IoT are the number of devices, which is expected to be very large for IoT, and their capability of seamlessly communicating via the Internet Protocol, which will make IoT technology pervasive. The above-mentioned applications require the collection and the subsequent analysis of large amounts of data, which are to be sent through suitable routing protocols to some data collection point(s). One of the main problems of the IoT is thus related to the foreseen large number of devices: if this number keeps increasing as predicted in~\cite{Dodson-2003}, and all signs point in this direction, the amount of data to be managed by the network will become prohibitive. Further issues are due to the constrained nature of IoT devices in terms of limited energy resources (devices are often battery operated) and to the fact that data transmission is their main source of energy consumption. This, together with the fact that IoT nodes are required to remain unattended (and operational) for long periods of time, poses severe constraints on their transmitting capabilities.
Recently, several strategies have been developed to prolong the lifetime of battery-operated IoT nodes. These comprise processing techniques such as data aggregation~\cite{Fasolo-2007}, distributed~\cite{Pattem-2004} or temporal~\cite{Sharaf-2003} compression as well as battery replenishment through energy harvesting~\cite{Harvesting-Magazine-2010}. The rationale behind data compression is that we can trade some additional energy spent on compression for a reduction in the energy spent on transmission. As we shall see in the remainder of this paper, if the energy spent for processing is sufficiently smaller than that needed for transmission, energy savings are in fact possible. In this paper we focus on the energy saving opportunities offered by data processing and, in particular, on the effectiveness of the {\it lossy temporal compression} of data. With lossy techniques, the original data is compressed by discarding some of the original information in it, so that at the receiver side the decompressor can reconstruct the original data only up to a certain accuracy. Lossy compression makes it possible to trade some reconstruction accuracy for some additional gains in terms of compression ratio with respect to lossless schemes. Note that 1) these gains correspond to further savings in terms of transmission needs, 2) depending on the application, some small inaccuracy in the reconstructed signal can in fact be acceptable, and 3) given this, lossy compression schemes introduce some additional flexibility as one can tune the compression ratio as a function of energy consumption criteria. We note that much of the existing literature has been devoted to the systematic study of lossless compression methods. \cite{Marcelloni-2009} proposes a simple Lossless Entropy Compression (LEC) algorithm, comparing LEC with standard techniques such as gzip, bzip2, rar and classical Huffman and arithmetic encoders. A simple lossy compression scheme, called Lightweight Temporal Compression (LTC)~\cite{2004-Schoellhammer}, was also considered. However, the main focus of this comparison has been on the achievable compression ratio, whereas considerations on energy savings are only given for LEC. \cite{Byl-2009} examines Huffman, Run Length Encoding (RLE) and Delta Encoding (DE), comparing the energy spent for compression for these schemes. \cite{Liang-2011} treats lossy (LTC) as well as lossless (LEC and Lempel-Ziv-Welch) compression methods, but focuses only on their compression performance. Further work is carried out in~\cite{Sadler-2006}, where the energy savings from lossless compression algorithms are evaluated for different radio setups, in single- as well as multi-hop networks. Along the same lines,~\cite{Barr-2006} compares several lossless compression schemes for a StrongArm CPU architecture, showing the unexpected result that data compression may actually cause an increase in the overall energy expenditure. A comprehensive survey of practical lossless compression schemes for WSN can be found in~\cite{Compression-Survey-2012}. The lesson that we learn from these papers is that lossless compression can provide some energy savings. These are however smaller than one might expect because, for the hardware in use nowadays (CPU and radio), the energy spent for the execution of the compression algorithms (CPU) is of the same order of magnitude as that spent for transmission (radio). Some further work has been carried out for what concerns lossy compression schemes.
LTC~\cite{Liang-2011}, PLAMLiS~\cite{Liu-2007} and the algorithm of~\cite{Pham-2008} are all based on Piecewise Linear Approximation (PLA). Adaptive Auto-Regressive Moving Average (A-ARMA)~\cite{Lu-2010} is instead based on ARMA models (these schemes will be extensively reviewed in Section~\ref{sec:compression_methods}). Nevertheless, we remark that no systematic energy comparison has been carried out so far for lossy schemes. Hence, it is not clear whether lossy compression can be advantageous in terms of energy savings, what the involved tradeoffs are in terms of compression ratio {\it vs} representation accuracy, and how these affect the overall energy expenditure. In addition, it is unclear whether the above mentioned linear and autoregressive schemes can provide any advantage compared with more sophisticated techniques such as Fourier-based transforms, which have been effectively used to compress audio and video signals and for which fast and computationally efficient algorithms exist. In this paper, we fill these gaps by systematically comparing existing lossy compression methods among each other and against polynomial and Fourier-based (FFT and DCT) compression schemes. Our comparison is carried out for two wireless communication setups, i.e., for radio and acoustic modems (used for underwater sensor networking, see, e.g.,~\cite{Casari-2011}), and fitting formulas for the relevant performance metrics are obtained for the best performing algorithms in both cases. Specifically, the main contributions of this paper are: \begin{itemize} \item We thoroughly review lossy compression methods from the literature as well as polynomial, FFT- and DCT-based schemes, quantifying their performance in terms of compression efficiency, computational complexity (i.e., processing energy) and energy consumption for two radio setups, namely, wireless (IEEE~802.15.4) and underwater (acoustic modem) radios. For FFT- and DCT-based methods we propose our own algorithms, which exploit the properties of these transformations. \item We assess whether signal compression may actually help in the reduction of the overall energy consumption, depending on the compression algorithm, the chosen reconstruction accuracy, the signal statistics and the transmission technology (i.e., wireless versus underwater). In fact, we conclude that signal compression may be helpful; however, in quite a few cases processing and transmission costs are of the same order of magnitude and, in some other cases, the former may even dominate the latter. Notably, our results indicate that PLA methods (and in particular, among them, LTC) are the best option for wireless radios, whereas DCT-based compression is the algorithm of choice for acoustic modems. Thus, the choice of the compression algorithm itself is highly dependent on the energy consumption associated with radio transmission. \item We provide formulas, obtained through numerical fittings and validated against real datasets, to gauge the computational complexity, the overall energy consumption and the signal representation accuracy of the best performing algorithms in each scenario, as a function of the most relevant system parameters. These can be used to generalize the results obtained here for the selected radio setups to other architectures. \end{itemize} The rest of the paper is organized as follows.
Section~\ref{sec:compression_methods} discusses modeling techniques from the literature, along with some lossy compression schemes that we introduce in this paper. In Section~\ref{sec:results} we carry out a thorough performance evaluation of all considered methods, whereas our conclusions are drawn in Section~\ref{sec:conclusions}. \section{Lossy Compression for Constrained Sensing Devices} \label{sec:compression_methods} In the following, we review existing lossy signal compression methods for constrained sensor nodes, we present an improved ARMA-based compressor, and we apply well-known FFT and DCT techniques to obtain efficient lossy compression algorithms. We start with the description of what we refer to here as ``adaptive modeling techniques'' in Section~\ref{sec:adaptive_modeling}. Then, in Section~\ref{sec:fft_based_techniques} we discuss techniques based on Fourier transforms. \subsection{Compression Methods Based on Adaptive Modeling} \label{sec:adaptive_modeling} For the adaptive modeling schemes, some signal model is iteratively updated over pre-determined time windows, exploiting the correlation structure of the signal through linear, polynomial or autoregressive methods; signal compression is thus achieved by sending the model parameters in place of the original data. \subsubsection{Piecewise Linear Approximations (PLA)} \label{sec:PLA_intro} The term Piecewise Linear Approximation (PLA) refers to a family of linear approximation techniques. These build on the fact that, for most time series consisting of environmental measures such as temperature and humidity, linear approximations work well enough over short time frames. The idea is to use a sequence of line segments to represent an input time series $x(n)$ with a bounded approximation error. Further, since a line segment can be determined by only two end points, PLA leads to quite efficient representations of time series in terms of memory and transmission requirements. \begin{figure}[ht] \centering \includegraphics[width=0.3\textwidth]{segment_fit} \caption{Approximation of a time series $x(n)$ by a segment.} \label{fig:segment_fit} \end{figure} For the reconstruction at the receiver side, at the generic time $n$, observations are approximated through the vertical projection of the actual samples onto the corresponding line segment (i.e., the white-filled dots in Fig.~\ref{fig:segment_fit}). The approximated signal in what follows is referred to as $\hat{x}(n)$. The error introduced is the distance from the actual samples (i.e., the black dots in the figure) to the segment along this vertical projection, i.e., $|\hat{x}(n)-x(n)|$. Most PLA algorithms use standard least squares fitting to calculate the approximating line segments. Often, a further simplification is introduced to reduce the computational complexity, which consists of forcing the end points of each line segment to be points of the original time series $x(n)$. This makes least squares fitting unnecessary, as the line segments are fully identified by the extreme points of $x(n)$ in the considered time window. Following this simple idea, several methods have been proposed in the literature. Below we review the most significant among them.\\ \noindent \textbf{Lightweight Temporal Compression (LTC)~\cite{2004-Schoellhammer}:} the LTC algorithm is a low-complexity PLA technique. Specifically, let $x(n)$ be the points of a time series with $n=1,2,\dots,N$. The LTC algorithm starts with $n=1$ and fixes the first point of the approximating line segment to $x(1)$.
The second point $x(2)$ is transformed into a vertical line segment that determines the set of all ``acceptable'' lines $\Omega_{1,2}$ with starting point $x(1)$. This vertical segment is centered at $x(2)$ and covers all values meeting a maximum tolerance $\varepsilon \geq 0$, i.e., lying within the interval $[x(2)-\varepsilon, x(2)+\varepsilon]$, see Fig.~\ref{fig:LTC_a}. The set of acceptable lines for $n=3$, $\Omega_{1,2,3}$, is obtained as the intersection of $\Omega_{1,2}$ and the set of lines with starting point $x(1)$ that are acceptable for $x(3)$, see Fig.~\ref{fig:LTC_b}. If $x(3)$ falls within $\Omega_{1,2,3}$, the algorithm continues with the next point $x(4)$, and the new set of acceptable lines $\Omega_{1,2,3,4}$ is obtained as the intersection of $\Omega_{1,2,3}$ and the set of lines with starting point $x(1)$ that are acceptable for $x(4)$. The procedure is iterated, adding one point at a time, until, at a given step $s$, $x(s)$ is not contained in $\Omega_{1,2,\dots,s}$. The algorithm then sets $x(1)$ and $x(s-1)$ as the starting and ending points of the approximating line segment for $n=1,2,\dots,s-1$ and starts over with $x(s-1)$, considering it as the first point of the next approximating line segment. In our example, $s=4$, see Fig.~\ref{fig:LTC_c}. \begin{figure}[ht] \centering \subfigure[]{% \includegraphics[width=0.3\textwidth]{LTCa}% \label{fig:LTC_a} } \subfigure[]{% \includegraphics[width=0.3\textwidth]{LTCb}% \label{fig:LTC_b} } \subfigure[]{% \includegraphics[width=0.3\textwidth]{LTCc}% \label{fig:LTC_c} } \caption{Lightweight Temporal Compression example.} \label{fig:LTC} \end{figure} When the inclusion of a new sample does not comply with the allowed maximum tolerance, the algorithm starts over, looking for a new line segment. Thus, it self-adapts to the characteristics of $x(n)$ without having to fix beforehand the lapse of time between subsequent updates.
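For concreteness, the following minimal Python sketch (ours, for illustration; not taken from~\cite{2004-Schoellhammer}) implements the LTC procedure just described by tracking the admissible slope range from the current anchor point and closing a segment whenever the range becomes empty:

\begin{verbatim}
def ltc(x, eps):
    """Minimal LTC sketch: returns (index, value) segment endpoints,
    where endpoints are actual samples of x, as described above."""
    n = len(x)
    endpoints = [(0, x[0])]
    a = 0                                   # anchor of current segment
    lo, hi = float("-inf"), float("inf")    # admissible slope range
    j = 1
    while j < n:
        d = j - a
        new_lo = max(lo, (x[j] - eps - x[a]) / d)
        new_hi = min(hi, (x[j] + eps - x[a]) / d)
        if new_lo <= new_hi:                # x(j) is still representable
            lo, hi = new_lo, new_hi
            j += 1
        else:                               # close segment at x(j-1)
            endpoints.append((j - 1, x[j - 1]))
            a = j - 1
            lo, hi = float("-inf"), float("inf")
    endpoints.append((n - 1, x[n - 1]))
    return endpoints
\end{verbatim}

The receiver reconstructs $\hat{x}(n)$ by linear interpolation between consecutive endpoints, which is why only the endpoints need to be transmitted.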
\noindent \textbf{PLAMLiS~\cite{Liu-2007}:} like LTC, PLAMLiS represents the input data series $x(n)$ through a sequence of line segments. Here, the linear fitting problem is converted into a set-covering problem, trying to find the minimum number of segments that cover the entire set of values over a given time window. Let $x(n)$ be the input time series over a window $n=1,2,\dots,N$, with $\mathcal X = \{ x(1),x(2),\dots,x(N) \}$. For each $x(i) \in \mathcal X$, a segment is built by associating $x(i)$ with the point $x(j)$ ($j > i$) farthest from $x(i)$ such that the line segment $(x(i),x(j))$ meets the given error bound $\varepsilon$, that is, such that the difference between the compressed signal $\hat{x}(k)$ and $x(k)$ is no larger than $\varepsilon$ for $i<k<j$. Let $\mathcal F_i$ denote the subset consisting of all the points covered by this line segment, formally: \begin{eqnarray} \mathcal F_i & = & \big \{x(k) \in \mathcal X \mid i \leq k \leq j, \textrm{ with } j-i \textrm{ maximized}, \nonumber \\ & & \textrm{subject to } |\hat{x}(k) - x(k) | \leq \varepsilon, \; \forall \; i < k < j \big \} \, . \nonumber \end{eqnarray} After having iterated this for all points in $\mathcal X$, we obtain the set $\mathcal F = \{\mathcal F_1,\mathcal F_2,\dots, \mathcal F_N\}$. Now, the PLAMLiS problem amounts to picking the smallest number of subsets from $\mathcal F$ that cover all the elements in $\mathcal X$, which is the {\it minimum set cover problem} and is known to be NP-complete. The authors of~\cite{Liu-2007} suggest an approximate solution to it through a greedy algorithm. Set $\mathcal F$ is scanned by picking the subset $\mathcal F_i$ that covers the largest number of still uncovered points in $\mathcal X$; this set is then removed from $\mathcal F$ and added to the set $\mathcal S$ (initially empty), i.e., $\mathcal F \leftarrow \mathcal F \setminus \mathcal F_i$, $\mathcal S \leftarrow \mathcal S \cup \{\mathcal F_i\}$, and the algorithm is reiterated with the new set $\mathcal F$ until all points in $\mathcal X$ are covered. The subsets in $\mathcal S$ define the approximating segments for $x(n)$. \\ \noindent \textbf{Enhanced PLAMLiS~\cite{Pham-2008}:} several refinements to PLAMLiS have been proposed in the literature to reduce its computational cost. In~\cite{Pham-2008} a top-down recursive segmentation algorithm is proposed. As above, consider the input time series $x(n)$ and a time window $n=1,2,\dots,N$. The algorithm starts by taking a first segment $(x(1),x(N))$; if the maximum allowed tolerance $\varepsilon$ is met for all points along this segment, the algorithm ends. Otherwise, the segment is split into two segments at the point $x(i)$, $1<i<N$, where the error is maximum, obtaining the two segments $(x(1),x(i))$ and $(x(i),x(N))$. The same procedure is recursively applied on the resulting segments until the maximum error tolerance is met for all points. \subsubsection{Polynomial Regression (PR)} \label{sec:PR} The above methods can be modified by relaxing the constraint that the endpoints of the segments must be actual points of $x(n)$. In this case, polynomials of given order $p \geq 1$ are used as the approximating functions, whose coefficients are found through standard regression methods based on least squares fitting~\cite{Polynomial-book-2003}. Specifically, we start with a window of $p$ samples, for which we obtain the best fitting polynomial coefficients. We then keep increasing the window length by one sample at a time, computing the new coefficients, as long as the target error tolerance is met. Tracing a line between two fixed points, as done by LTC and PLAMLiS, has a very low computational complexity, while least squares fitting can have a significant cost. Thus, polynomial regression obtains better results in terms of approximation than the linear models of Section~\ref{sec:PLA_intro}, at the cost of a higher computational complexity, which increases with the polynomial order. \subsubsection{Auto-Regressive (AR) Methods} \label{sec:auto_regressive} Auto-Regressive (AR) models in their multiple flavors (AR, ARMA, ARIMA, etc.) have been widely used for time series modeling and forecasting in fields like macro-economics or market analysis. The basic idea is to build a model based on the history of the sampled data, i.e., on its correlation structure. Many environmental and physical quantities can be modeled through AR methods, and hence these models are especially well suited to WSN monitoring applications. When used for signal compression, an AR scheme obtains a model from the input data and sends it to the receiver in place of the actual time series. The model is then used at the data collection point (the sink) for data prediction until model updates are received from the sensor nodes. Specifically, each node locally verifies the accuracy of the predicted data values with respect to the collected samples.
If the accuracy is within a prescribed error tolerance, the node assumes that the sink can rebuild the data correctly and does not transmit any data. Otherwise, it computes a new model and communicates the corresponding parameters to the sink. \\ \noindent \textbf{Adaptive Auto-Regressive Moving Average (A-ARMA)~\cite{Lu-2010}:} the basic idea of A-ARMA~\cite{Lu-2010} is that of having each sensor node compute an ARMA model based on fixed-size windows of $N^\prime < N$ consecutive samples. Compression is achieved through the transmission of the model parameters to the sink in place of the original data, as discussed above. In order to reduce the complexity of the model estimation process, adaptive ARMA employs low-order models, whereby the validity of the model being used is checked through a moving window technique. The algorithm works as follows. First, once a WSN node has collected $N^\prime$ samples starting from sample $n$, it builds an ARMA model $M^{(n)} = ARMA(p,q,N^\prime,n)$, where the orders $p$ and $q$ of the ARMA process and the window length $N^\prime$ must be fixed a priori. For this model, the current estimation window goes from step $n$ to step $n+N^\prime-1$ (covering samples $\{x(n),\dots,x(n+N^\prime-1)\}$). Upon collecting the subsequent $K$ samples, $M^{(n)}$ is used to obtain the predicted values $\{\hat{x}(n+N^\prime),\dots,\hat{x}(n+N^\prime+K-1)\}$. Then, the RMS error between predicted and actual values is computed. If this error is within the allowed error tolerance, the sensor node keeps using its current ARMA model for the next $K$ values, i.e., its prediction window is moved $K$ steps to the right, covering steps $n+N^\prime+K$ to $n+N^\prime+2K-1$. In this case, the decoder at the WSN sink uses $M^{(n)}$ to reconstruct the signal from step $n+N^\prime$ to $n+N^\prime+K-1$, obtaining $\{\hat{x}(n+N^\prime),\dots,\hat{x}(n+N^\prime+K-1)\}$. However, if the target tolerance is not met, the node moves its window and recomputes the new ARMA model parameters using the most recent $N^\prime$ samples.\\ \noindent \textbf{Modified Adaptive Auto-Regressive (MA-AR):} the A-ARMA algorithm was designed with the main objective of reducing the complexity of the model estimation process. For each model update, only one estimation is performed at the beginning of the stage, and always over a fixed window of $N^\prime$ samples. A drawback of this approach is that, especially in highly noisy environments, the estimation over a fixed window can lead to poor results when used for forecasting. In order to avoid this, we hereby propose a modified version of A-ARMA which uses a $p$th-order AR model, dividing time into {\it prediction cycles} whose length $N^\prime$ is variable and adapts to the characteristics of the signal. Let $n$ be the time index at the beginning of a prediction cycle. The first $p$ collected samples $\{x(n),\dots,x(n+p-1)\}$ must be encoded and transmitted; these will be used at the receiver to initialize the predictor. Upon collecting sample $x(n+p)$, the $p$ parameters of an AR model $M^{(n,1)}=AR(n,p,1)$ are computed, where $n$ is the starting point of the estimation window and $N^\prime=p+1$ is its window size, i.e., the support points for the estimation are $\{x(n),\dots,x(n+p)\}$. $M^{(n,1)}$ is then used to predict $\hat{x}(n+p)$, considering $\{x(n),\dots,x(n+p-1)\}$ as initial values. If the target tolerance is met, that is, if $|\hat{x}(n+p)-x(n+p)|<\varepsilon$, the model is temporarily stored as valid.
When the next sample $x(n+p+1)$ becomes available, a new model $M^{(n,2)}=AR(n,p,2)$ is obtained over the new estimation window $\{x(n),\dots,x(n+p+1)\}$ of size $N^\prime=p+2$. Then, $M^{(n,2)}$ is used to predict $\hat{x}(n+p)$ (one step ahead) and $\hat{x}(n+p+1)$ (two steps ahead), with the model $M^{(n,2)}$ initialized with the values $\{x(n),\dots,x(n+p-1)\}$, and the predicted values are compared with the real samples to check whether $|\hat{x}(n+p+i)-x(n+p+i)| < \varepsilon$ for $i=0,1$. This process is iterated until, for some value $k \geq 1$, $M^{(n,k)}$ is no longer capable of meeting the target reconstruction accuracy for at least one sample, that is, when $|\hat{x}(n+p+i)-x(n+p+i)| > \varepsilon$ for at least one $i \in \{0,\dots,k-1\}$. In this case, the last valid model $M^{(n,k-1)}$, with $k-1 \geq 1$, is encoded and transmitted to the decoder at the receiver side, where $M^{(n,k-1)}$ is initialized with $\{x(n),\dots,x(n+p-1)\}$ and used to obtain the estimates $\{\hat{x}(n+p),\dots,\hat{x}(n+p+k-2)\}$. The length of the final estimation window is $N^\prime=p+k-1$. At this point, a new prediction cycle starts with sample $x(n+p+k-1)$ and the new model $M^{(n+p+k-1,1)}$. The key point in our algorithm is the incremental estimation of the AR parameters: only the contribution of the last sample is considered at each iteration in order to refine the AR model. In this way, the computational cost of recomputing a new model for each sample is minimized. The AR parameters can be obtained through least squares minimization. The one-step-ahead predictor for an AR process is defined as: $$\hat{x}(n)= \xi_1 x(n-1) + \dots + \xi_p x(n-p) \, .$$ To simplify the notation, in the following, without loss of generality, we assume $n=0$. The least squares method minimizes the total error $\mathcal{E}$, defined as the sum of the squared errors of the one-step-ahead predictor over the $N^\prime$ samples in the estimation window: $$\mathcal{E} = \sum_{i=p}^{N^\prime} {(x(i)-\hat{x}(i))}^2 = \sum_{i=p}^{N^\prime} [x(i)- (\xi_1 x(i-1) + \dots + \xi_p x(i-p)) ]^2 \, .$$ Minimizing with respect to each $\xi_k$ yields a set of equations: $$ {{\partial \mathcal{E}} \over {\partial \xi_k}} = -2 \sum_{i=p}^{N^\prime} [x(i)-(\xi_1 x(i-1) + \dots + \xi_p x(i-p))]x(i-k)=0 \, , $$ with $k=1,2,\dots,p$, which can be expressed in terms of the following linear system of equations: \begin{equation} \left( \begin{array}{cccc} f(1,1) & f(1,2) & \cdots & f(1,p) \\ f(2,1) & f(2,2) & \cdots & f(2,p) \\ \vdots & & & \vdots \\ f(p,1) & f(p,2) & \cdots & f(p,p) \end{array} \right) \left( \begin{array}{c} \xi_1 \\ \xi_2 \\ \vdots \\ \xi_p \end{array} \right) = \left( \begin{array}{c} f(1,0) \\ f(2,0) \\ \vdots \\ f(p,0) \end{array} \right) \label{eq:AR_LE} \end{equation} where $f(r,s) \triangleq \sum_{i=p}^{N^\prime} x(i-r) x(i-s)$. This is a linear system of $p$ equations in $p$ unknowns (the $\xi_i$ coefficients) that can be solved through standard Gaussian elimination. Each entry of the matrix involves $N^\prime-p$ multiplications and $N^\prime-p$ additions. Thus, the estimation of the AR model has complexity $\mathcal O (p^2 (N^\prime-p))$ associated with the matrix construction and $\mathcal O(p^3)$ for solving the linear system (\ref{eq:AR_LE}). Note that, for signals with high temporal correlation, $N^\prime-p$ is usually much larger than the AR order $p$ (we take $p \leq 5$ to bound the computational complexity). In this case, the dominating term is $\mathcal O (p^2 (N^\prime-p))$, which increases with $N^\prime$ and thus with the correlation length, as we will show in Section~\ref{sec:results}.
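As an illustration of this estimation step, the sketch below (ours) computes the least-squares AR coefficients of Eq.~(\ref{eq:AR_LE}); for brevity it relies on a generic least-squares routine rather than on the explicit construction of the $f(r,s)$ matrix and Gaussian elimination, which is what a resource-constrained implementation would use:

\begin{verbatim}
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit over the estimation window x:
    x(i) ~ xi_1 x(i-1) + ... + xi_p x(i-p)."""
    x = np.asarray(x, dtype=float)
    # row i of A holds [x(i-1), ..., x(i-p)] for i = p, ..., len(x)-1
    A = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    b = x[p:]
    xi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xi

def ar_predict_one_step(history, xi):
    """One-step-ahead prediction from the last p samples."""
    p = len(xi)
    return float(np.dot(xi, history[-1: -p - 1: -1]))
\end{verbatim}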
\subsection{Compression Methods Based on Fourier Transforms} \label{sec:fft_based_techniques} For Fourier-based techniques, compression is achieved by sending subsets of the FFT or DCT transformation coefficients. Below, we propose some possible methods that differ in how the transformation coefficients are picked. \subsubsection{Fast Fourier Transform (FFT)} \label{sub:fft} The first method that we consider relies on the simplest way to use the Fourier transform for compression. Specifically, the input time series $x(n)$ is mapped to its frequency representation $X(f) \in \mathbb{C}$ through a Fast Fourier Transform (FFT). We define $X_{\mathcal{R}}(f) \triangleq \mathfrak{Re}\{X(f)\}$ and $X_{\mathcal{I}}(f) \triangleq \mathfrak{Im}\{X(f)\}$ as the real and the imaginary part of $X(f)$, respectively. Since $x(n)$ is a real-valued time series, $X(f)$ is Hermitian, i.e., $X(-f) = \overline{X(f)}$. This symmetry allows the FFT to be stored using the same number of samples $N$ as the original signal. For $N$ even we take $f \in \{f_1,\dots,f_{N/2}\}$ for both $X_{\mathcal{R}}(\cdot)$ and $X_{\mathcal{I}}(\cdot)$, while if $N$ is odd we take $f \in \{f_1,\dots,f_{\lfloor N/2 \rfloor +1}\}$ for the real part and $f \in \{f_1,\dots,f_{\lfloor N/2 \rfloor}\}$ for the imaginary part. The compressed representation $\hat X(f) \triangleq \hat X_{\mathcal{R}}(f) + j \hat X_{\mathcal{I}}(f)$ will also be in the frequency domain and it is built (for the case of $N$ even) as follows: \begin{enumerate} \item initialize $\hat X_{\mathcal{R}}(f) = 0$ and $\hat X_{\mathcal{I}}(f) = 0$, $\forall \; f \in \{f_1,\dots, f_{N/2} \}$; \item select the coefficient with maximum absolute value from $X_{\mathcal{R}}$ and $X_{\mathcal{I}}$, i.e., $f^* \triangleq \arg \! \max_{f} \max \{|X_{\mathcal{R}}(f)|,|X_{\mathcal{I}}(f)|\}$ and $M \triangleq \arg \! \max_{i \in \{\mathcal{R},\mathcal{I}\}}\{|X_{i}(f^*)|\}$; \item set $\hat X_{M}(f^*) = X_{M}(f^*)$ and then set $X_{M}(f^*)=0$; \item if $\hat x(n)$, the inverse FFT of $\hat X(f)$, meets the error tolerance constraint, continue; otherwise repeat from step 2; \item encode the values and the positions of the harmonics stored in $\hat X_{\mathcal{R}}$ and $\hat X_{\mathcal{I}}$. \end{enumerate} Hence, the decompressor at the receiver obtains $\hat X_{\mathcal{R}}(f)$ and $\hat X_{\mathcal{I}}(f)$ and exploits the Hermitian symmetry to reconstruct $\hat X(f)$. \subsubsection{Low Pass Filter (FFT-LPF)} \label{sub:low_pass_filter} We implemented a second FFT-based lossy algorithm, which we have termed FFT-LPF. Since the input time series $x(n)$ is in many common cases a slowly varying signal (i.e., it has large temporal correlation) with some high-frequency noise superimposed, most of the significant coefficients of $X(f)$ reside in the low frequencies. For FFT-LPF, we start by setting $\hat X(f) = 0$ for all frequencies. Then, $X(f)$ is evaluated from $f_1$, incrementally moving toward higher frequencies, $f_2,f_3,\dots$. At each iteration $i$, $X(f_i)$ is copied onto $\hat X(f_i)$ (both real and imaginary parts), the inverse FFT is computed taking $\hat X(f)$ as input, and the error tolerance constraint is checked on the so-obtained $\hat x(n)$. If the given tolerance is met, the algorithm stops; otherwise, the procedure is reiterated for the next frequency $f_{i+1}$.
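A minimal Python sketch of the FFT-LPF loop follows (ours, for illustration); it uses the real FFT, which accounts for the Hermitian symmetry automatically:

\begin{verbatim}
import numpy as np

def fft_lpf(x, eps):
    """FFT-LPF sketch: grow a low-frequency prefix of the spectrum
    until the reconstruction is within eps of x at every sample."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x)            # Hermitian symmetry handled by rfft
    X_hat = np.zeros_like(X)
    for i in range(len(X)):
        X_hat[i] = X[i]
        x_hat = np.fft.irfft(X_hat, n=len(x))
        if np.max(np.abs(x_hat - x)) <= eps:
            return X_hat[:i + 1]  # transmit only these coefficients
    return X_hat
\end{verbatim}

The same loop, with a DCT/inverse-DCT pair in place of the real FFT, gives a DCT-based low-pass selection along the same lines, which is how we refer to DCT-LPF in Section~\ref{sec:results}.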
\subsubsection{Windowing} \label{sub:windowing} The two algorithms discussed above suffer from an edge discontinuity problem. In particular, when we take the FFT over a window of $N$ samples, if $x(1)$ and $x(N)$ differ substantially, the information about this discontinuity is spread across the whole spectrum in the frequency domain. Hence, in order to meet the tolerance constraint for all the samples in the window, a high number of harmonics is selected by the previous algorithms, resulting in poor compression and a high number of operations. To solve this issue, we implemented a version of the FFT algorithm that considers overlapping windows of $N + 2W$ samples instead of disjoint windows of length $N$, where $W$ is the number of samples that overlap between subsequent windows. The FFT is taken over the entire window and the selection of the coefficients proceeds according to the selected algorithm (either FFT or FFT-LPF), but the tolerance constraint is only checked on the $N$ samples in the central part of the window. With this workaround we get rid of the edge discontinuity problem and encode the information about the $N$ samples of interest with very few coefficients, as will be seen in Section~\ref{sec:results}. As a drawback, the direct and inverse transforms have to be taken on longer windows, which results in a higher number of operations. \subsubsection{Discrete Cosine Transform (DCT)} \label{sub:dct} We also considered the Discrete Cosine Transform (type II), mainly for three reasons: 1) its coefficients are real, so we do not have to cope with real and imaginary parts, thus saving memory and operations; 2) it has a strong ``energy compaction'' property~\cite{Rao-1990}, i.e., most of the signal information tends to be concentrated in a few low-frequency components; 3) the DCT of a signal with $N$ samples is equivalent to a DFT of a real signal of even symmetry with double length, so the DCT does not suffer from the edge discontinuity problem. \section*{Acknowledgment} The work in this paper has been supported in part by the MOSAICS project, ``MOnitoring Sensor and Actuator networks through Integrated Compressive Sensing and data gathering'', funded by the University of Padova under grant no. CPDA094077, and by the European Commission under the 7th Framework Programme (SWAP project, GA 251557 and CLAM project, GA 258359). We gratefully acknowledge Paolo Casari for helpful discussions on underwater acoustic communications. The work of Ignasi Vilajosana and Borja Martinez has been supported, in part, by Spanish grants PTQ-08-03-08109 and INN-TU-1558. \section*{References} \bibliographystyle{model1-num-names} \section{Performance Comparison} \label{sec:results} The objectives of our discussion in this section are: \begin{itemize} \item to provide a thorough performance comparison of the compression methods of Section~\ref{sec:compression_methods}. The selected performance metrics are: 1) compression ratio, 2) computational and transmission energy and 3) reconstruction error at the receiver, which are defined below; \item to assess how the statistical properties of the selected signals impact the performance of the compression methods; \item to investigate whether or not data compression leads to energy savings in single- and multi-hop scenarios for: (WSN) a wireless sensor network and (UWN) an underwater network;
\item to obtain, through numerical fitting, closed-form formulas that model the considered performance metrics as a function of key parameters. \end{itemize} Toward the above objectives, we present simulation results obtained using synthetic signals with varying correlation length. These signals make it possible to give a fine-grained description of the performance of the selected techniques, looking comprehensively at the entire range of variation of their temporal correlation statistics. Real datasets are used to validate the proposed empirical fitting formulas. \subsection{Performance Metrics} \label{sec:performance_metrics} Before delving into the description of the results, in the following we give some definitions. \begin{mydef}{Correlation length}\\ Given a stationary discrete time series $x(n)$ with $n = 1,2,\dots,N$, we define the \textbf{correlation length} of $x(n)$ as the smallest value $n^\star$ such that the autocorrelation function of $x(n)$ is smaller than a predetermined threshold $\delta$. The autocorrelation is: $$\rho_x(n) = \frac{\mathrm{E}\left[(x(m)-\mu_x)(x(m+n)-\mu_x)\right]}{\sigma_x^2} \; ,$$ where $\mu_x$ and $\sigma_x^2$ are the mean and the variance of $x(n)$, respectively. Formally, $n^\star$ is defined as: $$n^\star = \min \left\lbrace n > 0 : \rho_x(n) < \delta \right\rbrace \; .$$ \end{mydef} \begin{mydef}{Compression ratio}\\ Given a finite time series $x(n)$ and its compressed version $\hat{x}(n)$, we define the \textbf{compression ratio} $\eta$ as the quantity: $$\eta = \frac{N_b(\hat{x})}{N_b(x)} \;,$$ where $N_b(\hat{x})$ and $N_b(x)$ are the number of bits used to represent the compressed time series $\hat{x}(n)$ and the original one $x(n)$, respectively. \end{mydef} \begin{mydef}{Energy consumption for compression}\\ For every compression method we have recorded the number of operations needed to process the original time series $x(n)$, accounting for the number of additions, multiplications, divisions and comparisons. Then, depending on the selected hardware architecture, we have mapped these figures into the corresponding number of clock cycles, and we have subsequently mapped the latter into the corresponding energy expenditure, which is the energy drained from the battery to accomplish the compression task. \end{mydef} \begin{mydef}{Transmission Energy}\\ The energy consumed for transmission, obtained by accounting for the radio chip characteristics and the protocol overhead due to the physical (PHY) and medium access (MAC) layers. \end{mydef} \begin{mydef}{Total Energy Consumption}\\ The sum of the energy consumption for compression and transmission, expressed in Joule. \end{mydef} In the computation of the energy consumption for compression, we only accounted for the operations performed by the CPU, without considering the possible additional costs of reading and writing the flash memory of the sensor. For the communication cost we have only taken into consideration the transmission energy, neglecting the cost of turning the radio transceiver on and off and the energy spent at the destination to receive the data. The former are fixed costs that would also be incurred without compression, while the latter can be ignored if the receiver is not a power constrained device. Moreover, for the MAC we do not consider retransmissions due to channel errors or multi-user interference.
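Both signal-side metrics can be estimated directly from the samples; the following sketch (ours) shows a sample-based estimator of the correlation length, with the threshold $\delta$ as a parameter, together with the compression ratio as just defined:

\begin{verbatim}
import numpy as np

def correlation_length(x, delta=0.05):
    """Smallest lag n* at which the sample autocorrelation of x
    drops below delta (len(x) if it never does)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = np.correlate(x, x, mode="full")[len(x) - 1:]
    rho = rho / rho[0]                  # rho[0] = 1 by construction
    below = np.nonzero(rho < delta)[0]
    return int(below[0]) if below.size else len(x)

def compression_ratio(bits_compressed, bits_original):
    """eta = N_b(x_hat) / N_b(x)."""
    return bits_compressed / bits_original
\end{verbatim}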
\subsection{Hardware Architecture} \label{sec:architecture} For both the WSN and the UWN scenarios we selected the TI MSP430~\cite{MSP430-TI-report} micro-controller, using the corresponding $16$-bit floating-point package for the calculations and for the data representation. In the active state, the MSP430 is powered by a current of $330$ $\mu$A at $2.2$ V and has a clock rate of $1$ MHz. The resulting energy consumption per CPU cycle is $E_0 = 0.726$ nJ. Table~\ref{tab:cpu_cycles} lists the number of clock cycles needed for the floating-point operations. \begin{table}[h!] \centering \begin{tabular*}{0.8\columnwidth}{@{\extracolsep{\fill}} l c} \toprule Operation & Clock cycles \\ \midrule Addition & 184 \\ Subtraction & 177 \\ Multiplication & 395 \\ Division & 405 \\ Comparison & 37 \\ \bottomrule \end{tabular*} \caption{CPU cycles for the TI MSP430 micro-controller, see Section~5 of~\cite{MSP430-TI-report}.} \label{tab:cpu_cycles} \end{table} For the WSN scenario, we selected the TI CC2420 RF transceiver~\cite{CC2420}, an IEEE~802.15.4~\cite{IEEE802.15.4} compliant radio. The current consumption for transmission is $17.4$ mA at $3.3$ V, for an effective data rate of $250$ kbps. Thus, the energy cost associated with the transmission of one bit is $E_{Tx}^\prime = 230$ nJ, which equals the energy spent by the micro-processor during $316$ clock cycles in the active state. For the UWN scenario, we considered the Aquatec AquaModem~\cite{AquaModem}, an acoustic modem featuring a data rate of up to $2000$ bps and consuming a power of $20$ W. In this case, the energy spent for the transmission of one bit of data is $E_{Tx}^\prime = 10$~mJ. We remark that the same amount of energy is spent by the micro-processor during $13\cdot 10^{6}$ clock cycles. \subsection{Generation of Synthetic Stationary Signals} \label{sec:synthetic_signals} The stationary synthetic signals have been obtained through a known method that enforces given first- and second-order moments on a white random process, see~\cite{Davies-1987,Zordan-2011}. Our objective is to obtain a random time series $x(n)$ with given mean $\mu_x$, variance $\sigma_x^2$ and autocorrelation function $\rho_x(n)$. The procedure works as follows: \begin{enumerate} \item A random Gaussian series $G(k)$ with $k=1,2,\dots,N$ is generated in the frequency domain, where $N$ is the length of the time series $x(n)$ that we want to obtain. Every element of $G(k)$ is an independent Gaussian random variable with mean $\mu_G=0$ and variance $\sigma_G^2 = 1$. \item The Discrete Fourier Transform (DFT) of the autocorrelation function $\rho_x(n)$ is computed, $S_x(k) = \mathcal{F}[\rho_x(n)]$, where $\mathcal{F}[\cdot]$ is the DFT operator. \item We compute the entry-wise product $X(k) = G(k) \circ S_x(k)^{\frac{1}{2}}$. \item The correlated time series $x(n)$ is finally obtained as $\mathcal{F}^{-1}[X(k)]$. \end{enumerate} This is equivalent to filtering a white random process with a linear, time-invariant filter whose impulse response is $\mathcal{F}^{-1}[S_x(k)^\frac{1}{2}]$. The stability of this procedure is ensured by a suitable choice of the correlation function, which must be square integrable. For the simulations in this paper we have used a Gaussian correlation function~\cite{Abrahamsen-1997}, i.e., $\rho_x(n) = \exp\{-a n^2\}$, where $a$ is chosen in order to get the desired correlation length $n^\star$ as follows: $$a = -\frac{\log(\delta)}{(n^\star)^2} \; .$$ Without loss of generality, we generate synthetic signals with $\mu_x=0$ and $\sigma_x^2=1$; applying an offset and a scale factor to the generated signals does not change the resulting correlation.
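The procedure can be summarized in a few lines of Python (ours; taking the real part of the inverse FFT, using circular lags so that the sampled correlation function is symmetric, and renormalizing at the end are our implementation choices, not prescribed by the references):

\begin{verbatim}
import numpy as np

def synthetic_signal(N, n_star, delta=0.05, seed=None):
    """Zero-mean, unit-variance signal with Gaussian autocorrelation
    rho(n) = exp(-a n^2) and correlation length n_star."""
    rng = np.random.default_rng(seed)
    a = -np.log(delta) / n_star**2
    n = np.arange(N)
    lag = np.minimum(n, N - n)              # circular lags
    rho = np.exp(-a * lag.astype(float)**2)
    S = np.abs(np.fft.fft(rho))             # spectral shaping function
    G = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    x = np.fft.ifft(G * np.sqrt(S)).real    # filtered white process
    return (x - x.mean()) / x.std()         # enforce mu = 0, sigma = 1
\end{verbatim}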
For an in-depth characterization of the Gaussian correlation function see~\cite{Abrahamsen-1997}. Also, in order to emulate the behavior of real WSN signals, we superimpose noise on the synthetic signals so as to mimic random perturbations due to the limited precision of the sensing hardware and random fluctuations of the observed physical phenomenon. This noise is modeled as a zero-mean white Gaussian process with standard deviation $\sigma_{\rm noise}$. \subsection{Simulation Setup} \label{sec:simulation_setup} For the experimental results of the following Sections~\ref{sec:performance_results} and \ref{sub:application_scenarios}, we used synthetic signals with correlation length $n^\star$ varying in $\{1,10,20,50,\dots,500\}$ time slots, where after $20$, $n^\star$ varies in steps of $30$ (we have picked $\delta=0.05$ for all the results shown in this paper). We consider time series of $N=500$ samples (time slots) at a time, progressively taken from a longer realization of the signal, so as to avoid artifacts related to the generation technique. Moreover, Gaussian noise with standard deviation $\sigma_{\rm noise} = 0.04$ has been added to the signal, as per the signal generation method of Section~\ref{sec:synthetic_signals}. For the reconstruction accuracy, the absolute error tolerance has been set to $\varepsilon = \xi \sigma_{\rm noise}$, with $\xi \geq 0$. In the following graphs, each point is obtained by averaging the outcomes of $10^4$ simulation runs. For a fair comparison, the same realization of the input signal $x(n)$ has been used for all the compression methods, for each simulation run and value of $n^\star$. \subsection{Compression Ratio vs Processing Energy} \label{sec:performance_results} \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_1a.tex}}% \label{fig:AM_perf_a} } \subfigure[]{% \scalebox{0.68}{\input{fig_1b.tex}}% \label{fig:AM_perf_b} } \end{center} \caption{(a) $\eta$ {\it vs} Correlation Length $n^\star$ and (b) $\eta$ {\it vs} Energy consumption for compression for the Adaptive Modeling methods for fixed $\varepsilon=4 \sigma_{\rm noise}$.} \label{fig:AM_perf} \end{figure*} In the following, we analyze the performance in terms of compression effectiveness and computational complexity (energy) for the lossy compression methods of Section~\ref{sec:compression_methods}.\\ \noindent \textbf{Adaptive Modeling Methods:} in this first set of results we compare the performance of the following compression methods: 1) Modified Adaptive Auto-Regressive (MA-AR); 2) Polynomial Regression (PR); 3) Piecewise Linear Approximation (PLAMLiS); 4) Enhanced Piecewise Linear Approximation (E-PLAMLiS); and 5) Lightweight Temporal Compression (LTC). For the MA-AR autoregressive filter and the polynomial regression (PR) we used four different orders, namely, $p \in \{2,3,4,5\}$. Fig.~\ref{fig:AM_perf_a} shows the compression ratio achieved by the five compression methods as a function of the correlation length $n^\star$. These results reveal that for small values of $n^\star$ the compression performance is poor for all compression schemes, whereas it improves with increasing correlation length, reaching a floor value for sufficiently large $n^\star$. This confirms that $n^\star$ is a key parameter for the performance of all schemes. Also, the compression performance differs among the different methods, with PR giving the best results.
This reflects the fact that, differently from all the other methods, PR approximates $x(n)$ without requiring its fitting curves to pass through the points of the given input signal. This entails some inherent filtering, which is embedded in this scheme and makes it more robust against small random perturbations. Fig.~\ref{fig:AM_perf_b} shows the energy consumption for compression. For increasing values of $n^\star$ the compression ratio becomes smaller for all schemes, but their energy expenditures substantially differ. Notably, the excellent compression capabilities of PR are counterbalanced by its demanding requirements in terms of energy. MA-AR and PLAMLiS also require a quite large amount of processing energy, although this is almost one order of magnitude smaller than that of PR. LTC and E-PLAMLiS have the smallest energy consumption among all schemes. We now discuss the dependence of the computational complexity (which is strictly related to the energy spent for compression) on $n^\star$. LTC encodes the input signal $x(n)$ incrementally, starting from the first sample and adding one sample at a time. Thus, the number of operations that it performs only weakly depends on the correlation length and, in turn, the energy that it spends for compression is almost constant with varying $n^\star$. E-PLAMLiS takes advantage of the increasing correlation length: as the temporal correlation increases, this method has to perform fewer ``divide and reiterate'' steps, so the number of operations required gets smaller and, consequently, so does the energy spent for compression. For the remaining methods the complexity grows with $n^\star$. For PLAMLiS, this is due to the first step of the algorithm, where for each point the longest segment that respects the given error tolerance has to be found, see Section~\ref{sec:compression_methods}. When $x(n)$ is highly correlated, these segments become longer and PLAMLiS has to check the tolerance constraint a large number of times for each of the $N$ samples of $x(n)$. For MA-AR and PR, every time a new sample is added to a model (autoregressive for the former and polynomial for the latter), the model must be updated and the error tolerance constraint has to be checked. These tasks have a complexity that grows with the square of the length of the current model.
Increasing the correlation length of the input time series also increases the length of the models, leading to smaller compression ratios and, in turn, a higher energy consumption.\\ \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_2a.tex}}% \label{fig:FFT_perf_a} } \subfigure[]{% \scalebox{0.68}{\input{fig_2b.tex}}% \label{fig:FFT_perf_b} } \end{center} \caption{(a) $\eta$ {\it vs} Correlation Length $n^\star$ and (b) $\eta$ {\it vs} Energy consumption for compression for the Fourier-based methods for fixed $\varepsilon=4 \sigma_{\rm noise}$.} \label{fig:FFT_perf} \end{figure*} \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_3a.tex}}% \label{fig:cr_toten_WSN} } \subfigure[]{% \scalebox{0.68}{\input{fig_3b.tex}}% \label{fig:cr_toten_UWN} } \end{center} \caption{Compression Ratio $\eta$ {\it vs} Total Energy Consumption for the two single-hop scenarios considered: (a) WSN and (b) UWN.} \label{fig:cr_toten} \end{figure*} \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_4a.tex}}% \label{fig:eg_cl_WSN} } \subfigure[]{% \scalebox{0.68}{\input{fig_4b.tex}}% \label{fig:eg_cl_UWN} } \end{center} \caption{Energy Gain {\it vs} Correlation Length $n^\star$ for the two single-hop scenarios considered: (a) WSN and (b) UWN.} \label{fig:eg_cl} \end{figure*} \noindent \textbf{Fourier-based Methods:} we now analyze the performance of the Fourier-based compression schemes of Section~\ref{sec:compression_methods}. We consider the same simulation setup as above. Fig.~\ref{fig:FFT_perf_a} shows that also with Fourier-based methods the compression performance improves with increasing $n^\star$. The methods that perform best are FFT Windowed, FFT-LPF Windowed and DCT-LPF, which achieve very small compression ratios, e.g., $\eta$ is around $10^{-2}$ for $n^\star \geq 300$. Conversely, FFT and FFT-LPF, due to their edge discontinuity problem (see Section~\ref{sec:compression_methods}), need to encode more coefficients to meet the prescribed error tolerance constraint, and thus their compression ratio is higher, i.e., around $10^{-1}$. The energy cost of compression is reported in Fig.~\ref{fig:FFT_perf_b}, where $n^\star$ is varied as an independent parameter. The compression cost for these schemes is given by a first contribution, which represents the energy needed to evaluate the FFT/DCT of the input signal $x(n)$. Then, there is a second contribution, which depends on the number of transformation coefficients that are picked. Specifically, a decreasing $n^\star$ means that the signal is less correlated and, in this case, more coefficients are to be considered to meet a given error tolerance. Further, for each of them, an inverse transform has to be evaluated to check whether an additional coefficient is required. This leads to a decreasing computational cost for increasing $n^\star$. As a last observation, we note that Fourier-based methods achieve the best performance in terms of compression ratio among all schemes of Figs.~\ref{fig:AM_perf_b} and~\ref{fig:FFT_perf_b} (DCT-LPF is the best performing algorithm), whereas PLA schemes give the best performance in terms of energy consumption for compression (LTC is the best among them). \subsection{Application Scenarios} \label{sub:application_scenarios} As discussed above, we evaluated the selected compression methods considering the energy consumed for transmission by typical radios in Wireless Sensor Networks (WSN) and Underwater Networks (UWN).
In the following, we discuss the performance for these application scenarios in single- as well as multi-hop networks.\\ \noindent \textbf{Single-hop Performance:} Fig.~\ref{fig:cr_toten} shows the performance in terms of compression ratio $\eta$ {\it vs} total energy consumption for a set of compression methods when applied in the two selected application scenarios. PLAMLiS is not considered, as its performance is always dominated by E-PLAMLiS, and we only show the performance of the best Fourier-based schemes. In both graphs the large white dot represents the case where no compression is applied to the signal, which is entirely sent to the gathering node. Note that energy savings can only be obtained in those cases where the total energy lies to the left of the no-compression case. In the WSN scenario, the computational energy is comparable to the energy spent for transmission; thus, only LTC and E-PLAMLiS can achieve some energy savings (see Fig.~\ref{fig:cr_toten_WSN}). All the other compression methods entail a large number of operations and, in turn, perform worse than the no-compression case in terms of overall energy expenditure. For the UWN scenario, the energy spent for compression is always a negligible fraction of the energy spent for transmission. For this reason, every considered method provides some energy savings, which for PR and the Fourier-based methods can be substantial. \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_6a.tex}}% \label{fig:multihop_WSN} } \subfigure[]{% \scalebox{0.68}{\input{fig_6b.tex}}% \label{fig:multihop_UWN} } \end{center} \caption{Energy Gain {\it vs} number of hops for the two multi-hop scenarios considered: (a) WSN and (b) UWN.} \label{fig:multihop} \end{figure*} \begin{figure*}[t] \begin{center} \subfigure[]{% \scalebox{0.68}{\input{fig_ltc_err_fit.tex}}% \label{fig:fitting_a} } \subfigure[]{% \scalebox{0.68}{\input{fig_dct_err_fit.tex}}% \label{fig:fitting_b} } \end{center} \caption{Fitting functions $\xi(n^\star,\eta)$ {\it vs} experimental results: (a) LTC, (b) DCT-LPF.} \label{fig:fitting} \end{figure*} The total energy gain, defined as the ratio between the energy spent for transmission in the case with no compression and the total energy spent for compression and transmission using the selected compression techniques, is shown in Fig.~\ref{fig:eg_cl}. In the WSN scenario, i.e., Fig.~\ref{fig:eg_cl_WSN}, the method that offers the highest energy gain is LTC, although other methods such as DCT-LPF can achieve better compression performance (see Fig.~\ref{fig:cr_toten_WSN}). Note that in this scenario the total energy is highly influenced by the computational cost. Thus, the most lightweight methods, such as LTC and E-PLAMLiS, perform best. In the UWN case, whose results are shown in Fig.~\ref{fig:eg_cl_UWN}, the computational cost is instead negligible with respect to the energy spent for transmission. As a consequence, the energy gain is mainly driven by the achievable compression ratio, and the highest energy gain is obtained with DCT-LPF. In this scenario, PR, which is computationally demanding, can lead to large energy savings too, whereas the energy gain that can be obtained with more lightweight schemes, such as LTC, is quite limited.\\ \noindent \textbf{Multi-hop Performance:} in Fig.~\ref{fig:multihop} we focus on multi-hop networks, and evaluate whether further gains are possible when the compressed information has to travel multiple hops to reach the data gathering point.
Both the WSN and UWN scenarios are considered. In this case, both transmitting and receiving energy are accounted for at each intermediate relay node. Only LTC and DCT-LPF are shown, as these are the two methods that perform best in the WSN and UWN scenarios, respectively. Their performance is computed by varying the error tolerance $\varepsilon \in \{ 3\sigma_{\rm noise},4\sigma_{\rm noise},5\sigma_{\rm noise} \}$, whereas the correlation length is fixed to $n^\star = 300$. For the WSN scenario the energy gain increases with the number of hops for both compression schemes. As we have already discussed, in this case the energy spent for compression at the source node is comparable to the energy spent for transmission. The compression cost (compression energy) is only incurred at the source node, whereas each additional relay node only needs to send the compressed data. This leads to an energy gain that increases with the number of hops involved. We also note that DCT-LPF is not energy efficient in single-hop scenarios, but it can actually provide some energy gains when the number of hops is large enough (e.g., larger than $2$ for $\varepsilon \in \{ 4\sigma_{\rm noise},5\sigma_{\rm noise} \}$, see Fig.~\ref{fig:multihop_WSN}). Conversely, in the UWN scenario the energy spent for compression is a negligible fraction of the energy spent for transmission. Hence, the overall energy gain over multiple hops is nearly constant and equal to the energy savings achieved over the first hop. \subsection{Numerical Fittings} \label{sec:numerical_fittings} In the following, we provide closed-form formulas that accurately relate the achievable compression ratio $\eta$ to the relative error tolerance $\xi$ and the computational complexity $N_c$, which is expressed in terms of the number of clock cycles per bit needed to compress the input signal $x(n)$. These fittings have been computed for the best compression methods, namely, LTC and DCT-LPF. Note that until now we have been thinking of $\eta$ as a performance measure which depends on the chosen error tolerance $\varepsilon=\xi \sigma_{\rm noise}$. This amounts to considering $\xi$ as an input parameter for the compression algorithm. In the following, we approximate the mathematical relationship between $\eta$ and $\xi$ by conversely thinking of $\xi$ as a function of $\eta$, which is now our input parameter. $N_c$ can likewise be expressed as a function of $\eta$. We found these relationships through numerical fitting, running extensive simulations with synthetic signals. The relative error tolerance $\xi$ can be related to the compression ratio $\eta$ through the following formulas: \begin{equation} \xi (n^\star, \eta) = \begin{cases} \displaystyle \frac{p_1\eta^2 + p_2\eta + p_3}{\eta + q_1} & \textrm{LTC} \\ \displaystyle \frac{p_1\eta^4 + p_2\eta^3 + p_3\eta^2 + p_4\eta + p_5}{\eta + q_1} & \textrm{DCT-LPF} \, , \end{cases} \label{eq:fitting_xi} \end{equation} where the fitting parameters $p_1,p_2,p_3,p_4,p_5$ and $q_1$ depend on the correlation length $n^\star$ and are given in Table~\ref{tab:fitting_xi} for LTC and DCT-LPF. These fitting formulas have been validated against real-world signals measured by the environmental monitoring WSN testbed deployed on the ground floor of the Department of Information Engineering (DEI), University of Padova, Italy~\cite{Crepaldi-07}. This dataset consists of temperature and humidity measures, sensed with a sampling interval of $1$ minute for $6$ days.
Correlation lengths are $n^\star_T = 563$ and $n^\star_H = 355$ for the temperature and humidity signals, respectively. The empirical relationships of Eq.~(\ref{eq:fitting_xi}) are shown in Figs.~\ref{fig:fitting_a} and~\ref{fig:fitting_b} through solid and dashed lines, whereas the markers indicate the performance obtained applying LTC and DCT-LPF to the considered real datasets. As can be noted from these plots, although the numerical fitting was obtained for synthetic signals, Eq.~(\ref{eq:fitting_xi}) closely represents the actual tradeoffs. Also, with decreasing $n^\star$ the curves relating $\xi$ to $\eta$ remain nearly unchanged in terms of functional shape but are shifted toward the right. Finally, we note that the dependence on $n^\star$ is particularly pronounced at small values of $n^\star$, whereas the curves tend to converge for increasing correlation length (larger than $110$ in the figure). \begin{table*}[t] \centering \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}} c | c | c c c c c c} \toprule Compression & \multirow{2}{*}{$n^\star$} & \multicolumn{6}{c}{Fitting coefficients} \\ Method & & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $p_5$ & $q_1$ \\ \midrule \multirow{7}{*}{LTC} & $10 $ & $-0.35034$ & $0.27640$ & $0.92834$ & -- & -- & $-0.15003$ \\ & $20 $ & $-0.51980$ & $0.86851$ & $0.31368$ & -- & -- & $-0.09245$ \\ & $50 $ & $-0.80775$ & $1.38842$ & $0.17465$ & -- & -- & $-0.03705$ \\ & $80 $ & $-0.85691$ & $1.45560$ & $0.18208$ & -- & -- & $-0.02366$ \\ & $110$ & $-0.86972$ & $1.46892$ & $0.19112$ & -- & -- & $-0.01736$ \\ & $290$ & $-0.97242$ & $1.61970$ & $0.17280$ & -- & -- & $-0.00747$ \\ & $500$ & $-1.03702$ & $1.70305$ & $0.17466$ & -- & -- & $ 0.00267$ \\ \midrule \multirow{7}{*}{DCT-LPF} & $10 $ & $ 2.05351$ & $-12.70381$ & $14.49624$ & $-4.52198$ & $ 0.82292$ & $-0.16165$ \\ & $20 $ & $-0.92752$ & $-3.07506$ & $ 3.07560$ & $ 1.06902$ & $ 0.02898$ & $-0.09025$ \\ & $50 $ & $-1.90344$ & $-0.17491$ & $-0.13500$ & $ 2.43821$ & $-0.03826$ & $-0.03929$ \\ & $80 $ & $-2.59629$ & $ 1.41404$ & $-1.40970$ & $ 2.81971$ & $-0.04122$ & $-0.02667$ \\ & $110$ & $-2.57150$ & $ 1.43655$ & $-1.51646$ & $ 2.87138$ & $-0.02747$ & $-0.01913$ \\ & $290$ & $-3.43806$ & $ 3.17964$ & $-2.67444$ & $ 3.13226$ & $-0.01531$ & $-0.00848$ \\ & $500$ & $-3.99007$ & $ 4.17811$ & $-3.22636$ & $ 3.22590$ & $-0.01102$ & $-0.00560$ \\ \bottomrule \end{tabular*} \caption{Fitting coefficients for $\xi(n^\star,\eta)$.} \label{tab:fitting_xi} \end{table*} For the computational complexity, we found that $N_c$ scales linearly with $\eta$ for both LTC and DCT-LPF. Hence, $N_c$ can be expressed through a polynomial as follows: $$ N_c (n^\star, \eta) = \alpha \eta + \gamma n^\star + \beta \; . $$ $N_c$ exhibits a linear dependence on both $n^\star$ and $\eta$; the fitting coefficients are shown in Table~\ref{tab:fitting_Nc}. \begin{table}[h!] \centering \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}} c c c c} \toprule Compression & \multicolumn{3}{c}{Fitting coefficients} \\ Method & $\alpha$ & $\beta$ & $\gamma$ \\ \midrule LTC & $16.1$ & $105.4$ & $3.1 \cdot 10^{-16}$ \\ DCT-LPF & $48.1 \cdot 10^3$ & $82.3$ & $-2 \cdot 10^{-13}$ \\ \bottomrule \end{tabular*} \caption{Fitting coefficients for $N_c(n^\star,\eta)$.} \label{tab:fitting_Nc} \end{table} Note that the dependence on $n^\star$ is much weaker than that on $\eta$ and for practical purposes can be neglected without loss of accuracy.
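As an example of how the fittings are used, the sketch below (ours) evaluates the LTC branch of Eq.~(\ref{eq:fitting_xi}) with the $n^\star=500$ coefficients of Table~\ref{tab:fitting_xi}:

\begin{verbatim}
def xi_ltc(eta, p1, p2, p3, q1):
    """Relative error tolerance xi(n*, eta) for LTC, Eq. (fitting_xi)."""
    return (p1 * eta**2 + p2 * eta + p3) / (eta + q1)

# LTC coefficients for n* = 500 from Table tab:fitting_xi
print(xi_ltc(0.5, -1.03702, 1.70305, 0.17466, 0.00267))  # ~1.53
\end{verbatim}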
The weak dependence of $N_c$ on $n^\star$ can be explained as follows. For DCT-LPF there is a one-to-one mapping between any target compression ratio and the number of DCT coefficients that are to be sent to achieve this target performance (the computational complexity is directly related to this number of coefficients). Note that, differently from Fig.~\ref{fig:FFT_perf}, this reasoning entails compressing the data without fixing the error tolerance $\varepsilon$ beforehand. For LTC, the dominating term in the total number of operations performed is the one related to $\eta$, as this term is directly related to the number of segments that are to be processed. For this reason, in the remainder of this section we consider the simplified relationship: \begin{equation} N_c (\eta) = \alpha \eta + \beta \; . \label{eq:Nc_simplified} \end{equation} The accuracy of Eq.~(\ref{eq:Nc_simplified}) is verified in Fig.~\ref{fig:Nc}, where we plot our empirical approximations against the results obtained for the real-world signals described above. The overall energy consumption is obtained as $N_b(x) N_c(\eta) E_0$.\\ \begin{figure}[t] \begin{center} \scalebox{0.68}{\input{fig_fit_b.tex}} \end{center} \caption{Fitting functions $N_c(\eta)$ {\it vs} experimental results.} \label{fig:Nc} \end{figure} \noindent \textbf{Tradeoffs:} in the following, we use the above empirical formulas to generalize our results to any processing and transmission technology, by separating out technology-dependent and algorithm-dependent terms. Specifically, a compression method is energy efficient when the overall cost for compression ($E_c(x)$) and transmission of the compressed data ($E_{Tx}(\hat x)$) is strictly smaller than the cost that would be incurred in transmitting $x(n)$ uncompressed ($E_{Tx}(x)$). Mathematically, $E_c(x) + E_{Tx}(\hat x) < E_{Tx}(x)$. Dividing both sides of this inequality by $E_{Tx}(x)$ and rearranging the terms leads to: $$ \frac{E_{Tx}(x)}{E_c(x)} = \frac{E^\prime_{Tx} N_b(x)}{E_0 N_c N_b(x)} > \frac{1}{1-\eta} \; , $$ where the energy for transmission $E_{Tx}(x)$ is expressed as the product of the energy expenditure for the transmission of one bit, $E_{Tx}^\prime$, and the number of bits of $x(n)$, $N_b(x)$. The energy for compression is decomposed into the product of three terms: 1) the energy spent by the micro-controller in a clock cycle, $E_0$; 2) the number of clock cycles performed by the compression algorithm per (uncompressed) bit of $x(n)$, $N_c$; and 3) the number of bits composing the input signal $x(n)$, $N_b(x)$. With these energy costs and the above fitting Eq.~(\ref{eq:Nc_simplified}) for $N_c$, we can rewrite the above inequality so that the quantities that depend on the selected hardware architecture appear on the left-hand side, leaving those that depend on algorithmic aspects on the right-hand side. The result is: \begin{equation} \frac{E^\prime_{Tx}}{E_0} > \frac{N_c (\eta)}{1-\eta} = \frac{\alpha \eta + \beta}{1-\eta}\; , \label{eq:tradeoff} \end{equation} where $\alpha$ and $\beta$ are the algorithm-dependent fitting parameters indicated in Table~\ref{tab:fitting_Nc}. Eq.~(\ref{eq:tradeoff}) can be used to assess whether a compression scheme is suitable for a specific device architecture. A usage example is shown in Fig.~\ref{fig:tradeoff}. In this graph, the curves with markers are obtained by plotting the right-hand side of Eq.~(\ref{eq:tradeoff}) for $\eta < 1$, whereas the lines refer to the expression on the left-hand side of Eq.~(\ref{eq:tradeoff}) for our reference scenarios, i.e., WSN (solid line) and UWN (dashed line).
In the WSN scenario, $E_{Tx}^\prime = 230$~nJ for the selected CC2420 radio, whereas for the TI MSP430 micro-controller we have $E_0 = 0.726$~nJ, so that their ratio is $E_{Tx}^\prime/E_0 \simeq 316$. The graph indicates that, in this case, DCT-LPF is inefficient for any value of $\eta$, whereas LTC provides energy savings for $\eta \le 0.6$; using the function $\xi(n^\star,\eta)$ for LTC, this threshold can be translated into the corresponding (expected) error performance. Note that the knowledge of $n^\star$ is needed for this last evaluation. Conversely, for the UWN scenario, where $E_{Tx}^\prime = 10$~mJ and $E_0 = 0.726$~nJ, both DCT-LPF and LTC curves lie below the hardware-dependent ratio $E_{Tx}^\prime/E_0 \simeq 13\cdot 10^6$, indicating that energy savings are achievable by both schemes for almost all $\eta$. These results can be generalized to any other device technology by comparing the right-hand side of Eq.~(\ref{eq:tradeoff}) against the corresponding ratio $E_{Tx}^\prime/E_0$ and checking whether Eq.~(\ref{eq:tradeoff}) holds. \begin{figure}[t] \begin{center} \scalebox{0.68}{\input{fig_toff.tex}} \end{center} \caption{Energy savings assessment {\it vs} $\eta$ and hardware architecture.} \label{fig:tradeoff} \end{figure}
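For concreteness, the following Python sketch (ours; the helper name is illustrative) evaluates the condition of Eq.~(\ref{eq:tradeoff}) with the LTC coefficients of Table~\ref{tab:fitting_Nc} and the two hardware ratios above, reproducing the behavior of Fig.~\ref{fig:tradeoff}: savings in the WSN scenario up to $\eta \approx 0.6$, and savings in the UWN scenario throughout.

\begin{verbatim}
# Energy-saving test of Eq. (tradeoff):
#   E'_Tx / E_0 > (alpha*eta + beta) / (1 - eta).
def saves_energy(e_tx_bit, e_0, alpha, beta, eta):
    return e_tx_bit / e_0 > (alpha * eta + beta) / (1.0 - eta)

alpha, beta = 16.1, 105.4   # LTC coefficients, N_c fitting table
for eta in (0.2, 0.4, 0.6, 0.8):
    wsn = saves_energy(230e-9, 0.726e-9, alpha, beta, eta)  # CC2420+MSP430
    uwn = saves_energy(10e-3,  0.726e-9, alpha, beta, eta)  # acoustic modem
    print("eta=%.1f  WSN:%s  UWN:%s" % (eta, wsn, uwn))
# -> WSN: True for eta <= 0.6, False at 0.8; UWN: True for all four.
\end{verbatim}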
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this paper, all groups considered are finite, and all graphs considered are finite, undirected and simple. Let $\Gamma$ be a graph with vertex set $V(\Gamma)$ and edge set $E(\Gamma)$, and let $t$ be a positive integer. A subset $C$ of $V(\Gamma)$ is called \cite{Big, Kr86} a \emph{perfect $t$-code} in $\Gamma$ if every vertex of $\Gamma$ is at distance no more than $t$ to exactly one vertex in $C$, where the \emph{distance} in $\Gamma$ between two vertices is the length of a shortest path between the two vertices or $\infty$ if there is no path in $\Gamma$ joining them. A perfect $1$-code is usually called a {\em perfect code}. Equivalently, a subset $C$ of $V(\Gamma)$ is a perfect code in $\Gamma$ if $C$ is an independent set of $\Gamma$ and every vertex in $V(\Gamma) \setminus C$ has exactly one neighbor in $C$. A subset $C$ of $V(\Gamma)$ is said to be a \emph{total perfect code} \cite{Zhou2016} in $\Gamma$ if every vertex of $\Gamma$ has exactly one neighbor in $C$. It is obvious that a total perfect code in $\Gamma$ induces a matching in $\Gamma$ and therefore has even cardinality. In graph theory, a perfect code in a graph is also called an \emph{efficient dominating set} \cite{DeS} or \emph{independent perfect dominating set} \cite{Le}, and a total perfect code is called an \emph{efficient open dominating set} \cite{HHS}. The concept of perfect $t$-codes in graphs was first introduced by Biggs \cite{Big} as a generalization of the classical concept of a \emph{perfect $t$-error-correcting code} in coding theory \cite{Heden1,HK18,Va75,MS77}. For a set $A$ (usually with an algebraic structure such as a group, ring, or field), we use $A^{n}$ to denote the $n$-fold Cartesian product of $A$. In coding theory, $A$ is called an \emph{alphabet} and elements in $A^{n}$ are called \emph{words} of length $n$ over $A$. A \emph{code} $C$ over an alphabet $A$ is simply a subset of $A^{n}$, and every word in $C$ is called a \emph{codeword}. The Hamming distance of two words in $A^{n}$ is the number of positions in which they differ. A code $C$ over $A$ is called a \emph{perfect $t$-error-correcting Hamming code} if every word in $A^{n}$ is at Hamming distance no more than $t$ to exactly one codeword of $C$. A \emph{perfect $t$-error-correcting Lee code} over $A$ is defined in a similar way when $A$ is the ring $\mathbb{Z}_m$ of integers $\pmod m$, where the Lee distance of two words $x=(x_{1},x_{2},\cdots,x_{n}),y=(y_{1},y_{2},\cdots,y_{n})\in \mathbb{Z}_{m}^{n}$ is defined as follows: $d_{L}(x,y)=\sum\limits_{i=1}^{n}\min(|x_{i}-y_{i}|,m-|x_{i}-y_{i}|)$. Recall that the \emph{Hamming graph} $H(n,m)$ is the Cartesian product of $n$ copies of the complete graph $K_m$ and the \emph{grid-like graph} $L(n, m)$ is the Cartesian product of $n$ copies of the $m$-cycle $C_{m}$. It is obvious that the perfect $t$-error-correcting Hamming codes over an alphabet of cardinality $m$ are precisely the perfect $t$-codes in $H(n,m)$. Similarly, the perfect $t$-error-correcting Lee codes over $\mathbb{Z}_m$ ($m\geq3$) are precisely the perfect $t$-codes in $L(n, m)$.
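As a small, self-contained illustration of these definitions, the following Python sketch (ours; the function names are illustrative) computes the Lee distance and verifies that $\{0,3,6\}$ is a perfect $1$-code in $L(1,9)$, that is, in the $9$-cycle $C_9$.

\begin{verbatim}
from itertools import product

def lee_distance(x, y, m):
    # Lee distance between two words x, y of Z_m^n.
    return sum(min(abs(a - b), m - abs(a - b)) for a, b in zip(x, y))

def is_perfect_t_code(code, n, m, t):
    # True iff every word of Z_m^n is within Lee distance t
    # of exactly one codeword.
    return all(sum(1 for c in code if lee_distance(w, c, m) <= t) == 1
               for w in product(range(m), repeat=n))

# {0, 3, 6} is a perfect 1-code in L(1, 9), i.e. in the 9-cycle:
print(is_perfect_t_code([(0,), (3,), (6,)], n=1, m=9, t=1))   # True
\end{verbatim}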
A graph $\Gamma$ is called $G$-\emph{vertex-transitive} if $G$ is a subgroup of $\hbox{\rm Aut}(\Gamma)$ acting transitively on $V(\Gamma)$. In particular, a $G$-\emph{vertex-transitive} graph is called a \emph{Cayley graph} on $G$ if $G$ acts \emph{freely} on the vertex set (i.e., nonidentity elements fix no vertex). It is well known and easy to check that both $K_m$ and $C_m$ are Cayley graphs on the cyclic group $\mathbb{Z}_m$. Therefore $H(n,m)$ and $L(n,m)$ are both Cayley graphs on the group $\mathbb{Z}_m^n$. Thus perfect $t$-codes in Cayley graphs are generalizations of perfect $t$-error-correcting Hamming codes or Lee codes. Perfect codes in Cayley graphs have received considerable attention in recent years; see \cite[Section 1]{HXZ18} for a brief survey and \cite{DSLW16, FHZ, Ta13, Z15,ZZ2021} for a few recent papers. In particular, perfect codes in Cayley graphs which are subgroups of the underlying groups are especially interesting since they are generalizations of perfect linear codes \cite{Va75} in the classical setting. Another interesting avenue of research is to study when a given subset of a group is a perfect code in some Cayley graph of the group. In this regard the following concepts were introduced by Huang et al. in \cite{HXZ18}: A subset $C$ of a group $G$ is called a {\em (total) perfect code} of $G$ if there exists a Cayley graph of $G$ which admits $C$ as a (total) perfect code; a (total) perfect code of $G$ which is also a subgroup of $G$ is called a {\em subgroup (total) perfect code} of $G$. Huang et al. \cite{HXZ18} established a necessary and sufficient condition for the normal subgroups of a given group to be subgroup (total) perfect codes, and proved that every normal subgroup of a group of odd order or odd index is a subgroup perfect code. Ma et al. \cite{MWWZ19} proved that all subgroups of a group are subgroup perfect codes if and only if this group does not contain elements of order $4$. Very recently, Zhang and Zhou \cite{ZZ2021} generalized several results about normal subgroups in \cite{HXZ18} to general subgroups, and in particular they proved that every subgroup of a group of odd order or odd index is a subgroup perfect code. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. Somewhat surprisingly, there are very few known results on perfect codes in vertex-transitive graphs in the literature. This motivates us to write the present paper. It is well known that a graph is $G$-vertex-transitive if and only if it can be represented as a coset graph $\hbox{\rm Cos}(G,H,U)$ (see Section \ref{sec:pre} for the details). To study perfect codes in vertex-transitive graphs, we generalize the concept of subgroup (total) perfect code of a finite group as follows: Given a finite group $G$ and a subgroup $H$ of $G$, a subgroup $A$ of $G$ containing $H$ is called a \emph{subgroup (total) perfect code of the pair $(G,H)$} if there exists a coset graph $\hbox{\rm Cos}(G,H,U)$ such that the set consisting of left cosets of $H$ in $A$ is a (total) perfect code in $\hbox{\rm Cos}(G,H,U)$. In this paper, we give a necessary and sufficient condition for a subgroup $A$ of $G$ containing $H$ to be a (total) perfect code of the pair $(G,H)$ and generalize a few known results on subgroup (total) perfect codes of groups. The rest of the paper is organized as follows. In Section \ref{sec:pre}, we recall the definition of coset graph and give a characterization of the relationship between subgroup perfect codes and subgroup total perfect codes of a pair $(G,H)$. In Section \ref{sec:pc}, we prove that $A$ is a perfect code of a pair $(G,H)$ if and only if there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$ (Theorem \ref{lt}). Based on Theorem \ref{lt}, we generalize a few results about subgroup perfect codes in \cite{ZZ2021}. In Section \ref{sec:tpc}, we deduce a few results on total perfect codes which are parallel to some results about perfect codes in Section \ref{sec:pc}. In Section \ref{sec:ep}, we construct several examples and propose a few problems for further research. In particular, we show that $S_{n-1}$ is a perfect code of $(S_{n},S_{3})$ for every positive integer $n\geq5$.
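We remark that the transversal condition of Theorem \ref{lt} is easy to test by computer for small groups. As an illustration, the following Python sketch (ours; permutations are encoded $0$-indexed as image tuples, and the helper names are illustrative) checks the condition $XH=HX^{-1}$ for the left transversal of $S_{4}$ in $S_{5}$ constructed in Example \ref{345} of Section \ref{sec:ep}, with $H=S_{3}$.

\begin{verbatim}
from itertools import permutations

def mul(p, q):   # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):      # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

S5 = list(permutations(range(5)))
H = [p for p in S5 if p[3] == 3 and p[4] == 4]   # S_3 (fixes 4 and 5)
A = [p for p in S5 if p[4] == 4]                 # S_4 (fixes 5)

# The transversal X from the Examples section, written 0-indexed:
# 1, (2 5 3), (1 3 5), (1 5 2 3), (4 5)(1 3 2).
X = [(0, 1, 2, 3, 4), (0, 4, 1, 3, 2), (2, 1, 4, 3, 0),
     (4, 2, 0, 3, 1), (2, 0, 1, 4, 3)]

# X is a left transversal of A in S_5 (pairwise distinct cosets xA)...
assert len({frozenset(mul(x, a) for a in A) for x in X}) == len(S5) // len(A)
# ...satisfying XH = HX^{-1}, so (S_4, S_3, X) is a perfect triple of S_5.
assert ({mul(x, h) for x in X for h in H}
        == {mul(h, inv(x)) for h in H for x in X})
\end{verbatim}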
\section{Preliminaries} \label{sec:pre} For a group $G$, we write $H\leq G$ to signify that $H$ is a subgroup of $G$, and we set $A/_{\ell}H:=\{aH\mid a\in A\}$ for every subset $A$ of $G$. For more group-theoretic terminology and notation used in the paper, please refer to \cite{KS2004}. The following proposition gives a nice way to represent vertex-transitive graphs. For the proof of this proposition, see \cite{Lor}. \begin{pro}\label{cos} Let $G$ be a group and $H\leq G$. Let $U$ be a union of some double cosets of $H$ in $G$ such that $H\cap U=\emptyset$ and $U^{-1}=U$. Define a graph $\Gamma=\hbox{\rm Cos}(G,H,U)$ as follows: the vertex set of $\Gamma$ is $G/_{\ell}H$, and two vertices $g_{1}H$ and $g_{2}H$ are adjacent if and only if $g_{1}^{-1}g_{2}\in U$. Then we have \begin{enumerate} \item $\Gamma$ is a well-defined graph and its valency is the number of left cosets of $H$ in $U$; \item $G$ acts transitively on the vertex set of $\Gamma$ by left multiplication and the kernel of this action is the core of $H$ in $G$; \item every vertex-transitive graph can be represented as $\hbox{\rm Cos}(G,H,U)$ for some $G$, $H$ and $U$. \end{enumerate} \end{pro} The graph $\Gamma=\hbox{\rm Cos}(G,H,U)$ defined in Proposition \ref{cos} is usually called a \emph{coset graph} on $G/_{\ell}H$. It is straightforward to check that the neighbourhood of $H$ in $\Gamma$ is $U/_{\ell}H$ and that $\Gamma$ is connected if and only if $G=\langle U\rangle$. \begin{defi} \label{defi} Let $G$ be a group, $H\leq G$ and $A\subseteq G$. If there exists a coset graph $\hbox{\rm Cos}(G,H,U)$ on $G/_{\ell}H$ admitting a (total) perfect code $A/_{\ell}H$, then $A$ is called a (total) perfect code of the pair $(G,H)$. If further $H\leq A\leq G$, then $A$ is called a subgroup (total) perfect code of the pair $(G,H)$. \end{defi} \begin{rem} It is obvious that $\hbox{\rm Cos}(G,1,U)$ is a Cayley graph on $G$. Thus $A$ is a subgroup (total) perfect code of $(G,1)$ if and only if it is a subgroup (total) perfect code of $G$. Therefore the concept of subgroup (total) perfect code of $(G,H)$ is a generalization of the concept of subgroup (total) perfect code of $G$. \end{rem} The following lemma gives a characterization of the relationship between subgroup perfect codes and subgroup total perfect codes of a pair $(G,H)$. \begin{lem} \label{totallem} Let $G$ be a group and $H\leq A\leq G$. Then $A$ is a total perfect code of $(G,H)$ if and only if $A$ is a perfect code of $(G,H)$ and there exists an element $x\in N_{A}(H)\setminus H$ such that $x^{2}\in H$. \end{lem} \begin{proof} $\Rightarrow$) Suppose that $A$ is a total perfect code of $(G,H)$. Then there exists a coset graph $\hbox{\rm Cos}(G,H,U)$ on $G/_{\ell}H$ such that $A/_{\ell}H$ is a total perfect code of $\hbox{\rm Cos}(G,H,U)$. In particular, $A/_{\ell}H$ induces a matching in $\hbox{\rm Cos}(G,H,U)$. Thus there exists $x\in A\setminus H$ such that $xH$ is the unique vertex in $A/_{\ell}H$ which is adjacent to $H$. By the definition of coset graph, $U$ is a union of some double cosets of $H$, $U^{-1}=U$ and $x\in U$. Therefore $hx,hx^{-1}\in U$ for every $h\in H$.
It follows that $hxH$ and $hx^{-1}H$ are both neighbours of $H$ in $\hbox{\rm Cos}(G,H,U)$. Note that $hxH,hx^{-1}H\in A/_{\ell}H$. By the uniqueness of $xH$, we have $hxH=hx^{-1}H=xH$. Therefore $(HxH)^{-1}=HxH=xH$. Set $W=U\setminus HxH$. Then $W^{-1}=W$ and $A/_{\ell}H$ is a perfect code of the coset graph $\hbox{\rm Cos}(G,H,W)$. Therefore $A$ is a perfect code of $(G,H)$. Since $x^{-1}H=xH$, we have $x^{2}\in H$. Recall that $x\in A\setminus H$. Since $hxH=xH$ for every $h\in H$, we get $x^{-1}hx\in H$ and it follows that $x\in N_{A}(H)\setminus H$. $\Leftarrow$) Suppose that $A$ is a perfect code of $(G,H)$ and there exists an element $x\in N_{A}(H)\setminus H$ such that $x^{2}\in H$. Then there exists a coset graph $\hbox{\rm Cos}(G,H,W)$ on $G/_{\ell}H$ such that $A/_{\ell}H$ is a perfect code of $\hbox{\rm Cos}(G,H,W)$. Since $x\in N_{A}(H)\setminus H$ and $x^{2}\in H$, we have $(HxH)^{-1}=HxH=xH=x^{-1}H$. Set $U=W\cup HxH$. Then $U^{-1}=U$ and $A/_{\ell}H$ is a total perfect code of $\hbox{\rm Cos}(G,H,U)$. Therefore $A$ is a total perfect code of $(G,H)$. \end{proof} \section{Subgroup perfect codes of $(G,H)$} \label{sec:pc} In this section, we deduce some general results about subgroup perfect codes of a pair $(G,H)$. Our first result below gives a necessary and sufficient condition for a subgroup $A$ of $G$ containing $H$ to be a perfect code of $(G,H)$. \begin{theorem} \label{lt} Let $G$ be a group and $H\leq A\leq G$. Then $A$ is a perfect code of $(G,H)$ if and only if there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$. \end{theorem} \begin{proof} $\Rightarrow$) Let $A$ be a perfect code of $(G,H)$. Then there exists a coset graph $\Gamma:=\hbox{\rm Cos}(G,H,U)$ such that $A/_{\ell}H$ is a perfect code of $\Gamma$. By the definition of coset graph, we get $H\cap U=\emptyset$ and $U^{-1}=U$. Assume $|G:A|=n$ and let $T=\{1,t_{1},\ldots,t_{n-1}\}$ be a left transversal of $A$ in $G$. Then $t_{i}\notin A$ for any $i\in \{1,\ldots,n-1\}$. Since $H\leq A$, we get $t_{i}^{-1}H\notin A/_{\ell} H$. Since $A/_{\ell}H$ is a perfect code of $\Gamma$, $t_{i}^{-1}H$ is adjacent to a unique vertex in $A/_{\ell}H$. Therefore there is a unique $a_{i}H\in A/_{\ell}H$ such that $t_{i}a_{i}\in U$. Set $X=\{1,t_{1}a_{1},\ldots,t_{n-1}a_{n-1}\}$. Then $X$ is a left transversal of $A$ in $G$ and $X\setminus\{1\}\subseteq U$. Since $U$ is a union of some double cosets of $H$ in $G$, we have $H(X\setminus\{1\})H\subseteq U$. We will further prove $U\subseteq(X\setminus\{1\})H$. Take an arbitrary $u\in U$. Since $T$ is a left transversal of $A$ in $G$, $u$ can be uniquely written as $u=ta$ where $t\in T$ and $a\in A$. It follows that $ta\in U$ and therefore $t^{-1}H$ and $aH$ are adjacent in $\Gamma$. Since $A/_{\ell}H$ is a perfect code of $\Gamma$, $A/_{\ell}H$ is an independent set of $\Gamma$. Therefore $t\notin A$. Since $T=\{1,t_{1},\ldots,t_{n-1}\}$, we have $t=t_{i}$ for some $i\in\{1,\ldots,n-1\}$. By the uniqueness of $a_{i}H$, we have $aH=a_{i}H$ and then $u=t_{i}a_{i}h$ for some $h\in H$. Therefore $u\in (X\setminus\{1\})H$ and it follows that $U\subseteq (X\setminus\{1\})H$. Now we have proved that $H(X\setminus\{1\})H\subseteq U\subseteq(X\setminus\{1\})H$. Therefore $U=H(X\setminus\{1\})H=(X\setminus\{1\})H$. Since $U^{-1}=U$, we have $H(X\setminus\{1\})^{-1}=U^{-1}=U=(X\setminus\{1\})H$ and it follows that $XH=HX^{-1}$. $\Leftarrow$) Let $X$ be a left transversal of $A$ in $G$ such that $XH=HX^{-1}$. 
Then $HXH=HHX^{-1}=HX^{-1}$, $(HXH)^{-1}=HX^{-1}H=XH$ and it follows that \begin{equation*} (HXH)^{-1}=HXH=XH=HX^{-1}. \end{equation*} Since $X$ is a left transversal of $A$ in $G$, $X\cap A$ contains a unique element, say $y$. Since $XH=HX^{-1}$, for every $h\in H$ there exists $w\in X$ such that $hy^{-1}\in wH$. Since $H\leq A$ and $hy^{-1}\in A$, we have $w\in A$. Therefore $w=y$ and it follows that $Hy^{-1}=yH$. Set $U:=H(X\setminus\{y\})H$. Since $(HXH)^{-1}=HXH=XH=HX^{-1}$ and $Hy^{-1}=yH$, we have $U^{-1}=U=(X\setminus\{y\})H=H(X\setminus\{y\})^{-1}$. Furthermore, $A\cap U=H\cap U=\emptyset$. In particular, we obtain a coset graph $\Gamma:=\hbox{\rm Cos}(G,H,U)$ on $G/_{\ell}H$. Since $A\cap U=\emptyset$, $a^{-1}b\notin U$ for any $a,b\in A$ and it follows that $A/_{\ell}H$ is an independent set of $\Gamma$. Now consider an arbitrary vertex $gH\in G/_{\ell}H$ with $g\notin A$. Since $X$ is a left transversal of $A$ in $G$, $g^{-1}$ can be uniquely written as $g^{-1}=xa^{-1}$ where $x\in X$ and $a\in A$. Since $g^{-1}\notin A$, we obtain $x\neq y$. Therefore $x\in U$, that is, $g^{-1}a\in U$. It follows that $aH$ is adjacent to $gH$ in $A/_{\ell}H$. If there is $b\in A$ such that $bH$ is also adjacent to $gH$ in $A/_{\ell}H$, then $g^{-1}b\in U$. Since $U=(X\setminus\{y\})H$, we have $g^{-1}b=zh$ for some $z\in X$ and $h\in H$. Therefore $g^{-1}=zhb^{-1}$. Note that $hb^{-1}\in A$. By the uniqueness of the factorization $g^{-1}=xa^{-1}$, we have $z=x$ and $a^{-1}=hb^{-1}$. Thus $aH=bH$ and it follows that $aH$ is the unique vertex in $A/_{\ell}H$ which is adjacent to $gH$. Therefore $A/_{\ell}H$ is a perfect code of $\Gamma$, that is, $A$ is a perfect code of $(G,H)$. \end{proof} If we replace the word `left' with `right' in Theorem \ref{lt}, this theorem still holds. Actually, we have the following theorem. \begin{theorem} \label{rt} Let $G$ be a group and $H \leq A \leq G$. Then $A$ is a perfect code of $(G,H)$ if and only if there exists a right transversal $Y$ of $A$ in $G$ such that $Y^{-1}H=HY$. \end{theorem} \begin{proof} By Theorem \ref{lt}, $A$ is a perfect code of $(G,H)$ if and only if there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$. Replacing $X^{-1}$ by $Y$, we have that $A$ is a perfect code of $(G,H)$ if and only if there exists a right transversal $Y$ of $A$ in $G$ such that $Y^{-1}H=HY$. \end{proof} Theorem \ref{lt} has several interesting corollaries. The first one below is obvious and we omit its proof. \begin{coro} \label{lrt} Let $G$ be a group and $H \leq A \leq G$. Let $X$ be a left transversal of $A$ in $G$. If $XH=HX^{-1}$, then $A/_{\ell}H$ is a perfect code of the coset graph $\hbox{\rm Cos}(G,H,U)$ where $U=H(X\setminus A)H$. \end{coro} \begin{coro} \label{conj} Let $G$ be a group and $H\leq A\leq G$. If $A$ is a perfect code of $(G,H)$, then for any $g \in G$, $g^{-1}Ag$ is a perfect code of $(G,g^{-1}Hg)$. \end{coro} \begin{proof} By the necessity of Theorem \ref{lt}, there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$. Set $Y=g^{-1}Xg$. Then $Y$ is a left transversal of $g^{-1}Ag$ in $G$ and $Yg^{-1}Hg=g^{-1}HgY^{-1}$. By the sufficiency of Theorem \ref{lt}, $g^{-1}Ag$ is a perfect code of $(G,g^{-1}Hg)$. \end{proof} \begin{coro} \label{sub} Let $G$ be a group and $H \leq A \leq L\leq G$. If $A$ is a perfect code of $(G,H)$, then $A$ is a perfect code of $(L,H)$. \end{coro} \begin{proof} Let $A$ be a perfect code of $(G,H)$. By Theorem \ref{lt}, $A$ has a left transversal $X$ in $G$ such that $XH=HX^{-1}$.
Therefore $G=XA$ and $|X\cap A|=1$. Set $Y=X \cap L$. Since $A \leq L\leq G$, we have $L=G\cap L= XA\cap L=(X\cap L)A=YA$ and $|Y\cap A|=|X\cap A|=1$. Therefore $Y$ is a left transversal of $A$ in $L$. Since $H \leq L$, we get $YH=(X\cap L)H=XH\cap LH=XH\cap L$. Since $XH=HX^{-1}$, it follows that $YH=XH\cap L=HX^{-1}\cap HL^{-1}=H(X^{-1}\cap L^{-1})=HY^{-1}$. Therefore, by Theorem \ref{lt}, $A$ is a perfect code of $(L,H)$. \end{proof} It is natural to consider the converse of Corollary \ref{sub}. The following theorem is a preliminary exploration in this direction. \begin{theorem} \label{KL} Let $G$ be a group admitting a normal subgroup $K$ and a subgroup $L$ such that $G=KL$ and $K\cap L=\{1\}$. Let $H\leq A\leq L$. Then $A$ is a perfect code of $(G,H)$ if and only if $A$ is a perfect code of $(L,H)$. \end{theorem} \begin{proof} The necessity follows from Corollary \ref{sub}. Now we prove the sufficiency. Suppose that $A$ is a perfect code of $(L,H)$. By Theorem \ref{lt}, there exists a left transversal $Y$ of $A$ in $L$ such that $YH=HY^{-1}$. Since $Y$ is a left transversal of $A$ in $L$, we have $L=YA$ and $|L|=|Y||A|$. Set $X=KY$. Since $G=KL$ and $K\cap L=\{1\}$, we have $G=KYA=XA$ and \begin{equation*} |G|=|KL|=|K||L|=|K||YA|=|K||Y||A|=|KY||A|=|X||A|. \end{equation*} Therefore $X$ is a left transversal of $A$ in $G$. Since $K$ is a normal subgroup of $G$ and $YH=HY^{-1}$, we have $XH=KYH=YHK=HY^{-1}K=HX^{-1}$. By Theorem \ref{lt}, $A$ is a perfect code of $(G,H)$. \end{proof} We use $S_{1}\dot{\cup}S_{2}$ to denote the union of two disjoint sets $S_{1}$ and $S_{2}$, and $\dot{\cup}_{i=1}^{m}S_{i}$ the union of pairwise disjoint sets $S_{1},\ldots,S_{m}$. The following theorem generalizes the necessary part of \cite[Theorem 3.1]{ZZ2021}. \begin{theorem} \label{necessity} Let $G$ be a group and $H\leq A \leq G$. If $A$ is a perfect code of $(G,H)$, then for any $g\in G$ either the left coset $gA$ contains an element $x$ such that $x^2\in b^{-1}Hb$ for some $b\in A$ or $A\{g,g^{-1}\}A=\dot{\cup}_{i=1}^{m}g_{i}A$ for some $g_{1},\ldots,g_{m}\in G$ where $m$ is an even integer. \end{theorem} \begin{proof} Suppose that $A$ is a perfect code of $(G,H)$. By Theorem \ref{lt}, $A$ has a left transversal $T$ in $G$ such that $TH=HT^{-1}$. Take an arbitrary $g\in G$. If $g\in A$, then $x\in gA$ and $x^{2}\in H$ for each $x\in H$. Now assume $g\in G\setminus A$. It is obvious that $A\{g,g^{-1}\}A$ is a disjoint union of some left cosets of $A$ in $G$. Suppose that $A\{g,g^{-1}\}A=\dot{\cup}_{i=1}^{m}g_{i}A$ for some $g_{1},\ldots,g_{m}\in G$ where $m$ is an odd integer. It suffices to prove that $gA$ contains an element $x$ such that $x^2\in b^{-1}Hb$ for some $b\in A$. Since $T$ is a left transversal of $A$ in $G$, there is a unique $x_i\in T$ such that $g_{i}A=x_{i}A$ for each $1\leq i\leq m$. Set $X=\{x_1, x_2, \ldots,x_m\}$. Then $A\{g,g^{-1}\}A=\dot{\cup}_{i=1}^{m}x_{i}A=XA$. Since $X\subseteq T$ and $H\leq A$, we get $XH=TH\cap XA$. Therefore $HX^{-1}=HT^{-1}\cap AX^{-1}$. Since $XA=A\{g,g^{-1}\}A=(A\{g,g^{-1}\}A)^{-1}=AX^{-1}$ and $TH=HT^{-1}$, we have $XH=HX^{-1}$ and it follows that $(HXH)^{-1}=HXH=XH$. Let $Y$ be a subset of $X$ of minimal cardinality such that $HXH=HYH$. Then $HyH\cap Hy'H=\emptyset$ for any pair of distinct elements $y,y'\in Y$. Since $(HYH)^{-1}=HYH$, we can set $Y=\{v_1, \ldots,v_k,w_1, \ldots,w_k,z_1, \ldots,z_\ell\}$ such that $(Hv_iH)^{-1}=Hw_iH$ and $(Hz_jH)^{-1}=Hz_jH$ for all $1\leq i\leq k$ and $1\leq j\leq \ell$.
Set $V_{i}=Hv_{i}H\cap X$, $W_{i}=Hw_{i}H\cap X$ and $Z_{j}=Hz_{j}H\cap X$ for all $1\leq i\leq k$ and $1\leq j\leq \ell$. Then $X=(\dot{\cup}_{i=1}^{k}V_{i})\dot{\cup}(\dot{\cup}_{i=1}^{k}W_{i})\dot{\cup} (\dot{\cup}_{j=1}^{\ell}Z_{j})$. Since $HXH=XH$, we have $HXH=(\dot{\cup}_{i=1}^{k}V_{i}H)\dot{\cup}(\dot{\cup}_{i=1}^{k}W_{i}H)\dot{\cup} (\dot{\cup}_{j=1}^{\ell}Z_{j}H)$. Then, since $V_{1}\subseteq Hv_{1}H$ and $(X\setminus V_1)\cap Hv_{1}H=\emptyset$, we have $Hv_{1}H=V_{1}H$. Similarly, $Hv_{i}H=V_{i}H$, $Hw_{i}H=W_{i}H$ and $Hz_{j}H=Z_{j}H$ for all $1\leq i\leq k$ and $1\leq j\leq \ell$. Since $(V_{i}H)^{-1}=(Hv_{i}H)^{-1}=Hw_{i}H=W_{i}H$, we have $|V_{i}H|=|W_{i}H|$ and it follows that $|V_{i}|=|W_{i}|$. Therefore $m=|X|=2(\sum_{i=1}^{k}|V_{i}|)+\sum_{j=1}^{\ell}|Z_{j}|$. Since $m$ is an odd integer, we have $\ell\neq0$. Note that the inequality $\ell\neq0$ ensures the existence of $z_1$. Since $Hz_{1}^{-1}H=(Hz_1H)^{-1}=Hz_1H$, we have $z_{1}^{-1}H=hz_{1}H$ for some $h\in H$. It follows that $(z_{1}h)^{2}\in H$. Since $Az_{1}A=Az_{1}^{-1}A$ and $z_1\in A\{g,g^{-1}\}A$, we have $Az_{1}A=AgA=Ag^{-1}A$. Therefore $z_1=bgc$ for some $b,c\in A$. Set $x=gchb$. Then $x\in gA$ and $x^{2}=gchbgchb=b^{-1}bgchbgchb=b^{-1}(z_{1}h)^{2}b\in b^{-1}Hb$. \end{proof} We leave it as an open problem whether the converse of Theorem \ref{necessity} holds. Now we give two corollaries of Theorem \ref{necessity}. \begin{coro} \label{notp} Let $G$ be a group and $H\leq A \leq G$. If there exists an element $x\in G\setminus A$ such that $x^{2}\in A$, $xA$ contains no element whose square is contained in a conjugate of $H$ in $A$ and $|A:A\cap xAx^{-1}|$ is an odd integer, then $A$ is not a perfect code of $(G,H)$. \end{coro} \begin{proof} Let $x$ be an element in $G\setminus A$ such that $x^{2}\in A$, $xA$ contains no element whose square is contained in a conjugate of $H$ in $A$ and $|A:A\cap xAx^{-1}|$ is an odd integer. Set $|A:A\cap xAx^{-1}|=m$. Since $axA=bxA$ if and only if $a^{-1}b\in xAx^{-1}$ for any $a,b\in A$, we have that $AxA=\dot{\cup}_{i=1}^{m}g_{i}A$ for some $g_{1},\ldots,g_{m}\in G$. Since $x^{2}\in A$, we have $A\{x,x^{-1}\}A=AxA=\dot{\cup}_{i=1}^{m}g_{i}A$. Since $m$ is an odd integer and $xA$ contains no element whose square is contained in a conjugate of $H$ in $A$, it follows from Theorem \ref{necessity} that $A$ is not a perfect code of $(G,H)$. \end{proof} \begin{coro} \label{normal} Let $G$ be a group, $H$ a subgroup of $G$ and $A$ a normal subgroup of $G$ such that $H \leq A \leq G$. If $A$ is a perfect code of $(G,H)$, then for any $x \in G$ with $x^2 \in A$ there exists $b \in A$ such that $(xb)^2 \in H$. \end{coro} \begin{proof} Let $A$ be a normal subgroup of $G$ and a perfect code of $(G,H)$. If $x\in A$, then $(xb)^2=1\in H$ where $b=x^{-1}\in A$. Now consider an arbitrary element $x\in G\setminus A$ with $x^2 \in A$. Since $A$ is normal in $G$, we have $A\{x,x^{-1}\}A=AxA=Ax^{-1}A=xA$. By Theorem \ref{necessity}, $xA$ contains an element $y$ satisfying $y^2\in b_{1}^{-1}Hb_1$ for some $b_1\in A$. Set $y=xa$ where $a\in A$. Since $A$ is normal in $G$, we have $x^{-1}b_1x\in A$. Set $b=x^{-1}b_1xab_{1}^{-1}$. Then $b\in A$. Since $(xb)^2=(b_1xab_{1}^{-1})^2=(b_1yb_{1}^{-1})^2=b_1y^2b_{1}^{-1}$ and $y^2\in b_{1}^{-1}Hb_1$, we have $(xb)^2\in H$. \end{proof} The following result is a generalization of \cite[Theorem 3.7 (i)]{ZZ2021}. \begin{theorem} \label{quotient} Let $G$ be a group and $H\leq A \leq G$. Let $N$ be a normal subgroup of $G$ which is contained in $A$.
If $A$ is a perfect code of $(G,H)$, then $A/N$ is a perfect code of $(G/N,H/N)$. \end{theorem} \begin{proof} Suppose that $A$ is a perfect code of $(G,H)$. By Theorem \ref{lt}, there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$. Since $N$ is a normal subgroup of $G$ and $N\leq A$, $X/N$ is a left transversal of $A/N$ in $G/N$. Since $XH=HX^{-1}$, we have $(X/N)(H/N)=XH/N=HX^{-1}/N=(H/N)(X^{-1}/N)$. By Theorem \ref{lt}, $A/N$ is a perfect code of $(G/N,H/N)$. \end{proof} \section{Subgroup total perfect codes of $(G,H)$} \label{sec:tpc} In this section, we deduce a few results on total perfect codes which are parallel to some results about perfect codes we obtained in Section \ref{sec:pc}. \begin{theorem} \label{totallt} Let $G$ be a group and $H\leq A\leq G$. Then $A$ is a total perfect code of $(G,H)$ if and only if there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$ and $X$ contains an element in $A\setminus H$. \end{theorem} \begin{proof} $\Rightarrow$) Suppose that $A$ is a total perfect code of $(G,H)$. By Lemma \ref{totallem}, $A$ is a perfect code of $(G,H)$ and there exists an element $x\in N_{A}(H)\setminus H$ such that $x^{2}\in H$. Then, by Theorem \ref{lt}, there exists a left transversal $Y$ of $A$ in $G$ such that $YH=HY^{-1}$. Since $x\in N_{A}(H)\setminus H$ and $x^{2}\in H$, we have $xH=H x^{-1}$. Since $Y$ is a left transversal of $A$, $Y\cap A$ contains a unique element, say $y$. If $y\in A\setminus H$, then we set $X=Y$. If $y\in H$, then we set $X=(Y\setminus \{y\})\cup\{x\}$. In both cases, we have $XH=HX^{-1}$ and $X$ contains an element in $A\setminus H$. $\Leftarrow$) Suppose that there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$ and $X$ contains an element in $A\setminus H$. By Theorem \ref{lt}, $A$ is a perfect code of $(G,H)$. Since $X$ is a left transversal of $A$, $X\cap A$ contains a unique element. Set $X\cap A=\{x\}$ and $T=X\setminus\{x\}$. Since $H\leq A$, both $xH$ and $Hx^{-1}$ are contained in $A$. Therefore $TH\cap xH=TH\cap Hx^{-1}=\emptyset$. Then, since $XH=HX^{-1}$, we get $xH=Hx^{-1}$. Thus $x^{2}\in H$ and $x\in N_{A}(H)$. By Lemma \ref{totallem}, $A$ is a total perfect code of $(G,H)$. \end{proof} Similar to Theorem \ref{lt}, the word `left' in Theorem \ref{totallt} can be replaced by `right'. The proof of the following theorem is the same as that of Theorem \ref{rt} and is therefore omitted. \begin{theorem} \label{totalrt} Let $G$ be a group and $H \leq A \leq G$. Then $A$ is a total perfect code of $(G,H)$ if and only if there exists a right transversal $Y$ of $A$ in $G$ such that $Y^{-1}H=HY$ and $Y$ contains an element in $A\setminus H$. \end{theorem} The following theorem is a parallel result to Corollary \ref{sub}. \begin{theorem} \label{totalsub} Let $G$ be a group and $H \leq A \leq L\leq G$. If $A$ is a total perfect code of $(G,H)$, then $A$ is a total perfect code of $(L,H)$. \end{theorem} \begin{proof} Let $A$ be a total perfect code of $(G,H)$. By Lemma \ref{totallem}, $A$ is a perfect code of $(G,H)$ and there exists an element $x\in N_{A}(H)\setminus H$ such that $x^{2}\in H$. Since $H \leq A \leq L\leq G$, it follows from Corollary \ref{sub} that $A$ is a perfect code of $(L,H)$. Then, since there exists an element $x\in N_{A}(H)\setminus H$ such that $x^{2}\in H$, Lemma \ref{totallem} ensures that $A$ is a total perfect code of $(L,H)$. \end{proof} The following theorem is the counterpart of Theorem \ref{KL} for total perfect codes.
We omit its proof, as it proceeds by an approach similar to that used for Theorem \ref{totalsub}. \begin{theorem} \label{totalKL} Let $G$ be a group admitting a normal subgroup $K$ and a subgroup $L$ such that $G=KL$ and $K\cap L=\{1\}$. Let $H\leq A\leq L$. Then $A$ is a total perfect code of $(G,H)$ if and only if $A$ is a total perfect code of $(L,H)$. \end{theorem} \section{Examples and problems} \label{sec:ep} In this section, we construct some examples and propose a few open problems. \begin{problem} Does the converse of Theorem \ref{necessity} hold? If it does not, what conditions should be added to make it true? \end{problem} For a normal subgroup $N$ of a given group $G$, it is straightforward to check that every coset graph $\hbox{\rm Cos}(G,N,U)$ is a Cayley graph on the quotient group $G/N$. Therefore, for every subgroup $A$ of $G$ containing $N$, $A$ is a perfect code of $(G,N)$ if and only if $A/N$ is a perfect code of $G/N$. By using this fact and Theorem \ref{KL}, we construct an infinite family of perfect codes as follows. \begin{exam} Let $G$ be the one-dimensional affine group over a finite field $F$, and let $L$ be the subgroup of $G$ consisting of the elements in $G$ fixing the additive identity of $F$. Let $H\leq A\leq L$. Suppose that either the index $|L:A|$ of $A$ in $L$ is odd or the index $|A:H|$ of $H$ in $A$ is odd. Then $A$ is a perfect code of $(G,H)$. \end{exam} \begin{proof} By \cite[Example 3.4.1]{DM1996}, $G$ is a Frobenius group, $L$ is a Frobenius complement of $G$ and $L$ is isomorphic to the cyclic group of order $|F|-1$. In particular, $H$ is normal in $L$ and the quotient group $L/H$ is cyclic. Since either $|L:A|$ or $|A:H|$ is odd, either $A/H$ is of odd order or $A/H$ is of odd index in $L/H$. By \cite[Corollary 2.8]{HXZ18}, $A/H$ is a perfect code in $L/H$. Therefore $A$ is a perfect code of $(L,H)$. Let $K$ be the Frobenius kernel of $G$. Then $G=KL$, $K\cap L=\{1\}$ and $K$ is normal in $G$. By Theorem \ref{KL}, $A$ is a perfect code of $(G,H)$. \end{proof} Let $G$ be a group and $H\leq A\leq G$. If there exists a left transversal $X$ of $A$ in $G$ such that $XH=HX^{-1}$, then we call $(A,H,X)$ a \emph{perfect triple} of $G$. By Theorem \ref{lt}, $A$ is a perfect code of $(G,H)$ if and only if there exists a subset $X$ of $G$ such that $(A,H,X)$ is a perfect triple of $G$. We propose the following problems for further research. \begin{problem} Given a group $G$ and its subgroup $H$, construct or classify the perfect triples $(A,H,X)$ of $G$. \end{problem} \begin{problem} Given a group $G$ and its subgroup $A$, classify the perfect triples $(A,H,X)$ of $G$. \end{problem} Let $S_n$ be the symmetric group on $\{1,2,\ldots,n\}$. For two positive integers $m$ and $n$ with $m<n$, we treat $S_{m}$ as the subgroup of $S_n$ consisting of the elements in $S_{n}$ fixing every number in $\{m+1,\ldots,n\}$. \begin{exam} \label{345} Set $X=\{1,(2,5,3),(1,3,5),(1,5,2,3),(4,5)(1,3,2)\}$. Then one can check that $(S_{4},S_{3},X)$ is a perfect triple of $S_{5}$. Therefore $S_{4}$ is a perfect code of $(S_{5},S_{3})$. \end{exam} \begin{pro} \label{symm} Let $n$ be a positive integer with $n\geq5$. Then $S_{n-1}$ is a perfect code of $(S_{n},S_{3})$. \end{pro} \begin{proof} We proceed by induction on $n$. By Example \ref{345}, the proposition is true for $n=5$. Suppose that the proposition is true for $n=k$ ($k\geq5$). It suffices to prove that the proposition is true for $n=k+1$.
Let $G$ be the subgroup of $S_{k+1}$ consisting of the elements in $S_{k+1}$ fixing $k$. Since $k\geq5$, we have $S_{3}< S_{k-1}< G$. Let $z$ be the involution $(k,k+1)$ in $S_{k+1}$. Then $z^{-1}S_{k}z=G$, $z^{-1}S_{3}z=S_{3}$ and $z^{-1}S_{k-1}z=S_{k-1}$. By the induction hypothesis, $S_{k-1}$ is a perfect code of $(S_{k},S_{3})$. Therefore, by Corollary \ref{conj}, $S_{k-1}$ is a perfect code of $(G,S_{3})$. By Theorem \ref{lt}, there exists a left transversal $X$ of $S_{k-1}$ in $G$ such that $XS_3=S_3X^{-1}$. Set $Y=X\cup \{z\}$. Since $XS_3=S_3X^{-1}$ and $z^{-1}S_{3}z=S_{3}$, we obtain $YS_3=S_3Y^{-1}$. We will further prove that $Y$ is a left transversal of $S_k$ in $S_{k+1}$. Since $X$ is a left transversal of $S_{k-1}$ in $G$, $x^{-1}y\notin S_{k-1}$ for each pair of distinct elements $x,y\in X$. Therefore $x^{-1}y$ does not fix $k+1$ and it follows that $x^{-1}y\notin S_{k}$. Thus $xS_{k}\neq yS_{k}$. Since $z=(k,k+1)$ and $x^{-1}$ fixes $k$ for every $x\in X$, $x^{-1}z$ takes $k+1$ to $k$. Therefore $x^{-1}z$ does not fix $k+1$ and it follows that $x^{-1}z\notin S_{k}$. Thus $xS_{k}\neq zS_{k}$. From the above discussion, we have that $xS_{k}\neq yS_{k}$ for each pair of distinct elements $x,y\in Y$. Therefore $|YS_k|=|Y||S_{k}|=(k+1)k!=|S_{k+1}|$. Thus $YS_k=S_{k+1}$ and it follows that $Y$ is a left transversal of $S_k$ in $S_{k+1}$. Now we have proved that $Y$ is a left transversal of $S_k$ in $S_{k+1}$ and $YS_3=S_3Y^{-1}$. By Theorem \ref{lt}, $S_{k}$ is a perfect code of $(S_{k+1},S_{3})$. In other words, the proposition is true for $n=k+1$. Therefore the proposition is true for every positive integer $n\geq 5$. \end{proof} By the proof of Proposition \ref{symm}, if $(S_{k-1}, S_3, X)$ is a perfect triple of $S_{k}$, then $(S_{k}, S_3, z^{-1}Xz\cup \{z\})$ is a perfect triple of $S_{k+1}$, where $z=(k,k+1)$. This provides us with a way to construct perfect triples $(S_{n-1}, S_3, X)$ of $S_{n}$ for every $n\geq 5$. Based on Example \ref{345}, the following two examples are constructed through this approach. \begin{exam} \label{356} Set \begin{equation*} X=\{1,(2,6,3),(1,3,6),(1,6,2,3),(4,6)(1,3,2),(5,6)\}. \end{equation*} Then $(S_{5},S_{3},X)$ is a perfect triple of $S_{6}$. \end{exam} \begin{exam} \label{367} Set \begin{equation*} X=\{1,(2,7,3),(1,3,7),(1,7,2,3),(4,7)(1,3,2),(5,7),(6,7)\}. \end{equation*} Then $(S_{6},S_{3},X)$ is a perfect triple of $S_{7}$. \end{exam} More generally, we have the following proposition. \begin{pro} \label{3n} Let $n\geq 5$ be a positive integer. Set \begin{equation*} X=\{1,(2,n,3),(1,3,n),(1,n,2,3),(4,n)(1,3,2),(5,n),\ldots,(n-1,n)\}. \end{equation*} Then $(S_{n-1},S_{3},X)$ is a perfect triple of $S_{n}$. \end{pro} \noindent {\textbf{Acknowledgements}}~~The first author was supported by the National Natural Science Foundation of China (No.~12071312). The second author was supported by the National Natural Science Foundation of China (No.~11671276), the Basic Research and Frontier Exploration Project of Chongqing (cstc2018jcyjAX0010) and the Foundation of Chongqing Normal University (21XLB006).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Only finite undirected graphs without loops or multiple edges are considered. A good reference for any undefined terms is \cite{[2]}. The two earliest theoretical results in hamiltonian graph theory were obtained in 1952 by Dirac \cite{[4]}, in the form of a sufficient condition for a graph to be hamiltonian and a lower bound for the circumference $c$ (the length of a longest cycle of a graph), respectively, both based on the order $n$ and the minimum degree $\delta$. \\ \noindent\textbf{Theorem A \cite{[4]}.} Every graph with $\delta\ge n/2$ is hamiltonian. \\ \noindent\textbf{Theorem B \cite{[4]}.} Every 2-connected graph either is hamiltonian or $c\ge 2\delta$. \\ In 1973, Chv\'{a}tal \cite{[3]} introduced the concept of toughness. Since then a lot of research has been done towards finding the exact analogs of classical hamiltonian results under the additional condition of 1-toughness, an alternative and more natural necessary condition for a graph to be hamiltonian. In 1978, Jung \cite{[5]} established the analog of Theorem A for 1-tough graphs.\\ \noindent\textbf{Theorem C \cite{[5]}.} Every 1-tough graph on $n\ge11$ vertices with $\delta\ge(n-4)/2$ is hamiltonian.\\ In 1986, Bauer and Schmeichel \cite{[1]} proved that the bound $2\delta$ in Theorem B can be enlarged to $2\delta+2$ for 1-tough graphs.\\ \noindent\textbf{Theorem D \cite{[1]}.} Every 1-tough graph either is hamiltonian or $c\ge 2\delta+2$. \\ In fact, Theorem D is sharp when $n\equiv 1(mod\ 3)$. In this paper we present the final version of Theorem D, which is sharp for each $n$.\\ \noindent\textbf{Theorem 1}. Every 1-tough graph either is hamiltonian or $$ c\ge\left\{ \begin{array}{lll} 2\delta+2 & \mbox{when} & n\equiv 1(mod\ 3), \\ 2\delta+3 & \mbox{when} & n\equiv 2(mod\ 3) \ \ \mbox{or} \ \ n\equiv 1(mod\ 4), \\ 2\delta+4 & \mbox{ } & \mbox{otherwise}. \end{array} \right. $$ Theorem 1 is sharp for each $n$. To see this, let $H_1,H_2,...,H_h$ be disjoint complete graphs with distinct vertices $x_i,y_i\in V(H_i)$ $(i=1,2,...,h)$. Form a new graph $H(t_1,t_2,...,t_h)$ by identifying the vertices $x_1,x_2,...,x_h$ and adding all possible edges between $y_1,y_2,...,y_h$, where $t_i=|V(H_i)|$ $(i=1,2,...,h)$. The graph $H(\delta-1,\delta-1,\delta-1)$ shows that the bound $2\delta+2$ in Theorem 1 cannot be replaced by $2\delta+3$ when $n\equiv 1(mod\ 3)$. Next, the graphs $H(\delta,\delta-1,\delta-1)$ and $H(\delta-1,\delta-1,\delta-1,\delta-1)$ show that the bound $2\delta+3$ cannot be replaced by $2\delta+4$ when $n\equiv 2(mod\ 3)$ or $n\equiv 1(mod\ 4)$. Finally, the graph $H(\delta,\delta,\delta-1)$ shows that the bound $2\delta+4$ cannot be replaced by $2\delta+5$. \section{Notations and preliminaries} The set of vertices of a graph $G$ is denoted by $V(G)$ and the set of edges by $E(G)$. For $S$ a subset of $V(G)$, we denote by $G\backslash S$ the maximum subgraph of $G$ with vertex set $V(G)\backslash S$. We write $\langle S\rangle$ for the subgraph of $G$ induced by $S$. For a subgraph $H$ of $G$ we use $G\backslash H$ short for $G\backslash V(H)$. The neighborhood and the degree of a vertex $x\in V(G)$ will be denoted by $N(x)$ and $d(x)$, respectively. Furthermore, for a subgraph $H$ of $G$ and $x\in V(G)$, we define $N_H(x)=N(x)\cap V(H)$ and $d_H(x)=|N_H(x)|$. Let $s(G)$ denote the number of components of a graph $G$. A graph $G$ is 1-tough if $|S|\ge s(G\backslash S)$ for every subset $S$ of the vertex set $V(G)$ with $s(G\backslash S)>1$.
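Since 1-toughness is central to everything that follows, we note that the definition can be checked directly by exhaustive search on small graphs. The following Python sketch (ours; function names are illustrative, and the brute force is only feasible for small graphs) does so, confirming for instance that the $5$-cycle $C_5$ is 1-tough while the star $K_{1,3}$ is not.

\begin{verbatim}
from itertools import combinations

def components(vertices, edges):
    # Number of connected components, via union-find with path halving.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def is_one_tough(vertices, edges):
    # Check |S| >= s(G - S) for every cutset S with s(G - S) > 1.
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            rest = [v for v in vertices if v not in S]
            sub = [(u, v) for u, v in edges if u in rest and v in rest]
            if components(rest, sub) > k:
                return False
    return True

print(is_one_tough(range(5), [(i, (i + 1) % 5) for i in range(5)]))  # True
print(is_one_tough(range(4), [(0, 1), (0, 2), (0, 3)]))              # False
\end{verbatim}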
A graph $G$ on $n$ vertices is hamiltonian if $G$ contains a Hamilton cycle, i.e. a cycle of length $n$. Paths and cycles in a graph $G$ are considered as subgraphs of $G$. If $Q$ is a path or a cycle, then the length of $Q$, denoted by $|Q|$, is $|E(Q)|$. We write $\overrightarrow{Q}$ for $Q$ with a given orientation. For $x,y\in V(Q)$, we denote by $x\overrightarrow{Q}y$ the subpath of $Q$ in the chosen direction from $x$ to $y$. For $x\in V(C)$, we denote the $h$-th successor and the $h$-th predecessor of $x$ on $\overrightarrow{C}$ by $x^{+h}$ and $x^{-h}$, respectively. We abbreviate $x^{+1}$ and $x^{-1}$ by $x^+$ and $x^-$, respectively. For each $X\subset V(C)$, we define $X^{+}=\{x^{+}|x\in X\}$ and $X^{-}=\{x^{-}|x\in X\}$. \\ \noindent\textbf{Special definitions}. Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge0$. Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)\cup N_C(y)$ occurring on $\overrightarrow{C}$ in a consecutive order. Set $$ I_i=\xi_i\overrightarrow{C}\xi_{i+1}, \ I_i^\ast=\xi_i^+\overrightarrow{C}\xi_{i+1}^- \ \ (i=1,2,...,s), $$ where $\xi_{s+1}=\xi_1$. $(1)$ The segments $I_1,I_2,...,I_s$ are called elementary segments on $C$ induced by $N_C(x)\cup N_C(y)$. $(2)$ We call a path $L=z\overrightarrow{L}w$ an intermediate path between two distinct elementary segments $I_a$ and $I_b$, if $$ z\in V(I_a^\ast), \ w\in V(I_b^\ast), \ V(L)\cap V(C\cup P)=\{z,w\}. $$ $(3)$ Define $\Upsilon(I_{i_1},I_{i_2},...,I_{i_t})$ to be the set of all intermediate paths between elementary segments $I_{i_1},I_{i_2},...,I_{i_t}$.\\ $(4)$ If $\Upsilon(I_1,...,I_s)\subseteq E$ then the maximum number of intermediate independent edges (having no common vertex) in $\Upsilon(I_1,...,I_s)$ will be denoted by $\mu(\Upsilon)$.\\ $(5)$ We say that two intermediate independent edges $w_1w_2, w_3w_4$ have a crossing, if either $w_1,w_3,w_2,w_4$ or $w_1,w_4,w_2,w_3$ occur on $\overrightarrow{C}$ in a consecutive order.\\ \noindent\textbf{Lemma 1.} Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge1$. If $|N_C(x)|\ge2$, $|N_C(y)|\ge2$ and $N_C(x)\not=N_C(y)$ then $$ c\ge\left\{ \begin{array}{lll} 3\delta+\max\{\sigma_1, \sigma_2\}-1\ge3\delta & \mbox{if} & \overline{p}=1, \\ 4\delta-2\overline{p} & \mbox{if} & \overline{p}\ge2, \end{array} \right. $$ where $\sigma_1=|N_C(x)\backslash N_C(y)|$ and $\sigma_2=|N_C(y)\backslash N_C(x)|$.\\ \noindent\textbf{Lemma 2.} Let $G$ be a graph, $C$ a longest cycle in $G$ and $P=x\overrightarrow{P}y$ a longest path in $G\backslash C$ of length $\overline{p}\ge0$. Let $N_C(x)=N_C(y)$, $|N_C(x)|\ge2$ and $f,g\in\{1,...,s\}$.\\ (a1) If $L\in\Upsilon(I_f,I_g)$ then $$ |I_f|+|I_g|\ge2\overline{p}+2|L|+4. $$ (a2) If $\Upsilon(I_f,I_g)\subseteq E(G)$ and $|\Upsilon(I_f,I_g)|=\varepsilon$ for some $\varepsilon\in\{1,2,3\}$ then $$ |I_f|+|I_g|\ge2\overline{p}+\varepsilon+5. $$ (a3) If $\Upsilon(I_f,I_g)\subseteq E(G)$ and $\Upsilon(I_f,I_g)$ contains two independent intermediate edges then $$ |I_f|+|I_g|\ge2\overline{p}+8. $$ \\ The following result is due to Voss \cite{[6]}. \\ \noindent\textbf{Lemma 3 \cite{[6]}}. Let $G$ be a hamiltonian graph, $\{v_1,v_2,...,v_t\}\subseteq V(G)$ and $d(v_i)\ge t$ $(i=1,2,...,t)$. Then each pair $x,y$ of vertices of $G$ is connected in $G$ by a path of length at least $t$.\\ \section{Proofs} \noindent\textbf{Proof of Lemma 1}.
Put $$ A_1=N_C(x)\backslash N_C(y), \ A_2=N_C(y)\backslash N_C(x), \ M=N_C(x)\cap N_C(y). $$ By the hypothesis, $N_C(x)\not=N_C(y)$, implying that $$ \max \{|A_1|,|A_2|\}\ge1. $$ Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)\cup N_C(y)$ occurring on $\overrightarrow{C}$ in a consecutive order. Put $I_i=\xi_i\overrightarrow{C}\xi_{i+1}$ $(i=1,2,...,s)$, where $\xi_{s+1}=\xi_1$. Clearly, $s=|A_1|+|A_2|+|M|$. Since $C$ is extreme, we have $|I_i|\ge2$ $(i=1,2,...,s)$. Next, if $\{\xi_i,\xi_{i+1}\}\cap M\not=\emptyset$ for some $i\in\{1,2,...,s\}$ then $|I_i|\ge\overline{p}+2$. Further, if either $\xi_i\in A_1$, $\xi_{i+1}\in A_2$ or $\xi_i\in A_2$, $\xi_{i+1}\in A_1$ then again $|I_i|\ge\overline{p}+2$. \\ \textbf{Case 1}. $\overline{p}=1$. \textbf{Case 1.1}. $|A_i|\ge1$ $(i=1,2)$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+2$ segments of length at least $\overline{p}+2$. Observing also that each of the remaining $s-(|M|+2)$ segments has a length at least 2, we have $$ c\ge(\overline{p}+2)(|M|+2)+2(s-|M|-2) $$ $$ =3(|M|+2)+2(|A_1|+|A_2|-2)=2|A_1|+2|A_2|+3|M|+2. $$ Since $|A_1|=d(x)-|M|-1$ and $|A_2|=d(y)-|M|-1$, we have $$ c\ge2d(x)+2d(y)-|M|-2\ge3\delta+d(x)-|M|-2. $$ Recalling that $d(x)=|M|+|A_1|+1$, we get $$ c\ge3\delta+|A_1|-1=3\delta+\sigma_1-1. $$ Analogously, $c\ge3\delta+\sigma_2-1$. So, $$ c\ge3\delta+\max \{\sigma_1,\sigma_2\}-1\ge3\delta. $$ \textbf{Case 1.2}. Either $|A_1|\ge1, |A_2|=0$ or $|A_1|=0, |A_2|\ge1$. Assume w.l.o.g. that $|A_1|\ge1$ and $|A_2|=0$, i.e. $|N_C(y)|=|M|\ge2$ and $s=|A_1|+|M|$. Hence, among $I_1,I_2,...,I_s$ there are $|M|+1$ segments of length at least $\overline{p}+2=3$. Taking into account that $|M|+1=d(y)$ and each of the remaining $s-(|M|+1)$ segments has a length at least 2, we get $$ c\ge 3(|M|+1)+2(s-|M|-1)=3d(y)+2(|A_1|-1) $$ $$ \ge3\delta+|A_1|-1=3\delta+\max\{\sigma_1,\sigma_2\}-1\ge3\delta. $$ \textbf{Case 2}. $\overline{p}\ge2$. \textbf{Case 2.1}. $|A_i|\ge1$ $(i=1,2)$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+2$ segments of length at least $\overline{p}+2$. Further, since each of the remaining $s-(|M|+2)$ segments has a length at least 2, we get $$ c\ge (\overline{p}+2)(|M|+2)+2(s-|M|-2) $$ $$ =(\overline{p}-2)|M|+(2\overline{p}+4|M|+4)+2(|A_1|+|A_2|-2) $$ $$ \ge2|A_1|+2|A_2|+4|M|+2\overline{p}. $$ Observing also that $$ |A_1|+|M|+\overline{p}\ge d(x), \quad |A_2|+|M|+\overline{p}\ge d(y), $$ we have $$ 2|A_1|+2|A_2|+4|M|+2\overline{p} $$ $$ \ge 2d(x)+2d(y)-2\overline{p}\ge4\delta-2\overline{p}, $$ implying that $c\ge4\delta-2\overline{p}$.\\ \textbf{Case 2.2}. Either $|A_1|\ge1, |A_2|=0$ or $|A_1|=0, |A_2|\ge1$. Assume w.l.o.g. that $|A_1|\ge1$ and $|A_2|=0$, that is $|N_C(y)|=|M|\ge2$ and $s=|A_1|+|M|$. It follows that among $I_1,I_2,...,I_s$ there are $|M|+1$ segments of length at least $\overline{p}+2$. Observing also that $|M|+\overline{p}\ge d(y)\ge\delta$, i.e. $2\overline{p}+4|M|\ge 4\delta-2\overline{p}$, we get $$ c\ge(\overline{p}+2)(|M|+1)\ge(\overline{p}-2)(|M|-1)+2\overline{p}+4|M| $$ $$ \ge 2\overline{p}+4|M|\ge4\delta-2\overline{p}. \quad \quad \rule{7pt}{6pt} $$ \noindent\textbf{Proof of Lemma 2}. Let $\xi_1,\xi_2,...,\xi_s$ be the elements of $N_C(x)$ occurring on $\overrightarrow{C}$ in a consecutive order. Put $I_i=\xi_i\overrightarrow{C}\xi_{i+1}$ $(i=1,2,...,s)$, where $\xi_{s+1}=\xi_1.$ To prove $(a1)$, let $L\in \Upsilon(I_f,I_g)$. Further, let $L=z\overrightarrow{L}w$ with $z\in V(I_f^\ast)$ and $w\in V(I_g^\ast)$.
Put $$ |\xi_f\overrightarrow{C}z|=d_1, \ |z\overrightarrow{C}\xi_{f+1}|=d_2, \ |\xi_g\overrightarrow{C}w|=d_3, \ |w\overrightarrow{C}\xi_{g+1}|=d_4, $$ $$ C^\prime=\xi_fx\overrightarrow{P}y\xi_g\overleftarrow{C}z\overrightarrow{L}w\overrightarrow{C}\xi_f. $$ Clearly, $$ |C^\prime|=|C|-d_1-d_3+|L|+|P|+2. $$ Since $C$ is extreme, we have $|C|\ge|C^\prime|$, implying that $d_1+d_3\ge\overline{p}+|L|+2$. By a symmetric argument, $d_2+d_4\ge\overline{p}+|L|+2$. Hence $$ |I_f|+|I_g|=\sum_{i=1}^4d_i\ge2\overline{p}+2|L|+4. $$ The proof of $(a1)$ is complete. To prove $(a2)$ and $(a3)$, let $\Upsilon(I_f,I_g)\subseteq E(G)$ and $|\Upsilon(I_f,I_g)|=\varepsilon$ for some $\varepsilon\in \{1,2,3\}$.\\ \textbf{Case 1}. $\varepsilon=1$. Let $L\in\Upsilon(I_f,I_g)$, where $|L|=1$. By (a1), $$ |I_f|+|I_g|\ge2\overline{p}+2|L|+4=2\overline{p}+6. $$ \textbf{Case 2}. $\varepsilon=2$. It follows that $\Upsilon(I_f,I_g)$ consists of two edges $e_1,e_2$. Put $e_1=z_1w_1$ and $e_2=z_2w_2$, where $\{z_1,z_2\}\subseteq V(I_f^\ast)$ and $\{w_1,w_2\}\subseteq V(I_g^\ast)$.\\ \textbf{Case 2.1}. $z_1\not=z_2$ and $w_1\not=w_2$. Assume w.l.o.g. that $z_1$ and $z_2$ occur in this order on $I_f$. \\ \textbf{Case 2.1.1}. $w_2$ and $w_1$ occur in this order on $I_g$. Put $$ |\xi_f\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}\xi_{f+1}|=d_3, $$ $$ |\xi_g\overrightarrow{C}w_2|=d_4, \ |w_2\overrightarrow{C}w_1|=d_5, \ |w_1\overrightarrow{C}\xi_{g+1}|=d_6, $$ $$ C^{\prime}=\xi_f\overrightarrow{C}z_1w_1\overleftarrow{C}w_2z_2\overrightarrow{C}\xi_g x\overrightarrow{P}y\xi_{g+1}\overrightarrow{C}\xi_f. $$ Clearly, $$ |C^{\prime}|=|C|-d_2-d_4-d_6+|\{e_1\}|+|\{e_2\}|+|P|+2 $$ $$ =|C|-d_2-d_4-d_6+\overline{p}+4. $$ Since $C$ is extreme, we have $|C|\ge |C^{\prime}|$, implying that $d_2+d_4+d_6\ge \overline{p}+4$. By a symmetric argument, $d_1+d_3+d_5\ge\overline{p}+4$. Hence $$ |I_f|+|I_g|= \sum_{i=1}^6d_i\ge2\overline{p}+8. $$ \textbf{Case 2.1.2}. $w_1$ and $w_2$ occur in this order on $I_g$. Putting $$ C^{\prime}=\xi_f\overrightarrow{C}z_1w_1\overrightarrow{C}w_2z_2\overrightarrow{C}\xi_g x\overrightarrow{P}y\xi_{g+1}\overrightarrow{C}\xi_f, $$ we can argue as in Case 2.1.1. \\ \textbf{Case 2.2}. Either $z_1=z_2$, $w_1\not=w_2$ or $z_1\not=z_2$, $w_1=w_2$. Assume w.l.o.g. that $z_1\not=z_2$, $w_1=w_2$ and $z_1, z_2$ occur in this order on $I_f$. Put $$ |\xi_f\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}\xi_{f+1}|=d_3, $$ $$ |\xi_g\overrightarrow{C}w_1|=d_4, \ |w_1\overrightarrow{C}\xi_{g+1}|=d_5, $$ $$ C^{\prime}=\xi_f x\overrightarrow{P}y\xi_g\overleftarrow{C}z_1w_1\overrightarrow{C}\xi_f, $$ $$ C^{\prime\prime}=\xi_f\overrightarrow{C}z_2w_1\overleftarrow{C}\xi_{f+1}x\overrightarrow{P}y\xi_{g+1}\overrightarrow{C}\xi_f. $$ Clearly, $$ |C^{\prime}|=|C|-d_1-d_4+|\{e_1\}|+|P|+2=|C|-d_1-d_4+\overline{p}+3, $$ $$ |C^{\prime\prime}|=|C|-d_3-d_5+|\{e_2\}|+|P|+2=|C|-d_3-d_5+\overline{p}+3. $$ Since $C$ is extreme, $|C|\ge |C^{\prime}|$ and $|C|\ge |C^{\prime\prime}|$, implying that $$ d_1+d_4\ge \overline{p}+3, \ d_3+d_5\ge \overline{p}+3. $$ Hence, $$ |I_f|+|I_g|= \sum_{i=1}^5d_i\ge d_1+d_3+d_4+d_5+1\ge2\overline{p}+7. $$ \textbf{Case 3}. $\varepsilon=3$. It follows that $\Upsilon(I_f,I_g)$ consists of three edges $e_1,e_2,e_3$. Let $e_i=z_iw_i$ $(i=1,2,3)$, where $\{z_1,z_2,z_3\}\subseteq V(I_f^\ast)$ and $\{w_1,w_2,w_3\}\subseteq V(I_g^\ast)$. If there are two independent edges among $e_1,e_2,e_3$ then we can argue as in Case 2.1. 
Otherwise, we can assume w.l.o.g. that $w_1=w_2=w_3$ and $z_1,z_2,z_3$ occur in this order on $I_f$. Put $$ |\xi_f\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}z_2|=d_2, \ |z_2\overrightarrow{C}z_3|=d_3, $$ $$ |z_3\overrightarrow{C}\xi_{f+1}|=d_4, \ |\xi_g\overrightarrow{C}w_1|=d_5, \ |w_1\overrightarrow{C}\xi_{g+1}|=d_6, $$ $$ C^{\prime}=\xi_f x\overrightarrow{P}y\xi_g\overleftarrow{C}z_1w_1\overrightarrow{C}\xi_f, $$ $$ C^{\prime\prime}=\xi_f\overrightarrow{C}z_3w_1\overleftarrow{C}\xi_{f+1}x\overrightarrow{P}y\xi_{g+1}\overrightarrow{C}\xi_f. $$ Clearly, $$ |C^{\prime}|=|C|-d_1-d_5+|\{e_1\}|+\overline{p}+2, $$ $$ |C^{\prime\prime}|=|C|-d_4-d_6+|\{e_3\}|+\overline{p}+2. $$ Since $C$ is extreme, we have $|C|\ge |C^{\prime}|$ and $|C|\ge |C^{\prime\prime}|$, implying that $$ d_1+d_5\ge \overline{p}+3, \ d_4+d_6\ge \overline{p}+3. $$ Hence, $$ |I_f|+|I_g|= \sum_{i=1}^6d_i\ge d_1+d_4+d_5+d_6+2\ge2\overline{p}+8. \quad \quad \rule{7pt}{6pt} $$ \\ \noindent\textbf{Proof of Theorem 1}. Let $G$ be a 1-tough graph. If $c\ge2\delta+4$ then we are done. Hence, we can assume that $$ c\le2\delta+3. \eqno{(1)} $$ Let $C$ be a longest cycle in $G$ and $P=x_1\overrightarrow{P}x_2$ a longest path in $G\backslash C$. Put $|P|=|V(P)|-1=\overline{p}$. If $|V(P)|=0$ then $C$ is a Hamilton cycle and we are done. Let $|V(P)|\ge1$, that is $\overline{p}\ge 0$. Put $X=N_C(x_1)\cup N_C(x_2)$ and let $\xi_1,...,\xi_s$ be the elements of $X$ occurring on $C$ in a consecutive order. Put $$ I_i=\xi_i\overrightarrow{C}\xi_{i+1}, \ I_i^\ast=\xi_i^+\overrightarrow{C}\xi_{i+1}^- \ \ (i=1,...,s), $$ where $\xi_{s+1}=\xi_1.$ Since $G$ is a 1-tough graph, we have $\delta\ge2$. \\ \textbf{Case 1}. $\overline{p}\le\delta-2$. It follows that $s\ge|N_C(x_i)|\ge\delta-\overline{p}\ge2$ $(i=1,2)$. Assume first that $N_C(x_1)\not=N_C(x_2)$, implying that $\overline{p}\ge1$. If $\overline{p}\ge2$ then by Lemma 1, $c\ge4\delta-2\overline{p}\ge2\delta+4$, contradicting (1). Hence $\overline{p}=1$, which yields $\delta\ge\overline{p}+2=3$. By Lemma 1, $c\ge3\delta\ge9$. If $\delta\ge4$ then $c\ge3\delta\ge2\delta+4$, contradicting (1). Let $\delta=3$. Next, we can suppose that $c=9$, since otherwise $c\ge10=3\delta+1=2\delta+4$, contradicting (1). Further, we can suppose that $s\ge3$, since $N_C(x_1)=N_C(x_2)$ when $s=2$, contradicting the hypothesis. Finally, we can suppose that $s=3$, since clearly $c\ge10$ when $s\ge 4$, a contradiction. Thus, $|I_1|=|I_2|=|I_3|=3$ and it is not hard to see that $G\backslash \{\xi_1,\xi_2,\xi_3\}$ has at least four components, contradicting $\tau\ge1$, where $\tau$ denotes the toughness of $G$. Now assume that $N_C(x_1)=N_C(x_2)$. Since $C$ is extreme, we have $$ |I_i|\ge|\xi_ix_1\overrightarrow{P}x_2\xi_{i+1}|\ge\overline{p}+2 \ \ (i=1,...,s). $$ \textbf{Case 1.1}. $s\ge\delta-\overline{p}+1$. Clearly, $$ c=\sum_{i=1}^s|I_i|\ge s(\overline{p}+2) $$ $$ \ge(\delta-\overline{p}+1)(\overline{p}+2)=(\delta-\overline{p}-2)\overline{p}+2\delta+\overline{p}+2. \eqno{(2)} $$ If $\overline{p}\ge2$ then by (2), $c\ge2\delta+4$, contradicting (1). Let $\overline{p}\le1$.\\ \textbf{Case 1.1.1}. $\overline{p}=0$. If $\Upsilon(I_1,...,I_s)=\emptyset$ then $G\backslash \{\xi_1,...,\xi_s\}$ has at least $s+1$ components, contradicting the fact that $\tau\ge1$. Otherwise $\Upsilon(I_a,I_b)\not=\emptyset$ for some distinct $a,b\in \{1,...,s\}$. Let $L\in \Upsilon(I_a,I_b)$. By Lemma 2(a1), $$ |I_a|+|I_b|\ge 2\overline{p}+2|L|+4\ge6.
$$ Recalling also that $s\ge\delta-\overline{p}+1=\delta+1$, we get $$ c=\sum_{i=1}^s|I_i|\ge|I_a|+|I_b|+2(s-2)=2s+2\ge2\delta+4, $$ contradicting (1).\\ \textbf{Case 1.1.2}. $\overline{p}=1$. By (2), $c\ge3\delta$. We can suppose that $\delta\le3$, since $c\ge3\delta\ge2\delta+4$ when $\delta\ge4$, contradicting (1). On the other hand, by the hypothesis, $\delta\ge\overline{p}+2=3$, implying that $\delta=3$. By the hypothesis, $s\ge\delta-\overline{p}+1=3$. Next, we can suppose that $s=3$, since $c\ge s(\overline{p}+2)\ge12=2\delta+6$ when $s\ge4$, contradicting (1). Further, if $\Upsilon(I_1,I_2,I_3)=\emptyset$ then $G\backslash \{\xi_1,\xi_2,\xi_3\}$ has at least four components, contradicting $\tau\ge1$. Otherwise $\Upsilon(I_a,I_b)\not=\emptyset$ for some distinct $a,b\in \{1,2,3\}$, say $a=1$ and $b=2$. Let $L\in \Upsilon(I_1,I_2)$. By Lemma 2(a1), $$ |I_1|+|I_2|\ge 2\overline{p}+2|L|+4\ge8, $$ which yields $c\ge|I_1|+|I_2|+|I_3|\ge11=2\delta+5$, contradicting (1).\\ \textbf{Case 1.2}. $s=\delta-\overline{p}$. It follows that $x_1x_2\in E$. Then $x_1x_2\overleftarrow{P}x_1^+$ is another longest path in $G\backslash C$. We can suppose that $N_C(x_1)=N_C(x_1^+)$, since otherwise we can argue as at the beginning of Case 1. For the same reason, $$ N_C(x_1)=N_C(x_1^+)=N_C(x_1^{+2})=...=N_C(x_2). $$ Since $C$ is extreme, we have $|I_i|\ge|\xi_ix_1\overrightarrow{P}x_2\xi_{i+1}|=\overline{p}+2$ $(i=1,...,s)$. If $\Upsilon(I_1,...,I_s)=\emptyset$ then $G\backslash \{\xi_1,...,\xi_s\}$ has at least $s+1$ components, contradicting $\tau\ge1$. Otherwise $\Upsilon(I_a,I_b)\not=\emptyset$ for some distinct $a,b\in \{1,...,s\}$. Let $L\in \Upsilon(I_a,I_b)$ with $L=z_1\overrightarrow{L}z_2$, where $z_1\in V(I_a^\ast)$ and $z_2\in V(I_b^\ast)$. By Lemma 2(a1), $|I_a|+|I_b|\ge2\overline{p}+6$. Hence $$ c=\sum_{i=1}^s|I_i|\ge|I_a|+|I_b|+(s-2)(\overline{p}+2)\ge s(\overline{p}+2)+2 $$ $$ =(\delta-\overline{p})(\overline{p}+2)+2=2\delta+2+\overline{p}(\delta-\overline{p}-2). \eqno{(3)} $$ \textbf{Claim 1}. $(a1)$ $2\overline{p}+6\le |I_a|+|I_b|\le2\overline{p}+7$ and $|I_i|\le\overline{p}+5$ $(i=1,...,s)$. $(a2)$ If $|I_a|+|I_b|=2\overline{p}+7$ then $|I_i|=\overline{p}+2$ for each $i\in \{1,...,s\}\backslash \{a,b\}$. $(a3)$ If $|I_a|+|I_b|=2\overline{p}+6$ then $|I_f|\le\overline{p}+3$ for some $f\in \{1,...,s\}\backslash \{a,b\}$ and $|I_i|=\overline{p}+2$ for each $i\in \{1,...,s\}\backslash \{a,b,f\}$. $(a4)$ If $|I_f|=\overline{p}+5$ for some $f\in\{a,b\}$ then $|I_i|=\overline{p}+2$ for each $i\in \{1,...,s\}\backslash \{f\}$. $(a5)$ For each distinct $f,g,h\in\{1,...,s\}$, $|I_f|+|I_g|+|I_h|\le3\overline{p}+9$. $(a6)$ $\Upsilon(I_1,...,I_s)\subseteq E$. \textbf{Proof}. If $|I_f|\ge\overline{p}+6$ for some $f\in \{1,...,s\}$ then $$ c=\sum_{i=1}^s|I_i|\ge|I_f|+(s-1)(\overline{p}+2)\ge s(\overline{p}+2)+4 $$ $$ =2\delta+4+\overline{p}(\delta-\overline{p}-2)\ge 2\delta+4, $$ contradicting (1). Next, if $|I_a|+|I_b|\ge2\overline{p}+8$ then $$ c\ge|I_a|+|I_b|+(s-2)(\overline{p}+2)\ge s(\overline{p}+2)+4\ge2\delta+4, $$ again contradicting (1). Hence $(a1)$ holds. Statements $(a2)-(a4)$ can be proved in a similar way. To prove $(a5)$, assume the contrary, that is $|I_f|+|I_g|+|I_h|\ge3\overline{p}+10$ for some distinct $f,g,h\in\{1,...,s\}$. Then $$ c=\sum_{i=1}^s|I_i|\ge|I_f|+|I_g|+|I_h|+(s-3)(\overline{p}+2) $$ $$ \ge 3(\overline{p}+2)+4+(s-3)(\overline{p}+2)=2\delta+4+\overline{p}(s-2)\ge2\delta+4, $$ contradicting (1). Statement $(a6)$ follows from Lemma 2(a1) and Claim 1(a1). Claim 1 is proved.\\ \textbf{Claim 2}.
$\overline{p}+3\le d_1\le\overline{p}+4$ and $\overline{p}+3\le d_2\le\overline{p}+4$, where $$ d_1=|\xi_a\overrightarrow{C}z_1|+|\xi_b\overrightarrow{C}z_2|, \ \ d_2=|z_1\overrightarrow{C}\xi_{a+1}|+|z_2\overrightarrow{C}\xi_{b+1}|. $$ \textbf{Proof}. Put $$ Q=\xi_ax_1\overrightarrow{P}x_2\xi_b\overleftarrow{C}z_1z_2\overrightarrow{C}\xi_a. $$ Clearly, $|Q|=|C|-d_1+\overline{p}+3$. Since $C$ is extreme, we have $|C|\ge|Q|$, implying that $d_1\ge \overline{p}+3$. By a symmetric argument, $d_2\ge \overline{p}+3$. By Claim 1(a1), $|I_a|+|I_b|=d_1+d_2\le2\overline{p}+7$. If $d_1\ge \overline{p}+5$ then $2\overline{p}+7\ge d_1+d_2\ge\overline{p}+5+d_2$, implying that $d_2\le \overline{p}+2$, a contradiction. Hence, $d_1\le \overline{p}+4$. By a symmetric argument, $d_2\le \overline{p}+4$. Claim 2 is proved.\\
\textbf{Claim 3}. If $v_1\in V(\xi_a^+\overrightarrow{C}z_1^-)$ and $v_2\in V(z_1^+\overrightarrow{C}\xi_{a+1}^-)$ then $v_1v_2\not\in E$. \textbf{Proof}. Assume the contrary, that is, $v_1v_2\in E$. Put $$ Q=\xi_a\overrightarrow{C}v_1v_2\overleftarrow{C}z_1z_2\overleftarrow{C}\xi_{a+1}x_1\overrightarrow{P}x_2\xi_{b+1}\overrightarrow{C}\xi_a, $$ $$ |\xi_a\overrightarrow{C}v_1|=d_1, \ \ |v_1\overrightarrow{C}z_1|=d_2, \ \ |z_1\overrightarrow{C}v_2|=d_3, $$ $$ |v_2\overrightarrow{C}\xi_{a+1}|=d_4, \ \ |\xi_b\overrightarrow{C}z_2|=d_5, \ \ |z_2\overrightarrow{C}\xi_{b+1}|=d_6. $$ Clearly, $|Q|=|C|-d_2-d_4-d_6+\overline{p}+4$. Since $C$ is extreme, we have $|Q|\le |C|$, implying that $d_2+d_4+d_6\ge\overline{p}+4$. By a symmetric argument, $d_1+d_3+d_5\ge\overline{p}+4$. By summing, we get $$ \sum_{i=1}^6d_i=|I_a|+|I_b|\ge2\overline{p}+8, $$ contradicting Claim 1(a1). Thus, $v_1v_2\not\in E$. Claim 3 is proved.\\
\textbf{Claim 4}. Let $\xi_f,\xi_g,\xi_h$ occur on $\overrightarrow{C}$ in consecutive order for some $f,g,h\in \{1,...,s\}$ and $w_1w_2\in E$ for some $w_1\in V(I_f^\ast)$ and $w_2\in V(I_g^\ast)$. If $N(w_3)\cap \{\xi_{f+1},\xi_g\}\not=\emptyset$ for some $w_3\in V(I_h^\ast)$ then $$ |w_1\overrightarrow{C}\xi_{f+1}|+|\xi_{g}\overrightarrow{C}w_2|+|\xi_{h}\overrightarrow{C}w_3|\ge\overline{p}+4. $$ Further, if $N(w_4)\cap \{\xi_{f+1},\xi_g\}\not=\emptyset$ for some $w_4\in V(I_{h-1}^\ast)$ then $$ |w_1\overrightarrow{C}\xi_{f+1}|+|\xi_{g}\overrightarrow{C}w_2|+|w_4\overrightarrow{C}\xi_h|\ge\overline{p}+4. $$ \textbf{Proof}. Assume first that $w_3\xi_{f+1}\in E$. Put $$ Q=\xi_f\overrightarrow{C}w_1w_2\overrightarrow{C}\xi_hx_1\overrightarrow{P}x_2\xi_g\overleftarrow{C}\xi_{f+1}w_3\overrightarrow{C}\xi_f. $$ Clearly, $$ |Q|=|C|-|w_1\overrightarrow{C}\xi_{f+1}|-|\xi_g\overrightarrow{C}w_2|-|\xi_h\overrightarrow{C}w_3|+\overline{p}+4. $$ Since $|Q|\le |C|$, the desired result holds immediately. If $w_4\xi_{f+1}\in E$ then we can use the following cycle $$ Q^\prime=\xi_f\overrightarrow{C}w_1w_2\overrightarrow{C}w_4\xi_{f+1}\overrightarrow{C}\xi_gx_2\overleftarrow{P}x_1\xi_h\overrightarrow{C}\xi_f $$ instead of $Q$. By a symmetric argument, the desired result holds when either $w_3\xi_g\in E$ or $w_4\xi_g\in E$. Claim 4 is proved.\\
\textbf{Claim 5}. Every two intermediate independent edges $e_1,e_2$ in $\Upsilon(I_1,...,I_s)$ have a crossing, with $e_1,e_2\in \Upsilon(I_f,I_g,I_h)$ for some distinct $f,g,h\in\{1,...,s\}$. \textbf{Proof}. Let $e_1=w_1w_2$ and $e_2=w_3w_4$. We distinguish three different cases. First, if $e_1,e_2\in \Upsilon(I_f,I_g)$ for some distinct $f,g$, then by Lemma 2(a3), $|I_f|+|I_g|\ge2\overline{p}+8$, contradicting Claim 1(a1).
Next, if $e_1\in \Upsilon(I_f,I_g)$ and $e_2\in \Upsilon(I_h,I_r)$ for some distinct $f,g,h,r$, then by Lemma 2(a1), $|I_f|+|I_g|\ge2\overline{p}+6$ and $|I_h|+|I_r|\ge2\overline{p}+6$, implying that $$ c\ge|I_f|+|I_g|+|I_h|+|I_r|+(s-4)(\overline{p}+2)=4\overline{p}+12+(s-4)(\overline{p}+2) $$ $$ =s(\overline{p}+2)+4=2\delta+4+\overline{p}(\delta-\overline{p}-2)\ge2\delta+4, $$ which again contradicts (1). Finally, let $e_1\in \Upsilon(I_f,I_g)$ and $e_2\in \Upsilon(I_f,I_h)$ for some distinct $f,g,h$. Assume w.l.o.g. that $\xi_f,\xi_g,\xi_h$ occur on $\overrightarrow{C}$ in consecutive order and $w_1,w_3\in V(I_f^\ast)$, $w_2\in V(I_g^\ast)$, $w_4\in V(I_h^\ast)$. We can also assume that $w_3$ and $w_1$ occur on $I_f$ in consecutive order, since otherwise $e_1$ and $e_2$ have a crossing and we are done. Put $$ Q=\xi_f\overrightarrow{C}w_3w_4\overleftarrow{C}w_2w_1\overrightarrow{C}\xi_gx_2\overleftarrow{P}x_1\xi_{h+1}\overrightarrow{C}\xi_f, $$ $$ |\xi_f\overrightarrow{C}w_3|=d_1, \ |w_3\overrightarrow{C}w_1|=d_2, \ |w_1\overrightarrow{C}\xi_{f+1}|=d_3, $$ $$ |\xi_g\overrightarrow{C}w_2|=d_4, \ |w_2\overrightarrow{C}\xi_{g+1}|=d_5, \ |\xi_h\overrightarrow{C}w_4|=d_6 , \ |w_4\overrightarrow{C}\xi_{h+1}|=d_7. $$ Clearly, $|Q|=|C|-d_2-d_4-d_7+\overline{p}+4$. Since $C$ is extreme, we have $|Q|\le |C|$, implying that $d_2+d_4+d_7\ge\overline{p}+4$. On the other hand, by Lemma 2, $d_3+d_5\ge \overline{p}+3$ and $d_1+d_6\ge \overline{p}+3$. By summing, we get $\sum_{i=1}^7d_i=|I_f|+|I_g|+|I_h|\ge3\overline{p}+10$. Then $$ |C|\ge|I_f|+|I_g|+|I_h|+(s-3)(\overline{p}+2)=s(\overline{p}+2)+4\ge2\delta+4, $$ contradicting (1). Claim 5 is proved.\\
\textbf{Claim 6}. If $\mu(\Upsilon)=1$ then $s\le3$ and either $\xi_a^+\xi_{b+1}^-\in E$ with $\xi_a=\xi_{b+1}$ or $\xi_{a+1}^-\xi_b^+\in E$ with $\xi_{a+1}=\xi_b$. If $\mu(\Upsilon)=1$ and $s=3$ then $|I_1|=|I_2|=|I_3|=\overline{p}+3$. \textbf{Proof}. Since $\mu(\Upsilon)=1$, either one of the vertices $z_1, z_2$, say $z_1$, is a common vertex for all edges in $\Upsilon(I_1,...,I_s)$ or $z_1z_3,z_2z_3\in \Upsilon(I_1,...,I_s)$ for some $z_3\in V(I_f^\ast)$ and $f\in\{1,...,s\}\backslash \{a,b\}$.\\
\textbf{Case a1}. $z_1$ is a common vertex for all edges in $\Upsilon(I_1,...,I_s)$. If $z_1\not\in \{\xi_a^+,\xi_{a+1}^-\}$ then by Claim 3, $G\backslash \{\xi_1,...,\xi_s,z_1\}$ has at least $s+2$ components, contradicting $\tau\ge1$. Let $z_1\in \{\xi_a^+,\xi_{a+1}^-\}$, say $z_1=\xi_a^+$.\\
\textbf{Case a1.1}. $z_1\xi_{b+1}^-\not\in E$. It follows that $z_2\not=\xi_{b+1}^-$. By Claim 2, $|\xi_b\overrightarrow{C}z_2|\ge\overline{p}+2$.\\
\textbf{Case a1.1.1}. $z_1\xi_{b+1}^{-2}\not\in E$. It follows that $|I_b|\ge \overline{p}+5$. By Claim 1(a1), $|I_a|=\overline{p}+2$. Moreover, we have $|I_b|= \overline{p}+5$, $|\xi_b\overrightarrow{C}z_2|=\overline{p}+2$, $z_2=\xi_{b+1}^{-3}$ and $N(z_1)\cap V(I_b^\ast)=\{z_2\}$. By Claim 1(a4), $|I_i|=\overline{p}+2$ for each $i\in\{1,...,s\}\backslash \{b\}$. Next, by Lemma 2(a1), $\Upsilon(I_a,I_i)=\emptyset$ for each $i\in\{1,...,s\}\backslash \{a,b\}$. Thus, if $z_1y\in \Upsilon(I_1,...,I_s)$ then $y=z_2$, implying that $\Upsilon(I_1,...,I_s)=\{z_1z_2\}$. Besides, since $|\xi_b\overrightarrow{C}z_2|=\overline{p}+2\ge2$, we have $z_2\not\in\{\xi_b^+,\xi_{b+1}^-\}$. Therefore, by Claim 3, $G\backslash \{\xi_1,...,\xi_s,z_2\}$ has at least $s+2$ components, contradicting $\tau\ge1$. \\
\textbf{Case a1.1.2}. $z_1\xi_{b+1}^{-2}\in E$. It follows that $|I_b|\ge\overline{p}+4$. Assume first that $|I_b|=\overline{p}+5$.
If $z_1\xi_{b+1}^{-3}\not\in E$ then clearly $z_2=\xi_{b+1}^{-2}$ and we can argue as in Case a1.1.1. Otherwise the following cycle $$ \xi_ax_1\overrightarrow{P}x_2\xi_{a+1}\overrightarrow{C}\xi_{b+1}^{-3}z_1\xi_{b+1}^{-2}\overrightarrow{C}\xi_a $$ is longer than $C$, a contradiction. Now assume that $|I_b|=\overline{p}+4$, that is, $|\xi_b\overrightarrow{C}\xi_{b+1}^{-2}|=\overline{p}+2$. If $z_1y\in E$ for some $y\in V(\xi_b\overrightarrow{C}\xi_{b+1}^{-3})$ then by Claim 2, $|\xi_b\overrightarrow{C}y|\ge \overline{p}+2$, implying that $|I_b|\ge\overline{p}+5$, a contradiction. Hence, if $z_1y\in \Upsilon(I_a,I_b)$ then clearly $y=\xi_{b+1}^{-2}$. In particular, we have $z_2=\xi_{b+1}^{-2}$. Further, if $z_1y\in \Upsilon(I_a,I_f)$ for some $f\in\{1,...,s\}\backslash\{b\}$, then by Lemma 2(a1), $|I_a|+|I_f|\ge2\overline{p}+6$, that is, $|I_a|+|I_b|+|I_f|\ge3\overline{p}+10$, contradicting Claim 1(a5). Thus, $z_2$ is a common vertex for all edges in $\Upsilon(I_1,...,I_s)$. By Claim 3, $G\backslash \{\xi_1,...,\xi_s,z_2\}$ has at least $s+2$ components, contradicting $\tau\ge1$.\\
\textbf{Case a1.2}. $\xi_a^+\xi_{b+1}^-\in E$. By Claim 2, $|\xi_a^+\overrightarrow{C}\xi_{a+1}|\ge\overline{p}+2$ and $|\xi_b\overrightarrow{C}\xi_{b+1}^-|\ge\overline{p}+2$. If $|\xi_a^+\overrightarrow{C}\xi_{a+1}|\ge\overline{p}+3$ and $|\xi_b\overrightarrow{C}\xi_{b+1}^-|\ge\overline{p}+3$ then $|I_a|+|I_b|\ge2\overline{p}+8$, contradicting Claim 1(a1). Hence, we can assume w.l.o.g. that $|\xi_b\overrightarrow{C}\xi_{b+1}^-|=\overline{p}+2$, that is, $|I_b|=\overline{p}+3$ and $|I_a|\ge\overline{p}+3$. Further, we have $\xi_b^+\xi_a,\xi_b^+\xi_{b+1}\not\in E$ (by Claim 4) and $\xi_b^+\xi_a^+\not\in E$ (by Claim 2).\\
\textbf{Case a1.2.1}. $N(\xi_b^+)\not\subseteq V(C)$. Let $Q=\xi_b^+\overrightarrow{Q}v$ be a longest path in $G$ with $V(Q)\cap V(C)=\{\xi_b^+\}$. Since $C$ is extreme, we have $V(Q)\cap V(P)=\emptyset$. Next, since $P$ is a longest path in $G\backslash C$, we have $|Q|\le\overline{p}+1$. Further, recalling that $\xi_b^+\xi_a,\xi_b^+\xi_{b+1},\xi_b^+\xi_a^+\not\in E$ (see Case a1.2), we conclude that $v\xi_a, v\xi_{b+1}, v\xi_a^+\not\in E$, as well. If $vy\not\in E$ for each $y\in V(\xi_b^{+2}\overrightarrow{C}\xi_{b+1}^-)$ then clearly $$ N(v)\subseteq(V(Q)\cup\{\xi_1,...,\xi_s\})\backslash\{\xi_a,\xi_{b+1},\xi_a^+\}, $$ that is, $d(v)\le|Q|+s-2\le\overline{p}+s-1=\delta-1$, a contradiction. Now let $vy\in E$ for some $y\in V(\xi_b^{+2}\overrightarrow{C}\xi_{b+1}^-)$. Assume that $y$ is chosen so as to minimize $|\xi_b^+\overrightarrow{C}y|$. Since $C$ is extreme, we have $|\xi_b^+\overrightarrow{C}y|\ge|Q|+1$. Further, since $$ |N(v)\cap V(y\overrightarrow{C}\xi_{b+1}^-)|\ge\delta-(s-2)-|Q|, $$ we have $$ |\xi_b^+\overrightarrow{C}\xi_{b+1}^-|\ge|Q|+1+2(\delta-s+1-|Q|) $$ $$ =2\delta-|Q|-2s+3\ge2\delta-\overline{p}-2s+2=\overline{p}+2. $$ But then $|I_b|\ge\overline{p}+4$, a contradiction.\\
\textbf{Case a1.2.2}. $N(\xi_b^+)\subseteq V(C)$. Since $\mu(\Upsilon)=1$ and $\xi_b^+\xi_a^+\not\in E$, we have $$ N(\xi_b^+)\subseteq V(\xi_b^{+2}\overrightarrow{C}\xi_{b+1}^-)\cup \{\xi_1,...,\xi_s\}\backslash \{\xi_a,\xi_{b+1}\}. $$ If $\xi_a\not=\xi_{b+1}$ then $d(\xi_b^+)\le\overline{p}+s-1=\delta-1$, a contradiction. Hence $\xi_a=\xi_{b+1}$. \\
\textbf{Case a1.2.2.1}. $|I_f|=\overline{p}+2$ for some $f\in\{1,...,s\}\backslash \{a,b\}$. If $N(\xi_f^+)\subseteq V(C)$ then as above, $$ d(\xi_f^+)\le s-1+|\xi_f^+\overrightarrow{C}\xi_{f+1}^-|=\overline{p}+s-1=\delta-1, $$ a contradiction.
If $N(\xi_f^+)\not\subseteq V(C)$ then we can argue as in Case a1.2.1. \\
\textbf{Case a1.2.2.2}. $|I_i|\ge\overline{p}+3$ for each $i\in\{1,...,s\}\backslash \{a,b\}$. If $s\ge4$ then $$ |C|=\sum_{i=1}^s|I_i|\ge s(\overline{p}+3)=(\delta-\overline{p})(\overline{p}+3) $$ $$ =2\delta+2\overline{p}+4+(\delta-\overline{p}-4)(\overline{p}+1)\ge2\delta+4, $$ contradicting (1). Hence, $s\le3$. Moreover, if $s=3$ then by Claim 1(a5), $|I_1|=|I_2|=|I_3|=\overline{p}+3$.\\
\textbf{Case a2}. $z_1z_3,z_2z_3\in \Upsilon(I_1,...,I_s)$, where $z_3\in V(I_f^\ast)$ and $f\in\{1,...,s\}\backslash \{a,b\}$. Assume w.l.o.g. that $\xi_a,\xi_b,\xi_f$ occur on $\overrightarrow{C}$ in consecutive order. Put $$ |\xi_a\overrightarrow{C}z_1|=d_1, \ |z_1\overrightarrow{C}\xi_{a+1}|=d_2, \ |\xi_b\overrightarrow{C}z_2|=d_3, $$ $$ |z_2\overrightarrow{C}\xi_{b+1}|=d_4, \ |\xi_f\overrightarrow{C}z_3|=d_5, \ |z_3\overrightarrow{C}\xi_{f+1}|=d_6. $$ By Claim 2, $$ d_1+d_3\ge\overline{p}+3, \ d_1+d_5\ge\overline{p}+3, \ d_2+d_4\ge\overline{p}+3, $$ $$ d_2+d_6\ge\overline{p}+3, \ d_3+d_5\ge\overline{p}+3, \ d_4+d_6\ge\overline{p}+3. $$ By summing, we get $$ 2\sum_{i=1}^6d_i=2(|I_a|+|I_b|+|I_f|)\ge6(\overline{p}+3). $$ On the other hand, by Claim 1(a5), $|I_a|+|I_b|+|I_f|\le3(\overline{p}+3)$, implying that $d_1=d_2=...=d_6=(\overline{p}+3)/2$ and that $\overline{p}$ is odd. Hence $d_i\ge2$, and using Claim 3 we can state that $G\backslash \{\xi_1,...,\xi_s,z_1,z_2\}$ has at least $s+3$ components, contradicting $\tau\ge1$. Claim 6 is proved.\\
\textbf{Claim 7}. Either $\mu(\Upsilon)=1$ or $\mu(\Upsilon)=3$. \textbf{Proof}. The proof is by contradiction. If $\mu(\Upsilon)=0$ then $G\backslash \{\xi_1,...,\xi_s\}$ has at least $s+1$ components, contradicting $\tau\ge1$. Let $\mu(\Upsilon)\ge 1$.\\
\textbf{Case a1}. $\mu(\Upsilon)=2$. By Claim 5, $\Upsilon(I_1,...,I_s)$ consists of two crossing intermediate independent edges $w_1w_2\in \Upsilon(I_f,I_g)$ and $w_3w_4\in \Upsilon(I_f,I_h)$ for some distinct $f,g,h$. Assume that both $\xi_f,\xi_g,\xi_h$ and $w_1,w_3,w_2,w_4$ occur on $\overrightarrow{C}$ in consecutive order. Put $$ Q=\xi_f\overrightarrow{C}w_1w_2\overrightarrow{C}w_4w_3\overrightarrow{C}\xi_gx_2\overleftarrow{P}x_1\xi_{h+1}\overrightarrow{C}\xi_f, $$ $$ |\xi_f\overrightarrow{C}w_1|=d_1, \ |w_1\overrightarrow{C}w_3|=d_2, \ |w_3\overrightarrow{C}\xi_{f+1}|=d_3, $$ $$ |\xi_g\overrightarrow{C}w_2|=d_4, \ |w_2\overrightarrow{C}\xi_{g+1}|=d_5, \ |\xi_h\overrightarrow{C}w_4|=d_6, \ |w_4\overrightarrow{C}\xi_{h+1}|=d_7. $$ Clearly, $|Q|=|C|-d_2-d_4-d_7+\overline{p}+4$. Since $|Q|\le |C|$, we have $d_2+d_4+d_7\ge\overline{p}+4$. If $d_3+d_6\ge\overline{p}+3$ and $d_1+d_5\ge\overline{p}+3$ then $\sum_{i=1}^7d_i=|I_f|+|I_g|+|I_h|\ge3\overline{p}+10$, contradicting Claim 1(a5). Otherwise, either $d_3+d_6\le\overline{p}+2$ or $d_1+d_5\le\overline{p}+2$, say $d_3+d_6\le\overline{p}+2$. Further, if either $d_7=1$ or $\xi_{h+1}^-w_3\in E$ then by Claim 2, $d_3\ge \overline{p}+2$, that is, $d_3+d_6\ge\overline{p}+3$, a contradiction. Hence, $d_7\ge2$ and $\xi_{h+1}^-w_3\not\in E$. By Claim 4, $\xi_{h+1}^-\xi_{f+1}, \xi_{h+1}^-\xi_{h}\not\in E$. If $|I_h|\ge \overline{p}+4$ then, taking into account that $|I_f|+|I_g|\ge2\overline{p}+6$ (by Claim 1(a1)), we get $|I_f|+|I_g|+|I_h|\ge3\overline{p}+10$, contradicting Claim 1(a5). Hence, $|I_h|\le\overline{p}+3$. By a symmetric argument, $|I_g|\le\overline{p}+3$.\\
\textbf{Case a1.1}. $N(\xi_{h+1}^-)\subseteq V(C)$.
If $\xi_{h+1}^-w_2\not\in E$ then, recalling that $\mu(\Upsilon)=2$, we get $$ N(\xi_{h+1}^-)\subseteq V(w_4\overrightarrow{C}\xi_{h+1}^{-2})\cup\{\xi_1,...,\xi_s\}\backslash\{\xi_{f+1},\xi_h\}, $$ implying that $|N(\xi_{h+1}^-)|\le\overline{p}+s-1=\delta-1$, a contradiction. Now let $\xi_{h+1}^-w_2\in E$. By Claim 1(a1 and a5), $|I_f|=|I_g|=|I_h|=\overline{p}+3$. Moreover, by Claim 2, $d_5=\overline{p}+2$ and $d_4=1$. Then, for the same reason, $d_1=\overline{p}+2$, implying that $|I_f|\ge\overline{p}+4$, a contradiction.\\
\textbf{Case a1.2}. $N(\xi_{h+1}^-)\not\subseteq V(C)$. We can argue as in the proof of Claim 6 (Case a1.2.1).\\
\textbf{Case a2}. $\mu(\Upsilon)\ge4$. By Claim 5, there are at least four pairwise crossing intermediate independent edges in $\Upsilon(I_1,...,I_s)$, which is impossible. Claim 7 is proved.\\
\textbf{Claim 8}. If $\mu(\Upsilon)=1$ then either $n\equiv1(mod\ 3)$ with $c\ge2\delta+2$ or $n\equiv1(mod\ 4)$ with $c\ge2\delta+3$ or $n\equiv2(mod\ 3)$ with $c\ge2\delta+3$. \textbf{Proof}. By Claim 6, $s\le3$ and either $\xi_a^+\xi_{b+1}^-\in E$ or $\xi_{a+1}^-\xi_b^+\in E$, say $\xi_{a+1}^-\xi_b^+\in E$. \\
\textbf{Case a1}. $s=2$. It follows that $\delta=\overline{p}+s=\overline{p}+2$. Let $a=1$ and $b=2$. By Claim 2, $|\xi_1\overrightarrow{C}\xi_2^-|\ge \overline{p}+2$ and $|\xi_2^+\overrightarrow{C}\xi_1|\ge \overline{p}+2$, implying that $|I_i|\ge \overline{p}+3$ $(i=1,2)$.\\
\textbf{Case a1.1}. $|I_1|=\overline{p}+4$ and $|I_2|=\overline{p}+3$. If $V(G)=V(C\cup P)$ then $n=3\overline{p}+8=3\delta+2\equiv2(mod\ 3)$ with $c=2\overline{p}+7=2\delta+3$, and we are done. Otherwise $N(v_1)\not\subseteq V(C\cup P)$ for some $v_1\in V(C\cup P)$. Observing that $x_1x_2\in E$ and recalling that $P$ is a longest path in $G\backslash C$, we conclude that $v_1\not\in V(P)$. Choose a longest path $Q=v_1\overrightarrow{Q}v_2$ with $V(Q)\cap V(C)=\{v_1\}$. Clearly, $1\le|Q|\le\overline{p}+1=\delta-1$ and $N(v_2)\subseteq V(C\cup Q)$.\\
\textbf{Case a1.1.1}. $v_1\in V(\xi_2^{+2}\overrightarrow{C}\xi_1^-)$. By Claim 1(a6), $N(v_2)\cap V(I_1^\ast)=\emptyset$, that is, $N(v_2)\subseteq V(I_1)\cup V(Q)$. Assume that $v_1$ is chosen so as to minimize $|v_1\overrightarrow{C}\xi_1|$, implying that $N(v_2)\cap V(v_1\overrightarrow{C}\xi_1^-)=\emptyset$. Clearly, $|v_1\overrightarrow{C}\xi_1|\le \overline{p}+1$. Then by Claim 4, $v_1\xi_2\not\in E$ and therefore, $v_2\xi_2\not\in E$, as well. \\
\textbf{Case a1.1.1.1}. $v_2\xi_1\in E$. It follows that $N(v_2)\subseteq V(Q)\cup V(\xi_2^+\overrightarrow{C}v_1^-)\cup \{\xi_1\}$. Since $C$ is extreme and $v_2\xi_1\in E$, we have $|v_1\overrightarrow{C}\xi_1|\ge |Q|+1$. If $N(v_2)\subseteq V(Q)\cup \{\xi_1\}$ then clearly $|Q|\ge\delta-1=\overline{p}+1$ and therefore, $|v_1\overrightarrow{C}\xi_1|\ge \overline{p}+2$. But then $|I_2|\ge\overline{p}+4$, a contradiction. Hence, $N(v_2)\not\subseteq V(Q)\cup \{\xi_1\}$, that is, $v_2y\in E$ for some $y\in V(\xi_2^+\overrightarrow{C}v_1^-)$. Assume that $y$ is chosen so as to minimize $|y\overrightarrow{C}v_1|$. Observing that $|y\overrightarrow{C}v_1|\ge |Q|+1$ and $\delta=|\xi_2^+\overrightarrow{C}\xi_1|\ge4$, we get $$ |\xi_2^+\overrightarrow{C}\xi_1|\ge2(|Q|+1)+2(\delta-|Q|-2)=2\delta-2\ge\delta+2=\overline{p}+4, $$ a contradiction. \\
\textbf{Case a1.1.1.2}. $v_2\xi_1\not\in E$. It follows that $N(v_2)\subseteq V(Q)\cup V(\xi_2^+\overrightarrow{C}v_1^-)$. If $N(v_2)\subseteq V(Q)$ then $|Q|\ge\delta=\overline{p}+2$, a contradiction.
Otherwise $v_2y\in E$ for some $y\in V(\xi_2^+\overrightarrow{C}v_1^-)$. Assume that $y$ is chosen so as to minimize $|y\overrightarrow{C}v_1|$. Since $|y\overrightarrow{C}v_1|\ge |Q|+1$, we have $$ |\xi_2^+\overrightarrow{C}v_1|\ge|Q|+1+2(\delta-|Q|-1)=2\delta-|Q|-1\ge\delta=\overline{p}+2. $$ But then $|I_2|\ge\overline{p}+4$, a contradiction.\\
\textbf{Case a1.1.2}. $v_1\in V(\xi_1^{+}\overrightarrow{C}\xi_2^{-3})$. By Claim 1(a6), $N(v_2)\cap V(I_2^\ast)=\emptyset$, that is, $N(v_2)\subseteq V(Q)\cup V(I_1)$. Assume that $v_1$ is chosen so as to minimize $|\xi_1\overrightarrow{C}v_1|$, implying that $N(v_2)\cap V(\xi_1^+\overrightarrow{C}v_1^-)=\emptyset$. Clearly, $|\xi_1\overrightarrow{C}v_1|\le \overline{p}+1$. Then by Claim 4, $v_1\xi_2\not\in E$ and therefore, $v_2\xi_2\not\in E$. \\
\textbf{Case a1.1.2.1}. $\xi_2^+\xi_2^{-2}\in E$. By Claim 3, $v_1\xi_2^-\not \in E$, implying that $v_2\xi_2^-\not\in E$.\\
\textbf{Case a1.1.2.1.1}. $v_2\xi_1\in E$. It follows that $N(v_2)\subseteq V(Q)\cup V(v_1\overrightarrow{C}\xi_2^{-2})\cup \{\xi_1\}$. Since $C$ is extreme and $v_2\xi_1\in E$, we have $|\xi_1\overrightarrow{C}v_1|\ge |Q|+1$. If $N(v_2)\subseteq V(Q)\cup \{\xi_1\}$ then $|Q|\ge \delta-1=\overline{p}+1$ and therefore, $|\xi_1\overrightarrow{C}v_1|\ge \overline{p}+2$. But then $|I_1|\ge \overline{p}+5$, a contradiction. Hence, $N(v_2)\not\subseteq V(Q)\cup \{\xi_1\}$, that is, $v_2y\in E$ for some $y\in V(v_1^+\overrightarrow{C}\xi_2^{-2})$. Assume that $y$ is chosen so as to minimize $|v_1\overrightarrow{C}y|$. Observing that $|v_1\overrightarrow{C}y|\ge|Q|+1$ and $\delta=|\xi_1\overrightarrow{C}\xi_2^{-2}|\ge4$, we get $$ |\xi_1\overrightarrow{C}\xi_2^{-2}|\ge 2(|Q|+1)+2(\delta-|Q|-2)=2\delta-2\ge\delta+2=\overline{p}+4, $$ a contradiction.\\
\textbf{Case a1.1.2.1.2}. $v_2\xi_1\not\in E$. It follows that $N(v_2)\subseteq V(Q)\cup V(v_1\overrightarrow{C}\xi_2^{-2})$. If $N(v_2)\subseteq V(Q)$ then $|Q|\ge \delta=\overline{p}+2$, a contradiction. Otherwise $v_2y\in E$ for some $y\in V(v_1^+\overrightarrow{C}\xi_2^{-2})$. By choosing $y$ so as to minimize $|v_1\overrightarrow{C}y|$, we get $$ |v_1\overrightarrow{C}\xi_2^{-2}|\ge |Q|+1+2(\delta-|Q|-1)=2\delta-|Q|-1\ge\delta=\overline{p}+2. $$ This yields $|I_1|\ge\overline{p}+5$, a contradiction.\\
\textbf{Case a1.1.2.2}. $\xi_2^+\xi_2^{-2}\not\in E$. If $v_2\xi_1\in E$ then as in Case a1.1.2.1.1, $|\xi_1\overrightarrow{C}\xi_2^-|\ge \overline{p}+4$, contradicting the fact that $|I_1|=\overline{p}+4$. Otherwise, as in Case a1.1.2.1.2, $|v_1\overrightarrow{C}\xi_2^-|\ge \overline{p}+2$. Since $|I_1|=\overline{p}+4$, we have $v_1=\xi_1^+$, $|Q|=\delta-1=\overline{p}+1$ and $y=\xi_2^-$. Moreover, we have $N(v_2)=(V(Q)\cup \{\xi_2^-\})\backslash \{v_2\}$. Further, let $v$ be an arbitrary vertex in $V(Q)\backslash \{v_1\}$. Put $Q^\prime=v_1\overrightarrow{Q}v^-v_2\overleftarrow{Q}v$. Since $Q^\prime$ is another longest path with $V(Q^\prime)\cap V(C)=\{v_1\}$, we can suppose that $N(v)=(V(Q)\cup \{\xi_2^-\})\backslash \{v\}$ for each $v\in V(Q)\backslash\{v_1\}$. Furthermore, if $\xi_1y\in E$ for some $y\in V(\xi_1^{+2}\overrightarrow{C}\xi_2^{-2})$ then $$ \xi_1x_1\overrightarrow{P}x_2\xi_2\xi_2^+\xi_2^-v_2\overleftarrow{Q}v_1\overrightarrow{C}y\xi_1 $$ is longer than $C$, a contradiction. Hence, $\xi_1y\not\in E$ for each $y\in V(\xi_1^{+2}\overrightarrow{C}\xi_2^{-2})$.
Analogously, if $y\xi_2\in E$ for some $y\in V(\xi_1^{+}\overrightarrow{C}\xi_2^{-2})$ then $$ \xi_1x_1\overrightarrow{P}x_2\xi_2y\overleftarrow{C}\xi_1^+\overrightarrow{Q}v_2\xi_2^-\xi_2^+\overrightarrow{C}\xi_1 $$ is longer than $C$, a contradiction. Hence, $y\xi_2\not\in E$ for each $y\in V(\xi_1^{+}\overrightarrow{C}\xi_2^{-2})$. But then $G\backslash \{\xi_1^+,\xi_2^-\}$ has at least three components, contradicting $\tau\ge1$.\\
\textbf{Case a1.1.3}. $v_1=\xi_2^{-2}$. By Claim 1(a6), $N(v_2)\subseteq V(I_1)$. If $v_2y\in E$ for some $y\in V(\xi_1^+\overrightarrow{C}v_1^{-})$ then we can argue as in Case a1.1.2. Hence, $N(v_2)\subseteq V(Q)\cup \{\xi_1,\xi_2\}$. If $v_2\xi_2\in E$ then $$ \xi_1x_1\overrightarrow{P}x_2\xi_2v_2\overleftarrow{Q}v_1\xi_2^-\xi_2^+\overrightarrow{C}\xi_1 $$ is longer than $C$, a contradiction. Then clearly, $v_2\xi_1\in E$ and $N(v_2)\subseteq V(Q)\cup \{\xi_1\}$. Furthermore, we have $|Q|\ge \delta-1$, implying that $|\xi_1\overrightarrow{C}v_1|\ge|Q|+1\ge\delta$. Since $|\xi_1\overrightarrow{C}v_1|=\delta$, we have $|Q|=\delta-1=\overline{p}+1$ and $N(v_2)=(V(Q)\cup \{\xi_1\})\backslash\{v_2\}$. Moreover, as in Case a1.1.2.2, we have $N(v)=(V(Q)\cup \{\xi_1\})\backslash\{v\}$ for each $v\in V(Q)\backslash \{v_1\}$. Now consider an arbitrary vertex $y\in V(\xi_1^+\overrightarrow{C}\xi_2^{-3})$. Clearly, $|\xi_1\overrightarrow{C}y|\le\overline{p}+1$. By Claim 2, $y\xi_2^+\not\in E$. Next, by Claim 4, $y\xi_2\not\in E$. Further, if $y\xi_2^-\in E$ then $$ \xi_1x_1\overrightarrow{P}x_2\xi_2\xi_2^+\xi_2^-y\overrightarrow{C}\xi_2^{-2}\overrightarrow{Q}v_2\xi_1 $$ is longer than $C$, a contradiction. Finally, since $\mu(\Upsilon)=1$, we have $yv\not\in E$ for each $v\in V(\xi_2^{+2}\overrightarrow{C}\xi_1^{-})$. But then $G\backslash \{\xi_1,\xi_2^{-2}\}$ has at least three components, contradicting $\tau\ge1$.\\
\textbf{Case a1.1.4}. $v_1=\xi_1$. If $v_2v_3\in E$ for some $v_3\in V(\xi_2^{+2}\overrightarrow{C}\xi_1^{-})\cup V(\xi_1^+\overrightarrow{C}\xi_2^{-2})$ then we can argue as in Cases a1.1.1-a1.1.3. Otherwise $v_2v_3\in E$ for some $v_3\in\{\xi_2^-,\xi_2^+,\xi_2\}$. If $v_3\in\{\xi_2,\xi_2^+\}$ then we can show, as in Case a1.1.3, that $G\backslash \{\xi_1,v_3\}$ has at least three components, contradicting $\tau\ge1$. Now let $v_3=\xi_2^-$. Consider an arbitrary vertex $v\in V(Q)\backslash \{v_1\}$. Since $C$ is extreme, we have $N(v)\cap \{\xi_2,\xi_2^+\}=\emptyset$. Next, if $vy\in E$ for some $y\in V(C)\backslash \{\xi_1,\xi_2,\xi_2^-,\xi_2^+\}$ then we can argue as in Cases a1.1.1-a1.1.3. Thus, we can assume that $N(v)\subseteq V(Q)\cup \{\xi_2^-\}$, implying that $|Q|\ge\delta-1=\overline{p}+1$. Let $w\in V(\xi_1^+\overrightarrow{C}\xi_2^{-3})$. Since $|\xi_1\overrightarrow{C}w|\le \overline{p}+1$, we have $w\xi_2^+\not\in E$ (by Claim 2) and $w\xi_2\not\in E$ (by Claim 4). Recalling also that $\mu(\Upsilon)=1$, we conclude that $N(w)\subseteq V(\xi_1\overrightarrow{C}\xi_2^{-})$. If $\xi_2^{-2}\xi_2,\xi_2^{-2}\xi_2^+\not\in E$ then clearly $G\backslash \{\xi_1,\xi_2^-\}$ has at least three components, contradicting $\tau\ge1$. Hence, either $\xi_2^{-2}\xi_2\in E$ or $\xi_2^{-2}\xi_2^+\in E$. \\
\textbf{Case a1.1.4.1}. $\xi_2^{-2}\xi_2\in E$. If $\xi_2^{-2}\xi_2^+\not\in E$ then $G\backslash \{\xi_1,\xi_2,\xi_2^-\}$ has at least four components, contradicting $\tau\ge1$. Hence, $\xi_2^{-2}\xi_2^+\in E$, that is, $\langle\xi_2,\xi_2^-,\xi_2^{-2},\xi_2^+\rangle$ is a complete graph.
If $V(G)=V(C\cup P\cup Q)$ then $n=4\delta+1\equiv 1 (mod\ 4)$ with $c=2\delta+3$, and we are done. Otherwise, as in previous cases, we can show that $\tau<1$, a contradiction.\\
\textbf{Case a1.1.4.2}. $\xi_2^{-2}\xi_2^+\in E$. If $\xi_2^{-2}\xi_2\not\in E$ then $G\backslash \{\xi_1,\xi_2^-,\xi_2^+\}$ has at least four components, contradicting $\tau\ge1$. Otherwise $\langle\xi_2,\xi_2^-,\xi_2^{-2},\xi_2^+\rangle$ is a complete graph and we can argue as in Case a1.1.4.1. \\
\textbf{Case a1.1.5}. $v_1\in \{\xi_2,\xi_2^-,\xi_2^+\}$. Since $C$ is extreme, we have $v_2\not\in \{\xi_2,\xi_2^-,\xi_2^+\}$ and therefore, we can argue as in Cases a1.1.1-a1.1.4. \\
\textbf{Case a1.2}. $|I_1|=|I_2|=\overline{p}+3$. We can show that $n=3\delta+1\equiv 1(mod\ 3)$ with $c=2\delta+2$, by arguing as in Case a1.1.\\
\textbf{Case a2}. $s=3$. By Claim 6, $|I_1|=|I_2|=|I_3|=\overline{p}+3=\delta$ and $\xi_2^-\xi_2^+\in E$. If $\delta\ge4$ then $c=3\delta\ge2\delta+4$, contradicting (1). Hence $\delta=3$ and therefore, $\overline{p}=0$. Put $$ C=\xi_1w_1w_2\xi_2w_3w_4\xi_3w_5w_6\xi_1, $$ where $w_2w_3\in E$. Using Claims 2-5, it is not hard to see that $$ N_C(w_1)=\{w_2,\xi_1,\xi_3\}, \ N_C(w_6)=\{w_5,\xi_1,\xi_3\}. $$ Analogous relations hold for $w_4,w_5$. If $V(G\backslash C)=\{x_1\}$ then $n=10\equiv1(mod\ 3)$ with $c=9=2\delta+3>2\delta+2$, and we are done. Otherwise $N(y)=\{v_1,v_2,v_3\}\subseteq V(C)$ for some $y\in V(G\backslash C)\backslash\{x_1\}$. Since $C$ is extreme, it is not hard to see that either $N(y)=\{w_2,\xi_1,\xi_3\}$ or $N(y)=\{w_3,\xi_1,\xi_3\}$ or $N(y)=\{\xi_1,\xi_2,\xi_3\}$. But then $G\backslash N(y)$ has at least four components, contradicting $\tau\ge1$. Claim 8 is proved.\\
\textbf{Claim 9}. If $\mu(\Upsilon)=3$ then $G$ is the Petersen graph, that is, $n=10\equiv1(mod\ 3)$ with $c\ge2\delta+2$. \textbf{Proof}. By Claim 5, $\Upsilon(I_1,...,I_s)$ contains three pairwise crossing intermediate independent edges $e_1,e_2,e_3$. Let $e_1=w_1w_2$, $e_2=w_3w_4$ and $e_3=w_5w_6$. If $w_1,w_3,w_5\in V(I_f^\ast)$ for some $f\in \{1,...,s\}$, then we can argue as in the proof of Claim 7. Otherwise we can assume w.l.o.g. that $w_1,w_3\in V(I_f^\ast)$, $w_2,w_5\in V(I_g^\ast)$ and $w_4,w_6\in V(I_h^\ast)$ for some distinct $f,g,h\in\{1,...,s\}$, where both $\xi_f,\xi_g,\xi_h$ and $w_1,w_3,w_5,w_2,w_4,w_6$ occur on $\overrightarrow{C}$ in consecutive order. By Claim 1(a1 and a5), $|I_f|=|I_g|=|I_h|=\overline{p}+3$ and $|I_i|=\overline{p}+2$ for each $i\in\{1,...,s\}\backslash\{f,g,h\}$. Put $$ |\xi_f\overrightarrow{C}w_1|=d_1, \ |w_1\overrightarrow{C}w_3|=d_2, \ |w_3\overrightarrow{C}\xi_{f+1}|=d_3, $$ $$ |\xi_g\overrightarrow{C}w_5|=d_4, \ |w_5\overrightarrow{C}w_2|=d_5, \ |w_2\overrightarrow{C}\xi_{g+1}|=d_6, $$ $$ |\xi_h\overrightarrow{C}w_4|=d_7, \ |w_4\overrightarrow{C}w_6|=d_8, \ |w_6\overrightarrow{C}\xi_{h+1}|=d_9. $$ If $d_3+d_7\ge\overline{p}+3$, $d_1+d_6\ge\overline{p}+3$ and $d_4+d_9\ge\overline{p}+3$ then clearly $|I_f|+|I_g|+|I_h|\ge3\overline{p}+12$, a contradiction. Otherwise we can assume w.l.o.g. that $d_3+d_7\le\overline{p}+2$. Further, if either $d_1\ge2$ or $d_9\ge2$ then we can argue as in the proof of Claim 7 (Case a1.1). Hence, we can assume that $d_1=d_9=1$. By Claim 2, $d_4=d_6=1$. For the same reason, using the fact that $d_1=d_6=1$, we get $d_3=d_7=1$. \\
\textbf{Case a1}. Either $\xi_{h+1}\not=\xi_f$ or $\xi_{f+1}\not=\xi_g$ or $\xi_{g+1}\not=\xi_h$. Assume w.l.o.g. that $\xi_{h+1}\not=\xi_f$, implying that $|I_{f-1}|=\overline{p}+2$.
By Claim 5, $\xi_f^-y\not\in E$ for each $y\in V(I_i^\ast)$ and $i\in\{1,...,s\}\backslash\{f-1\}$. Moreover, by Claim 4, $\xi_f^-y\not\in E$ for each $y\in\{\xi_{f+1},\xi_h\}$. If $N(\xi_f^-)\subseteq V(C)$ then $d(\xi_f^-)\le\delta-1$, a contradiction. Otherwise we can argue as in the proof of Claim 6 (Case a1.2.1). \\
\textbf{Case a2}. $\xi_{h+1}=\xi_f$, $\xi_{f+1}=\xi_g$, $\xi_{g+1}=\xi_h$. It follows that $s=3$. Assume w.l.o.g. that $f=1$, $g=2$ and $h=3$.\\
\textbf{Case a2.1}. Either $d_2\ge2$ or $d_5\ge2$ or $d_8\ge2$. Assume w.l.o.g. that $d_2\ge2$, that is, $w_1^+\not=w_3$. If $\overline{p}=0$ then $|I_1|=3$, implying that $d_2=1$, a contradiction. Let $\overline{p}\ge1$. By Claim 4, $w_1^+\xi_2,w_1^+\xi_3\not\in E$. If $N(w_1^+)\subseteq V(C)$ then by Claim 4, $N(w_1^+)\subseteq V(w_1^{+2}\overrightarrow{C}w_3)\cup \{\xi_1\}$. Since $|I_1|=\overline{p}+3$, we have $|w_1^+\overrightarrow{C}w_3|\le \overline{p}$. But then $d(w_1^+)\le\overline{p}+1=\delta-2$, a contradiction. If $N(w_1^+)\not\subseteq V(C)$ then we can argue as in the proof of Claim 6 (Case a1.2.1). \\
\textbf{Case a2.2}. $d_2=d_5=d_8=1$. It follows that $|I_i|=3$ $(i=1,2,3)$, that is, $\overline{p}=0$, $\delta=3$ and $c=9$. Clearly $\langle V(C)\cup \{x_1\}\rangle$ is the Petersen graph. If $V(G\backslash C)\not=\{x_1\}$ then it is not hard to see that $c\ge10$, a contradiction. Otherwise, $n=10\equiv 1(mod\ 3)$ with $c=9=2\delta+3>2\delta+2$. Claim 9 is proved.\\
Thus, the result follows from Claims 7, 8 and 9.\\
\textbf{Case 2}. $\overline{p}=\delta-1$. Clearly, $|N_C(x_i)|\ge1$ $(i=1,2)$.\\
\textbf{Case 2.1}. $x_1y_1, x_2y_2\in E$ for some distinct $y_1,y_2\in V(C)$. We distinguish three main subcases.\\
\textbf{Case 2.1.1}. There exists a path $Q=z\overrightarrow{Q}y$ with $z\in V(P)$, $y\in V(C)\backslash \{y_1,y_2\}$ and $V(Q)\cap V(C\cup P)=\{z,y\}$. Assume w.l.o.g. that $y\in V(y_1^+\overrightarrow{C}y_2^-)$. Since $C$ is extreme, we have $$ |y_1\overrightarrow{C}y|\ge|x_1\overrightarrow{P}z|+2, \ |y\overrightarrow{C}y_2|\ge|z\overrightarrow{P}x_2|+2, \ |y_2\overrightarrow{C}y_1|\ge\delta+1. $$ By summing, we get $|C|\ge2\delta+4$, contradicting (1).\\
\textbf{Case 2.1.2}. There exists a path $Q=z\overrightarrow{Q}y$ with $z\in V(y_1^+\overrightarrow{C}y_2^-)$, $y\in V(y_2^+\overrightarrow{C}y_1^-)$ and $V(Q)\cap V(C\cup P)=\{z,y\}$. By Claim 1(a1), $|C|\ge2\overline{p}+6=2\delta+4$, contradicting (1).\\
\textbf{Case 2.1.3}. $G\backslash \{y_1,y_2\}$ has at least three components. It follows that $\tau<1$, contradicting the hypothesis.\\
\textbf{Case 2.2}. $N_C(x_1)=N_C(x_2)=\{y\}$ for some $y\in V(C)$. It follows that $$ N(x_1)=(V(P)\cup\{y\})\backslash \{x_1\}, \ N(x_2)=(V(P)\cup\{y\})\backslash \{x_2\}. $$ Moreover, $x_1\overrightarrow{P}v^-x_2\overleftarrow{P}v$ is a longest path in $G\backslash C$ for each $v\in V(x_1^+\overrightarrow{P}x_2)$. Since $G$ is 2-connected, we have $wz\in E$ for some $w\in V(P)$ and $z\in V(C)\backslash\{y\}$. If $w=x_1$ then using the path $zx_1\overrightarrow{P}x_2y$, we can argue as in Case 2.1. Otherwise we can use the path $yx_1\overrightarrow{P}w^-x_2\overleftarrow{P}wz$.\\
\textbf{Case 3}. $\overline{p}\ge\delta$. \\
\textbf{Case 3.1}. $x_1y_1,x_2y_2\in E$ for some distinct $y_1,y_2\in V(C)$. Clearly, $|y_1\overrightarrow{C}y_2|\ge\delta+2$ and $|y_2\overrightarrow{C}y_1|\ge\delta+2$, which yields $|C|\ge2\delta+4$, contradicting (1).\\
\textbf{Case 3.2}. $N_C(x_1)=N_C(x_2)=\{y\}$ for some $y\in V(C)$.
Let $y_1,y_2,...,y_t$ be the elements of $N_P^+(x_2)$ occurring on $\overrightarrow{P}$ in consecutive order. Put $H=\langle V(y_1^-\overrightarrow{P}x_2)\rangle$ and $$ P_i=x_1\overrightarrow{P}y_i^-x_2\overleftarrow{P}y_i \ (i=1,...,t). $$ Since $P_i$ is a longest path in $G\backslash C$ for each $i\in\{1,...,t\}$, we can assume w.l.o.g. that $P$ is chosen so as to maximize $|V(H)|$. If $y_iz\in E$ for some $i\in\{1,...,t\}$ and $z\in V(C)\backslash\{y\}$, then we can argue as in Case 3.1. Otherwise $N(y_i)\subseteq V(H)\cup \{y\}$ $(i=1,...,t)$, that is, $|N_H(y_i)|\ge\delta-1$ $(i=1,...,t)$. By Lemma 3, for any two distinct $u,v\in V(H)$ there is a path in $H$ of length at least $\delta-1$ connecting $u$ and $v$. Since $G$ is 2-connected, $H$ and $C$ are connected by two vertex-disjoint paths. This means that there is a path $Q=z_1\overrightarrow{Q}z_2$ of length at least $\delta+1$ with $V(Q)\cap V(C)=\{z_1,z_2\}$. Further, we can argue as in Case 2.1. \\
\textbf{Case 3.3}. Either $N_C(x_1)=\emptyset$ or $N_C(x_2)=\emptyset$. Assume w.l.o.g. that $N_C(x_1)=\emptyset$. By arguing as in Case 3.2, we can find a path $Q=z_1\overrightarrow{Q}z_2$ of length at least $\delta+2$ with $V(Q)\cap V(C)=\{z_1,z_2\}$, and the result follows immediately. Theorem 1 is proved. \quad \quad \rule{7pt}{6pt}
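As an aside, the extremal configuration singled out in Case a2.2 of Claim 9 can be checked independently. The following short script is a verification sketch (not part of the proof): it confirms by exhaustive search that the Petersen graph is 3-regular on $n=10\equiv1(mod\ 3)$ vertices with circumference $c=9=2\delta+3$, as stated in Claims 8 and 9.
\begin{verbatim}
# Verification sketch for the extremal case of Claims 8-9: the Petersen
# graph has n = 10, delta = 3 and circumference c = 9 = 2*delta + 3.
edges = [(0,1),(1,2),(2,3),(3,4),(4,0),      # outer 5-cycle
         (0,5),(1,6),(2,7),(3,8),(4,9),      # spokes
         (5,7),(7,9),(9,6),(6,8),(8,5)]      # inner pentagram
adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

def longest_cycle_through(r):
    """Length of a longest cycle through r, by exhaustive DFS."""
    best = 0
    def dfs(path, seen):
        nonlocal best
        for u in adj[path[-1]]:
            if u == r and len(path) >= 3:
                best = max(best, len(path))
            elif u not in seen:
                seen.add(u); path.append(u)
                dfs(path, seen)
                path.pop(); seen.remove(u)
    dfs([r], {r})
    return best

assert all(len(adj[v]) == 3 for v in adj)          # delta = 3
# By vertex-transitivity, some longest cycle passes through vertex 0.
print("circumference:", longest_cycle_through(0))  # prints 9
\end{verbatim}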
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction to the Hierarchy Problem and Black Hole Formation at the LHC} The European Organization for Nuclear Research, CERN, seeks to study the Standard Model of particle physics, its problems, and possible solutions. Arguably the most outstanding and pressing problem that the Standard Model fails to address is the irreconcilability of general relativity and quantum mechanics. Quantum gravity is a theory that attempts to explain gravity according to the principles of quantum mechanics. The quantum effects of gravity are believed to become strong and apparent at the Planck scale, which, in four dimensions, is estimated to be around $1 \times 10^{19}$ GeV. Another prominent problem that the Standard Model fails to address is the hierarchy problem, the extreme discrepancy between the strengths of gravity and the weak force. Out of the four fundamental forces, the gravitational force is by far the weakest, ranking approximately 32 orders of magnitude weaker than the weak force, 38 orders of magnitude weaker than the strong force, and 36 orders of magnitude weaker than the electromagnetic force. As noted by Arkani-Hamed, Dimopoulos and Dvali, one way to account for the extreme discrepancy in gravity's relative strength, the hierarchy problem, is the existence of extra dimensions~\cite{ADDpheno, ADD}. Gravity, unlike the other fundamental forces, may propagate in more than four dimensions, thereby reducing its apparent strength. With more than four dimensions, the Planck scale may be lowered to the low TeV scale, the energy range accessible by the Large Hadron Collider in Geneva, Switzerland~\cite{ADD}. This allows for the possibility of black hole production. A black hole is predicted to be formed at the LHC if, as proposed by Thorne, the hoop conjecture is fulfilled by two partons from a proton-proton collision~\cite{Casadio, Thorne}. The classical hoop conjecture states that a black hole can be formed if an imploding object is compressed in all directions into a region with a circumference smaller than 2$\pi$ times the object's Schwarzschild radius~\cite{Thorne}. While LHC black hole event generators incorporate the classical hoop conjecture, a more precise and accurate description of the hoop conjecture for quantum black hole production at the LHC can be found in a study of the quantum hoop conjecture by Casadio, Micu, and Scardigli~\cite{Casadio}. The main types of black holes and signatures of TeV-scale gravity predicted to be formed at the LHC are semiclassical black holes, quantum black holes, and string balls. The latest results of the searches for each type of black hole formation follow. \section{Models Tested and 13 TeV Results} \subsection{Semiclassical Black Holes} Semiclassical black holes at the LHC are primarily modeled by ATLAS and CMS with the Charybdis2 and BlackMax generators using ADD-type extra dimensions. Two ATLAS semiclassical black hole searches have been performed with 13 TeV data: a multijet search and a lepton plus jets search. For the multijet search, the selection included at least three jets with a scalar sum of jet transverse momenta greater than 1 TeV. Using 13 TeV data with an integrated luminosity of 3.6 fb$^{-1}$, the ATLAS multijet search set exclusion limits for rotating black holes in 6 extra dimensions with minimum black hole masses of 9.0 TeV - 9.7 TeV~\cite{ATLASmultijet}.
The lepton plus jets search used a selection consisting of at least three objects: a leading lepton with $p_T > 100$ GeV and at least two other objects (jets or leptons) with $p_T > 100$ GeV. Additionally, the scalar sum of transverse momenta, $\sum p_T$, was required to be greater than 2 TeV or 3 TeV. The lepton plus jets search used data with an integrated luminosity of 3.2 fb$^{-1}$ to exclude rotating black holes in two extra dimensions with minimum black hole masses up to 7.8 TeV for $M_D$ = 2 TeV and with minimum black hole masses of 7.4 TeV for $M_D$ = 5 TeV~\cite{test}. CMS undertook a search for semiclassical black holes decaying to multijets, in which data with an integrated luminosity of 2.3 fb$^{-1}$ was used to exclude semiclassical black holes with masses as high as 9.5 TeV~\cite{CMSMultijets}. \subsection{Quantum Black Holes} The ATLAS experiment has set limits on the following QBH decay channels using 13 TeV data: QBH $\rightarrow$ dijet and QBH $\rightarrow$ photon + jet. The ATLAS dijet search used data with an integrated luminosity of 37 fb$^{-1}$ to set a 95 percent confidence level exclusion on quantum black holes up to masses of 8.9 TeV~\cite{ATLASdijet}. The ATLAS photon + jet search used data with an integrated luminosity of 36.7 fb$^{-1}$ to exclude QBH in RS-type extra dimensions below 4.4 TeV and QBH in ADD-type extra dimensions below 7.1 TeV~\cite{ATLASphoton}. The CMS experiment has used 13 TeV data with an integrated luminosity of 3 fb$^{-1}$ to set limits on QBH $\rightarrow$ dijet, with a lower mass limit of 7.8 TeV for 6-dimensional ADD-type QBH and of 5.3 TeV for 5-dimensional RS-type QBH. The CMS multijet search used 13 TeV data with an integrated luminosity of 2.3 fb$^{-1}$ to exclude quantum black hole masses as high as 9.0 TeV~\cite{CMSMultijets}. \subsection{String Balls} ATLAS searched for string balls in the multijet final state~\cite{ATLASmultijet}, using data with an integrated luminosity of 3.0 fb$^{-1}$ to exclude string balls with masses as high as 9 TeV. CMS used data with an integrated luminosity of 2.3 fb$^{-1}$ to exclude string balls with masses as high as 9.5 TeV~\cite{CMSMultijets}. \section{Results Interpretation} \subsection{Reasons why extra dimensions have not yet been discovered} ATLAS and CMS have used 13 TeV data from the 2015-2017 Run II campaign to search for semiclassical black holes, quantum black holes, and string balls. Based on the latest public results, no evidence of such models has been found and exclusion limits have been set. The absence of evidence for signatures of TeV-scale gravity at the LHC can be attributed to one of three reasons. It could be that extra dimensions do not exist and any subsequent search for signatures of extra dimensions will prove futile. However, it could also be that extra dimensions do exist, and that we have not discovered evidence of them because we need more energy to access them. Finally, and most excitingly, signatures of TeV-scale gravity may have already been produced at the LHC but remain hidden, either because they appear in signatures different from those used in searches or because they are produced at a very low rate and the signal is diluted. \subsection{The LHC's future plans and the best bet for TeV-scale gravity} The LHC has been producing collisions at 13 TeV since April 2015. However, at the end of 2018, the LHC plans to enter a long shutdown, at which point all data taking will stop. Collisions are scheduled to resume at the start of 2020 with energies of 14 TeV. This is the only planned energy increase in the set future of the LHC.
All other improvements scheduled until 2035 will concern luminosity, as part of the High Luminosity Large Hadron Collider (HL-LHC), the version of the LHC intended for luminosity upgrades up to 3000 fb$^{-1}$~\cite{HLLHC}. This poses a problem for the models of TeV-scale gravity already tested, because black hole production cross sections increase roughly exponentially with increases in energy, while event rates increase only linearly with increases in luminosity. Upon increasing the energy of the LHC from 8 TeV to 13 TeV, the 13 TeV to 8 TeV cross section ratio of quantum black holes with masses of 6 TeV becomes 9000. When increasing from 13 TeV to 14 TeV, the 14 TeV to 13 TeV cross section ratio of quantum black holes with masses of 9 TeV in 6 extra dimensions becomes 5.3. Larger increases in energy, therefore, would provide the best opportunity for discovery. However, increases in energy will not occur until after the long shutdown in the winter of 2018, after which the energy is set to increase to 14 TeV. There is perhaps a possibility to increase the energy further by replacing a magnet; however, no such plans have been set for the near future. Therefore, the best bet for finding new physics with black holes remains looking at different models yielding different signatures. Searches for semiclassical black holes, quantum black holes, and other signatures of TeV-scale gravity at the LHC have all targeted final states containing high transverse momenta. Noncommutative black holes offer a very different experimental signature from other potential signatures of TeV-scale gravity: many particles with a soft transverse momentum spectrum. \section{Noncommutative Black Holes} As described by Nicolini~\cite{Nicolini}, noncommutative black holes are predicted to form in models where noncommutative geometry is incorporated into the framework of extra dimensions. The defining feature of noncommutativity in black holes is that the spacetime coordinates $\hat{x}^A$ and $\hat{x}^B$ do not commute~\cite{Nicolini}, such that \begin{equation} \label{eq1} \left[ \hat{x}^A, \hat{x}^B \right] = i\theta^{AB} \equiv i \frac{\epsilon^{AB}}{\Lambda_\mathrm{NC}^2}\, , \end{equation} \noindent where, quoting directly from Gingrich~\cite{DGNC}, ``$\theta^{AB}$ is a real antisymmetric $D\times D$ matrix. For convenience, we have separated the mass scale $\Lambda_\mathrm{NC}$ associated with noncommutativity from the dimensionless matrix structure $\epsilon^{AB}$ of $\theta^{AB}$. If $\Lambda_\mathrm{NC}^{-2}$ is the average magnitude of the elements in $\theta^{AB}$, we assume the elements of $\epsilon^{AB}$ are of $\mathcal{O}(1)$.'' Noncommutativity invokes a smearing in the mass distribution~\cite{FNCBH} such that the matter density~\cite{TRNC}, $\rho$, becomes~\cite{DGNC} \begin{equation} \label{eq2} \rho = \frac{m}{(4\pi\theta)^{(n+3)/2}} e^{-r^2/(4\theta)}\, , \end{equation} \noindent where $\sqrt{\theta} = 1/\Lambda_\mathrm{NC}$. Noncommutative black holes are theoretically of interest due to the resulting nonsingular solution but, most importantly, because they form an effective theory of quantum gravity. Experimentally, this effective theory allows for a new mass threshold above the Planck scale and, thus, a new energy regime for physics beyond the Standard Model~\cite{DGNC}. Noncommutative black holes are cold and decay to a stable remnant~\cite{FNCBH,Nicolini}.
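To make the effect of the smearing concrete, the following minimal script, an illustrative sketch with assumed parameter values rather than a result from the cited works, evaluates the smeared density of eq.~(\ref{eq2}) in units where $\sqrt{\theta}=1/\Lambda_\mathrm{NC}$ is the unit of length.
\begin{verbatim}
import math

# Hedged sketch of the smeared matter density; the mass m (in units of
# Lambda_NC) and the number of extra dimensions n are assumed values.
def rho(r, m=10.0, n=2, theta=1.0):
    """Gaussian-smeared density replacing a point-like (delta) source."""
    norm = m / (4.0 * math.pi * theta) ** ((n + 3) / 2.0)
    return norm * math.exp(-r ** 2 / (4.0 * theta))

# Finite at the origin (no point-like singularity) and negligible once
# r exceeds a few times sqrt(theta):
for r in (0.0, 1.0, 3.0, 10.0):
    print("r = %4.1f sqrt(theta):  rho = %.3e" % (r, rho(r)))
\end{verbatim}
The density is finite at $r=0$ and falls off on the noncommutativity scale, which is the origin of the nonsingular behavior mentioned above.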
The opportunity to study stable remnants through noncommutative black hole studies is exciting because stable remnants appear in a variety of models, ranging from those of loop quantum gravity and string gravity to tunneling~\cite{DGNC}. \section{2010 Neutral Noncommutative Black Hole Study} A study undertaken in 2010 by Gingrich~\cite{DGNC} examined the phenomenological aspects of noncommutative black holes at the LHC. The cross sections and temperatures for noncommutative black holes, shown in Figure~1~\cite{DGNC}, are significantly lower than those of other signatures of TeV-scale gravity. The combined effect is that noncommutative black holes yield softer particles than semiclassical black holes, string balls, or quantum black holes. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figure3.eps} \caption{\label{fig:temperature} Black hole temperature vs. black hole mass ($M/M_{D}$). Solid lines: noncommutative black holes; dashed lines: commutative black holes. Figure taken from~\cite{DGNC}.} \end{figure} There is an average of 9 primary decay particles from the black hole and a maximum of 25. The average transverse momentum of the final state particles is 70 GeV. A generator cut-off was imposed at 100 MeV above the mass of the remnant due to technical complications regarding the generator's efficiency near the mass of the remnant~\cite{DGNC}. Without this cutoff, the primary decay particle multiplicity would be greater and the average transverse momentum of the final state particles would be even lower. The majority of events in the simulations of the study contain at least one jet, while about 45 percent have a leading lepton. Figure~2~\cite{DGNC} depicts the maximum transverse momentum spectra of the soft jets and of the soft leptons. This soft transverse momentum spectrum of the final state particles allows the noncommutative black hole signal to blend into the QCD background. Searches for semiclassical black holes with low masses impose a selection on the scalar sum of transverse momenta of greater than a few TeV. Due to the cold nature of noncommutative black holes, the scalar sum of transverse momenta can reach zero, which precludes including it in the selection~\cite{DGNC}. As for the characteristics of the noncommutative black hole's stable remnant, the remnant is slow, with the most likely speed being $0.3c$, and has an average transverse momentum of less than 230 GeV. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{figure7.eps} \caption{\label{fig:finalstate} Characteristics of the final state particles. Figure taken from~\cite{DGNC}.} \end{figure} \section{Strategies for a Charged Noncommutative Black Hole Search} The 2010 LHC noncommutative black hole study by Gingrich~\cite{DGNC} shows that noncommutative black holes offer the unique signature of particles with low transverse momentum. This was the only study of noncommutative black holes using LHC data, and it was stopped at trigger level due to the low transverse momentum spectrum of the final state particles being obscured by the QCD background. This study was undertaken for a neutral noncommutative black hole model. The addition of charge would further lower the transverse momentum spectrum, leading to a more dramatic signature. Therefore, a search for charged noncommutative black holes, in which the charge of the black hole is confined to the brane, is intended. The following is a discussion of possible search options and of ways to separate the low-$p_T$ signal from the QCD background; a toy illustration of two such discriminating variables is sketched below.
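The sketch below (with assumed toy jet momenta, not simulation output) computes two simple discriminating variables for a back-to-back dijet event and for an isotropic multijet event: the scalar sum of transverse momenta $H_T$ and a transverse sphericity built from the $2\times2$ transverse momentum tensor.
\begin{verbatim}
import numpy as np

def ht(jets):
    """Scalar sum of jet transverse momenta."""
    return sum(np.hypot(px, py) for px, py in jets)

def transverse_sphericity(jets):
    """2*lam_min/(lam_min + lam_max) of the 2x2 transverse momentum
    tensor: ~1 for isotropic events, ~0 for back-to-back dijets."""
    M = np.zeros((2, 2))
    for px, py in jets:
        M += np.outer((px, py), (px, py))
    lam = np.linalg.eigvalsh(M)   # eigenvalues in ascending order
    return 2.0 * lam[0] / (lam[0] + lam[1])

# assumed toy events: jet (px, py) in GeV
dijet = [(500.0, 0.0), (-500.0, 5.0)]
isotropic = [(100.0, 0.0), (-50.0, 86.6), (-50.0, -86.6)]
print(ht(dijet), transverse_sphericity(dijet))          # large HT, ~0
print(ht(isotropic), transverse_sphericity(isotropic))  # lower HT, ~1
\end{verbatim}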
A first step would be to look at the $H_T$ of each noncommutative black hole event and compare it to that of QCD. Thorough studies should be undertaken to compare the shape of the black hole radiation to that of QCD. It might be useful to check if the black hole thermal distribution matches the KNO distribution. It may be possible to extract the signal from the QCD background by treating the signal and background as multijets and studying the isotropy of the noncommutative black hole events versus that of the QCD background. Adding a W or Z to the selection could help reduce the background, but comes at the cost of suppressing the rate by a factor of $\alpha$. For charged noncommutative black holes, adding a photon to the selection could help separate the signal from the background. A search for multijets plus photons has not been done before, so this would have the added benefit of exploring a new final state. The study by Gingrich points out that another way to detect noncommutative black holes is to search for the remnant. If the remnant is charged, one can use the fact that the remnant is slow to search for charged, long-lived particles. Additionally, for a charged remnant, there should be a balancing charge. Perhaps this balancing charge looks like a charged lepton combination, which can be used as a selection requirement. \section{Conclusion} Based on the latest public results, no evidence of the hitherto tested models of quantum black holes, semiclassical black holes, or string balls has been found at the LHC using 13 TeV data. Noncommutative black holes offer a chance to use a different signature to search for TeV-scale gravity. A search for charged noncommutative black holes using the ATLAS detector is intended. \section*{Acknowledgments} The author wishes to thank Marco Sampaio for the crucial specification that, for a charged black hole model to be incorporated in Charybdis2, the charge of the black hole must be confined to the brane. Additionally, the author would like to thank Frank Krauss, Richard Keith Ellis, and Daniel Maitre for their keen insight on separating the low-$p_T$ signal from the QCD background, which was subsequently included above. \section*{References}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In spite of all the progress that has been achieved in the study of high-energy cosmic rays, their sources still remain largely unknown. It is believed that for energies up to at least that of the `knee' spectral steepening observed at $E_{\rm k}\simeq 4$~PeV \cite{ku59}, the cosmic rays (CRs) are predominantly of Galactic origin, possibly accelerated in supernova remnants or pulsars, while above the `ankle' spectral hardening observed at $E_{\rm a}\simeq 5$~EeV \cite{li63}, the CRs are expected to be predominantly of extragalactic origin, possibly accelerated in active galactic nuclei or gamma ray bursts. The precise location of the transition between Galactic and extragalactic CRs is a matter of debate. Some scenarios associate it with the `second-knee' steepening of the spectrum observed at $E_{\rm sk}\simeq 100$~PeV \cite{berg07}, which would correspond to the break associated with the steepening of the heavy Fe component of the Galactic CRs in models where the knee would be the break associated with the lighter H/He Galactic component \cite{ca02,an05}. Other scenarios relate it to the ankle feature, associating it with the energy at which a harder extragalactic population would be overtaking the more steeply falling Galactic one. Besides the spectral features, another important handle to understand the origin of the CRs is their composition, since changes in the average nuclear masses, as well as in the spread of their values, can provide clues about the source populations producing them. Indeed, the average composition is observed to become increasingly heavy from the knee up to the second-knee \cite{msu,yak,an05,ap13}, supporting scenarios in which the Galactic CRs get suppressed in a rigidity dependent way, so that the component of charge $eZ$ gets suppressed above an energy $ZE_{\rm k}$ \cite{pe61}. This suppression could either be due to an acceleration cutoff at the sources or, alternatively, be due to a more efficient diffusive escape from the Galaxy, since, both effects being of magnetic nature, they naturally depend on the particles' rigidities. The composition is observed to become lighter at EeV energies, suggesting the emergence of a new type of source population \cite{al14}, or possibly that a strong photodisintegration of heavy nuclei takes place at the sources, producing large amounts of secondary protons at energies of a few EeV \cite{un15}. According to the Pierre Auger Observatory data, above the ankle energy the CRs appear to become increasingly heavy \cite{augerxm}, which possibly indicates that a rigidity dependent suppression is also present at the highest energies. Another relevant result is that the spread in the CR masses appears to become quite narrow above the ankle, suggesting that the heavier species that dominate at the highest energies have to be strongly suppressed for decreasing energies, so as to avoid the simultaneous presence of light and heavy species at energies near $E_{\rm a}$. A final ingredient that should help to understand the CR origin is the anisotropy in the distribution of their arrival directions (for a recent review, see \cite{rpp}). In particular, near the knee energy a dipolar modulation in the equatorial component of the anisotropy has been observed by IceCube and IceTop \cite{ic12,ic16} that points close to the Galactic center direction, which is consistent with a predominant Galactic origin for the CRs at these energies.
At higher energies, and up to $\sim 1$~EeV, the equatorial dipolar phases remain not far from the right ascension of the Galactic center, although the dipolar amplitudes are not significant \cite{lsra19}. The restrictive upper bounds on the amplitudes, which are required to be below $\sim 1.5$\% in the range 1 to 4~EeV, combined with the observation that at these energies the composition is relatively light, disfavor a Galactic origin for this predominant light component, since if this were the case the anisotropy would be expected to be much larger \cite{lsl13}. At energies above 8~EeV, a significant dipolar anisotropy has been observed, pointing in the opposite hemisphere with respect to the Galactic center direction \cite{LSA17}, which is indicative of an extragalactic origin for the CRs at these energies. Moreover, some hints of more localized anisotropies, with hot spots on typical angular scales of 20$^\circ$ appearing at the highest energies, have been reported \cite{augersb,tahs} and, if confirmed, they may help to identify the first sources of ultrahigh-energy cosmic rays (UHECRs). An important observation is that the spectrum is dominated by heavy elements at the highest energies and that these elements are strongly suppressed for decreasing energies, so as to allow for the composition to become mostly light near the ankle energy. This can be interpreted as resulting from the emission of different mass components having a rigidity dependent cutoff at energies of a few $Z$~EeV, which suppresses the light components above the ankle energy. Below this cutoff, the components need to have a very hard source spectrum so as to allow for the abrupt emergence of the heavy components at the highest energies. In particular, assuming a power-law source differential spectrum $\Phi(E)\propto E^{-\gamma}$ for each of the mass components, a fit to the Auger Observatory data on the spectrum and composition allows one to determine $\gamma$ \cite{combfit}. The actual value of the common spectral index $\gamma$ turns out to depend on the hadronic model considered to describe the interactions in the atmosphere (as well as on other assumptions, such as the evolution of the sources or the extragalactic background light model). For instance, for the EPOS-LHC model values of $\gamma<1.3$ are obtained, while for Sibyll 2.1 or QGSJET II-04 even harder spectra, with $\gamma<-1.5$, turn out to be preferred. These small required values are however in tension with the expectations from the CR diffusive shock acceleration scenarios, which typically predict that $\gamma\simeq 2$ to 2.4 (for a review see \cite{longair}). An alternative scenario was proposed in \cite{difu1}, where it was suggested that the hard spectrum that has been inferred for the heavier mass components above a few EeV could be a consequence of the effects of the propagation of the CRs through the intervening extragalactic magnetic fields. In particular, if the closest sources are at distances larger than a few tens of Mpc, as the energy decreases below $Z$~EeV the propagation time of the diffusing CRs can become longer than the lifetime of the sources, and the CRs reaching the Earth would then be suppressed for decreasing energies due to the so-called magnetic horizon effect. For this suppression to be significant, the strength of the magnetic fields should be sizable ($B\gg {\rm nG}$) and their coherence length should preferentially not be too large ($l_{\rm c}\ll{\rm Mpc}$).
We note that the properties of the extragalactic magnetic fields are poorly known, being constrained indirectly from observed Faraday rotation measures of polarized sources, synchrotron emission, etc. \cite{fe2012}, or being estimated alternatively from simulations of structure formation that include seed magnetic fields, from which a broad range of predictions is obtained \cite{dolag05,enzo17} (see \cite{widrow02,vallee04,vallee11} for reviews). Note that the presence of the Galactic magnetic field is not expected to affect significantly the spectrum and composition of the extragalactic flux component, and we will hence ignore it. In this work we consider a scenario that can account for the main features of the spectrum and composition measurements at all energies down to 100~PeV. It consists of two main extragalactic source populations contributing to the UHECRs, and a Galactic component which progressively fades away above 100~PeV and already contributes less than $\sim 10$\% to the CR flux at 1~EeV. The extragalactic populations are considered to arise from the superposition of five representative nuclear components at the sources: $i={\rm H}$, He, N, Si and Fe. They are assumed to originate from continuously emitting sources with power-law CR spectra, $\Phi_i\propto f_i E^{-\gamma}$, with $f_i$ the fractional contribution to the spectrum at a given energy arising from the nuclei of type $i$. The spectrum of the CRs reaching the Earth is obtained taking into account propagation effects, due both to interactions with the radiation backgrounds and to magnetic deflections in the intervening extragalactic magnetic fields. For simplicity, we model the effects of a source acceleration cutoff directly by introducing a rigidity dependent exponential suppression in the fluxes reaching the Earth. The first extragalactic population consists mostly of light nuclei (H, He and N) with a steeply falling source spectrum, with $\gamma\simeq 3.5$, having a relatively large density of sources so as to lead to a typical intersource separation smaller than 10~Mpc (as is the case, for instance, for normal galaxies, starburst galaxies or Seyfert active galaxies). This population will dominate the CR flux in the range 0.1 to 2~EeV. The second extragalactic population has instead a smaller source density (as could be the case, for instance, for powerful radiogalaxies, blazars or galaxy clusters), so that the larger intersource separation leads, through a magnetic horizon effect caused by the CR deflections in the intergalactic magnetic fields, to a significant suppression of its flux for energies smaller than $\sim Z$~EeV, as in the scenario suggested in \cite{difu1}. This population has significant amounts of heavier elements (He, N, Si and Fe), which also lead to large numbers of secondary protons through their photodisintegration, and it dominates the CR flux above a few EeV. A somewhat similar two component scenario, but in which the high-energy CR flux originated from one (or a few) nearby powerful extragalactic sources emitting since relatively recent times, so that the magnetic horizon suppression could be sizable in spite of the relatively closer distance to the sources, was proposed in \cite{mr19}. In the discussion of the present scenario, which includes instead sources emitting since very early times, we will consider different models for the cosmological evolution of the CR luminosities of the extragalactic populations.
\section{Model for the cosmic ray fluxes} The total differential flux of cosmic rays with energies above 0.1~EeV will be modelled with contributions coming from a Galactic population, $\Phi^{\rm G}$, and the two mentioned extragalactic populations: $\Phi^{{\rm XG}l}$, which dominates at low energies (between 0.1 and a few EeV), and $\Phi^{{\rm XG}h}$, which dominates at high energies (above a few EeV), so that \begin{equation} \Phi^{\rm tot} (E) = \Phi^{\rm G} (E) + \Phi^{{\rm XG}l} (E) + \Phi^{{\rm XG}h} (E). \label{eq:phitot} \end{equation} The Galactic population is modelled, following \cite{mr19gal}, as a superposition of five nuclear components with relative fractions consistent with the direct measurements performed at $\sim 100$~TeV, and with rigidity-dependent broken power laws with a high-energy exponential cutoff, with parameters determined from a fit to spectrum and composition data obtained between 1~PeV and 1~EeV. Since we are mostly interested in the extragalactic populations present at energies above 100~PeV, we keep the Galactic spectrum fixed in the analysis. Each one of the extragalactic populations is modelled with five mass groups plus the secondary nucleons that are produced during the propagation as a consequence of the interactions with the radiation backgrounds, \begin{equation} \Phi^{{\rm XG}I} (E)= \sum_i \Phi^{ I}_i (E) + \Phi^{ I}_{\rm sp} (E), \label{eq:phixg} \end{equation} where the sum runs over $i$ = H, He, N, Si and Fe, for ${I} = l,h$. The source flux for each one of the mass group representative elements of the low or high extragalactic populations will be modelled as a power-law spectrum with spectral index $\gamma_{ I}$ up to a rigidity-dependent energy at which the acceleration at the sources is cut off, leading to an effective exponential suppression of the fluxes observed at the Earth above energies $Z_iE^{ I}_{\rm cut}$. The effects of the interactions with the radiation backgrounds are taken into account by introducing a modification factor $\eta^i (E)$, defined as the ratio between the spectrum from a continuous distribution of sources obtained including the attenuation effects and the spectrum that would have been expected from the same sources in the absence of interactions \cite{dip}. The attenuation factors have been found to be quite insensitive to the source spectral index considered, although they depend on the cosmological evolution adopted for the luminosity of the sources. We will consider two representative cases of source evolution: a constant luminosity up to $z_{\rm max}=1$ (no evolution, NE) and a luminosity proportional to the star formation rate (SFR), for which we adopt the parametrization from \cite{ho06}, in which the source intensity evolves as $(1+z)^{3.44}$ up to redshift 0.97, then as $(1+z)^{-0.26}$ up to redshift 4.48, and falls as $(1+z)^{-7.8}$ at larger redshifts. These two illustrative cases bracket a wide range of plausible source evolution scenarios. We parametrize the attenuation factors for each of the mass groups considered following the approach of \cite{mr19}, and the parametrizations used are reported in the Appendix.
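For concreteness, the adopted luminosity evolutions can be encoded as follows; this is a minimal Python sketch (the function name is ours), with the SFR normalization constants fixed by requiring continuity at the break redshifts:
\begin{verbatim}
# Relative source luminosity evolution, normalized to L(z=0)=1.
# NE: constant luminosity up to z_max = 1.
# SFR: broken power law, with the pieces matched at the break
# redshifts z1 = 0.97 and z2 = 4.48 so that L(z) is continuous.
def luminosity_evolution(z, model="SFR"):
    if model == "NE":
        return 1.0 if z <= 1.0 else 0.0
    z1, z2 = 0.97, 4.48
    c1 = (1.0 + z1)**(3.44 + 0.26)        # continuity at z1
    c2 = c1 * (1.0 + z2)**(-0.26 + 7.8)   # continuity at z2
    if z <= z1:
        return (1.0 + z)**3.44
    if z <= z2:
        return c1 * (1.0 + z)**(-0.26)
    return c2 * (1.0 + z)**(-7.8)
\end{verbatim}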
Neglecting the possible effects associated with magnetic deflections and finite source distances, one then has \begin{equation} \Phi^{ I}_i (E) = \Phi^{ I}_0 f^{ I}_i\left(\frac{E}{\rm EeV}\right)^{-\gamma_{ I}} \eta^i(E)\frac{1}{\cosh(E/Z_iE^{ I}_{\rm cut})}, \label{phid} \end{equation} where the different fractions are defined at low enough energies such that the attenuation effects are negligible, and they satisfy $f^{ I}_{\rm H}+f^{ I}_{\rm He}+f^{ I}_{\rm N}+f^{ I}_{\rm Si}+f^{ I}_{\rm Fe}=1$ (equivalently, they can be considered as being the fractions in the source flux at an energy smaller than the H acceleration cutoff). Note that the $\cosh^{-1}$ function allows one to smoothly match the exponential suppression of the flux at energies higher than $ZE_{\rm cut}$ with the spectrum present at lower energies. The secondary protons arise from the fragmentation of the different nuclei during propagation, and the resulting flux depends on the mass number, spectral index and source evolution of the component considered. They can be parametrized following the results of \cite{mr19}, and the parametrizations used are also reported in the Appendix. The finite distance to the closest sources affects the attenuation of the high-energy population at the highest energies, and we include this effect by directly computing the expected attenuation for any adopted intersource separation, although in the scenarios considered it is actually the source cutoff that provides the dominant attenuation effect at the highest energies. The combination of the finite source distance and the presence of intergalactic magnetic fields also determines the attenuation of the spectrum of the high-energy population for decreasing rigidities, as we now discuss. \section{The magnetic horizon effect} One crucial ingredient for the high-energy population of the present scenario is the spectral suppression appearing for decreasing energies as a consequence of the magnetic horizon effect \cite{le04,be06,gl07}. This suppression results from the combination of the relatively large intersource separation of this component and the diffusive propagation through the intergalactic magnetic fields, which implies that, even for the closest sources, it may take longer than the age of the source for the low-energy CRs to reach the Earth. For the simple model of an isotropic turbulent magnetic field, characterized by an RMS strength $B$ and coherence length $l _{\rm c}$, the suppression can be accurately described through the analytic procedure developed in ref.~\cite{difu1}.\footnote{A fit to the Auger data above 5~EeV using non-uniform extragalactic magnetic field configurations was performed in \cite{wi18}.} To obtain the suppression we compute, using the analytic solution developed by Berezinsky and Gazizov \cite{be06,be07} describing the diffusion of CRs in an expanding Universe, the spectrum of protons resulting from a distribution of sources with a given density, as well as that for a continuous distribution of sources, and obtain the ratio between them, which we call $G(E)$. Note that according to the propagation theorem \cite{al04}, the total CR flux in the limit of a continuous distribution of sources should be the same as that obtained ignoring magnetic field effects. Then, the knowledge of the magnetic suppression factor $G(E)$ allows one to account for the effects of the magnetic horizon just by multiplying the spectrum obtained in the absence of magnetic fields by $G(E)$.
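To make the model concrete, eq.~(\ref{phid}) can be implemented directly. The following minimal Python sketch (function and argument names are ours) takes the attenuation factor $\eta$ as a user-supplied function, and also accepts the magnetic suppression factor $G$ just introduced, which multiplies the flux of the high-energy population and defaults to unity when the magnetic horizon effect is neglected:
\begin{verbatim}
import math

# Differential flux at Earth for one mass component (energies in EeV):
# power law x attenuation x cosh^-1 rigidity-dependent cutoff (see text),
# times an optional magnetic suppression factor G(E).
def flux_component(E, phi0, f_i, gamma, Z_i, E_cut, eta, G=lambda E: 1.0):
    return (phi0 * f_i * E**(-gamma) * eta(E) * G(E)
            / math.cosh(E / (Z_i * E_cut)))
\end{verbatim}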
The suppression depends on the average distance between sources, $d_{\rm s}$, and on the coherence length, $l_{\rm c}$, through the combination \begin{equation} X _{\rm s}\equiv \frac{d _{\rm s}}{\sqrt{R _{\rm H} l _{\rm c}}}\simeq \frac{d _{\rm s}}{65\ \rm Mpc}\sqrt{\frac{\rm Mpc}{l _{\rm c}}}, \label{xs.eq} \end{equation} where the Hubble radius is $R_{\rm H}\equiv c/H_0\simeq 4.3$~Gpc. The average separation between the UHECR sources, $d _{\rm s}$, is related to their density $n _{\rm s}$ through $d _{\rm s}\simeq n _{\rm s}^{-1/3}$. For example, $d _{\rm s}\simeq 10$~Mpc for $n _{\rm s}=10^{-3}$~Mpc$^{-3}$ while $d _{\rm s}\simeq 100$~Mpc for $n _{\rm s}=10^{-6}$~Mpc$^{-3}$, which spans most of the range of UHECR source densities usually considered. The magnetic suppression is computed considering a distribution of radial distances to the CR sources that follows the average distances to the closest sources in a homogeneous distribution \cite{difu1} (in particular, the closest source lies in this case at a distance $r_1\simeq 0.55d _{\rm s}$). The magnetic suppression depends on the magnetic field amplitude through the critical energy $E _{\rm c}$, defined as the energy for which the effective Larmor radius, given by $r _{\rm L}= E/ZeB \simeq 1.1 \,(E/{\rm EeV}) / (ZB_{{\rm nG}}) \ {\rm Mpc}$, is equal to the coherence length (with $B_{{\rm nG}}\equiv B/{\rm nG}$). Then, requiring that $r _{\rm L}(E _{\rm c})=l _{\rm c}$ one finds that $E _{\rm c}\simeq 0.9 Z B_{{\rm nG}}(l _{\rm c}/{\rm Mpc})$~EeV. The analytic solution from \cite{be06} is a function of the diffusion length, which for a turbulent magnetic field with a Kolmogorov spectrum can be accurately parametrized as \cite{difu2} \begin{equation} l_{\rm D}(E) = l _{\rm c}\left[ 4\left(\frac{E}{E _{\rm c}}\right)^2 + 0.9\left(\frac{E}{E _{\rm c}}\right) + 0.23\left(\frac{E}{E _{\rm c}}\right)^{1/3}\right]. \label{geldnew.eq} \end{equation} Note that the diffusion length is the typical distance after which a charged particle is deflected by about 1~rad. The magnetic suppression turns out to also depend on the evolution of the luminosity of the sources with redshift. In Figure~\ref{fig:zmev} we show with points the suppression factor $G$ obtained as a function of $E/E _{\rm c}$, for the two models of source evolution (NE and SFR) and for four different values of the mean source separation, corresponding to $X _{\rm s} = 0.3, 1, 2$ and 5. The results in the plots for the SFR case actually include sources only up to a maximum redshift of four, since the contribution from sources farther away is negligible. The magnetic suppression is stronger for larger intersource distance $d _{\rm s}$ (larger $X _{\rm s}$, lower density), as expected. The suppression is weaker in the SFR evolution case, since the particles travelling for longer times, and thus reaching us from farther away, get more weight in this case. The suppression also has a slight dependence on the spectral index $\gamma$, and we display the results for $\gamma =1,~2$ and 3. \begin{figure}[h] \centering \includegraphics[scale=0.48,angle=270]{noevg123.eps} \includegraphics[scale=0.48,angle=270]{sfrg123.eps} \caption{Suppression factor $G(E/E _{\rm c})$ for different source evolution models, spectral index $\gamma$ and $X _{\rm s}$ parameter.
The points are the results of the numerical computation while the lines correspond to the fits obtained using eq.~(\ref{gfit.eq}).} \label{fig:zmev} \end{figure} A good fit to the suppression factor can be obtained through the expression \begin{equation} G(x)=\exp\left[-\left(\frac{a\,X _{\rm s}}{x+b (x/a)^\beta}\right)^\alpha\right], \label{gfit.eq} \end{equation} with $x=E/E _{\rm c}$. This expression slightly improves upon the one adopted in ref.~\cite{difu1}, where a less accurate expression for $l_{\rm D}$ was used. The results of the fits, obtained using the parameters reported in Table~\ref{tab:gpar}, are shown as lines in Figure \ref{fig:zmev}. These fits are quite accurate for the different cases of source evolution and densities studied, and hence we will use them in the combined fit of the spectrum and composition data, since they allow one to consider different magnetic field parameters and source models without the need to perform new computations for each case. \begin{table}[ht] \centering \begin{tabular}{c c c c c} \hline\hline Evolution & $a(\gamma$) & $b(\gamma$) & $\alpha$($\gamma$)& $\beta$($\gamma$)\\ \hline NE & 0.206+0.026\ $\gamma$ & 0.146+0.004\ $\gamma$ & 1.83 - 0.08\ $\gamma$ & 0.13 \\ SFR &0.135+0.040$\ \gamma$ & 0.254+0.040\ $\gamma$ & 2.03 - 0.11\ $\gamma$ & 0.29 \\ \hline \end{tabular} \caption{Parameters of the fit to the suppression factor $G(E/E _{\rm c})$ for the two models of source evolution, as a function of the source spectral index $\gamma$.} \label{tab:gpar} \end{table} We note that the magnetic suppression factor $G$ was obtained ignoring interactions during propagation, just keeping redshift effects, since this suppression is relevant only at energies smaller than about $Z$~EeV, while the interactions are relevant mostly at higher energies. If one were to consider values of the parameters for which the magnetic suppression would appear at higher energies, the interactions could in principle also affect the magnetic suppression, as discussed in \cite{difu1}. \section{The two extragalactic source population scenarios} In this section we obtain the main features of the two extragalactic populations, as well as of the intergalactic magnetic fields, which are required for them to lead to predictions in reasonable agreement with the observed CR spectrum and composition. We consider the measurements performed by the Pierre Auger Observatory at energies above 0.1~EeV. Data from other experiments exist in this energy range, but we do not include them since they rely on significantly smaller numbers of events and hence should not significantly affect the results obtained. Moreover, different datasets are affected by different systematic uncertainties, such as those related to the different energy calibrations of each experiment, and this would further complicate a combined analysis. For the Galactic CRs, we will adopt the fluxes already derived in \cite{mr19gal} in a fit including lower-energy data (we just rescale the energy parameters of the fit in \cite{mr19gal}, which relied on the energy scale of the Telescope Array experiment, to the energy scale of the Auger experiment). We will fit the parameters describing the two extragalactic populations to the Auger spectrum data above 0.1~EeV from \cite{ve19,co19} as well as to the composition data obtained for $E\geq 0.16$~EeV in \cite{augerxm}.
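In the combined fit described next, the magnetic suppression enters through this fitted form. For reference, a minimal Python sketch of eq.~(\ref{gfit.eq}) with the parameters of Table~\ref{tab:gpar}, together with the relations defining $E _{\rm c}$ and $X _{\rm s}$ (function names are ours):
\begin{verbatim}
import math

# Fit parameters of Table 1: a(gamma), b(gamma), alpha(gamma), beta.
PARS = {
  "NE":  lambda g: (0.206 + 0.026*g, 0.146 + 0.004*g, 1.83 - 0.08*g, 0.13),
  "SFR": lambda g: (0.135 + 0.040*g, 0.254 + 0.040*g, 2.03 - 0.11*g, 0.29),
}

def suppression_G(E, Ec, Xs, gamma=2.0, evolution="NE"):
    a, b, alpha, beta = PARS[evolution](gamma)
    x = E / Ec
    return math.exp(-((a * Xs) / (x + b * (x / a)**beta))**alpha)

def critical_energy(Z, B_nG, lc_Mpc):   # E_c in EeV, from r_L(E_c) = l_c
    return 0.9 * Z * B_nG * lc_Mpc

def Xs_parameter(ds_Mpc, lc_Mpc):       # X_s = d_s / sqrt(R_H l_c)
    return (ds_Mpc / 65.0) / math.sqrt(lc_Mpc)
\end{verbatim}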
The composition data of \cite{augerxm} include the derived values of the average logarithm of the mass number of the CRs, $\langle {\rm ln} A\rangle$, and its variance, $ \sigma^2({\rm ln} A)$. These are obtained from the measurements of the depth of maximum development of the air showers, $X_{\rm max}$, performed with the fluorescence detectors. The relation between $\langle X_{\rm max}\rangle$ and $\langle {\rm ln}A\rangle$ depends on the hadronic model considered to simulate the CR interactions in the atmosphere and, for definiteness, we adopt in our analysis the results based on the Sibyll~2.3c model \cite{sibyll}, which leads to an inferred composition slightly heavier than that based on the EPOS-LHC model \cite{epos}. For consistency, the Galactic population that we adopt is also the one obtained using the Sibyll~2.3c hadronic model in \cite{mr19}, and we consider the scenario including a high-energy cutoff for this population. \begin{figure}[h] \centering \includegraphics[scale=.72,angle=0]{sp_sib_ne_ne-2-75.eps} \includegraphics[scale=.72,angle=0]{sp_sib_sfr_ne-2-75.eps} \includegraphics[scale=.72,angle=0]{sp_sib_ne_sfr-2-75.eps} \includegraphics[scale=.72,angle=0]{sp_sib_sfr_sfr-2-75.eps} \caption{Spectrum and composition for different assumptions on the cosmological evolution of the luminosity of the two extragalactic populations, adopting $E _{\rm c}=2$~EeV and $d _{\rm s}^h=75$~Mpc. We show separately the contributions to the spectrum from the different mass groups of the low (continuous lines) and high (short dashes) extragalactic populations, as well as their total contributions in black. The total contribution of the secondary protons from both populations is indicated with blue dot-dashed lines, the Galactic contribution with black long-dashed lines and the total spectrum is displayed as the violet continuous line.} \label{fig:fits} \end{figure} In Fig.~\ref{fig:fits} we display the results obtained for the spectrum, $\langle {\rm ln}A\rangle$ and $\sigma^2({\rm ln} A)$, making different assumptions for the cosmological evolution of the luminosity of the two extragalactic source populations (either with no evolution, NE, or assuming an evolution that follows the star formation rate, SFR). We adopted in the plots a critical energy $E _{\rm c}=2$~EeV to characterize the effects of the extragalactic magnetic field, and an intersource separation for the high-energy population $d _{\rm s}^h=75$~Mpc to evaluate the attenuation at the highest energies. In Table~\ref{tab:fitpars} we report the values of the different parameters that are obtained in each case through the minimization of the $\chi^2$ function constructed from the statistical uncertainties of the different measurements. One can see from the figure that the overall agreement of the models with the data points is quite good over the whole energy range considered. In the spectrum plot we show separately the contribution of the different mass components for each extragalactic population. One should keep in mind that, for instance, the component labelled as Si includes all the leading nuclear fragments arriving at the Earth that were produced in the photodisintegration of the nuclei emitted as Si at the source, while the secondary protons resulting from the interactions of all nuclear species are displayed separately. The lowest-energy bump in the flux of secondary protons is mostly due to the low-energy extragalactic population, while the larger bump appearing at higher energies is mostly due to the high-energy extragalactic population.
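Schematically, the $\chi^2$ that is minimized combines the spectrum, $\langle {\rm ln}A\rangle$ and $\sigma^2({\rm ln}A)$ measurements with their statistical uncertainties; a minimal sketch, assuming independent Gaussian errors (names are ours):
\begin{verbatim}
# model: dict of callables (one per observable); datasets: dict of lists
# of (E, measured value, statistical error) for the same observables.
def chi2(model, datasets):
    return sum(((model[obs](E) - val) / err)**2
               for obs, data in datasets.items()
               for E, val, err in data)
\end{verbatim}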
\begin{table}[t] \centering \caption{Parameters obtained in the fit adopting $E _{\rm c}=2$~EeV and $d _{\rm s}^h=75$~Mpc. The first column indicates the evolutions assumed for the low and high-energy extragalactic populations respectively. } \bigskip \begin{tabular}{c c c c c c c c c c} \hline\hline Evolution & $\gamma _{\rm l}$ &$E_{\rm cut}^l$ [EeV] & $X _{\rm s}^l$ & $f _{\rm H}^l$ & $f_{\rm He}^l$ & $f_{\rm N}^l$ & $f_{\rm Si}^l$& $f_{\rm Fe}^l$ & $\phi_0^l$ [1/km$^2$\,yr\,sr\,EeV]\\ \hline NE-NE & 3.5 & 0.44& 0.63 & 0.13 & 0.63 & 0.24 & 0 & 0 & 101 \\ SFR-NE & 3.4 & 100 & 0.79 & 0.19 & 0.51 & 0.30 & 0 & 0 & 77 \\ NE-SFR & 3.5 & 0.30 & 0.70 & 0.20 & 0.40 & 0.40 & 0 & 0 & 140 \\ SFR-SFR & 3.5 & 1.2 & 0.95 & 0.17 & 0.41 & 0.26 & 0.08 & 0.08 & 93 \\ \hline\hline Evolution & $\gamma _{\rm h}$ &$E_{\rm cut}^h$ [EeV] & $X _{\rm s}^h$ & $f _{\rm H}^h$ & $f_{\rm He}^h$ & $f_{\rm N}^h$ & $f_{\rm Si}^h$& $f_{\rm Fe}^h$ & $\phi_0^h$ [1/km$^2$\,yr\,sr\,EeV] \\ \hline NE-NE & 2.0 & 1.6& 3.6 & 0 & 0.52 & 0.31 & 0.07 & 0.10 & 196 \\ SFR-NE & 2.0 & 1.4 & 3.7 & 0 & 0.52 & 0.30 & 0.07 & 0.11 & 221 \\ NE-SFR & 2.4 & 5.3 & 5.2 & 0 & 0.01 & 0.55 & 0.15 & 0.29 & 873 \\ SFR-SFR & 2.4 & 5.0 & 5.5 & 0 & 0.03 & 0.52 & 0.20 & 0.25 & 1000 \\ \end{tabular} \label{tab:fitpars} \end{table} There are several salient features which are common to all the different scenarios. In particular, between 0.1 and $\sim 2$~EeV the spectrum is dominated by the light component (H, He and N) of the low-energy population, and this population has negligible contributions from heavier elements.\footnote{Given that the Si and Fe components of the low-energy population cannot be reliably constrained separately, we just considered in the fits equal fractions for both of them.} The lack of heavy elements in this component helps to reduce the spread in mass values, leading to a good agreement with the observed variance of ${\rm ln}A$. In the energy range between 1 and 5~EeV, the main contributions are from the N component of the low-energy population as well as from a significant amount of secondary protons of the high-energy population. Above the ankle energy, the main contributions are those from the N, Si and Fe components of the high-energy population, with the larger masses progressively dominating for increasing energies.\footnote{Note that the average CR masses that are predicted by the models above $\sim 10$~EeV are slightly heavier than the values inferred from the data. This conclusion depends however on the hadronic model being considered, and given that at these energies one needs to rely on extrapolations of the hadronic models beyond the energies at which they are constrained by colliders, significant systematic uncertainties could still affect the values of $\langle {\rm ln} A\rangle$ that are inferred from observations in this energy range. Moreover, we did not consider the impact of the experimental systematic uncertainties that affect the determination of the depth of shower maximum $X_{\rm max}$ as well as the energy scale, which could also affect the average mass that is inferred from the data.} The low-energy population ended up having a very steep spectrum, with $\gamma_l\simeq 3.5$. Since this spectral index is mostly determined by the shape of the spectrum in the decade below 1~EeV, it has almost no sensitivity to the source evolution adopted for the low-energy population. When the low-energy population has no evolution, one generally finds that its cutoff has a small value, $E_{\rm cut}^l<1$~EeV.
When the evolution follows the SFR, which already leads to a steeper final spectrum due to the effects of the interactions, which get enhanced at high redshifts, the resulting cutoff can be much larger, even reaching the maximum value of 100~EeV that we allowed. However, the $\chi^2$ function has very little sensitivity to this parameter, since above 20~EeV the low-energy population already contributes less than 1\% to the total flux. Note that, in this kind of scenario, the presence of a subdominant population of light CRs possibly extending up to the highest energies could prove helpful in the attempts to identify some of the nearby sources through anisotropy studies. Regarding the spectrum of the high-energy population, we are particularly interested in an explanation in which a source spectral index compatible with the expectations from diffusive shock acceleration gets effectively hardened by the magnetic horizon effects after the propagation is taken into account. We will hence just consider values for $\gamma_h$ in the range 2 to 2.4. For the NE case, the spectral index obtained tended to the lower boundary of the range considered, $\gamma_h\simeq 2$, with the cutoff energy having typical values of about 1.5~EeV. In this case an even harder spectrum ($\gamma\simeq 1.2$) would have been preferred by the fit, but with only slight improvements in the $\chi^2$ value, with a correlated reduction of $X _{\rm s}^h$ and a decrease in $E_{\rm cut}^h$. Since the modelling of the extragalactic populations that we consider is very simplistic, with just five different components of uniformly spaced, equal-intensity sources with similar spectra, and there are also possible unaccounted-for systematic effects related to the assumptions about the hadronic models, the energy calibration, etc., we favor in our analysis a source spectral index closer to the expectations from diffusive shock acceleration ($\gamma\geq 2$) rather than strictly minimizing the $\chi^2$ function by allowing less physically motivated regions of the parameter space. In the case of the SFR evolution, the spectral slope of the high-energy population turns out to be $\gamma_h\simeq 2.4$, and the cutoff energies have typical values of $\sim 5$~EeV. The values obtained for the cutoff energy of the high-energy population are essential in order to ensure that the light component of this population does not extend much beyond the ankle energy. Let us note that the global $\chi^2$ value per degree of freedom obtained in the fits turns out to be smaller for the cases in which the high-energy population has no evolution ($\chi^2/{\rm dof}\simeq 4$) than for the cases with an evolution following the SFR ($\chi^2/{\rm dof}\simeq 6$). One can see from the plots in Fig.~\ref{fig:fits} that the models that consider a high-energy population with a SFR evolution lead to a larger amount of secondary protons at energies of a few EeV, with a broader distribution as well. On the other hand, when the high-energy population has no cosmological evolution, the amount of secondary protons gets reduced and an increased He contribution from the high-energy population is then required. The parameter $X _{\rm s}$, which together with $E _{\rm c}$ determines the magnetic horizon effect, needs to be much larger for the high-energy population than for the low-energy one, since this suppression is crucial to obtain an effectively very hard spectrum for each of the mass components of the high-energy population.
This can naturally result if the high-energy population has a much lower source density than the low-energy one. One typically obtains, for the initially adopted value of $E _{\rm c}=2$~EeV, that $X _{\rm s}^h\simeq 3.6$ in the no-evolution case and $X _{\rm s}^h\simeq 5$ for the SFR case, while in all cases $X _{\rm s}^l<1$. Given that the required intersource separation would be $d _{\rm s}\simeq 65\,{\rm Mpc}\,X _{\rm s}\sqrt{l _{\rm c}/{\rm Mpc}}$, if we also require that $d _{\rm s}<100$~Mpc in order that the high-energy sources are not too rare and not too suppressed at the highest energies by interactions during propagation, one concludes that the coherence length of the magnetic field should be of the order of galactic scales ($<100$~kpc) rather than of the order of the typical distance between galaxies ($\sim {\rm Mpc}$). On the other hand, requiring that $l _{\rm c}>10$~kpc one would conclude that $d _{\rm s}^h>20$~Mpc for the NE case (while $d _{\rm s}^h>40$~Mpc for the SFR case). This would imply a source density smaller than $10^{-4}$~Mpc$^{-3}$ ($10^{-5}$~Mpc$^{-3}$, respectively) for the high-energy population. If we were to consider a different value of the critical energy, the main impact on the results would be that the preferred value of $X _{\rm s}$ would become smaller for increasing values of $E _{\rm c}$. For instance, for the SFR-NE scenario one gets $X _{\rm s}^h\simeq 7.6$, 5.9, 3.7 and 1.7 for $E _{\rm c}=0.5$, 1, 2 and 10~EeV respectively. Given that $E _{\rm c}\simeq 0.9 Z B_{\rm nG}(l _{\rm c}/{\rm Mpc})$~EeV, one finds that the required value of the extragalactic magnetic field needs to be sizable, of order $B\simeq 20\,{\rm nG}(E _{\rm c}/{\rm EeV})(50\,{\rm kpc}/l _{\rm c})$. Such large values of the extragalactic magnetic fields could result, for instance, from the amplification of primordial seeds \cite{enzo17}. For the low-energy population, the values $X _{\rm s}^l\simeq 0.6$ to 1 that are obtained suggest that the associated source density should be much larger, with $n _{\rm s}^l>10^{-3}$~Mpc$^{-3}$. The magnetic horizon suppression of the flux from this population should be important in shaping its spectrum at energies below 0.1~EeV. In this respect, the study of the low-energy ankle feature present at $\sim 20$~PeV could be helpful to further constrain $X _{\rm s}^l$ \cite{mr19gal}. \section{On the steepness of the low-energy population spectra} One property that was derived in the previous analysis is that the low-energy population needs to have, below its cutoff value, a very steep spectrum, with $\gamma\simeq 3.5$. This is significantly larger than the values 2 to 2.4 which are typically obtained in scenarios of diffusive shock acceleration. A possible way to obtain an effectively steeper spectrum from sources having a hard spectrum but a power-law distribution of source cutoff energies was suggested in \cite{ka06}, and we comment here on this alternative. Let us consider a population of continuously distributed sources having similar luminosities below their cutoff energies, with a common spectral index $\gamma _{\rm s}$ but a distribution of cutoff energies. For simplicity we here assume the cutoff to be sharp, so that for any given source with cutoff energy $E_{\rm cut}$ the number of CRs emitted per unit time is $q(E,E_{\rm cut})\propto E^{-\gamma _{\rm s}}\Theta(E_{\rm cut}-E)$, with $\Theta$ the Heaviside function.
Considering the cutoff values of different sources to have a power-law distribution, such that the source density satisfies ${\rm d}n _{\rm s}(E_{\rm cut})/{\rm d}E_{\rm cut}\propto E_{\rm cut}^{-\beta}$, one gets, ignoring evolution and propagation effects, that the total flux at the Earth is \begin{equation} \Phi(E)\propto \int_E^\infty {\rm d}E_{\rm cut}\frac{{\rm d}n _{\rm s}(E_{\rm cut})}{{\rm d}E_{\rm cut}}q(E,E_{\rm cut})\propto E^{-\gamma _{\rm s}-\beta+1}. \end{equation} In this case, the spectrum resulting from the superposition of all the sources will have an effective spectral index $\gamma=\gamma _{\rm s}+\beta-1$. Hence, a steep spectrum with $\gamma\simeq 3.5$ could result, for instance, from $\gamma _{\rm s}=2$ if one considers $\beta\simeq 2.5$. If the sources have an evolution with redshift, the same reasoning can be applied to the emissivity from any redshift interval to conclude that a population of sources with a steep spectrum $\gamma$, all having a large cutoff energy, is equivalent to a population of sources with a harder spectral index $\gamma _{\rm s}$ but a power-law distribution of cutoff energies, with $\beta=\gamma-\gamma _{\rm s}+1$. Note that if $E_{\rm cut}$ were to depend on redshift, this would ultimately also modify the effective source evolution of the model. \section{Two populations with a common composition?} In this section we consider whether the two extragalactic populations could be associated with a similar underlying composition, in such a way that the fractions of the different elements present in the medium in which the CRs get accelerated are similar for both populations. Even if this were the case, their spectral indices and cutoff energies could end up being different due to the different properties of the acceleration process involved in each case. If we denote by $f_i^0$ the fraction of the element $i$ that is present in the medium in which the acceleration takes place, and consider that all elements get fully ionized and are accelerated in a rigidity-dependent way, one should then expect that the final cumulative source fluxes above a certain threshold rigidity also have the same relative abundances, i.e. \begin{equation} \frac{\int_{Z_iE_{\rm th}}^\infty {\rm d}E\,\Phi_i^s(E)}{\int_{E_{\rm th}}^\infty {\rm d}E\,\Phi _{\rm H}^s(E)}= \frac{f_i^0}{f _{\rm H}^0}. \end{equation} In particular, for a power-law source spectrum such that $\Phi_i^s(E)\propto f_iE^{-\gamma}$ (note that the fractions can be defined at an energy $E_{\rm th}\ll E_{\rm cut}$, and hence the effects of the source cutoff can be neglected here), this leads to \begin{equation} f_i\simeq f _{\rm H} \,Z_i^{\gamma-1}\,f_i^0/f _{\rm H}^0. \end{equation} If the low-energy and high-energy extragalactic populations were to originate from environments with similar composition fractions $f_i^0$, and the CRs were accelerated such that they end up having power-law spectra characterised by indices $\gamma_l$ and $\gamma_h$, one should then expect that \begin{equation} f_i^l\simeq f_i^hZ_i^{\gamma_l-\gamma_h}. \end{equation} This implies that the composition of the accelerated CRs of the population with the steeper spectrum should be enhanced in heavier elements with respect to that of the population having a harder spectrum.
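As a quick numerical illustration of this relation, with the representative values $\gamma_l\simeq 3.5$ and $\gamma_h\simeq 2$ obtained in the fits:
\begin{verbatim}
# Enhancement factor Z**(gamma_l - gamma_h) implied by a common source
# composition, for gamma_l = 3.5 and gamma_h = 2.0 (illustrative values).
Z = {"H": 1, "He": 2, "N": 7, "Si": 14, "Fe": 26}
for elem, z in Z.items():
    print(elem, round(z**1.5, 1))
# H: 1.0, He: 2.8, N: 18.5, Si: 52.4, Fe: 132.6
\end{verbatim}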
Such an enhancement is however at odds with the results we obtained previously for the two extragalactic population scenarios considered, which indicated that the steeper low-energy population has a smaller fraction of heavier elements than the high-energy population. This then suggests that the CRs from the two populations get accelerated in environments having quite different distributions of elements (or, alternatively, that the heavy nuclei in the low-energy population get largely disintegrated during their acceleration). We also note that the compositions inferred for these two populations differ from the composition of the Galactic cosmic rays measured at lower energies. For instance, at $10^{14}$~eV, where $\gamma\simeq 2.7$, the composition of the different mass groups is $f_{\rm H}\simeq f_{\rm He}\simeq 0.35$ and $f_{\rm N}\simeq f_{\rm Si}\simeq f_{\rm Fe}\simeq 0.1$, which suggests that the nature of the sources responsible for these populations is different. \section{Discussion} We have considered a scenario in which the UHECRs are mostly extragalactic and arise from two main populations having different source densities, compositions, spectral indices and cutoff values. In these scenarios, the Galactic-extragalactic transition would take place slightly below the second-knee energy, with the low-energy extragalactic population dominating the CR spectrum in the range from $\sim 0.07$~EeV up to about 2~EeV, while the high-energy population would dominate the spectrum at higher energies. One of the main features that was derived \cite{combfit} from the spectrum and composition inferred from the Auger Observatory measurements is the requirement that the different components observed above the ankle energy need to have a very hard spectrum, together with a rigidity-dependent source cutoff at energies of a few $Z$~EeV. Instead of getting the hard spectrum as a result of a very hard injection spectrum at the source, in tension with the expectations from diffusive shock acceleration, we considered here the possibility that it results from the hardening produced during the propagation as a consequence of a magnetic horizon effect, as originally suggested in \cite{difu1}.\footnote{Yet another possibility to implement the magnetic horizon effect that suppresses the observed flux at low rigidities would be in scenarios in which the high-energy sources are located in the cores of galaxy clusters \cite{ha16} since, given the magnetic fields with typical $\mu$G strengths present in the cluster environments, the confinement times of the charged CRs inside the clusters could be longer than the times required for their subsequent propagation up to the Earth.} We also combined here this high-energy population with another extragalactic population dominating the flux below a few EeV, as had been considered in \cite{mr19} in a scenario in which the high-energy flux originated from nearby extragalactic sources within the Local Supercluster that were active since relatively recent times. In the scenarios considered in the present work, with continuous emission since the earliest times, the source density of the high-energy population needs to be small, typically $n _{\rm s}^h< 10^{-4}\,{\rm Mpc}^{-3}$, in order that the magnetic suppression be significant at energies $\sim Z$~EeV for acceptable values of the extragalactic magnetic field strength and coherence length.
We generally obtain that the low-energy population has a small contribution from the elements heavier than N, while the high-energy population has a small contribution from H at the sources, although an important contribution of secondary protons at energies of a few EeV results from the photodisintegration of the heavy elements during their propagation. Since these protons are expected to be produced mostly at high redshifts, their flux would be quite isotropic, and hence one would expect that they tend to suppress the CR anisotropies at energies of a few EeV, in line with the present restrictive upper limits on the equatorial dipole amplitude, which constrain it to be below 1.5\% in the energy range 1 to 4~EeV \cite{lsra19}. We note that a difference with respect to the scenario in which the high-energy population is due to a nearby source emitting since recent times \cite{mr19} would be the lack of significant amounts of secondary protons in the latter case. That kind of scenario then needs to include instead a larger fraction of light elements produced directly at the nearby source, which tends to enhance the predicted anisotropies, and this could help to distinguish between the different possibilities. A detailed study of these predictions would also need to consider the effects of the Galactic magnetic fields on the anisotropies. The inferred source properties for the two extragalactic populations considered in this work depend significantly on the assumed source evolution, and hence a detailed determination of the CR composition could also help to obtain information about the evolution of the sources. We note that the inferred source spectrum of the low-energy population turns out to be quite steep and, as we mentioned, this could be an effective slope resulting from the combination of many harder sources having a distribution of cutoff energies. This is clearly a very natural possibility, since the cutoff energies will ultimately depend on the power of the sources and on the magnetic fields present in them, and there is no reason for these quantities to be the same for all UHECR sources. \section*{Appendix: Attenuation factors} We report here the attenuation factors $\eta$, both for protons and for the four representative heavier nuclear species considered in this work. They are given by the ratio between the spectrum of the particles reaching the Earth from a continuous (i.e., high-density) distribution of sources, including the attenuation effects, and the spectrum that would have been expected from the same sources in the absence of interactions. Protons lose energy mainly through pair production and photopion production when interacting with the cosmic microwave background (CMB) radiation. The nuclei are affected by photodisintegration off the photon backgrounds (which reduces the mass of the leading fragment and leads to the emission of secondary nucleons), as well as by electron-positron pair production (which reduces their Lorentz factor without changing their mass). Photopion production of heavy nuclei is only sizable for Lorentz factors larger than $4\times 10^{10}$, and hence is relevant only for energies larger than those considered here. We collect all of the leading fragments heavier than H that result from the photodisintegration of a given primary element in the mass group of that element, while the secondary protons are considered separately (the emitted neutrons will quickly decay into protons).
In this way, it is possible to introduce an effective attenuation factor for each mass group. Note that some of the leading fragments from heavy nuclei may be light, but the resulting mass distribution of the leading fragments is generally peaked close to the mass of the primary. The total spectrum can then be obtained by adding up the contributions from the different mass groups as well as the secondary protons. On the other hand, when computing the average logarithmic mass and its dispersion we use the actual mass distribution of the leading fragments obtained in the simulations, since neglecting the spread in each mass group could lead to slight differences in the results. For these computations we follow \cite{hmrheavy}, using the photodisintegration cross sections from \cite{psb,salomon} and the redshift evolution of the extragalactic background light (EBL) from \cite{in13}. We show in the left panel of Figure~\ref{fig:etanesfr} the results for the case of no source evolution and in the right panel those for the SFR evolution case. The five representative mass groups are shown in both cases. The relatively larger suppression of the flux at high energies in the SFR evolution scenario is actually due to the increased luminosity of high-redshift sources leading to a larger flux at low energies. The dots correspond to the results obtained in numerical simulations, while the lines correspond to the fitted functions reported below. \begin{figure}[h] \centering \includegraphics[scale=.82,angle=0]{etanoevzm1.eps} \includegraphics[scale=.82,angle=0]{etasfr.eps} \caption{Attenuation factor $\eta^j(E)$ for different primaries and for the two source evolution models. Dots are the results of the simulations and the lines are the fits obtained.} \label{fig:etanesfr} \end{figure} \subsection*{Protons} The attenuation factor for the protons can be parametrized as \begin{equation} \eta^{\rm H}(E)=\left[1/g_0(E)+ 1/g_1(E)+1/g_2(E)\right]^{-1}, \end{equation} where the function $g_0$ accounts for the pile-up appearing at energies below the threshold of the interactions and is parametrized as \begin{equation} g_0(E)\equiv (\cosh(a\, E/{\rm EeV}))^b. \end{equation} The function $g_1$ accounts for the effects of the photopion production interactions, while $g_2$ accounts for those of pair production (both with the CMB). They are parametrized in terms of the function \begin{equation} F_{[A,B,C]}(E)\equiv A\exp(B\,(E/{\rm EeV})^C). \end{equation} The attenuation factors for the two source evolution models considered are then obtained from the following functions. For no evolution (NE): \begin{eqnarray} g_0(E)&=& (\cosh(1.9\, E/{\rm EeV}))^{0.48}, \\ g_1(E)&=&F_{[0.0037,333,-1.03]}(E),\\ g_2(E)&=&F_{[0.24,2.2,-0.96]}(E)+F_{[0.0089,0.074,0.89]}(E). \label{etapnoev} \end{eqnarray} For the star formation rate (SFR) evolution: \begin{eqnarray} g_0(E)&=& 1, \\ g_1(E)&=&F_{[0.00048,515,-1.12]}(E),\\ g_2(E)&=&F_{[0.0035,5.0,-0.33]}(E)+F_{[0.001,3.2,0.021]}(E). \label{etapsfr} \end{eqnarray}
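For reference, a minimal Python sketch of this proton attenuation parametrization (the function names are ours; $1/g_1$ is evaluated directly, since $g_1$ itself overflows at low energies, where photopion losses are in any case irrelevant):
\begin{verbatim}
import math

def F(A, B, C):                  # F_[A,B,C](E) = A * exp(B * E**C)
    return lambda E: A * math.exp(B * E**C)

def eta_H(E, evolution="NE"):    # proton attenuation factor, E in EeV
    if evolution == "NE":
        inv_g0 = math.cosh(1.9 * E)**(-0.48)              # pile-up
        inv_g1 = math.exp(-333.0 * E**(-1.03)) / 0.0037   # photopion
        g2 = F(0.24, 2.2, -0.96)(E) + F(0.0089, 0.074, 0.89)(E)  # pairs
    else:                        # SFR evolution
        inv_g0 = 1.0
        inv_g1 = math.exp(-515.0 * E**(-1.12)) / 0.00048
        g2 = F(0.0035, 5.0, -0.33)(E) + F(0.001, 3.2, 0.021)(E)
    return 1.0 / (inv_g0 + inv_g1 + 1.0 / g2)
\end{verbatim}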
\subsection*{Nuclei} The attenuation factor for the four mass groups, $j={\rm He}$, N, Si and Fe, can be parametrized with the function \begin{equation} \eta^j(E)=\left[1/g^j_0(E)+ 1/g^j_1(E)+1/g^j_2(E)\right]^{-1}, \label{eq:etaj} \end{equation} where now the different functions are $g^j_0(E)\equiv (\cosh(a^j\, E/{\rm EeV}))^{b^j}$ and $g^j_i(E)=F_{[A_i^j,B_i^j,C_i^j]}(E)$ for $i=1,2$. The functions $g_1^j$ account mostly for the effects of the photodisintegrations off the CMB, while the $g_2^j$ account for those of the photodisintegrations off the EBL, although the subdominant pair production effects are also included in them. The resulting coefficients of the fits are collected in Table~\ref{tab:fitnuclei}. \begin{table}[ht!] \centering \caption{Coefficients of the fits to the attenuation factors for the different nuclei and for the two models of source luminosity evolution. } \bigskip \begin{tabular}{c c c c c c c c c c} \hline\hline Evolution & Element & $a^j$ &$b^j$ & $A^j_1$ & $B^j_1$ & $C^j_1$ & $A^j_2$& $B^j_2$ & $C^j_2$ \\ \hline NE & He & 0 & 1 & $8.3\times 10^{-4}$ & $2.0\times 10^{3}$&-2.1 &$7.9\times 10^{-3}$ & 6.9 & -0.43\\ & N &1.46 & 0.36 & $1.2\times 10^{-3}$ & $6.3\times 10^{3}$ &-1.9 &$1.8\times 10^{-10}$ &24.5 &-0.062 \\ & Si & 0.57 & 0.17 & $4.2\times 10^{-3}$ & $8.7\times 10^{4}$ &-2.4 & $9.5\times 10^{-3}$ & 13.1 & -0.45 \\ & Fe & 0.18 &1.13 &$2.6\times 10^{-2}$ &$1.2\times 10^{11}$ & -5.2 & $1.1\times 10^{-8}$& 22.9 & -0.084\\ \hline SFR & He & 0 & 1 & $4.1\times 10^{-5}$ & $2.0\times 10^{3}$&-2.0 &$3.8\times 10^{-5}$ & 10 & -0.24\\ & N & 4.5 & 0.089 & $1.2\times 10^{-4}$ & $1.4\times 10^{3}$ &-1.5 &$2.1\times 10^{-5}$ &11 &-0.21 \\ & Si & 0.13 & 20 & $7.7\times 10^{-4}$ & $1.4\times 10^{5}$ &-2.5 & $2.6\times 10^{-17}$ & 41 & -0.047 \\ & Fe & 0.059 & 16 & $2.9\times 10^{-3}$ &$2.7\times 10^{8}$ & -3.9 & $1.3\times 10^{-4}$& 15 & -0.27\\ \end{tabular} \label{tab:fitnuclei} \end{table} \subsection*{Secondary protons} Secondary protons get produced in significant amounts (comparable in some cases to the primary fluxes) in the energy range between 0.1 and a few EeV. Their flux depends on the source spectral index and on the cosmological source evolution considered. Their maximum energies are directly related to the maximum energies of the primaries as $E_{\rm max}^{\rm sp}=E_{\rm max}^j/A\simeq E_{\rm cut}/2$ (since $E_{\rm max}^j=Z_jE_{\rm cut}$ and $Z_j/A_j\simeq 1/2$). After the secondaries get produced and until they arrive at the Earth, the proton energies get degraded, mostly due to pair production and to adiabatic redshift losses. The flux of secondary protons can be approximately fitted as \cite{mr19} \begin{equation} \Phi^{ I}_{\rm sp}(E)\simeq \Phi^{ I}_0\sum_j f^{ I}_j \left(\frac{E}{\rm EeV}\right)^{-\gamma_{ I}}\frac{A^{2-\gamma_{ I}}g(E)}{\cosh(2E/E^{ I}_{\rm cut})}, \label{secflux} \end{equation} where for no evolution we obtain \begin{equation} g_{\rm NE}(E)\simeq \frac{1}{1.1 (E/{\rm EeV})^{0.75}+0.45/(E/{\rm EeV})^{1.6}}, \label{genoev} \end{equation} and for SFR evolution we obtain \begin{equation} g_{\rm SFR}(E)\simeq \frac{1}{2.7 (E/{\rm EeV})^{1.1}+0.15/(E/{\rm EeV})^{1.4}}. \label{gesfr} \end{equation} \section*{Acknowledgments} This work was supported by CONICET (PIP 2015-0369) and ANPCyT (PICT 2016-0660). We thank the Auger Collaboration for making data available at www.auger.org.
\section{Introduction} The human being interacts with his surrounding environment, and through this interaction the environment acquires its meaning for him. He perceives his environment (which he calls the real world) through his senses and, in order to understand it, he constructs mental symbolic forms~\cite{kn:kasir} and formal structures in his mind. He then reasons about his environment through these mental constructions. Euclidean geometry, Ptolemy's Almagest, the Copernican revolution, and Turing computation are a few examples of theories constructed by the human being through interaction with the world. After the human being constructs a formal theory, he observes and interprets his environment through it, as a window onto the outer world. He poses questions about his environment within his (mentally constructed) formal theory, and tries to answer them in the context of the same theory. For example, in astronomy, after Ptolemy's Almagest, the human being tried to find an explanation for the irregular motions of the \emph{wandering stars} in the context of the Almagest, and in geometry he attempted to prove the \emph{parallel postulate} using Euclid's first four postulates. Model theory, a branch of mathematical logic~\cite{kn:model}, studies mathematical structures in order to determine, given a theory (a set of formulas) $\Gamma$, which other formulas are true in all structures in which all formulas of $\Gamma$ are true. In (classic) model theory, the mathematician is regarded as a god who lives outside a mathematical structure, and the thinking activities of the mathematician do not affect the structure. In this way, the role of the human being in developing formal systems is ignored. In this paper, we introduce a new semantics for predicate logic in which the meanings of predicate and function symbols are not predetermined, and they find their meaning through the interaction of a subject with the logical language. We name the proposed semantics ``persistently evolutionary semantics". The paper is organized as follows: \noindent In Section~\ref{CID}, we discuss persistently evolutionary intensions. We examine whether it is possible that the intension of a word persistently evolves while the subject cannot become aware of it. \noindent In Section~\ref{SEM}, we propose persistently evolutionary Kripke semantics to formalize the notion of persistently evolutionary intensions. \noindent In Sections~\ref{Logic} and~\ref{Logic2}, we introduce persistently evolutionary semantics for propositional and predicate logic. \noindent In Section~\ref{compu}, using persistently evolutionary semantics for predicate logic, we formalize the argument of Section~7 of the manuscript~\cite{kn:comp1}. \section{Persistently Evolutionary Intensions}\label{CID} The human being, as an intelligent agent, uses languages to express and encode the intensions (concepts) that he constructs to make sense of his environment. Intension refers to a property that specifies the set of all possible things that a \emph{word} (a finite string) could describe, while extension refers to the set of all actual things the word describes. Also, an intensional definition of a set of objects is to intend the set by a word, whereas an extensional definition is given by listing all of its objects. Obviously, it is impossible to give an extensional definition of an infinite set.
For example, the human being intends an infinite subset of the natural numbers by the word ``prime", and he can never list all prime numbers. As another example, the human being has no way to define the set of all Turing machines except by an intensional definition. In the theory of computation, for every Turing machine $T$, $L(T)$ refers to the set of all strings on which the Turing machine $T$ halts. We may say that the Turing machine $T$ is an intensional definition of the set $L(T)$, or in other words, that the human being intends the set $L(T)$ by the \emph{word} (finite string) $T$ (note that Turing machines can be coded as finite strings). The human being constructs concepts in order to make sense of his environment through them. It is possible that both the human being and his environment (the real world, nature) evolve as the human being checks whether an object is an extension of a word. This evolution could be persistent, in such a way that nature (or the human being) behaves in a well-defined manner. That is, if an output $z$ has already been provided for an input $[x,y]$ ($x$ is a word, and $y$ is a thing to be checked as a possible extension of $x$), then whenever in the future the same input $[x,y]$ is chosen, the output will be the same $z$. In other words, the meaning of the word $x$ may change, but in a conservative manner; that is, for all the things that the human being has already checked as being extensions of $x$ or not, their status remains unchanged. \begin{definition}\label{ped} Let $w$ be a word (a finite string), and $\mathcal{O}$ a domain of objects. We say the intension of the word $w$ for a subject is persistently evolutionary (or that its extension is order-sensitive) whenever, in the course of the subject choosing an object $o\in\mathcal{O}$ to check whether it is an extension of the word $w$, the intension of $w$ changes, but persistently; i.e., if the agent (the subject) has already checked whether an object $d$ is an extension of $w$, and the answer has been yes (respectively no), then whenever in the future the agent checks again whether the same object $d$ is an extension, the answer will be the same yes (respectively no). What remains unchanged is the word $w$ (the syntax), but its meaning (the semantics) changes for the subject. In this way, the set of all extensions of the word $w$ is not predetermined, and it depends on the order in which the agent chooses objects from the domain $\mathcal{O}$ to check whether they are extensions of $w$ or not. \end{definition} \begin{itemize} \item[\textbf{Q1)}] Is it possible for the intension of a word to be persistently evolutionary? \end{itemize} The answer is yes. To answer the above question, we should first clarify what the meaning (the intension) of a word is. For a subject (the human being), the meaning of a word is given by how the subject interacts via the word with the environment (the language). This interaction is nothing but choosing objects and checking whether they are extensions of the word. Wittgenstein says the meaning of a word is identified by how it is used.
\begin{quote} ``For a large class of cases-though not for all- in which we employ the word `meaning' it can be defined thus: the meaning of a word is its use in the language"~\footnote{The quote is taken from the Stanford Encyclopedia of Philosophy~\cite{kn:plato}.} \end{quote} The use of a word happens in time; therefore we may say that the meaning of a word exists in time, and that it is an \emph{unfinished entity} similar to a choice sequence~\cite{kn:ob}. The meaning of a word is not predetermined, and as time passes the word finds its meaning through the interaction of the subject with the language. The meaning is a dynamic temporal (mental) construction~\footnote{An object is temporal exactly if it exists in time, and it is dynamic if at some moments parts are added to it or removed from it~\cite{kn:BH}, page~16. Another philosophical framework in which the evolution of a meaning is regarded as possible is ``dynamic semantics" (see http://plato.stanford.edu/entries/dynamic-semantics/). In this framework, meaning is context change potential.}. In Brouwer's intuitionism, mathematical objects are mental constructions. Brouwer's choice sequences, as a kind of mathematical object, are dynamic temporal objects~(see~page~16,~\cite{kn:BH}). The meaning of a word (which is identified via how it is used by the human being) has a lot in common with a choice sequence. At each stage of time, the human being has only experienced, for a finite set of things, whether they are extensions of the word or not. Also, in a choice sequence, at each stage of time only a finite segment of the sequence is determined. As the human being freely chooses another thing to check its extension status for the word, the meaning of the word may persistently change (the construction of the human's brain (or mind) may persistently change). This is similar to the activity of the human being in developing a choice sequence. The use of a word depends on the human being and the way he uses (interacts via) the word in (with) the environment. We may assume that the human being is free to choose, in any order he wants, the things whose status as extensions of a word he checks. Different orders of choosing objects may cause the meaning of the word to evolve in different ways. But since the human being cannot go back to the past, he lives in just one way of evolution. As soon as the human being chooses an object and checks whether it is an extension of a word $w$, (the biological construction of) his mind may persistently evolve, and the meaning of the word $w$ persistently changes. Therefore, it seems possible for the intension of a word to be persistently evolutionary, and \begin{quote} the interaction of the human being with the language may make the meaning of a word persistently evolve. Assuming free will for the human being, the behavior of the human being is not predetermined, and as a consequence the meaning of a word need not be predetermined. \end{quote} \begin{itemize} \item[\textbf{Q2)}] Is it possible for a subject to distinguish between persistently evolutionary intensions and static ones? In other words, is it possible for a subject to determine whether the intension of a word is static or persistently evolutionary? \end{itemize} The answer is no. It is not possible for the human being to recognize whether the meaning of a word, in the course of his thinking activities, persistently evolves or remains constant. Let $w$ be a word.
Two cases are possible: \begin{itemize}\item[1)] the intension of the word $w$ is static, and \item[2)] the intension of the word $w$ is persistently evolutionary. \end{itemize} In both cases, at each stage of time, the human being has only experienced the extension status of a finite set of objects. The human being only has access to his previous experiences and does not have access to the future. So at each stage of time, all the information that the human being has about the word $w$ is a finite set of objects $\{d_1,d_2,...,d_n\}$ whose extension statuses are determined. This information about the word $w$ is the same in both cases. The human being cannot distinguish between these two cases based on the information he has obtained. Persistent evolution is similar to being static in view of past experiences. The difference between persistent evolution and being static lies in the future. But the human being does not have access to the future. As soon as the meaning of a word evolves, it has evolved, and the subject cannot go back to the past and experience another way of evolution. \begin{example} Suppose that I am in a black box with two windows: an input window and an output one. You give natural numbers as input to the black box and receive natural numbers as output of the black box. I follow this strategy in the black box. I output $1$ for each input before you give the black box $5$ or $13$ as input. If you give $5$ as input (and you have not given $13$ already), then from that time on, I output $2$ for all future inputs that have not already been given to the black box. For those natural numbers that you have already given as input, I still output the same $1$. If you give $13$ as input (and you have not given $5$ already), then from that time on, I output $3$ for all future inputs that have not already been given to the black box. For those natural numbers that you have already given as input, I still output the same $1$. \end{example} The black box of the above example behaves in a well-defined way. But it persistently evolves through interactions with the environment, and the function that the black box provides is not a predetermined function. If one does not have access to the inner structure of the black box, he can always assume that there exists a static machine in the black box.
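The strategy of the black box is easy to make precise. The following Python sketch (ours) implements it, memoizing past answers so that the realized function is well-defined although not predetermined:
\begin{verbatim}
# A persistently evolutionary black box: answers are produced lazily and
# memoized, so past answers never change, while the function eventually
# realized depends on the order in which inputs are queried.
class BlackBox:
    def __init__(self):
        self.history = {}   # inputs already answered (persistence)
        self.mode = 1       # becomes 2 after seeing 5 first, 3 after 13

    def query(self, n):
        if n in self.history:          # old inputs keep their old answers
            return self.history[n]
        self.history[n] = self.mode    # fresh inputs get the current mode
        if self.mode == 1 and n in (5, 13):
            self.mode = 2 if n == 5 else 3
        return self.history[n]
\end{verbatim}
Querying $5$ before $13$ and querying $13$ before $5$ lead to different total functions, so the extension realized by the box is order-sensitive; yet every finite sequence of queries is also consistent with some static machine.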
the set $\mathrm{E}(w)$ is predetermined and does not depend on the order in which he chooses objects from the domain $\mathcal{O}$ to check whether they are extensions of $w$ or not.\end{itemize} At each stage of time, the subject only knows, for a finite number of objects in $\mathcal{O}$, whether they are extensions of $w$ or not. Suppose $\mathrm{E}(w)$ is infinite. Then at no stage of time has the subject written down all the extensions of $w$. He can always regard it as possible that the meaning of the word $w$ may persistently change. But since this change happens persistently, he cannot recognize whether the meaning is static or not, based on the finite history that he has access to. \begin{quote}If a subject does not sense a change about a process, then he may (wrongly) presuppose that the process is static and independent of his interaction with the process. However, in the case that a process persistently evolves, the subject does not sense any change either! We only sense a change whenever we discover that an event which has been sensed before is not going to be sensed in the same way as in the past. Persistent evolution always respects the past. As soon as a subject experiences an event, then whenever he examines the same event in the future, he will experience it as in the past. Persistent evolution affects the future, which has not been determined yet. \end{quote} In other words, the postulate $\mathbf{PPE}$ says that \begin{center} it is not possible for a subject to distinguish between static intensions and persistently evolutionary ones. \end{center}

\section{Persistently Evolutionary Semantics}\label{SEM} In this section, to clarify the notion of persistently evolutionary intensions, we introduce a kind of Kripke structure that we name Persistently Evolutionary Kripke structures. Let $P=\{p_i\mid i\in I\}$ be a set of atomic propositional formulas for an index set $I\subseteq \mathbb{N}$, and let $A=\{a_i\mid i\in I'\}$ ($I'\subseteq \mathbb{N}$ an index set) be a set that is assumed to be the set of actions of an agent $ag$. \begin{definition} A Persistently Evolutionary Kripke structure over a set of actions $A$ and a set of atomic formulas $P$ is a tuple $K=\langle S,\Pi=\{\pi_j\mid j\in J\}, \sim_{ag} , V\rangle$ where $J\subseteq \mathbb{N}$ is an index set, and \begin{itemize} \item $S=A^*\times J$ is the set of all possible worlds ($A^*$ is the set of all finite sequences of actions in $A$). For each $s=(\vec{x},i)\in S$, we call $i$ the \emph{meaning index} of the state $s$. \item $\Pi$ is the set of \emph{meaning functions}. Each $\pi_i\in \Pi$ ($i\in J$) is a partial function from $S\times A$ to $\mathbb{F}P$ (the set of finite subsets of $P$) whose domain is $(A^*\times\{i\})\times A$. To each state $s=(\langle b_1,b_2,...,b_n\rangle,i)$ and each action $a\in A$, the function $\pi_i$ assigns a finite subset of $P$ as the meaning of the action $a$, satisfying the following condition (the \emph{persistently evolutionary condition}): for each state $s=(\langle b_1,b_2,...,b_n\rangle,i)\in S$, if an action $a$ appears in the finite sequence $\langle b_1,b_2,...,b_n\rangle$, say $\langle b_1,b_2,...,b_n\rangle=\langle b_1,b_2,...,b_k\rangle\langle a\rangle\langle b_{k+2},...,b_n\rangle$, then $\pi_i(s,a)=\pi_i((\langle b_1,b_2,...,b_k\rangle,i),a)$. \item The agent $ag$ is an operator who chooses actions from $A$ and performs them. By his operation, he makes the universe evolve.
If $s=(\vec{x},i)\in S$ is the current state of the model $K$ and $ag$ performs $a\in A$, then the current world evolves to $s'=(\vec{x}.\langle a\rangle,i)$. Note that via evolution, the meaning index of the states does not change. We say the agent $ag$ lives in the meaning function $\pi_i$, or in other words, the actual meaning function for the agent $ag$ is $\pi_i$. \item $V$ is a function from $S$ to $2^P$ defined as follows: for each $s\in S$, $s=(\langle b_1,b_2,...,b_n\rangle,j)$, \begin{center}$V(s)=\pi_j((\langle\rangle,j), b_1)\cup(\bigcup_{1\leq i\leq n-1} \pi_j((\langle b_1,b_2,...,b_i \rangle,j), b_{i+1}))$. \end{center} \item $\sim_{ag}\subseteq S\times S$ is a binary relation which satisfies the following condition: for any two states $s_1,s_2\in S$, we have $s_1\sim_{ag} s_2$ whenever $s_1=(\vec{x},i)$ and $s_2=(\vec{x},j)$ for some $\vec{x}=\langle x_1,x_2,...,x_k\rangle$ and $i,j\in J$ such that for all $1\leq t\leq k$, $\pi_i((\langle x_1,...,x_{t-1}\rangle ,i), x_t)=\pi_j((\langle x_1,...,x_{t-1}\rangle ,j), x_t)$. \end{itemize} \end{definition} The relation $\sim_{ag}$ is an indistinguishability relation for the agent $ag$. If $s_1\sim_{ag} s_2$, then the agent $ag$ cannot distinguish between these two states, since all the experiences that he has observed in the two states are the same.

\begin{definition}\label{osens} Let $K=\langle S,\Pi=\{\pi_j\mid j\in J\}, \sim_{ag} , V\rangle$ be a persistently evolutionary Kripke structure. We say a meaning function $\pi_i\in \Pi$ is static if it is not order-sensitive. That is, for every $n\in \mathbb{N}$, every $a_1,a_2,...,a_n, a\in A$, and every permutation $\delta: \{1...n\}\rightarrow\{1...n\}$, \begin{center} $\pi_i((\langle a_{\delta(1)},a_{\delta(2)},...,a_{\delta(n)}\rangle, i),a)=\pi_i((\langle a_{1},a_{2},...,a_{n}\rangle, i),a)$.\end{center} \end{definition} \begin{notation} For each state $s=(\vec{x},i)$, we let $D(s)=\{s'\mid s'=(\vec{y},i),\ \vec{x}\ \text{is a prefix of}\ \vec{y}\}$. \end{notation} \begin{definition}\label{indist} Let $K=\langle S,\Pi=\{\pi_j\mid j\in J\}, \sim_{ag} , V\rangle$ be a persistently evolutionary Kripke structure. Suppose $s=(\vec{a},i)$, with $\vec{a}\in A^*$ and $i\in J$, is the current state that the agent $ag$ lives in. We say that the agent $ag$ can never become conscious of whether his world is static or persistently evolutionary whenever for every $s'=(\vec{b},i)\in D(s)$ there exists a meaning function $\pi_j\in \Pi$ which is not static such that, for $s''=(\vec{b},j)$, we have $s'\sim_{ag}s''$. \end{definition} \begin{definition} Let $P$ be a non-empty set of propositional variables. The language $L(P)$ is the smallest superset of $P$ such that \begin{center} if $\varphi,\psi\in L(P)$ then $\neg \varphi,\ (\varphi\wedge\psi), (\varphi\vee\psi), (\varphi\rightarrow\psi), K_{ag}\varphi,\Box \varphi, C_f\varphi\in L(P)$, \end{center} where $C_f\varphi$ has to be read as ``the formula $\varphi$ conflicts with the free will of the agent $ag$'', $K_{ag}\varphi$ has to be read as ``the agent $ag$ knows $\varphi$'', and $\Box\varphi$ has to be read as ``$\varphi$ is necessarily true''. \end{definition} \begin{notation}Let $K$ be a Kripke model with set of states $S$. For each subset $Q\subseteq S$, $K_Q$ is defined to be the same Kripke model $K$ with its set of states restricted to the set $Q$.
\end{notation} \begin{definition} In order to determine whether a formula $\varphi\in L(P)$ is true in a current world $(K,s)$, denoted by $(K,s)\models \varphi$, we look at the structure of $\varphi$: \[ \begin{array}{lcl} (K,s)\models p & \emph{iff} & p\in V(s) \\ (K,s)\models (\varphi\vee\psi) & \emph{iff} & (K,s)\models\varphi~\emph{or}~(K,s)\models\psi \\ (K,s)\models (\varphi\rightarrow \psi) & \emph{iff} & \emph{for all}~ t\in D(s),~ \emph{if}~(K,t)\models\varphi~ \emph{then}~(K,t)\models\psi \\ (K,s)\models (\varphi\wedge\psi) & \emph{iff} & (K,s)\models\varphi~\emph{and}~(K,s)\models\psi \\ (K,s)\models\neg\varphi & \emph{iff} & \emph{for all}~ t\in D(s),~(K,t)\not\models\varphi \\ (K,s)\models \Box\varphi & \emph{iff} & \emph{for all}~ t\in D(s),~(K,t)\models\varphi \\ (K,s)\models K_{ag}\varphi & \emph{iff} & \emph{for all}~ t\in S, ~\emph{if}~t\sim_{ag} s~\emph{then}~(K,t)\models\varphi\\ (K,s)\models C_f\varphi & \emph{iff} & \emph{there exists an infinite set}~ Path=\{ s_0,s_1,...\},\\ && \emph{where}~s_0=s,~\emph{and for each}~i,~ s_{i+1}\in D(s_i)~\emph{and}~(K_{Path},s_i)\not\models \varphi \end{array} \] The current state of the Kripke model $K$ is not a fixed state. The current state evolves due to the agent's operation, and it is not possible for the agent to travel back in time from a state $s$ to one of its prefixes. Note that during the evolution, the meaning function does not change. That is, if the current state is $s=(\vec{x},i)$ and, due to executing an action $a$, the current state changes to $s'$, then $s'=(\vec{x}.\langle a\rangle,i)$ for the same $i$. In this case, we call $\pi_i$ the actual meaning function of the universe. The semantics of $C_f\varphi$ says that the agent $ag$ can interact with the universe and evolve it in such a way that $\varphi$ never holds true. Therefore, the assumption of the truth of $\varphi$ conflicts with the free will of the agent. \end{definition} \begin{definition} Let $K=\langle S,\Pi,\sim_{ag}, V \rangle$ be a persistently evolutionary Kripke structure. We say an action $a\in A$, at the state $s=(\vec{x},i)\in S$, is a static action whenever for all $s_1,s_2\in D(s)$ we have $\pi_i(s_1,a)=\pi_i(s_2,a)$. That is, if the agent starts from the state $s$ to perform actions, then the different orders in which he may perform the actions do not make the meaning of the action `$a$' change. \end{definition} \begin{remark} At each state, the agent cannot go back to the past to experience his universe in different ways; thus he cannot distinguish between static actions and persistently evolutionary ones. \end{remark} \begin{example} Suppose $A=\mathbb{N}$ as a set of actions, and $P=\{p_{i,j}\mid i,j\in \mathbb{N}\}$ as a set of atomic propositions. For each finite sequence of numbers $\vec{x}=\langle x_1,x_2,...,x_n\rangle$ and $y\in A$, \begin{itemize} \item[] if for all $1\leq i\leq n$, $x_i\neq 5$ and $x_i\neq 13$, define $\pi(\vec{x}, y)=\{p_{y,1}\}$; \item[] if for some $1\leq i\leq n$, $x_i= 5$, and for all $j<i$, $x_j\neq 13$, then \begin{itemize} \item if $y=x_t$ for some $t\leq\min(\{i\mid x_i=5\})$ then define $\pi(\vec{x}, y)=\{p_{y,1}\}$, else define $\pi(\vec{x}, y)=\{p_{y,2}\}$; \end{itemize} \item[] if for some $1\leq i\leq n$, $x_i= 13$, and for all $j<i$, $x_j\neq 5$, then \begin{itemize} \item if $y=x_t$ for some $t\leq\min(\{i\mid x_i=13\})$ then define $\pi(\vec{x}, y)=\{p_{y,1}\}$, else define $\pi(\vec{x}, y)=\{p_{y,3}\}$. \end{itemize} \end{itemize} The function $\pi$ satisfies the persistently evolutionary condition.
We have $(K,\langle 1,3\rangle)\models p_{1,1}$. Also $(K,\langle 1,3\rangle)\models C_f p_{6,1}$ and $(K,\langle 1,3\rangle) \models C_f \neg p_{6,1}$. The value of $p_{6,1}$ is not yet predetermined and depends on the free will of the agent. \end{example} One may check that the indistinguishability relation $\sim_{ag}$ is \begin{itemize} \item[$1)$] \emph{reflexive} (for all $s\in S$, $s\sim_{ag}s$); \item[$2)$] \emph{transitive} (for all $s,t,u\in S$, if $s\sim_{ag}t$ and $t\sim_{ag}u$ then $s\sim_{ag}u$); \item[$3)$] \emph{Euclidean} (for all $s,t,u\in S$, if $s\sim_{ag}t$ and $s\sim_{ag}u$ then $t\sim_{ag}u$). \end{itemize} Therefore, persistently evolutionary Kripke structures are models for the standard epistemic logic $S5$~\cite{kn:dit3}, which consists of the axioms $A1$--$A5$ and the derivation rules $R1$ and $R2$ given below: \[ \begin{array}{l}\emph{A1:~ axioms~of~propositional~logic}\\ \emph{A2:~} (K\varphi\wedge K(\varphi\rightarrow\psi))\rightarrow K\psi\\ \emph{A3:~} K\varphi\rightarrow\varphi\\ \emph{A4:~} K\varphi\rightarrow KK\varphi\\ \emph{A5:~} \neg K\varphi\rightarrow K\neg K\varphi \end{array} \] \[ \begin{array}{l}\emph{R1:~} \vdash\varphi,\ \vdash\varphi\rightarrow\psi\ \Rightarrow\ \vdash\psi\\ \emph{R2:~}\vdash\varphi\ \Rightarrow\ \vdash K\varphi \end{array} \] \subsection{A Kripke Model for Persistently Evolutionary Intensions} We now describe the notion of persistently evolutionary intensions using persistently evolutionary Kripke models. Let $\textsc{Language}=\{w_1,w_2,...\}$ be a set of words for a subject $IA$, and let $X=\{x_1,x_2,...\}$ be an infinite set of objects that could be assumed as possible extensions of the words in $\textsc{Language}$. The subject chooses a word $w\in \textsc{Language}$ and an object $x\in X$ to check whether $x$ is an extension of the word $w$ or not. Therefore, the set of actions of the Kripke model is defined to be $A_e=\{(w_i,x_j)\mid i,j\in \mathbb{N}\}$. The set of atomic propositions is defined to be $P_e=\{p_{(w_i,x_j,0)}\mid i,j\in \mathbb{N}\}\cup\{p_{(w_i,x_j,1)}\mid i,j\in \mathbb{N}\}$. The agent $IA$ chooses a word $w_k$ and an object $x_l$ to check whether $x_l$ is an extension of the word $w_k$. If at state $s$ he chooses $(w_k,x_l)$, then the current state evolves to $s'= s.\langle(w_k,x_l)\rangle$. We let the set of meaning functions $\Pi_e$ be the set of all functions $\pi_i$ which satisfy the following conditions: \begin{itemize} \item[1-] For each state $s=(\vec{x},i)$ and action $(w_k,x_l)$, either $\pi_i(s,(w_k,x_l))=\{p_{(w_k,x_l,1)}\}$ (we read it as ``at the current state $s$, the agent $IA$ checked whether $x_l$ is an extension of the word $w_k$ and found out the answer `yes'\,'') or $\pi_i(s,(w_k,x_l))=\{p_{(w_k,x_l,0)}\}$ (we read it as ``at the current state $s$, the agent $IA$ checked whether $x_l$ is an extension of the word $w_k$ and found out the answer `no'\,''). \item[2-] Each $\pi_i\in \Pi_e$ satisfies the persistently evolutionary condition. \end{itemize} We call the Kripke model $K_e=(S_e,\Pi_e,\sim_{ag},V_e)$ (introduced above) the model of persistently evolutionary intensions.
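To make order-sensitivity concrete, the following minimal Python sketch (ours, not part of the formal model) implements the meaning function $\pi$ of the example above, the black box that answers $1$ until it is first fed $5$ or $13$; \texttt{pi(history, y)} returns the index $j$ of the atomic proposition $p_{y,j}$:

\begin{verbatim}
def pi(history, y):
    # The persistently evolutionary condition: the answer for an action
    # already in `history` is computed from the prefix preceding its
    # first occurrence, so the past is always respected.
    if y in history:
        history = history[:history.index(y)]
    for k, x in enumerate(history):
        if x == 5 and 13 not in history[:k]:
            return 2   # 5 arrived first: new queries are answered 2
        if x == 13 and 5 not in history[:k]:
            return 3   # 13 arrived first: new queries are answered 3
    return 1           # neither 5 nor 13 has been seen yet

assert pi([1, 3], 7) == 1     # 7 queried before 5 or 13
assert pi([1, 5, 3], 7) == 2  # 7 first queried after 5
assert pi([13, 1], 7) == 3    # 7 first queried after 13
assert pi([7, 5], 7) == 1     # 7 was queried before 5: past respected
\end{verbatim}

The assertions exhibit the order-sensitivity: the meaning of the action $7$ depends on the order in which the agent performed his earlier actions, yet every answer given in the past is preserved.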
\begin{definition}\label{peint} We say the intension of a word $w\in \textsc{Language}$ is static (or that its extension is not order-sensitive) at a state $s=(\vec{x},j)$ whenever for all $a\in\{(w,x_i)\mid i\in \mathbb{N}\}$ and all $s_1,s_2\in D(s)$, we have $\pi_j(s_1,a)=\pi_j(s_2,a)$. \end{definition} \begin{theorem} For each state $s=(\vec{x},j)\in S_e$ and each word $w\in \textsc{Language}$, there exist two states $s_1=(\vec{x},t_1)\in S_e$ and $s_2=(\vec{x},t_2)\in S_e$ such that $s\sim_{ag} s_1$ and $s\sim_{ag} s_2$, the intension of $w$ is static at $s_1$, and persistently evolutionary at $s_2$. \end{theorem}\begin{proof} The proof is straightforward. \end{proof} The above theorem says that it is not possible for the agent who lives in the persistently evolutionary Kripke model $K_e$ to become aware of whether the intension of a word $w$ is static or persistently evolutionary. This is because, at each stage of time (at each state of the Kripke model $K_e$), the agent has only observed a finite set of experiences, and as he cannot travel to the past (go back to a prefix of the current state), he cannot experience different orderings of his behavior to make sure whether the actual meaning function along which the universe evolves is order-sensitive or not. \begin{quote}If the agent wants to be aware of a change, then he must experience an event differently from the way he has already experienced the same event. But as the evolution happens persistently, this is impossible.\end{quote} In persistent evolution, the behavior of the agent changes the future, which has not yet occurred.

\section{Persistently Evolutionary Semantics for Propositional Logic}\label{Logic} Persistently evolutionary Kripke structures can be considered as models for propositional logic. Let $P_l=\{p_i\mid i\in I\}$ be a set of atomic formulas. The language $L_l$ of propositional logic is the smallest set containing $P_l$ that satisfies the following condition: \begin{center} $\varphi,\psi\in L_l\Rightarrow \varphi\wedge\psi, \varphi\vee\psi, \neg\varphi, \varphi\rightarrow\psi, K\varphi, C_f\varphi, \Box\varphi \in L_l$. \end{center} We say $K=\langle S,\Pi,\sim_{ag}, V \rangle$ is a Kripke structure for propositional logic whenever the set of actions is $A=P_l$, the set of atomic formulas of the structure $K$ is $\mathrm{P}=\{(p=b)\mid p\in P_l, b\in\{0,1\}\}$, and for each $\pi_i\in\Pi$, $\pi_i((\langle p_1,p_2,...,p_n\rangle,i),p)=\{(p=b)\}$ for some $b\in\{0,1\}$.
\begin{definition} For every formula $\varphi\in L_l$, we define \[ \begin{array}{lcl} (K,s)\models p & \emph{iff} & (p=1)\in V(s) \\ (K,s)\models (\varphi\vee\psi) & \emph{iff} & (K,s)\models\varphi~\emph{or}~(K,s)\models\psi \\ (K,s)\models (\varphi\rightarrow \psi) & \emph{iff} & \emph{for all}~ t\in D(s),~ \emph{if}~(K,t)\models\varphi~ \emph{then}~(K,t)\models\psi \\ (K,s)\models (\varphi\wedge\psi) & \emph{iff} & (K,s)\models\varphi~\emph{and}~(K,s)\models\psi \\ (K,s)\models\neg\varphi & \emph{iff} & \emph{for all}~ t\in D(s),~(K,t)\not\models\varphi \\ (K,s)\models K_{ag}\varphi & \emph{iff} & \emph{for all}~ t\in S, ~\emph{if}~t\sim_{ag} s~\emph{then}~(K,t)\models\varphi\\ (K,s)\models \Box\varphi & \emph{iff} & \emph{for all}~ t\in D(s),~(K,t)\models\varphi \\ (K,s)\models C_f\varphi & \emph{iff} & \emph{there exists an infinite set}~ Path=\{ s_0,s_1,...\},\\ && \emph{where}~s_0=s,~\emph{and for each}~i,~ s_{i+1}\in D(s_i)~\emph{and}~(K_{Path},s_i)\not\models \varphi \end{array} \] \end{definition} One may check that if we omit the operators $K$, $\Box$ and $C_f$ from the language $L_l$, then the persistently evolutionary semantics is sound and complete for intuitionistic propositional logic (see Chapter 2 of \cite{kn:TD}).

\section{Persistently Evolutionary Semantics for Predicate Logic}\label{Logic2} In this part, we propose a persistently evolutionary semantics for predicate logic. A predicate language $L_o$ contains \begin{itemize} \item a set of predicate symbols $\mathcal{R}$, and a natural number $n_R$ for each $R\in \mathcal{R}$ as its arity, \item a set of function symbols $\mathcal{F}$, and a natural number $n_f$ for each $f\in \mathcal{F}$ as its arity, \item a set of constant symbols $\mathcal{C}$. \end{itemize} \begin{definition} A \emph{partial} $L_o$-structure $\mathcal{N}$ is given by the following data: \begin{itemize} \item[1)] a nonempty set $N$ called the domain, \item[2)] a partial function $f^\mathcal{N}:N^{n_f}\rightarrow N$ for each $f\in \mathcal{F}$, \item[3)] a set $R^\mathcal{N}\subseteq N^{n_R}$ for each $R\in \mathcal{R}$, \item[4)] a \emph{partial} zero-ary function $c^\mathcal{N}\in N$ for each $c\in \mathcal{C}$. (In this way, there could be some constant symbols $c\in \mathcal{C}$ which are not interpreted in the structure.) \end{itemize} \end{definition} We refer to $R^\mathcal{N},f^\mathcal{N},c^\mathcal{N}$ as the interpretations of the symbols $R,f,c$. \begin{definition} $TERM$ is the smallest set containing \begin{itemize} \item[] the variable symbols, \item[] the constant symbols in $\mathcal{C}$, \item[] for each function symbol $f\in \mathcal{F}$: if $t_1,t_2,...,t_{n_f}\in TERM$ then $f(t_1,t_2,...,t_{n_f})\in TERM$. \end{itemize} \end{definition} The interpretation of a term $t$, denoted by $t^\mathcal{N}$, is defined to be a \emph{partial} function from $N^k$ to $N$ for some $k$, similarly to the interpretation of terms in model theory (see Definition~1.1.4~of~\cite{kn:model}). The only difference is that the interpretations are partial functions. \begin{definition} $FORMULA$ is the smallest set satisfying the following conditions: \begin{itemize} \item[] $\perp\in FORMULA$, \item[] if $t_1,t_2\in TERM$ then $t_1=t_2\in FORMULA$, \item[] for each predicate symbol $R\in \mathcal{R}$, if $t_1,t_2,...,t_{n_R}\in TERM$ then $R(t_1,t_2,...,t_{n_R})\in FORMULA$, \item[] if $\varphi,\psi\in FORMULA$ then $\neg\varphi, \varphi\wedge\psi, \varphi\vee\psi, \varphi\rightarrow\psi, \forall y\varphi, \exists y\varphi \in FORMULA$.
\end{itemize} \end{definition} A persistently evolutionary Kripke structure $K_{L_o}$ for the language $L_o$ is defined as follows: \begin{itemize} \item[-] a nonempty set $\mathcal{O}$ called the domain; \item[-] the set of actions of the Kripke structure is $A_O=\{R(\vec{o})\mid \vec{o}\in \mathcal{O}^{n_R},~R\in \mathcal{R}\}\cup \{f(\vec{o})\mid \vec{o}\in \mathcal{O}^{n_f},~f\in \mathcal{F} \}\cup\{c\mid c\in \mathcal{C}\}$; \item[-] the set of atomic propositions of the Kripke structure is $P_O=\{(R(\vec{o})=b)\mid b\in\{0,1\},\vec{o}\in \mathcal{O}^{n_R}, R\in \mathcal{R} \}\cup\{(f(\vec{o})=o')\mid \vec{o}\in \mathcal{O}^{n_f}, o'\in \mathcal{O}, f\in \mathcal{F}\}\cup\{(c=o)\mid c\in \mathcal{C}, o\in \mathcal{O}\}$. \end{itemize} The set of meaning functions $\Pi$ of the Kripke structure is the set of all functions $\pi_i$ which satisfy the persistently evolutionary condition and \begin{itemize} \item[] $\pi_i(s,R(\vec{o}))=\{(R(\vec{o})=b)\}$ for some $b\in\{0,1\}$, \item[] $\pi_i(s,f(\vec{o}))=\{(f(\vec{o})=o')\}$ for some $o'\in \mathcal{O}$, \item[] $\pi_i(s,c)=\{(c=o)\}$ for some $o\in \mathcal{O}$. \end{itemize} The meaning of predicates and the values of functions are not predetermined in the Kripke structure. As soon as the agent $ag$ chooses a predicate symbol $R$ and a tuple $(o_1,o_2,...,o_{n_R})$ to find the value of $R(o_1,o_2,...,o_{n_R})$, the meaning function gives out an atomic proposition $(R(o_1,o_2,...,o_{n_R})=b)$, $b\in\{0,1\}$, and the current state evolves to a new state. \begin{definition} Let $s$ be a state of the persistently evolutionary Kripke model $K_{L_o}$. The partial $L_o$-structure of the state $s$, denoted by $\mathcal{N}_s$, is defined as follows: \begin{itemize} \item[1-] the domain of the structure $\mathcal{N}_s$ is the domain $\mathcal{O}$ of the Kripke model; \item[2-] for each predicate symbol $R\in \mathcal{R}$, the relation $R^{\mathcal{N}_s}$ is defined to be $\{\vec{o}\mid (R(\vec{o})=1)\in V(s)\}$; \item[3-] for each function symbol $f\in \mathcal{F}$, the partial function $f^{\mathcal{N}_s}$ is defined to be $\{(\vec{o},o')\mid (f(\vec{o})=o')\in V(s)\}$; \item[4-] for each constant symbol $c\in \mathcal{C}$, we define $c^{\mathcal{N}_s}=o$ if $(c=o)\in V(s)$. \end{itemize} \end{definition} We say a constant $c$ is predetermined at a state $s$ whenever $c^{\mathcal{N}_s}=o$ for some $o\in \mathcal{O}$. We say a predicate symbol $R$ is predetermined for $\vec{o}$ at a state $s$ whenever $(R(\vec{o})=b)\in V(s)$ for some $b\in\{0,1\}$. We say a function symbol $f$ is predetermined for $\vec{o}$ at a state $s$ whenever $f^{\mathcal{N}_s}(\vec{o})=o'$ for some $o'\in \mathcal{O}$. Being predetermined extends inductively to terms and formulas. \begin{definition} Let $K_{L_o}=\langle S,\Pi=\{\pi_i\mid i\in I\}, \sim_{ag}, V\rangle$ be a persistently evolutionary Kripke structure for the language $L_o$, and let $s$ be a state of this model. Also let $\phi$ be a formula with free variables $\vec{y}=(y_1,y_2,...,y_n)$, and let $\vec{o}=(o_1,o_2,...,o_n)\in \mathcal{O}^n$. We inductively define $(K,s)\models \phi(\vec{o})$ as follows.
\begin{itemize} \item[-] if $\phi$ is $t_1=t_2$, then $(K,s)\models \phi(\vec{o})$ iff $t_1^{\mathcal{N}_s}(\vec{o})=t_2^{\mathcal{N}_s}(\vec{o})$; \item[-] if $\phi$ is $R(t_1,t_2,...,t_{n_R})$, then $(K,s)\models \phi(\vec{o})$ iff $(t_1^{\mathcal{N}_s}(\vec{o}), t_2^{\mathcal{N}_s}(\vec{o}),...,t_{n_R}^{\mathcal{N}_s}(\vec{o}))\in R^{\mathcal{N}_s}$; \item[-] if $\phi$ is $\neg\psi$, then $(K,s)\models \phi(\vec{o})$ iff for all $w\in D(s)$, $(K,w)\not\models \psi(\vec{o})$; \item[-] if $\phi$ is $\varphi\rightarrow \psi$, then $(K,s)\models \phi(\vec{o})$ iff for all $w\in D(s)$, if $(K,w)\models \varphi(\vec{o})$ then $(K,w)\models \psi(\vec{o})$; \item[-] if $\phi$ is $\varphi\wedge \psi$, then $(K,s)\models \phi(\vec{o})$ iff $(K,s)\models \varphi(\vec{o})$ and $(K,s)\models \psi(\vec{o})$; \item[-] if $\phi$ is $\varphi\vee \psi$, then $(K,s)\models \phi(\vec{o})$ iff $(K,s)\models \varphi(\vec{o})$ or $(K,s)\models \psi(\vec{o})$; \item[-] if $\phi$ is $\forall x \psi(\vec{y},x)$, then $(K,s)\models \phi(\vec{o})$ iff for all $w\in D(s)$ and all $o'\in \mathcal{O}$, if $\psi(\vec{o},o')$ is defined in the partial structure $\mathcal{N}_w$ (\underline{predetermined} at the state $w$) then $(K,w)\models \psi(\vec{o},o')$; \item[-] if $\phi$ is $\exists x \psi(\vec{y},x)$, then $(K,s)\models \phi(\vec{o})$ iff there exists $o'\in \mathcal{O}$ such that $(K,s)\models \psi(\vec{o},o')$. \end{itemize} \end{definition} \begin{proposition} For every formula $\varphi\in L_o$ and every state $(K,s)$, if $(K,s)\models \varphi$ then for all $s'\in D(s)$, $(K,s')\models \varphi$. \end{proposition}\begin{proof} It is straightforward. \end{proof}

\subsection{Free Will} One of our purposes in proposing persistently evolutionary semantics is to provide a framework to formalize the notion of free will. We discussed the notion of free will in Section~4.3 of~\cite{kn:comp1}. In this part, we repeat the same discussion using persistently evolutionary Kripke structures. Let $R$ be a unary predicate symbol, and let $\mathcal{O}=\{0,1\}^*$ be the set of all finite strings over $0$ and $1$. Consider the meaning function $\pi_j$ defined as follows: for $s=(\langle R(x_1), R(x_2),...,R(x_k)\rangle,j)$ and $x\in \{0,1\}^*$, \begin{itemize} \item if $x=x_i$ for some $1\leq i\leq k$, then $\pi_j(s,R(x))$ is defined to be $\pi_j(s',R(x_i))$ for $s'=(\langle R(x_1), R(x_2),...,R(x_{i-1})\rangle,j)$, where $i$ is the least such index; \item if $x\neq x_i$ for all $1\leq i\leq k$, and there exists $1\leq i\leq k$ such that $x_i=x0$ or $x_i=x1$ and, for $s'=(\langle R(x_1), R(x_2),...,R(x_{i-1})\rangle,j)$, $\pi_j(s',R(x_i))=\{(R(x_i)=1)\}$, then $\pi_j(s,R(x))$ is defined to be $\{ (R(x)=0)\}$; \item otherwise, $\pi_j(s,R(x))$ is defined to be $\{ (R(x)=1)\}$. \end{itemize} It is easy to check that the meaning function $\pi_j$ behaves similarly to the persistently evolutionary Turing machine $PT_1$ introduced in Example~4.6 of~\cite{kn:comp1}. The next theorem is a formal version of Theorem~4.9 of~\cite{kn:comp1}. Let $K'$ be the persistently evolutionary Kripke model whose set of meaning functions $\Pi$ is $\{\pi_j\}$. \begin{theorem}\label{fr} Let \begin{center} $\varphi:= (\exists k\in \mathbb{N})(\forall n>k)(\exists x\in\{0,1\}^*)(|x|=n\wedge R(x))$. \end{center} We have, for the initial state $s=(\langle\rangle,j)$, \begin{center} $(K',s)\models \Box C_f \varphi \wedge \Box C_f \neg \varphi$.\end{center} \end{theorem} \begin{proof} The agent can develop the future in two ways such that if the first way happens, $\varphi$ is true in the universe, while if the second happens, $\neg\varphi$ is true.
We define two orderings $\preceq_1,\preceq_2$ on the elements of $\{0,1\}^*$ as follows. Let $x_1,x_2\in\{0,1\}^*$. \begin{itemize} \item[1-] if $|x_1|<|x_2|$ then $x_1\preceq_1 x_2$; \item[2-] if $|x_1|=|x_2|$ then \begin{itemize} \item[] $0\preceq_1 1$, \item[] if $x_1\preceq_1 x_2$ then $x_1a\preceq_1 x_2a$ for $a\in \{0,1\}$, \item[] $x_10\preceq_1 x_11$; \end{itemize} \end{itemize} and \begin{itemize} \item[1-] if $|x_1|+1<|x_2|$ then $x_1\preceq_2 x_2$; \item[2-] if $|x_1|=|x_2|$ then $x_1\preceq_2 x_2$ iff $x_1\preceq_1 x_2$; \item[3-] if $|x_1|+1=|x_2|$ and $|x_1|$ is even then $x_1\preceq_2 x_2$; \item[4-] if $|x_1|+1=|x_2|$ and $|x_1|$ is odd then $x_2\preceq_2 x_1$. \end{itemize} Now let $y_1,y_2,...$ be an enumeration of the elements of $\{0,1\}^*$ with respect to the ordering $\preceq_1$, and let $z_1,z_2,...$ be an enumeration of the elements of $\{0,1\}^*$ with respect to the ordering $\preceq_2$. For each $n\in \mathbb{N}$, let $s_n=(\langle R(y_1),R(y_2),...,R(y_n)\rangle, j)$ and $s'_n=(\langle R(z_1),R(z_2),...,R(z_n)\rangle, j)$. Let $Path_1=\{s_1,s_2,...\}$ and $Path_2=\{s'_1,s'_2,...\}$. Along $Path_1$, every string is queried before its children, so every query is answered $1$ and $\varphi$ holds, which witnesses $C_f\neg\varphi$; along $Path_2$, the strings of each odd length are queried after their children, so they are all answered $0$ and $\neg\varphi$ holds, which witnesses $C_f\varphi$. We are done. \end{proof}

Let $(K,s)$ be an arbitrary state of a persistently evolutionary Kripke model. One may easily observe that for every formula $\psi$, $(K,s)\models \Box C_f\psi\rightarrow \Box\neg K_{ag}\psi$. This is because if $(K,s)\models C_f\psi$ then $(K,s)\not\models \psi$ and thus $(K,s)\not\models K_{ag}\psi$. Therefore, for the formula $\varphi$ in Theorem~\ref{fr}, we have $(K',s)\models \Box\neg K_{ag} \varphi \wedge \Box\neg K_{ag} \neg \varphi$. It says that the agent $ag$ never has evidence for $\varphi$ and never has evidence for $\neg\varphi$. Therefore the principle of ``from perpetual ignorance to negation'' (PIN, see Chapter~5 of~\cite{kn:ob}) is not true in persistently evolutionary Kripke models.

\section{A Persistently Evolutionary Kripke Structure for Computation Environments}\label{compu} In this section, we propose a persistently evolutionary Kripke structure, $K_{ce}$, for the notion of computation environments. The language of computation environments $L_{ce}$ contains \begin{itemize} \item a predicate symbol $SB$ for the successful box, \item a function symbol $TB$ for the transition box. \end{itemize} Let $INST_s$ and $CONF_s$ be the two sets introduced in the Turing computation environment (see Example~3.4 of~\cite{kn:comp1}). The set of actions is defined to be \begin{itemize}\item[] $A_{ce}=\{SB(C)\mid C\in CONF_s\}\cup \{ TB(C,\iota)\mid C\in CONF_s, \iota\in INST_s\}$. \end{itemize} The set of atomic propositions of the Kripke structure $K_{ce}$ is defined to be \begin{itemize} \item[] $P_{ce}=\{SB(C)=b\mid b\in\{0,1\}, C\in CONF_s\}\cup\{TB(C,\iota)=C'\mid C,C'\in CONF_s, \iota\in INST_s\}$. \end{itemize} The set of meaning functions $\Pi_{ce}$ is defined to be the set of all functions $\pi$ which satisfy the persistently evolutionary condition and, for every $s\in S_{ce}$, $C=(q,xb_1\underline{a}b_2y)\in CONF_s$, and $\iota\in INST_s$, \begin{itemize} \item[] if $\pi(s,SB(C))=\{ SB(C)=1\}$ then either $C=(h,\underline{\triangle}x)$ or $C=(h,x\underline{\triangle})$, \item[] if $C=(h,\underline{\triangle}x)$ then $\pi(s,SB(C))=\{ SB(C)=1\}$, \item[] $\pi(s,TB(C,\iota))=\{TB(C,\iota)=(p,xb_1c\underline{b_2}y)\}$ for $\iota=[(q,a)\rightarrow (p,c,R)]$, \item[] $\pi(s,TB(C,\iota))=\{TB(C,\iota)=(p,x\underline{b_1}cb_2y)\}$ for $\iota=[(q,a)\rightarrow (p,c,L)]$,
\item[] if $\iota$ is of neither of the two forms above then $\pi(s,TB(C,\iota))=\{TB(C,\iota)=\perp\}$. \end{itemize} Let $\pi_i$ be the meaning function that behaves according to the boxes $SBOX_s$ and $TBOX_s$ of the Turing computation environment. We prove that \begin{center}$(K_{ce},(\langle\rangle,i))\models \neg K_{ag} (\mathrm{P=NP})$.\end{center} To do this, we should prove that for every finite sequence of actions $\vec{a}$ in $A_{ce}$, there exists a meaning function $\pi_j$ such that $(\vec{a},i)\sim_{ag}(\vec{a},j)$ and $(K_{ce},(\vec{a},j))\not\models (\mathrm{P=NP})$. Suppose $\vec{a}=\langle a_1,a_2,...,a_n\rangle$, and let $H=\{a_k\mid a_k= SB((h,x\underline{\triangle}))\ \text{for some}\ x\}$. We construct a meaning function $\pi_j$ determined by the following boxes. For the function symbol $TB$, the meaning function $\pi_j$ behaves based on the transition box $TBOX_s$. For the predicate symbol $SB$, it behaves as follows. We persistently evolve the persistently evolutionary machine $PT_1$ in such a way that for every $x\in \Sigma^*$, if there exists a configuration $C=(h,x\underline{\triangle})$ such that $SB(C)\in H$, then the machine $PT_1$, after evolution, outputs $1$ for $x$ if and only if $\pi_i((\vec{a},i), SB(C))=1$~\footnote{Actually, since $\pi_i$ is the meaning function that accords with the $SBOX_s$ of the Turing computation environment, for all $C=(h,x\underline{\triangle})$ such that $SB(C)\in H$, we have $\pi_i((\vec{a},i), SB(C))=1$.}. Then we construct a successful box, denoted by $SBOX'$, whose inner structure is similar to that of $SBOX_e$ except that the $PT_1$ machine is replaced by the evolved $PT_1$ machine above. Now, we let $\pi_j$ be the meaning function that behaves according to $SBOX'$. Then the following two claims are straightforward: \begin{itemize} \item[1-] $(\vec{a},i)\sim_{ag}(\vec{a},j)$, and \item[2-] $(K_{ce},(\vec{a},j))\not\models (\mathrm{P=NP})$. \end{itemize} One should verify that at the state $(\vec{a},j)$, the formula $\mathrm{P=NP}$ conflicts with the free will of the agent (see the proof of Theorem~5.8 of~\cite{kn:comp1}), and thus we have $(K_{ce},(\vec{a},j))\not\models (\mathrm{P=NP})$. Since the finite sequence $\vec{a}$ was assumed to be arbitrary, we have proved that \begin{center}$(K_{ce},(\langle\rangle,i))\models \neg K_{ag} (\mathrm{P=NP})$,\end{center} which informally means that \begin{itemize} \item[] the agent $ag$ can never know (have evidence for) $\mathrm{P=NP}$. \end{itemize}
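For concreteness, the deterministic transition-box clauses above can be read as ordinary code. The following minimal Python sketch (ours, with an assumed encoding of configurations as triples of state, tape and head position) applies an instruction $[(q,a)\rightarrow(p,c,d)]$ to a configuration, returning $\perp$ (here \texttt{None}) when the instruction does not match the scanned state and symbol:

\begin{verbatim}
BLANK = "_"   # plays the role of the blank symbol (the triangle above)

def tb(config, inst):
    """Apply instruction inst = ((q, a), (p, c, d)) to config."""
    (state, tape, head) = config
    ((q, a), (p, c, d)) = inst
    if state != q or tape[head] != a:
        return None                    # the bottom case: no applicable move
    tape = tape[:head] + c + tape[head + 1:]   # write c on the scanned cell
    head = head + 1 if d == "R" else head - 1  # move the head
    if head < 0:                       # extend the tape with blanks
        tape, head = BLANK + tape, 0
    elif head == len(tape):
        tape = tape + BLANK
    return (p, tape, head)

# In state "q" scanning "1": write "0", move right, switch to state "p".
print(tb(("q", "11", 0), (("q", "1"), ("p", "0", "R"))))  # ('p', '01', 1)
\end{verbatim}

The successful box and the evolving $PT_1$ machine of \cite{kn:comp1} are deliberately not modeled here; the point of the construction above is precisely that the agent cannot tell, from a finite history, which successful box the environment runs.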
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The present paper discusses the dynamics of extremism in a democratic setting. A probably over-optimistic view of democracy is that when opinions are openly expressed, some consensus opinion would emerge and citizens would vote in favour of a government whose actions would be in accordance with the views of a large majority of citizens. This utopia is shared by many writers, but history has consistently shown us that national consensus is a dream which may occasionally occur in wartime, not the normal state of affairs. At the very least, one would expect the elected government to be close enough to a centrist position satisfying the largest proportion of citizens, like the ice cream seller choosing to put his stand near the middle of a linear beach (\cite{hotel}). Once again, history since the eighteenth-century Enlightenment period in Western Europe contradicts these simple views, and we are observing a smooth evolution neither towards more consensus nor towards the success of centrist parties. We rather observe an alternation between regimes dominated by centrist political parties and regimes of strong ideological fights between more extremist parties, eventually leading to de facto dictatorship, according to time periods and world regions.

The present paper is an essay to model possible evolutions of public opinion leading to different opinion aggregation landscapes forming the basis of political entities corresponding to parties. We here develop a model of opinion dynamics in order to answer such questions as: \begin{itemize} \item How come rational\footnote{Rational here does not refer to economists' full rationality but rather to its common-sense meaning: people able to practice some form of reasoning.} people choose extremism? \item How does an initially low proportion of anti-conformists influence (or fail to influence) a large fraction of the general population to aggregate into powerful extremist clusters? \item What characterises political clusters in terms of the number of agents in the cluster and their distance to a middle opinion? More precisely, what are the regions in the parameter space of the model which lead to the different outcomes of the dynamics? \end{itemize} The simulations presented in this paper are based on opinion dynamics: agents exchange their views on the occasion of encounters, and they might update their opinion as a result of these exchanges. We are well aware that opinion formation in politics involves many other processes than encounters and discussions among individuals: media, political parties, the government and other political institutions are involved as well. For the sake of clarity, we postpone the discussion of the robustness of our results with respect to these other factors to the last section of the paper.

The earliest models of opinion dynamics were binary opinion models, where opinions could take only two discrete values, e.g. $-1$ and $+1$, in the so-called voter models described in \cite{holley75} and \cite{galam82} and summarised in \cite{rmp}. We here present a model based on continuous opinions, more adapted than binary opinions to the discussion of the assets and liabilities of political choices among agents, and to the traditional right/left axis of political analysts. It is inspired by two approaches, the bounded confidence model of \cite{dna} and the anti-conformism model of \cite{smalep}. Since these two models are used as building blocks of our model, we will first summarise their main aspects.
The rest of the paper is divided into three sections: \begin{itemize} \item Short reminders of the previous models: \begin{itemize} \item the \cite{dna} bounded confidence model, including its application to extremism; \item the Smaldino and Epstein model of anti-conformism. \end{itemize} \item Our synthetic model is developed and its results presented. \item Conclusions and discussion. \end{itemize} \paragraph{Disclaimer} The present paper should not be interpreted as normative: we rather try to describe the evolution of opinions and political choices. One can certainly give examples, such as civil rights, in which opinions initially considered extremist were later largely accepted by the public, and other cases in which the consequences of extremism turned out to be dramatic.

\section{Essentials of former models} In order to achieve consistency in notation and hypotheses, we use our own notation throughout the paper, which sometimes differs from those of \cite{dna} and \cite{smalep}, and we make appropriate scale changes. \subsection{Bounded confidence} The bounded confidence model is based on a major cognitive bias, the confirmation bias (\cite{plous}): we are mostly influenced by opinions close to ours and tend to reject opinions too far away. The mathematical model was independently introduced by \cite{dna} and by \cite{HK}. It follows the spirit of Axelrod's earlier model of dissemination of cultures. In the \cite{axel} model, cultures are described by strings of integers. Pairs of agents interact if their cultures are already close enough, in which case one of them adjusts one feature of its culture string to match that of the other agent's culture. In bounded confidence models, opinions are represented by real numbers. When opinion differences are lower than a confidence threshold, agents adjust their opinions by decreasing this difference. In the Deffuant et al. model, pairs of agents are randomly chosen in the population of agents, and they adjust their opinions if the confidence condition is met. Another pair is then randomly chosen, and so on. Such an iteration mode is called random sequential. By contrast, \cite{HK} apply the same opinion updating equation but use parallel iteration: all opinions are updated simultaneously. Their choice is well adapted to discussions in committees, for instance. We will consistently use random sequential iteration in this paper.

\subsection{Deffuant et al. bounded confidence model} The \cite{dna} bounded confidence model was introduced to model situations in which actors have to take decisions involving cost/benefit analysis in terms of money. Such was the case when the Common Agricultural Policy was modified in 1992: farmers were offered the possibility to change their former practices in favour of more environment friendly practices, e.g. by reducing fertiliser and pesticide use, in exchange for financial aid. But optimising new practices involved a lot of financial uncertainties, and surveys demonstrated that farmers would have many social interactions discussing the pros and cons of the environmental contracts before taking any decision. The \cite{dna} model can be simply described: opinions are represented by a {\bf continuous variable} $x$. Two randomly chosen agents with opinions $x$ and $x'$ interact if, and only if, $|x-x'|<u$.
Opinions are updated according to: \begin{equation} \begin{array}{c} x = x + \mu \cdot (x'-x) \\ x' = x' + \mu \cdot (x-x') \end{array} \end{equation} $u$ represents the uncertainty of the agents, and $\mu$, taken between 0 and $0.5$, is a kinetic parameter. If the two initial opinions are close enough, the two agents interact and their opinions move closer. Otherwise, no opinion change occurs\footnote{Many extensions of the bounded confidence model were proposed, as described in the review of \cite{rmp}. Some take into account the possibility of repulsion among agents, such as \cite{huet,aliza}: agents are attracted for small differences in opinions, but can also have repulsive interactions when their difference is larger than another, upper threshold. Other models, \cite{kurmy}, consider two populations of interacting agents, some having only attractive interactions, others also having repulsive interactions.}. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{mv.eps}} \caption{Agents with initial opinions $x$ and $x'$ move their opinions closer to each other. The threshold for actual interaction $u$ is interpreted as a confidence or uncertainty parameter.} \end{figure} \paragraph{Simulations} An initial distribution of agents' opinions is first randomly established. To achieve maximum randomness, most authors choose a uniform distribution on a segment. The first model of Deffuant used $[0,1]$ as the initial segment. Later, most authors used $[-1,1]$ as the initial segment, which we do in this paper for the sake of comparison. At each time step, a random pair of agents is chosen, to which the above described updating algorithm is applied. Simulations are stopped after convergence of opinions into one or several clusters. \paragraph{Results} Opinions vs time plots, figure 2, represent opinion dynamics. Each point on the graph represents the opinion of an individual agent along the y axis at time t on the horizontal axis, when the agent is tested for opinion change. The time unit for all plots corresponds to 1000 pair updatings (on average, each agent is tested twice per time unit). Individual dots can hardly be distinguished on these plots, but the envelope of the clouds gives an indication of the gradual convergence of opinions in the course of time. \begin{figure}[!h] \centerline{\epsfxsize=80mm \epsfbox{convergence.eps} \epsfxsize=80mm \epsfbox{2pics.eps}} \caption{Comparison of opinion dynamics for different uncertainties $u$ ($u=0.6$ for the left plot, $u=0.4$ for the right plot).} \end{figure} These two plots compare the evolution of opinions until convergence. The number of agents is 1000, $\mu=0.1$, and initial opinions are uniformly and randomly chosen on the segment $[-1,1]$. Uncertainty $u$ is 0.6 for the left plot and 0.4 for the right plot. Time 150 corresponds to sampling $150\,000$ pairs, so each agent has been sampled for updating 300 times on average. \cite{dna} have shown that the agents' uncertainty $u$ is the main determinant of the outcome of the dynamics. To summarise the results relevant to our analysis: \begin{itemize} \item Opinions are clustered in $n=int(\frac{1}{u})$ clusters which do not interact any more after a long enough time ($int$ stands for integer part). \item Clusters are of equal size and are at least $2u$ apart. \end{itemize} We will further refer to the above statements as the $n=int(\frac{1}{u})$ rule. The general expression for an initial distribution of opinions on a segment of width $w$ is $n=int(\frac{w}{2u})$. \FloatBarrier
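As an illustration, here is a minimal Python sketch (ours, not the authors' code) of the random sequential bounded confidence dynamics with the parameter values of the right plot:

\begin{verbatim}
import random
from collections import Counter

N, u, mu = 1000, 0.4, 0.1       # agents, uncertainty, kinetic parameter
x = [random.uniform(-1.0, 1.0) for _ in range(N)]

for step in range(150 * 1000):  # 150 time units of 1000 pair updates
    i, j = random.randrange(N), random.randrange(N)
    if i != j and abs(x[i] - x[j]) < u:
        shift = mu * (x[j] - x[i])
        x[i], x[j] = x[i] + shift, x[j] - shift

# With u = 0.4, opinions end up in int(1/u) = 2 clusters at least 2u apart.
print(Counter(round(v, 1) for v in x))
\end{verbatim}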
\subsection{Deffuant et al.\ model of extremism} \cite{daw} later proposed a model of extremism prevalence inspired by the bounded confidence model. Their extremism model bears certain differences from the bounded confidence model; the most relevant for us is the introduction of a small fraction of extremists in the agent population. Extremists differ from the other agents by having a very low uncertainty and extreme opinions at the ends of the spectrum $[-1,1]$. \cite{daw} also added a dynamics on uncertainty: agents exchange not only opinions but also uncertainties $u$. The main issue in \cite{daw} is whether the small fraction of extremists is able to drag towards extremism the normal (centrist) agents, who have an initially larger uncertainty and an extended distribution of opinions. The initial fraction of extremists plays some role in the outcome of the dynamics, but the most significant result is that the centrists' uncertainty $u$ determines the outcome of the dynamics in large regions of the parameter space. For narrow uncertainties, say $u \leq 0.4$, a few percent of centrists are dragged towards extremism. For $u \simeq 1$, initially moderate agents are split into two opposed extremist fractions, and for $u \geq 1.4$ most moderate agents are dragged towards a single asymmetric extremist attractor, close to either $-1$ or $+1$. This extremism model and the full set of results described in \cite{daw}, and more recently in \cite{def2006}, show that extremists can convert a large fraction of the population for the largest values of the centrist uncertainty, even when their number is relatively small with respect to the total population. What is missing is the reason why extremism first arises. \cite{daw} postulate the initial existence of some extremists in the population from empirical observations of the political scene of most countries, including democracies. The model by \cite{smalep} provides a clue to the origin of extremism.

\subsection{Anti-conformism generates extremism} \cite{smalep} recently introduced a model of anti-conformism and its consequences on individual preferences: ``Social Conformity Despite Individual Preferences for Distinctiveness''.\footnote{Their paper does not explicitly refer to politics, but they quote several references about politics.} The concept of distinctiveness plays a central role in several social psychological theories of self and identity processes. In such frameworks, positions are most often taken in multi-dimensional spaces. Since we here represent political opinions as continuous variables on a bounded support, the most distinctive opinions should be those at the boundaries. The idea is that some political agents choose anti-conformist attitudes to attract attention and gain some prestige. The process can be observed among political activists, and we will further discuss in the conclusion section why some professional political agents choose non-conformism or extreme positions. In the Smaldino-Epstein model, instead of exchanging with other agents to share common views, anti-conformist agents react to the distribution of opinions (which they are supposed to be aware of) and choose opinions away from the average opinion of the other agents. Anti-conformists view as ideal a position $x^*$ such that: \begin{equation} x^* = x_{aver} + \delta \cdot \sigma \end{equation} where $x_{aver}$ is the average opinion of the distribution, $\sigma$ is the standard deviation of the distribution and $\delta$ is a kind of anti-conformist strength.
Agents then gradually update their opinion in the direction of $x^*$ according to: \begin{equation} x = x + \mu \cdot (x^*-x) \end{equation} Although anti-conformists wish to remain distinct from the crowd, since they all share the same goal $x^*$, the standard deviation of the distribution actually decreases at each iteration by a factor $1-\mu$ and they converge towards a single attractor. Starting from an initial uniform distribution on the segment $[-1,+1]$, for positive values of $\delta$, the final opinion cluster is well above the initial opinion average, at $\sigma_0\delta$, where $\sigma_0$ is the initial standard deviation: the mean moves by $\mu\delta\sigma$ at each iteration while $\sigma$ shrinks by the factor $1-\mu$, so the total drift is $\sum_t \mu\delta\sigma_0(1-\mu)^t=\delta\sigma_0$. The asymptotic opinion can then stand outside the initial opinion range for large values of $\sigma_0\delta$. In other words, anti-conformism\footnote{Other authors have introduced anti-conformist agents in the simulation of binary opinion dynamics \cite{serge,andre}. In the context of binary opinions, say 0 or 1, anti-conformists have opinions {\bf opposed} to the opinion of their neighbours. In the \cite{smalep} model, as in the present paper, the anti-conformists choose opinions {\bf further out} than those of the other agents. Their position can be described as `plus royaliste que le Roi', or in English `more catholic than the Pope'. Hence, the dynamics of `our' mixed population is quite different from those described in \cite{serge,andre}.} results in convergence to more extreme opinions than the initial average opinion, which makes the process a valuable hypothesis on the origin of extremism. Of course many other factors can play a role, but we will here only investigate the importance of the anti-conformism factor\footnote{The \cite{smalep} paper covers more situations than reported here, including heterogeneity of $\delta$, the anti-conformist strength. It e.g. shows that the above conclusions on the convergence of the dynamics remain true provided that such heterogeneity is limited: the standard deviation of the $\delta$ distribution should be less than 1.}. \bigskip \begin{figure}[h!] \centerline{\epsfxsize=110mm\epsfbox{expo.eps}} \caption{Anti-conformism generates extremism: evolution of opinions (red dots) and of the standard deviation (green dots) for 1000 anti-conformist agents with kinetic parameter $\mu=0.1$ and anti-conformist strength $\delta=1$.} \end{figure} \FloatBarrier
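A minimal Python sketch (ours, assuming synchronous updates of the whole population) reproduces this drift-and-contraction mechanism:

\begin{verbatim}
import random
import statistics

N, mu, delta = 1000, 0.1, 1.0
x = [random.uniform(-1.0, 1.0) for _ in range(N)]

for t in range(200):
    # Every agent chases the same ideal x* = mean + delta * sigma,
    # so the cloud contracts by 1 - mu while its mean drifts.
    x_star = statistics.mean(x) + delta * statistics.pstdev(x)
    x = [xi + mu * (x_star - xi) for xi in x]

# For a uniform distribution on [-1, 1], sigma_0 = 1/sqrt(3) ~ 0.577,
# so the final cluster sits near delta * sigma_0 ~ 0.577.
print(statistics.mean(x), statistics.pstdev(x))
\end{verbatim}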
\section{The synthetic model} In the present synthetic model, two different populations of agents are introduced: \begin{itemize} \item Conformists, obeying a bounded confidence model as in \cite{dna}. The conformists can be directly influenced by all individual agents within their uncertainty range. Their opinions aggregate into clusters. \item Anti-conformists, obeying the anti-conformist model of \cite{smalep}. The anti-conformists are only influenced by the local properties, average opinion and standard deviation, of the agents' opinions within their uncertainty range. As a result, they soon move towards the exterior of the distribution of conformists. \end{itemize} Numerical simulations show that a large fraction of the conformist agents also evolve towards extreme positions, where they form extra clusters with anti-conformists. Let us stress the difference between the anti-conformists of the present model and the extremists of \cite{daw}. In \cite{daw}, the extremists have been given {\bf ab initio} extreme positions, and they are far less susceptible to the centrists' attraction than the other agents. In the present model, anti-conformists have the same initial opinion distribution as the other agents. They evolve towards extreme positions because of their specific dynamics. They become little influenced by centrists when they get close to their equilibrium position, because meanwhile the centrists' opinions have coalesced into clusters, thus reducing the standard deviation of their distribution. In other words, anti-conformists {\bf acquire} a role similar to that of extremists. By contrast, the attraction of conformists towards anti-conformists is maintained unchanged.

\subsection{Model description} Two populations of conformists and anti-conformists co-exist and interact according to the threshold condition. For any agent, anti-conformist or conformist, interactions only involve its neighbourhood, $[x-u,x+u]$. $u$ is the same for conformists and anti-conformists and specifies the interaction range for both types of agents. We start from a uniform initial distribution of agents' opinions on the $[-1,+1]$ segment. At each time step, one agent is first randomly selected. If an anti-conformist is selected at this first draw, which happens with probability $r$, where $r$ is the fraction of anti-conformist agents, it interacts with all the agents within reach of its opinion, $[x_e-u,x_e+u]$, using the Smaldino-Epstein algorithm, moving towards the local $x^*$; that ends the round. If the first agent is a conformist, which happens with probability $1-r$, a second agent is randomly selected and we apply the Deffuant algorithm. In fact, if the second agent is an anti-conformist, only the conformist moves towards the anti-conformist, since anti-conformists are not involved in binary interactions\footnote{We have chosen not to move the anti-conformist, although the alternative choice, moving it according to the SE rule, could have been made. Anyway, the differences in behaviour between the two choices would not have changed the dynamics qualitatively for our choice of parameters.}. In other words, their only moves are those dictated by the Smaldino-Epstein algorithm. Figure 4 describes the tree of probabilities for one sampling round. \bigskip \begin{figure}[h!] \centerline{\epsfxsize=80mm\epsfbox{tree.eps}} \caption{The tree of probabilities for one iteration step. A-Co stands for anti-conformist agents and Co for conformists. The applied algorithms are noted SE (resp. BC) for Smaldino-Epstein (resp. bounded confidence). Refer to the text for the computation of the probability of each branch.} \end{figure} \FloatBarrier Since we here describe the political arena, we further add two more rules for anti-conformists (a code sketch of the resulting sampling round is given below). \begin{itemize} \item Anti-conformists choose positions outside the crowd and as far as possible from the other party(ies); a simple implementation of this inclination is to choose positive (resp. negative) values of $\delta$ for anti-conformists with positive (resp. negative) opinions. This simple rejection rule is only applied to anti-conformists\footnote{This simple implementation does not cause any practical problem for opinion values close to zero, since anti-conformists move away very soon from the average, as observed in figures 5 and 7.}. Anti-conformist opinions are only updated when the first draw is an anti-conformist. \item We take into account the fact that anti-conformists are more active in propaganda than conformists. They express their opinion more often than conformists, whether in the streets or in the media (\cite{bron}).
One extra parameter of the model is $f$, the relative frequency of opinion expression of anti-conformists with respect to that of conformists. This is implemented in the model by the fact that any anti-conformist is randomly selected for interaction $f$ times more often than any conformist. \end{itemize} Let us be more specific about the implementation of this second rule on the random draws. Let $r$ be the fraction of anti-conformists in the population. Since any anti-conformist is selected $f$ times more often than any conformist, anti-conformists are selected for interaction with a probability proportional to $rf$, while conformists are selected with a probability proportional to $1-r$. Normalising probabilities to sum to 1 implies drawing anti-conformists with probability $\frac{rf}{1+r(f-1)}$ and conformists with probability $\frac{1-r}{1+r(f-1)}$.
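The following minimal Python sketch (ours, not the authors' code) puts the pieces together and implements one sampling round, with the weighted draws, the sign-dependent $\delta$ of the rejection rule, and the asymmetric conformist/anti-conformist pair update; the default parameter values match those used in the figures below:

\begin{verbatim}
import random
import statistics

def one_round(x, anti, u=0.3, mu=0.1, delta=2.0, f=20.0):
    """One sampling round. x: list of opinions; anti: list of booleans
    (True for anti-conformists, a fraction r = 0.05 of the agents)."""
    n = len(x)
    # Anti-conformists are drawn f times more often, i.e. with
    # probability r f / (1 + r (f - 1)).
    weights = [f if a else 1.0 for a in anti]
    i = random.choices(range(n), weights=weights)[0]
    if anti[i]:
        # SE move towards the local x*, computed over the neighbourhood
        # [x_i - u, x_i + u]; delta takes the sign of the current opinion
        # (the rejection rule).
        nbrs = [xj for xj in x if abs(xj - x[i]) < u]
        sign = 1.0 if x[i] >= 0 else -1.0
        x_star = statistics.mean(nbrs) + sign * delta * statistics.pstdev(nbrs)
        x[i] += mu * (x_star - x[i])
    else:
        # BC pair update; only the conformist moves when the partner
        # is an anti-conformist.
        j = random.choices(range(n), weights=weights)[0]
        if j != i and abs(x[i] - x[j]) < u:
            if anti[j]:
                x[i] += mu * (x[j] - x[i])
            else:
                x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
\end{verbatim}

A full simulation repeats this round (here 300 to 3000 times per agent) and records the asymptotic opinions.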
\subsection{Simulation results} \paragraph{Time plots and histograms} Let us first compare the time plots (figures 5 and 7) and the asymptotic histograms (figures 6 and 8) of the opinion dynamics when the relative expression frequency parameter $f$ is changed from 1 to 20. The following plots were drawn for an uncertainty level $u=0.3$, anti-conformism strength $\delta=2$, 1000 agents, 300 iterations per agent, a fraction of anti-conformists of 0.05, and kinetic factor $\mu=0.1$. Let us recall, for the sake of comparison, that the $int(\frac{1}{u})$ rule predicts 3 clusters at opinions $-0.66$, $0$ and $0.66$ in the absence of anti-conformists. \begin{figure}[!h] \centerline{\epsfxsize=110mm\epsfbox{tol.031.eps}} \caption{Time evolution of opinions for a 0.3 uncertainty level, $\delta=2$, $\mu=0.1$ and equal chances of opinion expression for conformists (in red) and anti-conformists (in green). Anti-conformists move to the border of the distribution in a few tens of steps and maintain their position at the border until convergence of the conformists is achieved.} \end{figure} \begin{figure}[!h] \centerline{\epsfxsize=110mm\epsfbox{htol.031.eps}} \caption{Histogram of asymptotic opinions after 300 iterations per individual for a 0.3 uncertainty level, $\delta=2$, $\mu=0.1$ and equal chances of opinion expression for conformists (in red) and anti-conformists (in green) (the same simulation conditions as in figure 5). Apart from the presence of anti-conformists at the extremes of the distribution, the positions of the conformist clusters differ little from the $int(\frac{1}{u})$ rule prediction, $-0.66$, $0$ and $+0.66$.} \end{figure} \FloatBarrier As a preliminary conclusion, the presence of 5 percent anti-conformists in the population does not modify the distribution of conformist opinions when anti-conformists have the same level of opinion expression as conformists (figures 5 and 6). \begin{figure}[!h] \centerline{\epsfxsize=110mm\epsfbox{tol.0320.eps}} \caption{Time evolution of opinions for a 0.3 uncertainty level, $\delta=2$, $\mu=0.1$, when anti-conformists (in green) express their views 20 times more often than conformists (in red). We now observe 4 clusters instead of 3, with quite different positions. Anti-conformists have attracted two clusters to more extreme positions around $-1$ and $+1$, and the rest of the conformists have moved closer to the center.} \end{figure} \begin{figure}[!h] \centerline{\epsfxsize=110mm\epsfbox{htol.0320.eps}} \caption{Histogram of asymptotic opinions for a 0.3 uncertainty level, $\delta=2$, $\mu=0.1$, when anti-conformists (in green) express their views 20 times more often than conformists (in red). We now observe 4 clusters instead of 3, with quite different positions. Anti-conformists have attracted two clusters to more extreme positions around $-1$ and $+1$, and the rest of the conformists have moved closer to the center.} \end{figure} \FloatBarrier However, as one can observe on figures 7 and 8, anti-conformists strongly influence the distribution of conformist opinions when they express their views 20 times more often than conformists. Figure 7 shows that the interaction among conformists makes them converge into 4 clusters. The respective positions of the conformist and anti-conformist clusters result from their mutual interaction: anti-conformists first aggregate outside the conformists, but later, after the conformists' aggregation, the anti-conformists move back towards the closest conformist cluster, symmetrically to the motion of the extreme conformist clusters towards the anti-conformists. \FloatBarrier \paragraph{Variability of asymptotic clusters} As a matter of fact, several trials with different random samplings give qualitatively equivalent results: the presence of a few outspoken anti-conformists (with $f=20$) changes the attractors of the opinion dynamics from three clusters to two central clusters plus two extreme clusters; but the cluster positions and amplitudes might change noticeably between simulations with different random samplings for the same set of parameters. Such instabilities are well known in random processes such as Polya urns or the Chinese restaurant process (\cite{polya}). In the case of Polya urns, a coloured ball is randomly drawn from the urn and then two balls of the same colour are put back into the urn. When the initial number of balls is small, say one red ball and one black ball, the proportions in late samplings depend strongly upon the first draws and are thus subject to large fluctuations. We are precisely in a similar case, since the initial opinions and the sampling of the 50 anti-conformists have a strong influence on the outcome of the dynamics. This phenomenon is a particular instance of the path dependence phenomenon observed in non-linear and random processes (\cite{BA,NW}). Path dependence reflects the influence of history, whose importance in politics is not a surprise to political scientists. We therefore performed 100 simulations per set of parameters and display the results as histograms. In the next 3 figures, red histograms correspond to conformist opinions at the end of the simulation and green histograms correspond to anti-conformist opinions. The blue histograms, corresponding to simulations in the absence of anti-conformists, are shown for the sake of comparison. Common simulation parameters are: anti-conformism strength $\delta=2$, 1000 agents, 3000 iterations per agent, fraction of anti-conformists 0.05, kinetic factor $\mu=0.1$ and multiplicative frequency factor of anti-conformists $f=20$. The figures only differ by the uncertainty level: $u=0.3$, $0.4$, $0.6$. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{tol06d2r0.05f20ag1000.eps}} \caption{Histograms of opinions averaged over 100 runs after 3000 iterations per individual, 0.6 uncertainty level. The red (resp. green) histograms are for conformists (resp.
anti-conformists) and the blue histogram was obtained for conformists in the absence of anti-conformists.} \end{figure} \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{tol04d2r0.05f20ag1000.eps}} \caption{Histograms of opinions averaged over 100 runs after 3000 iterations per individual. 0.4 uncertainty level. The red (resp. green) histograms are for conformists (resp. anti-conformists) and the blue histogram was obtained for conformists in the absence of anti-conformists.} \end{figure} \begin{figure} \centerline{\epsfxsize=120mm\epsfbox{tol03d2r0.05f20ag1000.eps}} \caption{Histograms of opinions averaged over 100 runs after 3000 iterations per individual. 0.3 uncertainty level. The red (resp. green) histograms are for conformists (resp. anti-conformists) and the blue histogram was obtained for conformists in the absence of anti-conformists.} \end{figure} The 3 figures confirm that large levels of expression by anti-conformists allow them to drag significant fractions of conformists towards the two emergent extreme clusters, with some variability in positions and relative amplitudes. The influence of anti-conformists increases the number of clusters composed of conformists by (at least) one; two extreme clusters have opinions aligned with those of anti-conformists, while the remaining clusters (of which there can be 0, 1 or 2) are centered closer to the original opinion average\footnote{A word of caution: such histograms could be interpreted either as histograms of the positions of single isolated peaks, as observed in figures 5 and 7, or as the aggregation of wider peaks. Many direct observations of asymptotic histograms of single iteration processes confirm that only the first interpretation is correct. Furthermore, wide peaks would not be stable under the bounded confidence process.}. The following simple argument explains the increase in the number of clusters due to the influence of the anti-conformist agents. Two extreme conformist clusters of initial width $u$ are attracted by the anti-conformists. They do not participate in the formation of the central clusters. The ``effective'' width $w_c$ of the initial segment of conformists who end up in the central clusters is then reduced to $2-2u$, instead of 2. Applying the $n=int(\frac{w_c}{2u})$ rule to the number $n_c$ of central clusters gives $n_c=int(\frac{1-u}{u})$. The predicted total number of clusters is then: \begin{equation} n=2+int(\frac{1-u}{u}) \end{equation} A comparison with simulation results gives: \begin{tabular}{|r|c|c|}\hline uncertainty & predicted number of clusters & observed number\\\hline .6 & 2 & 2 \\\hline .4 & 3 & 3 \\\hline .3 & 4 & 5 with overlap \\\hline \end{tabular} \FloatBarrier
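The predicted values in this table can be reproduced with a few lines of code (a minimal sketch in Python, applying the same $int$ rules as above):
\begin{verbatim}
# Minimal sketch: predicted number of opinion clusters,
# without anti-conformists (the int(1/u) rule) and with
# outspoken anti-conformists (two extreme clusters plus
# the central ones formed on a segment of width 2-2u).
for u in (0.6, 0.4, 0.3):
    without = int(1 / u)
    with_ac = 2 + int((1 - u) / u)
    print(u, without, with_ac)
# prints: 0.6 -> 1 and 2;  0.4 -> 2 and 3;  0.3 -> 3 and 4
\end{verbatim}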
\paragraph{Checking the path dependency} Because the earlier steps of the dynamics are so important, we might expect that anti-conformists have a stronger influence if they step in earlier rather than later. In the next two sets of simulations, the frequency factor $f$ was either decreased or increased linearly in time between 1 and 20, for the same set of parameters as in figure 10. One can check that the red histogram obtained for $f$ decreasing from 20 to 1 is nearly the same as the red one on figure 10, obtained in the presence of 5 percent anti-conformists with relative expression frequency 20, while the green one, for $f$ increasing from 1 to 20, is nearly the same as the blue one on figure 10 (obtained in the absence of anti-conformists). In other words, the early expression of extremist views determines the outcome of the process, while late expression has nearly no effect. \begin{figure} \epsfxsize=120mm\epsfbox{hist_ramp.eps} \caption{Histograms of conformist opinions after 3000 iterations per individual. 0.4 uncertainty level. The red histogram corresponds to decreasing the anti-conformist expression frequency, the green one to increasing it.} \end{figure} \FloatBarrier \paragraph{The consensus regime} For larger uncertainty values, such as $u=0.9$, the situation changes dramatically. Single runs show that there remains only one cluster of conformists at the same position as the anti-conformist cluster; consensus is restored. But a more systematic survey running 300 simulations per set of parameters shows on figure 13 that the position of the single cluster varies widely along the opinion axis; furthermore, no peak structure similar to those observed at lower uncertainties is apparent on the histogram of clusters. The vertical bars of the histogram are the number of opinions divided by the number of conformist agents. Except for a small region in the neighbourhood of opinion 0.0, the heights of the peaks are integer values, indicating that for each simulation all opinions are concentrated in one single cluster. A first tentative explanation for the randomness of the cluster position rests on the faster dynamics at larger $u$ values: nearly all pair samplings satisfy the confidence condition for interaction, and convergence is then faster, as we checked on time plots (not shown). We have seen earlier that faster convergence yields more sensitivity to the early steps of the dynamics and more dispersion of the asymptotic results. Obtaining a consensus for $u=0.9$ is in accordance with the $int(1/u)$ rule of the bounded confidence model; but the standard bounded confidence model yields a consensus attractor close to the center of gravity of the initial distribution, which is quite different from the present result: the consensus peaks seem randomly located on the $[-1,+1]$ opinion axis. Our results also differ from those of the extremism model of \cite{daw}, which predicts clusters located close to the anti-conformist initial positions, i.e., -1 or +1, for large values of $u$ (in accordance with their hypothesis of quasi-fixed anti-conformist positions at -1 or +1). By contrast, for large values of $u$ in our present model, the position of anti-conformists, initially scattered over the entire $[-1,+1]$ segment, results largely from the earlier iteration steps and can undergo large fluctuations. The transition between single-cluster dynamics at larger $u$ and two-cluster dynamics at lower $u$ is smooth; it is a crossover rather than a sharp transition and occurs around $u=0.8$. At $u=0.8$ the histogram (fig. 14) displays the co-occurrence of bins with integer values, corresponding to single clusters, and of bins with non-integer values clustered in two wide peaks around $\pm 0.66$. \begin{figure} \centerline{\epsfxsize=120mm\epsfbox{tol0.9d2r0.05f20.eps}} \caption{Histogram of conformist agents' asymptotic opinions for a 0.9 uncertainty level, based on 300 simulations with 30000 iterations per individual. The vertical bars are the number of opinions divided by the number of conformist agents.
Except for a small region in the neighbourhood of opinion 0.0, the heights of the bins are integer values (1, 2 or 3), indicating that for each simulation all opinions are concentrated in one single cluster.} \end{figure} \begin{figure} \centerline{\epsfxsize=120mm\epsfbox{tol0.8d2r0.05f20.eps}} \caption{Histogram of conformist agents' asymptotic opinions for a 0.8 uncertainty level, based on 300 simulations with 3000 iterations per individual, corresponding to the transition. The bins are a mixture of integer values 1 and 2, plus two wide peaks around $\pm 0.66$.} \end{figure} \FloatBarrier \paragraph{Influence of simulation parameters} In order to study the influence of the different parameters $r, f, u, \mu$ and $\delta$ on the outcome of opinion dynamics, one has to compress the information in the histograms by monitoring some of their characteristics. We have chosen to monitor the characteristics of the positive ``extreme'' peak, the rightmost peak\footnote{Isolating the rightmost peak for these measurements was done by checking the histograms for a gap left of the peak and taking measurements on the remaining bins right of the gap; for figure 10, e.g., the empty bin at 0.65 opinion can be used to start collecting the statistics.}, observed on figures 9, 10, 11. We monitored the fraction of opinions in the peak (i.e., how many conformists were attracted by the rightmost anti-conformists), their average deviation (how far from the initial average 0 they were attracted), the standard deviation of the distribution of opinions in the peak\footnote{These three quantities correspond to standard measurements of peak characteristics in spectra: the area under the peak (fraction), the peak position with respect to the origin (average deviation), and the peak width (twice the standard deviation). For figure 10, e.g., the fraction of opinions in the rightmost peak is 33 percent, the average position is 0.91 and the standard deviation 0.10.}, and finally the product of the average deviation times the fraction\footnote{In the next five figures, the fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green.}. This last quantity measures some kind of ``attractiveness''; for figure 10, e.g., it amounts to $0.33 \times 0.91 \simeq 0.30$. We shall see that this attractiveness often (but not always, as further discussed) displays relatively little variation, corresponding to a balance between how many conformists are attracted by anti-conformists and how far they are attracted. The present systematic investigation of the role of simulation parameters is limited to the lower values\footnote{Larger values of $u$ were investigated above, in the section on the consensus regime.} of $u$, in the multiple clusters regime. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{plot_tol.eps}} \caption{Variations of the rightmost peak characteristics with uncertainty $u$. The fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green. When uncertainty increases from $u=0.3$ to $u=0.7$, the fraction of attracted conformists (red crosses) increases up to values (0.47) corresponding to a near depletion of the central peak(s), where only 6 percent of the population remains; most conformists became extremists. The average opinion (green crosses) of the extreme peak decreases to 0.8 and its standard deviation increases.
} \end{figure} Applying this method to the {\bf influence of uncertainty $u$}, one obtains the graph of fig. 15, some points of which ($u=0.3, 0.4, 0.6$) can be checked against the histograms of figs. 9, 10 and 11. The linear increase with $u$ of the fraction of agents attracted to the extreme peak is simply understood from the width of the conformists' zone under the influence of the anti-conformists: since anti-conformists move early to the upper boundary of the conformist cluster, they can influence at most a fraction $u$ of them. The simulated results give fractions of 0.227 for $u=0.3$, 0.33 for $u=0.4$ and 0.43 for $u=0.6$, not far from the upper bound $u$. But the attracted fraction saturates close to 0.5 at larger $u$ values, when the two extremist clusters are competing. Increasing the {\bf fraction $r$} of anti-conformists (fig. 16) and their relative {\bf frequency of intervention $f$} (fig. 17) increases the average deviation of the peak from 0. The ratio of biased moves of conformists towards anti-conformists to their converging moves, obtained from the tree of probabilities (figure 4), is $\frac{rf}{1-r}$; for instance, with $r=0.05$ and $f=20$, this ratio is $\frac{1}{0.95}\simeq 1.05$. It increases with both $r$ and $f$, and so does the average deviation of the peak. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{plot_frac.eps}} \caption{Variations of the rightmost peak characteristics with $r$, the fraction of anti-conformists. The fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green.} \end{figure} \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{plot_freq.eps}} \caption{Variations of the rightmost peak characteristics with $f$, the multiplicative factor of interactions with anti-conformists. The fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green.} \end{figure} \FloatBarrier {\bf $\mu$} is a priori a simple {\bf kinetic parameter} whose increase reduces the convergence time. For instance, a value of 0.5 was generally considered optimal in bounded confidence models, since it implies full agreement of a pair of agents on the middle position in one single iteration. But as observed on this plot (fig. 18), $\mu$ also influences the peak characteristics. When convergence is fast, the initial steps of the iteration process have an even stronger influence on the outcome of the dynamics; see for instance the dramatic increase of the standard deviation of the rightmost peak when $\mu = 0.4$. A technical conclusion is that, in order to avoid strong sampling variations, opinion dynamics models should be run with values of $\mu \leq 0.25$. This is anyway compatible with the fact that in real life several interactions are necessary to significantly change opinions. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{plot_mu.eps}} \caption{Variations of the rightmost peak characteristics with $\mu$, the kinetic factor. The fraction of opinions, the standard deviation and the attractiveness are given by the scale on the left, in red, and the average deviation by the scale on the right, in green.} \end{figure} \FloatBarrier The variations of the rightmost peak characteristics with {\bf $\delta$, the anti-conformism intensity}, are non-monotonic. When $\delta$ increases from 1.5 to 2.6, the average deviation increases, which is a direct consequence of equation (2).
But it reaches a maximum around $\delta=2.6$. The change of slopes of the average deviation, of the fraction of conformists in the peak and of the attractiveness curves, as well as the strong increase of the standard deviation, are evidence of a regime transition around $\delta=2.6$. \begin{figure} \centerline{\epsfxsize=120mm\epsfbox{plot_delta.eps}} \caption{Variations of the rightmost peak characteristics with $\delta$, the anti-conformism intensity. When $\delta$ starts increasing from $\delta=1$, the conformists are attracted to more extremism (green points). But when $\delta=2.6$ a regime change occurs, as observed on all four monitored quantities: the extremists' fraction (red crosses) and the attractiveness (pink squares) decrease, and the anti-conformists start losing followers. The strong increase of the standard deviation around $\delta=2.6$ is another clue of the regime transition.} \end{figure} \FloatBarrier To observe the transition region in detail (fig. 20), we came back to individual asymptotic histograms similar to figures 6 and 8. One notices that before the transition, at $\delta=2.4$, the leftmost and rightmost peaks of the conformist histograms (in red) occur at the same opinion values as the anti-conformist histogram peaks (in green). For higher values of $\delta$, the anti-conformist peaks are outside the leftmost and rightmost conformist peaks. In other words, the anti-conformists become unable to drag the conformists to their position. Since larger values of the conformists' uncertainty $u$ make them more susceptible to the influence of anti-conformists, the stalling transition value $\delta_s$ increases with $u$: we observed a transition at $\delta_s=2.5$ when $u=0.3$, at $\delta_s=2.8$ when $u=0.4$ and at $\delta_s=3.2$ when $u=0.6$. \begin{figure}[!h] \centerline{\epsfxsize=120mm\epsfbox{mul_hist.eps}} \caption{The stalling transition: asymptotic opinion histograms for $\delta= 2.4, 2.5, 2.6, 2.7$. The red (resp. green) histograms are for conformists (resp. anti-conformists). Extreme peaks coincide in position at $\delta=2.4$, but they start diverging at $\delta=2.5$.} \end{figure} \FloatBarrier To summarise on ``attractiveness'': it displays relatively small variations with $r$, $f$ and $\mu$, reflecting a rough balance\footnote{We can only conjecture about this balance: we have no explanation for it, nor for why changes in attractiveness or its derivatives go along with transitions in dynamical regimes.} between how many conformists are attracted toward extremism and how far. The same relative stability is observed with respect to $\delta$ until $\delta=2.5$. Attractiveness decays when $\delta>2.5$, and this is an indication of a change in dynamical regime: anti-conformists lose their strong influence on conformists. The increase in ``attractiveness'' with the conformists' uncertainty $u$ also reflects the series of transitions in the number of clusters with $u$. \section{Discussion and conclusions} Let us first summarise our results. \begin{itemize} \item Anti-conformism of a small fraction of the agent population can result in the emergence of large extremist clusters, provided that the anti-conformists express their views more often than conformists. \item This influence exists whatever the conformists' uncertainty, and it is larger when uncertainty increases. Two distinct dynamical regimes are observed according to the value of uncertainty. For lower values, anti-conformists drag sizeable fractions of conformist agents to their own extreme position.
For higher values of uncertainty, consensus is restored, but over a much wider range of positions, which can be centered far away from the center of gravity of the initial opinions. \item Obviously the anti-conformist influence increases with their number and the frequency of their interventions. By contrast, one observes a transition in the anti-conformist influence when anti-conformists position themselves too far away from the center; they then lose influence and are unable to drag large fractions of conformists. \item Early intervention of anti-conformists increases their influence, and the early steps of the dynamics are responsible for the large deviations in peak positions. \item The results concerning the number of peaks in the opinion distribution as a function of the uncertainty parameter, and their approximate positions, are robust. The exact position of a peak cannot be predicted accurately, due to the susceptibility of the probabilistic dynamics to initial samplings. \end{itemize} Let us now discuss how these conclusions might be modified by the other players in the political game: media, parties, and other institutions such as elections or government. In fact, media and political parties reinforce the influence of anti-conformists. Journals, newspapers and television compete for readership and audience. Journalists fight for impact, notoriety and reputation. In this market for information, or cognitive market as proposed by \cite{bron}, the motivations are the same as for the anti-conformists of \cite{smalep}. Impact is achieved by taking simple, extreme and fast positions. The tendency is increased by the use of the Internet, from which journalists often take their views. The fast communication procedures on social networks also favour the extremes, as observed in tweets and readers' reactions to articles in the press. To maximise audience, societal and political debates on television are dramatised: they oppose extreme views and seldom result in consensus. As a matter of fact, the media contribute largely to the high value of the relative frequency factor $f$ used in our simulations. In that respect, the growing role of the media, and especially of the Internet, will not automatically lead to a better understanding of challenges and options, but might on the contrary favour the expression of extremist views. The same mechanisms can be observed during the political debate inside parties before elections. Party members compete to obtain positions inside the party or to represent the party in future elections. They also want to make clear that they are faithful to their party by strongly opposing other parties' views. For them too, a simple ideological position is easier to express and defend than balancing between the contradictory constraints faced in the choice of a policy adapted to societal challenges. So both the media and political parties' internal discussions reinforce the influence of extremists. The dynamics might be different during elections and at the government level. On the occasion of national elections, for instance, parties have to adapt their program to the electorate and make alliances to win support. In principle they should move towards the center for this.
But when the electorate comprises strong extremist clusters, as often observed in our simulations, they may choose to position themselves clearly on one side of the political chessboard, especially under the influence of their members, who are biased with respect to the `rational' position of optimising support from the general population. The government itself has to navigate between general support and the support from inside the parties of the alliance which brought it to power. To conclude this discussion, the dynamical processes inside the media and the parties are in agreement with our hypothesis of a stronger expression of anti-conformist positions. This reinforces the conclusions of our model. On the other hand, other aspects of politics concerning general elections or government positions necessitate further analysis. What we tried to demonstrate is that evolution towards extremism does not automatically imply coercion, strategic plots or the control of the media by a single agent. Simple human cognitive processes, such as anti-conformism, cognitive biases and the uncertainty of agents, can favour its emergence and its influence on the constituency. The results of our simulations were interpreted in terms of politics, but they could also provide some insight into other social phenomena involving the dynamics of extreme choices: \begin{itemize} \item in markets for luxury goods: for instance, why do people buy fast cars or SUVs when they have little use for these products? How is the market driven by these extreme choices? \item in fashion and the arts, where anti-conformism is the rule driving the perpetual motion of expressed realisations; \item in the propagation of imaginary dangers related to new technologies in the media and the Internet (\cite{bron}). \end{itemize} {\bf Acknowledgements} We thank Joshua Epstein and Paul Smaldino for sending their preprint prior to publication, and the participants of the ``s\'eminaire inattendu'' for their comments. We thank the anonymous referees for their corrections and for raising interesting issues.
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection{Approach Overview} We propose a two-step approach to generate repair patches for transformations containing multiple type errors, as depicted in Fig.~\ref{fig:approach_overview}. The first step, the ``Exploration phase'', takes as input a faulty transformation and the source and target metamodels defining the domain type system, and produces candidate patches. This step has two goals: (1) exploring the space of possible patches with the objective of correcting as many type errors as possible, and (2) minimizing the deviation from the original behavior by combining two lightweight objectives that act as surrogates for testing. \begin{figure}[htb!] \centering \includegraphics[width=.9\linewidth]{images/approach_overview} \caption{Overview of the proposed two-step approach} \label{fig:approach_overview} \end{figure} As the first step is based on an evolutionary population-based algorithm, the exploration evaluates a large number of solutions. Consequently, we cannot afford to use resource-consuming objectives for behavior preservation. Instead, after a patch solution is produced by the first step, we refine it in a second step to increase the likelihood that type-error fixes do not alter the behavior. The ``Refinement phase'' exploits four heuristics that better determine the parameters of some change operations included in the candidate patches, or propose alternative change operations. In the remainder of this section, we describe both steps. \section{Multi-step derivation of patches} \label{sec:approach} \input{approach_overview.tex} \subsection{Exploration Phase} \label{sec:ep} To adapt NSGA-II to our problem, as for any evolutionary population-based algorithm, three key points must be defined: solution representation, solution derivation, and fitness evaluation. \paragraph{Solution representation.} As stated in the previous section, a patch can be seen as a sequence of edit operations that should be applied on the faulty transformation to correct it. We use the basic edit operations listed in Table~\ref{t:oplist}~\cite{cuadrado2018-quickfix} to build our candidate sequences. \begin{table}[h] \small \centering \caption{Set of basic edit operations on model transformations, taken from~\cite{cuadrado2018-quickfix}} \label{t:oplist} \begin{tabular}{ll} \textbf{Operator} & \textbf{Target} \\ \hline Creation & Binding \\ \hline Type & Type of source/target pattern element \\ modification& Type of variable or collection \\ & Type parameter (e.g., oclIsKindOf(Type)) \\ \hline Feature name & Navigation expression (binding RHS) \\ modification & Target of binding (binding LHS) \\ \hline Operation& Predefined operation call (e.g., size) \\ modification& Collection operation call (e.g., includes) \\ & Iterator call (e.g., exists, collect) \\ \hline \color{gray} Deletion & \color{gray} Rule, helper, binding ... \\ \end{tabular} \end{table} The operation \textit{binding creation} adds a new binding in a rule. \textit{Type of source/target pattern element} changes the type of the \texttt{from} or \texttt{to} part of a rule. \textit{Type of variable or collection} changes the type of a variable or collection, such as the type \texttt{UML!ActivityPartition} of the Sequence in line 4 of the listing. The operation \textit{Type parameter} changes the parameter \textit{Type} of a function such as \texttt{oclIsKindOf()}. \textit{Navigation expression} and \textit{Target of binding} replace, respectively, the RHS and LHS of a given binding.
\textit{Predefined operation call modification}, \textit{Collection operation call modification} and \textit{Iterator call modification} replace a function call by another one (e.g., \texttt{collect()} or \texttt{flatten()} in line 5). As we are dealing with type errors in transformations, it is important to pay special attention to the delete operators. Indeed, sequences with these operators may artificially resolve some errors by removing faulty fragments of statements, or statements or rules containing errors. Therefore, we ignore delete operators at this stage of our work. For the sake of consistency, in our evaluation in Section~\ref{sec:evaluation}, we do not consider errors that require delete operations. Fig.~\ref{fig:solrepresentation} presents an example of a sequence of two edit operations (i.e., a patch) which can be applied on Listing~\ref{lst:syntacticerrors} to fix some of the type errors identified in Section~\ref{sec:context}. \begin{figure*}[ht] \centering \includegraphics[width=.8\linewidth]{images/fig2-solution-vv2} \caption{Example of a sequence of two edit operations (patch) which can be applied on the ATL transformation program of Listing~\ref{lst:syntacticerrors}} \label{fig:solrepresentation} \end{figure*} In candidate sequences, each edit operation is identified by a name as defined in Table~\ref{t:oplist}. The two operations shown have four parameters: \textit{ruletoModify}, \textit{objectToModify}, \textit{oldValue}, and \textit{newValue}. For example, the edit operation \texttt{TargetOfBinding} changes the target of the binding (its LHS) from \textit{artifacts} to \textit{documentation} in the rule \texttt{activity2diagram} in line 10. The edit operation \texttt{TypeOfSourcePatternElement} replaces \textit{Comment} by \textit{Activity} in the input pattern of the rule \texttt{activitypartition2pool}. Applying this patch on Listing~\ref{lst:syntacticerrors} produces Listing~\ref{lst:semanticerrors}, in which two type errors of different categories have been simultaneously corrected: the \textit{incompatible type} error of line 10 is fixed, as the properties \textit{documentation} and \textit{name} are both of type \texttt{String}, and the \textit{invalid type} error of line 15 is fixed, as \texttt{Activity} is an existing element of the source metamodel. A solution is then defined by selecting a sequence of operations and assigning values to their parameters. The solution space thus spans all potential combinations of operations, their parameterizations and their order. \begin{lstlisting}[ breaklines=true, keepspaces=false, breakindent=0pt, basicstyle=\ttfamily\scriptsize, caption={Model transformation program of Listing~\ref{lst:syntacticerrors} after applying the patch of Fig.~\ref{fig:solrepresentation}},label={lst:semanticerrors}] 1 create OUT : Intalio from IN : UML; 2 ... 3 helper context UML!Activity def: allPartitions 4 :Sequence(UML!ActivityPartition) = 5 self.partition->collect(p | p.allPartitions)->flatten(); 6 7 rule activity2diagram { 8 from a : UML!Activity 9 to d : Intalio!BpmnDiagram ( 10 documentation <- a.name, 11 pools <- a.allPartitions 12 )} 13 14 rule activitypartition2pool { 15 from a : UML!Activity 16 to p : Intalio!Pool, 17 l : Intalio!Lane ( 18 activities <- a.node->reject( 19 e|e.oclIsKindOf(UML!ObjectNode)) 20 )} 21 ...
\end{lstlisting} \paragraph{Solution derivation.} Two kinds of operators derive new solutions from existing ones: \textit{crossover}, i.e., the recombination of the existing genetic material, and \textit{mutation}, i.e., the injection of new genetic material. A sequence of operations is a convenient representation for breeding through genetic operators. In our adaptation, we use the single-point crossover operator. This operator consists in cutting the operation sequences of two selected solutions into two parts and swapping the parts to the right of the cut point, to create two new solutions. The mutation operator introduces random changes into candidate solutions. In our adaptation, it selects one or more operations from the solution sequence and either replaces them by another type of edit operation or modifies their parameters. \paragraph{Fitness evaluation.} A good solution is a sequence of operations which, when applied on a transformation, (\textit{i}) \textit{fixes the type errors} and (\textit{ii}) \textit{preserves the transformation behavior}. The objective of fixing type errors can be directly evaluated by tools based on transformation language features, such as static fault analysis. This is our first objective: \textbf{(1) Fixing type errors} \textit{or, to minimize the number of transformation errors.} We use this objective to check the number of errors in the transformation rules after applying the sequence of change operations. To measure the number of errors, we use the AnATLyzer tool~\cite{cuadrado2018-anatlyser}, which finds a wide range of syntactic errors (including type errors) in ATL transformations using static analysis. Formally, the objective function for a solution $S$ is: $\min f_1(S)=|Errors(S)|$. The behavior-preservation objective poses a significant challenge and is difficult to capture with a single objective. In this paper, we explore the combination of two additional objectives that we believe favor behavior preservation: \textbf{(2) To favor solutions of small size} \textit{or, to minimize the number of change operations.} This objective represents the number of operations in (or the size of) a candidate patch sequence. We use this objective to reduce the deviation from the initial transformation, and hence the risk of changing the semantics. Additionally, we want to prevent the solutions from growing unnecessarily large and to escape the bloating effect~\cite{dejong2002-bloating}. Formally: $\min f_2(S)=|V(S)|$, where $V(S)$ is the solution's sequence of operations. \textbf{(3) To keep changes local} \textit{or, to minimize the alteration of the metamodels' footprint:} The footprint of the source or target metamodel defined by an (initial or candidate) transformation is estimated by the number of elements from both source and target metamodels that the candidate solution employs (resp. does not employ) whereas the original transformation does not (resp. employs)~\cite{burgueno2015-staticfaultlocalization}. Formally, the third objective can be expressed as follows: $\min f_3(S)=|SFP(O)-SFP(S)| + |TFP(O)-TFP(S)|$, where $SFP$ and $TFP$ are the footprints in, respectively, the source and target metamodels, extracted from the original transformation $O$ and the candidate transformation $S$. To extract the footprint set of a transformation for a metamodel, we use the footprint tool defined by Burgueño et al.~\cite{burgueno2015-staticfaultlocalization}. These objectives are conflicting in essence.
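To make these objective functions concrete, the sketch below shows how a candidate patch could be scored. It is only an illustration: \texttt{apply\_patch}, \texttt{count\_errors} and \texttt{footprint} are hypothetical stand-ins for applying the edit sequence, calling AnATLyzer, and calling the footprint tool, and the footprint alteration is computed here as a symmetric difference, following the verbal description above. \begin{lstlisting}[breaklines=true, basicstyle=\ttfamily\scriptsize, language=Python, caption={Sketch of the three objective functions (helper names are hypothetical)}]
def evaluate(patch, original):
    # f1: number of remaining type errors, as reported by the
    # static analyser (AnATLyzer in our setting)
    patched = apply_patch(original, patch)
    f1 = count_errors(patched)
    # f2: patch size, to limit deviation and bloating
    f2 = len(patch)
    # f3: footprint alteration, i.e., metamodel elements
    # employed by one transformation but not by the other,
    # summed over the source and target metamodels
    f3 = (len(footprint(original, 'source') ^ footprint(patched, 'source'))
        + len(footprint(original, 'target') ^ footprint(patched, 'target')))
    return (f1, f2, f3)  # NSGA-II minimises all three scores
\end{lstlisting} The conflict is visible in the sketch: shrinking $f_2$ by removing operations can leave errors unfixed and raise $f_1$, while fixing more errors may require touching more metamodel elements and raise $f_3$.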
This is why we solve the multi-objective patch derivation problem by adapting the evolutionary population-based algorithm NSGA-II~\cite{deb2000-nsga} described in Section~\ref{sec:NSGAII}. \subsection{Refinement Phase} The exploration phase produces a set of candidate patch solutions corresponding to the Pareto front of the last iteration of NSGA-II. These solutions may completely or partially remove the syntactic type errors detected by AnATLyzer, but may not semantically correct some of these errors. There are many reasons that could explain this phenomenon. For example, the choice of a parameter for a given change operation is made without checking its global consistency with the other change operations in the sequence. As another example, when many type-compatible possibilities exist for a given parameter, one is selected randomly, without a proper way to evaluate how likely each possibility is to semantically correct the error. One could make the decision process for operation and parameter choices more sophisticated in the exploration phase, but this would come at a high computational cost, considering the number of explored solutions. Thus, we decided instead to refine one or more solutions produced by the exploration phase. As the refinement concerns only a few solutions, we can afford a more resource-consuming decision process. After analyzing the change operations used, we identified four for which we can define heuristics to improve the decisions made during the exploration phase. In what follows, we present the improvement heuristics for these operations and illustrate their mechanisms on transformation program excerpts rather than on sequences of edit operations, as these are easier to comprehend. \textbf{(1) Target of binding.} This edit operation changes the LHS of a binding. It may thus produce several bindings having the same target property in a given rule, even though a property should not be initialized more than once. In these cases, the heuristic changes the LHS of as many bindings as necessary until no property is initialized several times, as illustrated in Fig.~\ref{fig:targetOfBinding}. First, the heuristic computes the edit distance between the target property and the different RHS properties: the binding with the minimum distance is ignored in the next steps, as it is considered the correct initialization (\ding{182}). Then, the heuristic retrieves the list of accessible target properties and computes the edit distance with each RHS of the remaining bindings (\ding{183}). Finally, it replaces the target properties with the closest properties of this list (\ding{184}). \begin{figure}[ht] \centering \includegraphics[width=.8\linewidth]{images/targetOfBinding.pdf} \caption{Heuristic for the operation TargetOfBinding} \label{fig:targetOfBinding} \end{figure} When the binding RHS is of type \texttt{String}, changing the LHS to any \texttt{String} property prevents typing errors. However, it is common to select an incorrect LHS with regard to semantics. Steps \ding{183} and \ding{184} can be applied in this particular case. Applying this heuristic to Listing~\ref{lst:semanticerrors} would change the binding \texttt{documentation <- a.name} (line 10) to \texttt{name <- a.name}, which is more coherent. \textbf{(2) Navigation expression.} This operation may change a binding's RHS to have a different type than the binding's LHS (see Fig.~\ref{fig:navigationExpression}), causing a \textit{type mismatch} error (\ding{182}).
When the property in the binding's LHS is of type \texttt{String}, the heuristic first retrieves the list of accessible properties of \texttt{String} type in the input model element (\ding{183}). Then, it selects from this list the property name having the smallest edit distance to the LHS property's name (\ding{184}) and replaces the RHS accordingly (\ding{185}). \begin{figure}[ht] \centering \includegraphics[width=.8\linewidth]{images/navigationExpression.pdf} \caption{Heuristic for the operation navigationExpression} \label{fig:navigationExpression} \end{figure} \textbf{(3) Type of source (target) pattern element.} This edit operation may introduce an improper type in the \texttt{from} part of a rule. We have seen previously that each binding involving references towards objects should be resolved (i.e., associated with the correct rule in the transformation). The correct rule has its \texttt{from} part corresponding to the type of the RHS of the binding and its \texttt{to} part corresponding to the type of the LHS of the binding. We can thus use the RHS of the binding that refers to that rule to infer the correct type for the \texttt{from} part. The third heuristic checks existing bindings to find the one whose LHS type is equivalent to the \texttt{to} part of a given rule (\ding{182}). Then, it verifies whether the type of the RHS of this binding corresponds to the \texttt{from} part of the rule, and changes the latter accordingly if it is not the case (\ding{183}). This heuristic can be applied when there exists only one rule having a given type in its \texttt{to} part. \begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{images/sourcePatternElement.pdf} \caption{Heuristic for the operation typeOfSourcePatternElement} \label{fig:sourcePatternElement} \end{figure} The same verification can be done the other way around for the \texttt{to} part of the rule. \textbf{(4) Type parameter (e.g., oclIsKindOf(Type)).} This edit operation changes the \texttt{Type} parameter defined in functions such as \texttt{oclIsKindOf} or \texttt{oclAsType}. Based on the OCL definition, the \texttt{Type} parameter of \texttt{oclIsKindOf}, for example, must inherit from the type defined before it (i.e., the inferred type). For instance, in Fig.~\ref{fig:argument-heuristic}, \texttt{UML!NamedObject} should inherit from the inferred type of \texttt{a.node}, i.e., \texttt{ActivityNode}. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{images/TypePArameter.pdf} \caption{Heuristic for operation TypeParameter (e.g., oclIsKindOf(Type))} \label{fig:argument-heuristic} \end{figure} The fourth heuristic first retrieves the inferred type (\ding{182}), then checks whether the \texttt{Type} parameter inherits from this type. If not (\ding{183}), the heuristic replaces the \texttt{Type} parameter by a subclass of the inferred type (\ding{184}). If \texttt{oclIsKindOf} is followed by a property, e.g., \texttt{(e|e.oclIsKindOf(UML!NamedObject)) -> select(e|e.Language)}, the heuristic chooses a \texttt{Type} which has access to that property (here \texttt{OpaqueAction}). \\ Listing~\ref{lst:semanticerrors} contains two semantic errors (on lines 10 and 15) that are fixed by heuristics 1 and 3. This produces the transformation in Listing~\ref{lst:noerror}, now free of type errors and behavior deviations.
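Heuristics 1 and 2 both rely on an edit distance between property names to pick the most plausible replacement. A minimal sketch of this matching step is given below (Python; the list of accessible properties is hypothetical), followed by the corrected transformation program. \begin{lstlisting}[breaklines=true, basicstyle=\ttfamily\scriptsize, language=Python, caption={Sketch of the edit-distance matching used by heuristics 1 and 2}]
def edit_distance(a, b):
    # classic Levenshtein distance, dynamic programming over two rows
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def closest_property(name, accessible):
    # pick the accessible property whose name is closest to 'name'
    return min(accessible, key=lambda p: edit_distance(name, p))

# e.g., repairing 'documentation <- a.name' in the running example:
print(closest_property('name', ['name', 'documentation', 'pools']))  # name
\end{lstlisting}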
\begin{lstlisting}[ breaklines=true, keepspaces=false, breakindent=0pt, basicstyle=\ttfamily\scriptsize, caption={Model transformation program of Listing~\ref{lst:semanticerrors} after applying the heuristics},label={lst:noerror}] 1 create OUT : Intalio from IN : UML; 2 ... 3 helper context UML!Activity def: allPartitions 4 :Sequence(UML!ActivityPartition) = 5 self.partition->collect(p | p.allPartitions)->flatten(); 6 7 rule activity2diagram { 8 from a : UML!Activity 9 to d : Intalio!BpmnDiagram ( 10 name <- a.name, 11 pools <- a.allPartitions 12 )} 13 14 rule activitypartition2pool { 15 from a : UML!ActivityPartition 16 to p : Intalio!Pool, 17 l : Intalio!Lane ( 18 activities <- a.node->reject( 19 e|e.oclIsKindOf(UML!ObjectNode)) 20 )} 21 ... \end{lstlisting} \section{Conclusion and future work}\label{sec:conclusion} In this paper, we explored the idea of fixing type errors in model transformation programs without relying on predefined patches. Considering that a patch is a sequence of basic edit operations, our approach explores the space of candidate sequences guided by two families of objectives: correction of type errors and behavior preservation. While the correction of type errors is relatively easy to measure using transformation language features, behavior preservation poses many challenges. To tackle these issues, we proposed a two-phase approach to find candidate sequences that limit behavior deviations. The first phase combines two objectives during the exploration to approximate behavior preservation: minimizing the size of the sequence and keeping the changes local. During the second phase, we apply four heuristics on the obtained patches to improve the decisions made during the exploration phase. An evaluation of our idea showed that the first phase corrected a majority of type errors for two transformation problems, \textit{Class2Table} and \textit{PNML2PN}, most of the time without altering the behavior. We also showed that refining the patches obtained after the exploration using the four proposed heuristics significantly improved the quality of the patches in terms of behavior preservation for the three transformation problems, including \textit{UML2BPMN}. As future work, we plan to further investigate alternative objectives for behavior preservation to achieve correct and complete patches. We also envision injecting some heuristics when selecting edit operations (initial population generation and mutations) to decrease the probability of altering the behavior. Finally, we aim at generalizing our approach to repair semantic errors. \section{Background}\label{sec:problem} In this section, we start by giving some background information about ATL and type errors in ATL transformation programs. Then, we present the challenges of repairing those transformations. Finally, we present NSGA-II~\cite{deb2000-nsga}, the evolutionary population-based algorithm we use in our approach. \subsection{Type Errors in ATL Transformations}\label{sec:context} Listing~\ref{lst:syntacticerrors} presents an excerpt of an ATL transformation program from UML activity diagrams to Intalio business process models\footnote{\url{http://www.intalio.com/products/bpms}}, borrowed from~\cite{cuadrado2018-quickfix}. The two metamodels are shown in Fig.~\ref{fig:Motivatingexample}. ATL transformation programs consist of a source metamodel (\texttt{IN}), a target metamodel (\texttt{OUT}), and a set of transformation \texttt{rule}s.
Each rule is named and describes a pattern in the source metamodel (the \texttt{from} part, also called the input pattern) and a pattern in the target metamodel (the \texttt{to} part, also called the output pattern). An ATL transformation program uses an execution mechanism that triggers a rule when an object in the input model matches the input pattern of the rule. When the rule is executed, an object is created in the output model according to the output pattern of the rule. For example, the rule \texttt{activity2diagram} (lines 7-12) states that each object instance of the \texttt{Activity} class of UML (line 8) triggers the creation of an object instance of the \texttt{BpmnDiagram} class of Intalio (line 9). \begin{lstlisting}[ breaklines=true, keepspaces=false, breakindent=0pt, basicstyle=\ttfamily\scriptsize, caption={Excerpt of an ATL transformation program, from UML Activity Diagram to Intalio BPMN},label={lst:syntacticerrors}] 1 create OUT : Intalio from IN : UML; 2 ... 3 helper context UML!Activity def: allPartitions 4 :Sequence(UML!ActivityPartition) = 5 self.partition->collect(p | p.allPartitions)->flatten(); 6 7 rule activity2diagram { 8 from a : UML!Activity 9 to d : Intalio!BpmnDiagram ( 10 artifacts <- a.name, 11 pools <- a.allPartitions 12 )} 13 14 rule activitypartition2pool { 15 from a : UML!Comment 16 to p : Intalio!Pool, 17 l : Intalio!Lane ( 18 activities <- a.node->reject( 19 e|e.oclIsKindOf(UML!ObjectNode)) 20 )} 21... \end{lstlisting} \begin{figure*}[ht!] \centering \captionsetup{skip=1pt} \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=0.99\linewidth]{images/umlmetamodel} \caption{UML AD metamodel} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=0.99\linewidth]{images/bpmnmetamodel} \caption{Intalio BPMN metamodel} \end{subfigure} \caption{Excerpts of the UML activity diagram (AD) metamodel and the Intalio business process model (BPMN) metamodel} \label{fig:Motivatingexample} \end{figure*} An input object may trigger the creation of several output objects. For instance, the rule \texttt{activitypartition2pool} (lines 14-20) states that each object instance of the \texttt{Comment} class of UML (line 15) triggers the creation of two objects in the output model: one instance of the \texttt{Pool} class of Intalio (line 16) and one instance of the \texttt{Lane} class of Intalio (line 17). Input and output objects are related by a trace link: it is possible to access properties of the input object and to set those of the output object. For instance, the rule \texttt{activity2diagram} initializes the properties \textit{artifacts} and \textit{pools} of \texttt{BpmnDiagram} depending on properties it accesses in \texttt{Activity} (lines 10-11). Property initialization, called \textit{binding} in ATL, may use a property of the input object (line 10), a \texttt{helper} (similar to methods, as defined in lines 3-5) to reshape the input object property (line 11), or OCL constraints (lines 18-19). Properties can be attributes with native types, or references towards objects. When a binding's right-hand side (RHS) is a reference to an object of the input model, we cannot assign this object directly to the output object's property on the left-hand side (LHS). In fact, the object of the input model first needs to be transformed into elements of the output model.
In this case, a binding resolution mechanism is used to retrieve a rule which can perform this transformation, i.e., with a \texttt{from} part corresponding to the type of the input model object (the binding's RHS), and a \texttt{to} part corresponding to the type of the output model property (the binding's LHS). Models are primary artifacts that are exploited through model transformations~\cite{sendall2003-MTHeartAndSoul}. Transformations use a type system mostly defined by the source and target metamodels, \emph{i.e.}, the input and output pattern elements in transformations have to refer to existing elements in the involved metamodels~\cite{cuadrado2017-staticanalysis}. Consequently, a type error can be introduced in a transformation program by accident during development (developer or domain expert error) by wrongly using the metamodel types. It can also result from changes in the metamodels it uses, but this case is out of the scope of this paper. Resolving type errors in ATL is thus difficult because of the declarative nature of the transformation language and the dependencies on the involved metamodels. In the following, we illustrate type errors using the transformation program excerpt of Listing~\ref{lst:syntacticerrors}. A common type error concerns properties' types in bindings, such as in line 10. In the Intalio metamodel, the property \textit{artifacts} refers to objects of type \texttt{Artifact}. However, in the RHS of this binding, the input object property \textit{name} is of type \texttt{String}, causing an \textit{incompatible type} error. Invalid types are also frequent errors. As mentioned earlier, each rule is triggered by an input object that is compatible with the \texttt{from} part of the rule. The rule \texttt{activitypartition2pool} is thus triggered by objects conforming to \texttt{Comment} in the UML metamodel (line 15). If we look closely at the UML metamodel of Fig.~\ref{fig:Motivatingexample}, there is no \texttt{Comment} element: this raises the error \textit{invalid type}. Another common error concerns the binding resolution. Let us consider the binding of line 11. The RHS of the binding calls a helper returning objects of type \texttt{ActivityPartition} from the UML metamodel. The LHS of the binding is the property \textit{pools}, with type \texttt{Pool} from the Intalio metamodel. To resolve this binding, there must be a rule in the transformation program having \texttt{UML!ActivityPartition} as input pattern and \texttt{Intalio!Pool} as output pattern, which is not the case in our excerpt: this raises a \textit{possible unresolved binding} error~\cite{cuadrado2018-quickfix}. \section{Automatix - Preliminary Tool and Evaluation}\label{sec:evaluation} We implemented our approach in a tool, called Automatix, and performed an empirical evaluation\footnote{For the review process, the experimental data can be downloaded using the link \url{https://bitbucket.org/zahravaraminy/ecmfa2021/src/master}}. The rest of this section describes the investigated research questions, details the evaluation procedure used, presents the results, and discusses the threats to the validity of our evaluation. \subsection{Research Questions} As we explore many solutions during our evolutionary algorithm, it is legitimate to question whether the results are due to our search strategy or to the number of candidate solutions explored during the search.
Thus, we start by performing a sanity check to compare the number of type errors fixed by patches obtained with our approach during the exploration phase and by patches obtained with a random search. Then, we assess whether the patches obtained after the exploration phase preserve the behavior of the transformations. Note that we do not evaluate the behavior of the output models that can be generated with a corrected transformation, but the behavior of the transformation program itself. Finally, we perform the same evaluation for patches obtained after the refinement phase. In summary, we formulate the following research questions: \textbf{RQ0}: Are our results attributable to an efficient exploration of the search space, or are they due to the large number of candidate solutions we explore during the evolution? \textbf{RQ1}: Is the exploration phase able to correct type errors in transformations while preserving their behavior? \textbf{RQ2}: Is the refinement phase (combined with the exploration phase) able to correct type errors in transformations while preserving their behavior? \subsection{Evaluation Setup} To assess our approach's performance, we followed a rigorous protocol depicted in Fig.~\ref{fig:evalbp}. \begin{figure*}[htb!] \centering \includegraphics[width=.7\linewidth]{images/bigpictureN2} \caption{Evaluation procedure overview} \label{fig:evalbp} \end{figure*} Steps 1 and 2 correspond to the creation process of faulty transformations from existing correct model transformations. Then, step 3 represents the patch generation with Automatix, and step 4 the application of these patches on the faulty transformations to correct them. Finally, step 5 is a manual comparison of the repaired model transformations against the original correct model transformations, which will help answer RQ1 and RQ2. The sanity check (RQ0) uses the output of step 2 to perform a random search, and compares the results with the output of step 4. We applied our evaluation on three existing third-party transformations, \textit{Class2Table}, \textit{PNML2PN} and \textit{UML2BPMN}, from the ATL Zoo\footnote{\url{http://www.eclipse.org/atl/atlTranformations}}. \textit{Class2Table} takes as source a class diagram and outputs a relational database schema. \textit{PNML2PN} produces a Petri net from an XML Petri net representation in the PNML format. Finally, \textit{UML2BPMN} transforms a UML Activity Diagram into a business process model (Intalio BPMN)~\cite{schumbacher2013-grafcet}. To limit bias in our evaluation, we used a list of existing faulty transformation mutants provided by the QuickFix project~\cite{cuadrado2018-quickfix} (Fig.~\ref{fig:evalbp}, step 1). Each mutant $MT_{1}$ corresponds to the original correct transformation $MT_{o}$ in which one error of a given class was injected, among the type error categories in ATL transformations~\cite{cuadrado2018-quickfix}. To create transformations with multiple errors, we selected randomly, for each of the three transformation problems, 6 sets of respectively 3 to 8 mutants coming from distinct error categories. Then, we merged the mutants in each set to form 6 faulty transformations $MT_{3-8}$ with various numbers of type errors (Fig.~\ref{fig:evalbp}, step 2). Note that we performed the merge sequentially; hence, the number of errors in the resulting transformations can be lower or higher than the number of merged mutants, as some errors may overlap or create new errors as side effects.
This allowed us to consider faulty transformations with different numbers of errors, from all error categories except those requiring delete operations to be fixed, as explained in Section~\ref{sec:ep}. Additionally, as we are using a probabilistic approach, we ran our approach 5 times for each faulty transformation. Thus, we obtained for each transformation problem 30 different runs (5 runs $\times$ 6 faulty transformations). For each faulty transformation, we created an initial population of 50 solutions, \emph{i.e.,} sequences of edit operations, generated randomly. The edit operations are generated for rules with flagged type errors. We completed the initial population with 50 additional solutions obtained by crossover and mutation. We limited the number of generations to 500, which means that, for each run, our algorithm explores 50,000 possible solutions (500 $\times$ 100). For the other parameters, the crossover and mutation rates are respectively set to 0.8 and 0.2, values that usually perform well~\cite{haupt2004practical}. In this evaluation, Automatix takes as input a faulty transformation and produces candidate patches in the form of sequences of change operations in two phases: one of exploration, and another of refinement (Fig.~\ref{fig:evalbp}, step 3). To perform the evaluation, we define the following independent variables w.r.t. the faulty transformations: \begin{itemize} \item \textbf{\#MUT} -- Number of mutations applied on the original transformation to derive the faulty one. \item \textbf{\#ERR$_{in}$} -- Number of type errors found on the transformation after \#MUT mutations were applied. \end{itemize} We then define dependent variables w.r.t. the obtained candidate patches: \begin{itemize} \item \textbf{\#ERR$_{out}$} -- Number of type errors found on the transformation after a recommended patch has been applied. \item \textbf{\#OPE} -- Size of a recommended patch, in number of change operations. \item \textbf{\#ITE} -- Number of algorithm iterations before a recommended patch is found, \textit{i.e.,} $\#ERR_{out} = 0$. \item \textbf{SEM} -- Rate of errors corrected while preserving the behavior, after the exploration phase (\textbf{EP}) and after the refinement phase (\textbf{RP}). \end{itemize} We used AnATLyzer to detect the type errors in the input and output transformation programs. This tool also allows us to identify which type errors of the input faulty transformation have been corrected. Although the exploration phase may produce more than one solution in the Pareto set, we decided to select only one for the refinement and for the comparison with the random search. To this end, we first select the solution that fixes the highest number of type errors according to AnATLyzer. In the case of a tie, we choose the one with the shortest change-operation sequence. These two criteria were enough to reduce the possibilities to only one solution for all the runs. \subsection{Evaluation Results} Table~\ref{rq1:table} summarizes the results of the different runs of our approach on the mutant configurations described in the setup, for respectively the \textit{Class2Table}, \textit{PNML2PN} and \textit{UML2BPMN} transformations (col. 1). Except for \textbf{\#MUT}, the values indicate the average over the 5 runs on each faulty transformation. For \textbf{\#ERR$_{out}$}, we give the min, average and max values for the 5 runs. Results are ranked by the number of mutations (\#MUT). On average, the majority of errors introduced by the mutants (\#ERR$_{in}$, col.
3) were successfully corrected, according to AnATLyzer (\#ERR$_{out}$, cols.~6--8, giving the min, average and max numbers of errors left after applying the patches found by Automatix). We manually checked that the remaining errors are those initially introduced and not new ones created by the patches. For all the cases, we obtained at least one solution without any error left ($min = 0$). Additionally, we did not observe a significant correlation between the number of input errors/mutants and the number of generations needed to find a solution (\#ITE, col.~4). \begin{table*}[h!] \centering \caption{Results of Automatix. Values are averages, unless stated otherwise.} \begin{tabular}{@{}c|ll|ll|ccc|l|l@{}} & {\#MUT} & {\#ERR$_{in}$} & {\#ITE} & {\#OPE} & \multicolumn{3}{c|}{\#ERR$_{out}$} & \multicolumn{2}{c}{SEM}\\ & & & & & min. & avg. & max. & EP & RP \\ \hline & 3 & 3.4 & 134 & 3 & 0 & 0 & 0 & 68\% & 68\% \\ & 4 & 6 & 44.8 & 4 & 0 & 0 & 0 & 76\% & \textbf{82\%} \\ & 5 & 7.8 & 56.8 & 5 & 0 & 0 & 0 & 91\% & \textbf{93\%} \\ & 6 & 8.4 & 226.8 & 6 & 0 & 0 & 0 & 91\% & 91\% \\ & 7 & 9.6 & 261.8 & 5.8 & 0 & 0.6 & 1 & 71\% & \textbf{75\%} \\ {\multirow{-7}{*}{\rotatebox{90}{Class2Table }}} & 8 & 10.6 & 276.5 & 7 & 0 & 0.2 & 1 & 87\% & 87\% \\ \midrule & 3 & 5.8 & 86.4 & 3.2 & 0 & 0 & 0 & 78\% & \textbf{89\%} \\ & 4 & 7 & 137.2 & 4.2 & 0 & 0.4 & 2 & 72\% & \textbf{79\%} \\ & 5 & 8.4 & 188 & 5.2 & 0 & 0 & 0 & 80\% & \textbf{86\%} \\ & 6 & 9.2 & 179.2 & 5.8 & 0 & 0.2 & 1 & 76\% & \textbf{87\%} \\ & 7 & 8.4 & 78.6 & 6 & 0 & 0.8 & 1 & 69\% & \textbf{76\%} \\ {\multirow{-7}{*}{\rotatebox{90}{PNML2PN }}} & 8 & 9.8 & 244.6 & 7.4 & 0 & 0.4 & 1 & 67\% & \textbf{78\%} \\ \midrule & 3 & 3.2 & 162.4 & 3 & 0 & 0 & 0 & 45\% & \textbf{83\%} \\ & 4 & 4.2 & 35 & 3.4 & 0 & 0.6 & 1 & 34\% & \textbf{41\%} \\ & 5 & 6.8 & 116 & 5.2 & 0 & 0.2 & 1 & 44\% & \textbf{72\%} \\ & 6 & 6.8 & 19.2 & 5.4 & 0 & 0.8 & 1 & 40\% & \textbf{53\%} \\ & 7 & 7.8 & 229.6 & 5.6 & 0 & 0.8 & 2 & 25\% & \textbf{52\%} \\ {\multirow{-7}{*}{\rotatebox{90}{UML2BPMN }}} & 8 & 8 & 72 & 5.8 & 0 & 1.2 & 2 & 26\% & \textbf{40\%} \\ \end{tabular} \label{rq1:table} \end{table*} \subsubsection{RQ0 - Sanity Check} To perform the sanity check, we limited ourselves to a sample of runs. We considered faulty transformations with 2, 4, 6 and 8 mutants for the \textit{Class2Table} problem. We compared the results obtained by Automatix to those of a random search for the considered transformations. Since Automatix explores 50,000 solutions for each run, the random exploration also picks, for each run, the best individual from 50,000 solutions generated randomly in the same way as the initial population of Automatix. As for Automatix, the random exploration was also performed 5 times for each faulty transformation.
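The statistical comparison reported in Table~\ref{rq0} (a Mann-Whitney test and Cohen's $d$ effect size over the per-run \#ERR$_{out}$ values) can be computed as in the following minimal sketch; the per-run values below are placeholders, not our measured data.
\begin{verbatim}
# Sketch of the RQ0 statistical comparison; the per-run #ERR_out
# arrays are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

automatix = np.array([0.0, 0.0, 0.0, 0.0, 0.0])  # #ERR_out per run
random_s = np.array([3.0, 2.0, 3.0, 3.0, 3.0])   # #ERR_out per run

u_stat, p_value = mannwhitneyu(automatix, random_s,
                               alternative="two-sided")

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((automatix.var(ddof=1) + random_s.var(ddof=1)) / 2)
cohens_d = abs(automatix.mean() - random_s.mean()) / pooled_sd

print(f"p-value = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
\end{verbatim}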
\begin{table}[h] \centering \caption{Automatix vs random results for \textit{Class2Table}.} \label{rq0} \begin{tabular}{l|c|c|c|c|} \cline{2-5} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Average \#ERR$_{out}$ \\ Value \end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Mann-Whitney\\ p-value\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Effect Size\\ Cohen's d\end{tabular}} \\ \cline{1-3} \multicolumn{1}{|r|}{\rotatebox{90}{\#MUT~} } & \emph{Automatix} & \emph{RDN} & & \\ \hline \multicolumn{1}{|r|}{2 } & 0.0 & 0.2 & 0.374 & - \\ \multicolumn{1}{|r|}{4 } & 0.0 & 2.8 & \textless 0.001 & 10.6 \\ \multicolumn{1}{|r|}{6 } & 0.4 & 5.8 & \textless 0.001 & 8.6 \\ \multicolumn{1}{|r|}{8 } & 3.0 & 6.4 & \textless 0.001 & 3.24 \\ \hline \end{tabular} \end{table} As shown in Table~\ref{rq0}, the solutions obtained with Automatix correct, on average, clearly more errors than the random ones. Except for the transformations with two mutants (first line), the difference between the two strategies is statistically significant (Mann-Whitney test with a \textit{p-value} less than 0.001), with a high effect size (Cohen's d greater than 3)\footnote{According to Sawilowsky~\cite{sawilowsky2009new}, an effect size greater than 2 is considered huge.}. \subsubsection{RQ1 - Error Correction after Exploration Phase (EP)} We consider that an error is actually corrected in a transformation program when the change brought by the patch matches the corresponding code fragment in the original correct transformation $MT_{o}$. To assess this (Fig.~\ref{fig:evalbp}, step 5), we followed a two-step process. We started by applying an automated text diff between the original transformation $MT_{o}$ (the ground truth) and the transformation $MT_r$ fixed by a patch obtained after the exploration phase (this diff step is sketched at the end of this subsection). Then, we manually checked the discrepancies flagged by the diff to determine the number of errors that were corrected without altering the behavior of the transformation (call them semantically fixed) and reported the rate of these errors with respect to \#ERR$_{in}$ in the SEM(EP) column. In this way, we can be sure that the automatically obtained patches actually correct type errors rather than merely preventing AnATLyzer from detecting them. As shown in column SEM(EP) of Table~\ref{rq1:table}, the actual correction rates were good for two transformation problems. Indeed, we succeeded in correcting, on average, between 68\% and 91\% of the errors for \textit{Class2Table}, and between 67\% and 80\% for \textit{PNML2PN}. As expected, some classes of type errors are difficult to correct syntactically while keeping correct semantics, such as \textit{Invalid type} and \textit{Compulsory feature not found}. These classes of errors require substituting or adding one of the many features with compatible types present in the metamodels. This obviously increases the risk of choosing a wrong feature. For the third transformation problem, \textit{UML2BPMN}, the results were weaker, with an average actual correction rate between 25\% and 45\%, although some executions reached higher scores. When analyzing the semantic discrepancies, we noticed that, in addition to being complex, the involved metamodels make intensive use of inheritance. Fixing errors like \textit{Invalid type} and \textit{Compulsory feature not found} with correct solutions is thus even more difficult in this case.
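To make the automated diff step concrete, the following minimal sketch uses Python's standard \texttt{difflib} module; the file names are illustrative only.
\begin{verbatim}
# Sketch of the automated first step of the semantic check: a textual
# diff between the original transformation MT_o (ground truth) and a
# repaired transformation MT_r; flagged hunks are then reviewed by hand.
import difflib

with open("Class2Table_original.atl") as f_o, \
     open("Class2Table_repaired.atl") as f_r:
    diff = difflib.unified_diff(f_o.readlines(), f_r.readlines(),
                                fromfile="MT_o", tofile="MT_r")

for line in diff:
    print(line, end="")  # discrepancies to inspect manually
\end{verbatim}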
\subsubsection{RQ2 - Error Correction after Refinement Phase (RP)} To answer RQ2, we performed the same semantic discrepancy analysis, but this time on patches obtained after executing the refinement phase on the candidate patches generated by the exploration phase. As shown in column SEM(RP) of Table~\ref{rq1:table}, the results indicate an improvement of the correction rates in all three transformation problems. By comparing the correction rates between the exploration phase SEM(EP) and the refinement phase SEM(RP), we can see that the heuristics improve the correction rates while preserving the behavior of the transformations (increased rates are shown in boldface in the table). For \textit{Class2Table}, the correction rate increased on average from 80.7\% to 82.7\%. Of the 6 faulty transformations, 3 exhibited a higher rate (the transformations with 4, 5, and 7 mutants). The improvement was more important for \textit{PNML2PN}, for which the average correction rate jumped from 73.7\% to 82.5\%. In this case, all the faulty transformations saw their correction rate improve. Finally, for \textit{UML2BPMN}, we observed sizeable improvements of the correction rates, from 35.7\% on average to 56.8\%. Here again, the improvement concerned all the faulty transformations, reaching a rate of 83\% for the transformation with 3 mutants. In the rates shown, we do not include errors that were partially corrected thanks to the heuristics. For example, we observed that, for some bindings, the RHS was actually corrected but not the LHS. This means that the impact of the refinement phase can be much higher than the one indicated by the correction rates. In conclusion, we showed that the proposed approach is able to correct multiple type errors at the same time. The evaluation reveals that the two behavior-oriented objectives of the first phase circumscribe the risk of behavior alteration. It also shows that the combination of the exploration and refinement phases makes it possible to generate patches that correct most of the type errors while preserving the behavior. These results are evidence that deepening the analysis of edit operations and the possible behavior deviations they may introduce helps guide the search through new objectives or refinement heuristics. \subsection{Threats to Validity} There are some threats that may call into question the validity of our evaluation results. First, the faulty transformations used in the evaluation were derived from mutants and do not contain errors actually introduced by developers. We used this external data set because it is independent from our project and was used to evaluate the state-of-the-art work. Moreover, it covers a large spectrum of error types. Finally, the random combination of basic mutants we used can be representative of the randomness with which errors are introduced by developers. Another limitation of our work at this stage of the project is that we do not consider some of the error types (mutations). In addition to the errors that require delete operations, mentioned earlier in the paper, we do not handle errors on ATL transformation helpers. We expect to extend our work in the near future to also consider these two families of errors. Our approach does not produce a single solution, but rather a Pareto set of solutions. For the sake of automated evaluation, we selected from the Pareto set the solution with the minimum number of errors left. In the case of a tie, we chose the smallest solution (\textit{i.e.,} with minimum \#OPE), as sketched below. We did the same for RQ0 with the random exploration.
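A minimal sketch of this selection rule, where each candidate patch is represented as a pair of its remaining error count and its list of edit operations (the data are placeholders):
\begin{verbatim}
# Sketch of the solution selection: fewest remaining type errors,
# ties broken by the smallest patch size (#OPE). Placeholder data.
pareto_set = [
    (1, ["op1", "op2", "op3"]),
    (0, ["op1", "op2", "op3", "op4"]),
    (0, ["op1", "op2"]),
]

best = min(pareto_set, key=lambda s: (s[0], len(s[1])))
print(best)  # -> (0, ['op1', 'op2'])
\end{verbatim}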
In a real setting, other solutions, discarded here because of their larger size, could also be presented to the user as alternatives. This can be done by using a diversity strategy to propose a representative sample of solutions~\cite{Batot17Heuristic}. We performed a manual inspection of the solutions to evaluate the semantic discrepancy between the fixed transformations and the original ones. In future evaluations, we plan to use a test suite of pairs of input-output models to check to what extent the solutions proposed by our approach handle the test cases correctly (i.e., to assess behavior preservation). Of course, this is possible only for solutions with no type errors. \section{Introduction} Model-Driven Engineering (MDE) is increasingly used for product development in industries like automotive, telecom or banking~\cite{Whittle2014State}. In those industries, the primary interest in modeling recently shifted from producing complex models -- mainly for documenting software systems -- to using these models to (semi-)automatically generate software artifacts by means of model transformations~\cite{combemale2016engineering}. Model transformations usually take as input models expressed in a modeling language (i.e., metamodel), which can be general-purpose (e.g., UML) or domain-specific (e.g., AUTOSAR\footnote{\url{http://www.autosar.org}} for automotive systems). The outputs of model transformations can be either models (possibly conforming to different metamodels), or texts such as source code or XML documents. In this paper, we focus on the former, i.e., model-to-model transformations. Model transformation programs can be written in general programming languages or transformation-dedicated languages such as ATL~\cite{jouault2008atl}. These programs usually describe transformation rules that indicate how to transform elements of the input models into elements of the output models. Whether they are learned automatically from examples, like in~\cite{baki2016multi}, or written manually, these transformations must be checked to ensure they are free of errors. Transformation languages such as ATL are dynamically typed, making transformations expressed in these languages particularly prone to type errors, such as referring to elements that do not exist in the metamodels, or initializing properties with values of the wrong types. A way to automatically correct type errors is to provide predefined patches for each category of errors~\cite{cuadrado2018-quickfix}. Although this approach may be useful for developers, it suffers from two limitations. Firstly, predefined patches require extensive knowledge to modify them or to define new ones (e.g., for new categories of errors). Secondly, they fix errors individually, without taking into account possible interactions between them~\cite{cuadrado2018-quickfix}, and may thus introduce new errors while trying to fix existing ones. Another way to tackle the correction of type errors in transformations is to use automatic program repair techniques such as search-based algorithms. These techniques have been proven to efficiently support developers in debugging and correction tasks~\cite{monperrus2018-progrepairbiblio}. Contrary to predefined patches, they make it possible to explore a space of potential patches, and may help overcome the aforementioned limitations. These techniques rely closely on oracles checking whether the program behavior is correct after applying a patch, test suites being the most popular oracles~\cite{monperrus2018-progrepairbiblio}.
However, a substantial amount of knowledge is required to provide representative test suites that would constitute a relevant oracle~\cite{staats2011-testingfoundationsrevisited}, especially for transformations that use complex structures as input/output. Moreover, in the specific context of type errors, valuable patch solutions may fix most of the errors but not all of them, and the resulting transformation cannot be executed -- and thus tested -- since type errors are syntactic errors. Relying on test suites to guarantee that type-error patches preserve a transformation's behavior is thus hardly possible. In this paper, we define a method for recommending patches that fix type errors in model transformation programs without relying on predefined patches or test suites. This method does not seek fully-automatic correction, but rather aims to alleviate developers' tasks by avoiding patch maintenance and test-suite definition. Thus, its goal is to recommend to developers patches correcting as many errors as possible while preserving the behavior of a given faulty transformation. In a first phase, we propose to explore the space of possible combinations of basic edit operations to find the sequences (i.e., patches) that repair several type errors simultaneously. To limit the transformation's behavior deviation, we explore the idea of using several objectives to guide the search, as surrogates for test oracles. We test two objectives we think are behavior-preserving: a) minimizing the changes introduced by the patches and b) preserving the transformation footprint with respect to the involved input/output languages. We analysed the behavior of faulty transformation programs corrected by this first phase and identified four types of recurring behavior deviations, along with the edit operations introducing them, which may be prevented by following simple guidelines. However, implementing these guidelines as objectives would be too resource-consuming and would make the method intractable. Thus, we define four heuristics to improve the decisions made during the exploration phase and apply them once, in a second phase, to the best patches obtained in the first phase, to further prevent possible behavior deviations. We evaluate these two phases using three existing ATL model-to-model transformations, with a published dataset containing several mutations of these transformations with various errors and error categories. The evaluation of the first phase showed contrasting results: while we succeeded in correctly fixing, on average, 80\% and 73\% of the type errors, respectively, while preserving a correct behavior for two of the transformations, the correction rate was lower (36\%) for the third one. However, after applying the heuristics during the second phase, the correction rates increased, on average, to 83\%, 82\% and 57\%, respectively. We made the following contributions: \begin{itemize} \item We adapt an evolutionary population-based algorithm to automatically generate patches which can fix several type errors at the same time in model transformation programs; \item We show that two objectives (namely, minimizing the changes and keeping the changes local) help to guide the patch generation toward preserving the behavior of a corrected model transformation program; \item We define four heuristics to refine the obtained patches and show that these heuristics further limit behavior deviation. \end{itemize} The remainder of this paper is organized as follows.
Section~\ref{sec:problem} gives the necessary background and discusses issues related to automatically fixing type errors in model transformations. Section~\ref{sec:approach} describes the two-step approach to fix type errors without predefined patches or test cases. An implementation and an evaluation of our approach are provided in Section~\ref{sec:evaluation}. Section~\ref{sec:rw} presents related work. We discuss our findings and conclude in Section~\ref{sec:conclusion}. \section*{About the authors} \shortbio{Zahra VaraminyBahnemiry}{is a PhD student at the department of Computer Science and Operations Research (GEODES Group) of the Université de Montréal (Canada). \editorcontactf[]{varaminz@iro.umontreal.ca}} \shortbio{Jessie Galasso}{is a post-doc at the department of Computer Science and Operations Research (GEODES Group) of the Université de Montréal (Canada). \editorcontactf[]{jessie.galasso-carbonnel@umontreal.ca}} \shortbio{Houari Sahraoui}{is a professor at the department of Computer Science and Operations Research (GEODES Group) of the Université de Montréal (Canada). \editorcontact[]{sahraouh@iro.umontreal.ca}} \subsection{NSGA-II, a Multi-Objective Population-based Evolutionary Algorithm} \label{sec:NSGAII} \begin{figure}[ht] \centering \includegraphics[width=.9\linewidth]{images/nsga2.pdf} \caption{NSGA-II Algorithm~\cite{deb2000-nsga}} \label{fig:nsga2} \end{figure} For a multi-objective optimization problem, the idea of evolutionary population-based algorithms (EPA) is to make a population of candidate solutions evolve toward a near-optimal solution to that problem. As such, EPA are designed to find a set of optimal solutions, called non-dominated solutions, or Pareto set. A non-dominated solution provides a suitable compromise between all objectives such that one objective cannot be further improved without degrading another objective. In this paper, we use NSGA-II, a well-known fast multi-objective genetic algorithm, which is suitable for the kind of problem we are solving~\cite{ali2020quality}. The first step of NSGA-II, as described in Fig.~\ref{fig:nsga2}, is to randomly create a population $P_0$ of $N/2$ individuals encoded using a specific representation (1). Then, to complete the population of size $N$, a child population $Q_0$, also of size $N/2$, is generated from the population of parents $P_0$ using genetic operators such as crossover and mutation (2). Both populations are merged into an initial population $R_0$ of size $N$, which is then sorted into dominance fronts according to the dominance principle (3a). A solution $s_1$ dominates a solution $s_2$ for a set of objectives $\{O_i\}$ (to be maximized) if $\forall i, O_i(s_1) \geqslant O_i(s_2)$ and $\exists j \mid O_j(s_1) > O_j(s_2)$; this test is sketched at the end of this subsection. The first (Pareto) front includes the non-dominated near-optimal solutions. The second front contains the solutions that are dominated only by the solutions of the first front, and so on and so forth. The fronts are included in the parent population $P_1$ of the next generation following the dominance order until the size of $N/2$ is reached. If this limit falls within a front, the solutions of this front are sorted according to a crowding distance, which favors diversity in the solutions~\cite{deb2000-nsga}, and the best ones are kept to complete the population (3b). This process is repeated (4) until a stop criterion is reached, \textit{e.g.,} a maximum number of iterations or one or more objectives exceeding a certain threshold.
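As an illustration of this dominance test, a minimal sketch assuming all objectives are to be maximized, matching the convention above:
\begin{verbatim}
# Sketch of the Pareto-dominance test: s1 dominates s2 iff s1 is at
# least as good on every objective and strictly better on at least one.
def dominates(s1, s2):
    at_least_as_good = all(o1 >= o2 for o1, o2 in zip(s1, s2))
    strictly_better = any(o1 > o2 for o1, o2 in zip(s1, s2))
    return at_least_as_good and strictly_better

print(dominates((3, 5), (3, 4)))  # True
print(dominates((3, 5), (4, 4)))  # False: neither solution dominates
\end{verbatim}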
\subsection{Challenges of Fixing Model Transformations}\label{sec:pbstatement} Existing research on model transformation repair generally follows the precept that errors sharing ``the same symptoms, the same root cause, or the same solution'' can be fixed in the same fashion~\cite{martinez2014-fixIngredientsRedundancyForRepair}. Concretely, for a range (or class) of equivalent errors, a predefined patch is applied to all instances of this kind of error. Cuadrado et al. present an evolvable list of patches tailored as a response to every characterized type of syntactic error~\cite{cuadrado2018-quickfix}. The authors point out that the proposed list of patches may evolve with new error types or with the refinement of existing patches. Additionally, one may want to adapt patches to other transformation languages. Thus, defining, refining and adapting patches require a substantial amount of knowledge, a thorough study of their impact, and manual maintenance effort. Another issue of predefined patches mentioned by the authors is that the order in which one applies patches may bring unexpected interactions and side effects on the transformation, \textit{e.g.,} new errors can be injected or contradictory changes may loop. We believe that an approach that dynamically explores candidate patches, rather than applying/instantiating predefined ones, can circumvent the above-mentioned issues. As patches can be seen as sequences of basic edit operations, such an approach can automatically explore the space of possible sequences that fix several typing errors at once, without creating new ones. Another, even more important, issue arising when fixing type errors is to ensure that the original behavior/semantics of the transformation is not altered -- or at least that the semantic discrepancy is circumscribed and characterized. The common way to ensure this behavior preservation after changes is to use an oracle such as test suites, pre-/post-conditions, or possibly other specifications. Yet, in the context of domain-specific problems such as those MDE offers to solve, the necessary knowledge required to build a relevant and trustworthy oracle is not always available~\cite{baudry2010-barrierToTestingMT}. More importantly, since we are dealing with faulty (i.e., non-executable) transformations, potentially good candidate patches cannot be evaluated through test cases, since they may not correct all errors, resulting in corrected transformations that still cannot be executed. In our approach, we propose to take additional behavior-preserving objectives into account, and thus view the exploration of the space of candidate patches as a multi-objective optimization problem. \section{Related Work}\label{sec:rw} The work presented in this paper crosscuts two research areas: program repair in general, and verification and validation of model transformations. In the following subsections, we discuss representative work of both areas. \paragraph{Program Repair.} There is a plethora of works that try to automatically fix bugs in programs using different approaches such as genetic programming~\cite{LeGoues2013}, machine learning~\cite{Jeffrey2009, Martinez2013} or SMT solvers~\cite{demarco:hal-2014}. Most of the existing work targets specific types of errors such as buggy IF conditions, memory allocation errors and infinite loops~\cite{Goues2013, logozzo2012modular, Muntean2015, Perkins2009AutomaticallyPE, demarco:hal-2014}. To evaluate the patches, most of the approaches use test suites as an oracle.
However, other oracles such as specifications (pre-/post-conditions) have also been explored~\cite{Pei2014}. Although these approaches produced promising results, they cannot be used to fix transformation typing errors. As mentioned in Section~\ref{sec:pbstatement}, test suites are difficult to use and specifications are often not available. Moreover, we aim at correcting a variety of error types simultaneously. \paragraph{Model Transformation verification and validation.} In this research area, there are three families of work: transformation testing, verification and validation of transformations, and transformation repair. In the first family, Gogolla et al.~\cite{gogolla2011}, for example, presented a model transformation testing approach based on the concept of Tract, which defines a set of constraints on the source and target metamodels, a set of source-target constraints, and a tract test suite, i.e., a collection of source models satisfying the source constraints. Then, they automatically generated input models and transformed the source models into target models. Finally, they verified that the source/target models satisfy those constraints. There are other approaches to testing model transformations using techniques such as graph patterns~\cite{Balogh2010}, model fragments~\cite{mottu2008}, Triple Graph Grammars (TGGs)~\cite{wieber2014} or a combination of these approaches~\cite{giner2009}. In the second family, for example, Troya et al.~\cite{troya2018-spectrumbased} applied the Spectrum-Based Fault Localization technique, using the results of test cases to determine the probability of each transformation rule being faulty. Similarly, Burgueño et al.~\cite{burgueno2015-staticfaultlocalization} presented a static approach for detecting the faulty rules in model transformations. Their approach uses matching functions that automatically create the alignment between specifications and implementations. Oakes et al.~\cite{OakesTLW182018} presented a method to fully verify pre-/post-condition contracts on the declarative portion of ATL model transformations. Their approach transforms the declarative portion of ATL transformations into DSLTrans and uses symbolic execution to produce a set of path conditions, which represent all possible executions of the transformation. To verify the transformation, they verify pre-/post-condition contracts on these path conditions. Finally, Cuadrado et al.~\cite{cuadrado2014-uncovering} presented an approach combining static analysis and constraint solving to detect errors in model transformations. They detected potentially problematic statements and then used a witness model to confirm the erroneous statements. We also use this technique to calculate the number of errors, which we use as a fitness function.\\ All the above-mentioned approaches make it possible to find behavior errors and/or localize the faulty rules/statements. However, they do not propose patches to repair the errors, which is the goal of our approach. Yet, they can be used upstream of our approach, as we did with AnATLyzer. Among the work that fixes transformation errors, we distinguish between errors generated by the evolution of metamodels, as addressed by Kessentini et al.~\cite{kessentini2018-MM-MT-Coevolution}, and errors introduced by developers. For the latter, to the best of our knowledge, the only existing work is Quick Fix~\cite{cuadrado2018-quickfix}, which allows the correction of detected errors in ATL model transformations.
In this approach, the static analyser presented in~\cite{cuadrado2014-uncovering} is used to identify errors in ATL model transformations. The analyser was then extended to generate, based on static analysis and constraint solving, a catalogue of quick fixes for the identified errors. These quick fixes propose changes in the transformation according to the kind of error. The user selects a suitable fix among the proposed ones and applies it interactively. The differences with our work are that we aim at fixing errors jointly, without predefined patch patterns, and that we aim to generate patches without requiring human assistance, except for accepting/rejecting them. Model transformations are not the only MDE artifacts targeted by repair approaches. There are many research contributions on generating patches for various modeling artifacts. Models are the ones gathering the most attention, as evidenced by the study of Macedo et al.~\cite{Macedo2017Feature}. Another example of MDE artifact repair is given by Hegedus et al.~\cite{hege2011}, in which the authors used state-space exploration techniques to generate quick fixes for Domain-Specific Modelling Languages (DSMLs).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Heavy quarks, mostly charm and bottom, are produced in the early stage of heavy-ion collisions; therefore, they are good probes to study the evolution of the hot and dense medium which is expected to be produced in such collisions. Heavy quark production has been studied in various collision systems. In the case of \mbox{$p$$+$$p$}\xspace collisions, we can test theoretical predictions for heavy quark production based on perturbative QCD (pQCD). In \mbox{$d$$+$Au}\xspace and heavy-ion collisions, we can study initial- and final-state modifications by comparing with the results from \mbox{$p$$+$$p$}\xspace collisions. The PHENIX experiment has excellent capabilities to measure leptons from the semi-leptonic decays of the heavy-flavor mesons $D$ and $B$. In the central arms ($|\eta|<0.35$) and muon arms ($1.2<|\eta|<2.2$), electrons and muons from open heavy-flavor decays, respectively, can be measured by using the hadron cocktail method. At mid-rapidity, a large suppression of heavy-flavor electron production relative to the scaled \mbox{$p$$+$$p$}\xspace results is observed in central \mbox{Au$+$Au}\xspace collisions~\cite{ppg066}, whereas a clear enhancement is seen in central \mbox{$d$$+$Au}\xspace collisions~\cite{ppg131}. These results indicate that the suppression of heavy quark production in \mbox{Au$+$Au}\xspace collisions is due to hot nuclear matter effects. In addition, the enhancement in central \mbox{$d$$+$Au}\xspace collisions suggests strong cold nuclear matter (CNM) effects. At forward rapidity, a level of suppression similar to that seen in central \mbox{Au$+$Au}\xspace collisions at mid-rapidity is observed in central \mbox{Cu$+$Cu}\xspace collisions~\cite{ppg117}, and a pQCD prediction considering additional CNM effects describes the large suppression in the forward region well. \section{Cold Nuclear Matter Effects} \label{cnm} In heavy-ion collisions, the CNM effects are convoluted with the effects from the hot and dense medium, so that it is hard to interpret the results solely in terms of hot nuclear matter effects. In order to study the CNM effects, \mbox{$d$$+$Au}\xspace collisions are used as a control experiment. There are several CNM effects, such as the modification of the nuclear parton distribution functions (nPDF), \mbox{$p_T$}\xspace broadening, initial-state energy loss, and nuclear breakup. The \mbox{$R_{dA}$}\xspace of heavy-flavor electrons at mid-rapidity suggests that the dominant CNM effect in this region is \mbox{$p_T$}\xspace broadening. Precise measurements of heavy quark production in other rapidity ranges will help the detailed study of CNM effects. Recently, negatively charged muons from open heavy-flavor decays have been measured in \mbox{$d$$+$Au}\xspace and \mbox{$p$$+$$p$}\xspace collisions~\cite{ppg153}. Figure~\ref{fig0} shows the measured \mbox{$p_T$}\xspace spectra of heavy-flavor muons in different centrality bins of \mbox{$d$$+$Au}\xspace collisions at forward ($1.4<y<2.0$, $d$-going direction) and backward ($-2.0<y<-1.4$, Au-going direction) rapidity. The black square data points at the bottom of both panels represent the \mbox{$p_T$}\xspace spectrum in \mbox{$p$$+$$p$}\xspace collisions. The new \mbox{$p$$+$$p$}\xspace results are consistent with the previous PHENIX measurements~\cite{ppg117} within systematic uncertainties, and the systematic uncertainty is reduced thanks to the larger data sample and improved analysis techniques.
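For reference, the nuclear modification factor is defined in the standard way as the yield measured in \mbox{$d$$+$Au}\xspace collisions divided by the yield in \mbox{$p$$+$$p$}\xspace collisions scaled by the average number of binary nucleon-nucleon collisions,
\begin{equation}
R_{dA}(p_T) = \frac{1}{\langle N_{\rm coll}\rangle}\,
\frac{dN_{d{\rm Au}}/dp_T}{dN_{pp}/dp_T}\,,
\end{equation}
so that $R_{dA}=1$ corresponds to the absence of nuclear modification.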
\begin{figure} \centering \includegraphics[width=0.49\textwidth,clip]{Figure0_yield_fwd.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure0_yield_bwd.pdf} \caption{Invariant yield of heavy-flavor muons as a function of \mbox{$p_T$}\xspace in \mbox{$p$$+$$p$}\xspace and different centrality classes of \mbox{$d$$+$Au}\xspace collisions at \mbox{$\sqrt{s_{_{NN}}}=200$~GeV}\xspace at forward (left) and backward (right) rapidity. Solid lines are a fit to the \mbox{$p$$+$$p$}\xspace results, scaled by the corresponding \mbox{$N_{\rm coll}$}\xspace for the centrality classes.} \label{fig0} \end{figure} Figure~\ref{fig1} shows the nuclear modification factor \mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace for the three centrality classes, 60--88\% (top left), 0--20\% (top right), and 0--100\% (bottom), at forward (red squares) and backward (blue circles) rapidity. In the most peripheral collisions, the \mbox{$R_{dA}$}\xspace values at both rapidities are consistent with each other and with unity, indicating no overall modification. However, strong CNM effects on heavy quark production are observed in central \mbox{$d$$+$Au}\xspace collisions. A clear enhancement of heavy-flavor muon production relative to the scaled \mbox{$p$$+$$p$}\xspace results is observed at backward rapidity, whereas a suppression is seen in the forward rapidity region. The two bands in each panel are PYTHIA 8 calculations of the $D\to\mu$ process considering the nPDF modification based on the EPS09s nPDF set~\cite{eps09s}. These theoretical predictions qualitatively describe the forward data but underestimate the enhancement seen in the most central collisions. Another theoretical prediction, a pQCD calculation considering three CNM effects (shadowing, \mbox{$p_T$}\xspace broadening, and energy loss), also shows good agreement with the forward data. Therefore, other CNM effects beyond the nPDF modification may contribute significantly to heavy quark production in the forward and backward rapidity regions. \begin{figure} \centering \includegraphics[width=0.49\textwidth,clip]{Figure1_cent6088.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure1_cent0020.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure1_cent0100.pdf} \caption{The nuclear modification factor \mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace for heavy-flavor muons at forward (red squares) and backward (blue circles) rapidity in the most peripheral (top left), the most central (top right), and the unbiased (bottom) \mbox{$d$$+$Au}\xspace collisions. The red dashed (blue solid) lines in each panel are the PYTHIA$+$EPS09s nPDF calculations at forward (backward) rapidity. The black dotted line is a pQCD prediction considering the CNM effects at forward rapidity.} \label{fig1} \end{figure} Figure~\ref{fig2} shows the nuclear modification factor \mbox{$R_{dA}$}\xspace as a function of \mbox{$N_{\rm coll}$}\xspace for two different \mbox{$p_T$}\xspace ranges, $1<\mbox{$p_T$}\xspace<3~{\rm GeV}/c$ (left) and $3<\mbox{$p_T$}\xspace<5~{\rm GeV}/c$ (right). In both \mbox{$p_T$}\xspace ranges, the suppression (enhancement) becomes larger with increasing centrality. The results at mid-rapidity (black circles) show a trend similar to that seen in the backward data. The EPS09s nPDF calculations do not reproduce the large difference between forward and backward rapidity.
\begin{figure} \centering \includegraphics[width=0.49\textwidth,clip]{Figure2_pT1030.pdf} \includegraphics[width=0.49\textwidth,clip]{Figure2_pT3050.pdf} \caption{The nuclear modification factor \mbox{$R_{dA}$}\xspace as a function of \mbox{$N_{\rm coll}$}\xspace for heavy-flavor leptons in different rapidity and \mbox{$p_T$}\xspace bins. The lines with bands are the PYTHIA$+$EPS09s nPDF calculations for the forward (red dashed) and backward (blue solid) rapidity regions.} \label{fig2} \end{figure} Quarkonia are additionally affected by nuclear breakup, i.e., interactions with the surrounding matter. Figure~\ref{fig3} shows comparisons of \mbox{$R_{dA}$}\xspace as a function of \mbox{$p_T$}\xspace between \mbox{$J/\psi$}\xspace~\cite{ppg125} and heavy-flavor muons in the most central \mbox{$d$$+$Au}\xspace collisions at forward and backward rapidity. At forward rapidity, the \mbox{$R_{dA}$}\xspace values of \mbox{$J/\psi$}\xspace and heavy-flavor muons are consistent within the uncertainties. However, the backward results show a large difference at $\mbox{$p_T$}\xspace<3~{\rm GeV/c}$, suggesting a significant breakup effect at backward rapidity. These data may provide a new constraint on theoretical models of quarkonia production. \begin{figure} \centering \includegraphics[width=0.6\textwidth,clip]{Figure3.pdf} \caption{Comparison of the nuclear modification factor \mbox{$R_{dA}$}\xspace between \mbox{$J/\psi$}\xspace and heavy-flavor muons in the most central \mbox{$d$$+$Au}\xspace collisions.} \label{fig3} \end{figure} \section{Summary} \label{summary} Negatively charged muons from heavy-flavor meson decays have been measured in various centrality classes of \mbox{$d$$+$Au}\xspace collisions at \mbox{$\sqrt{s_{_{NN}}}=200$~GeV}\xspace in the forward and backward rapidity ranges. For the most peripheral centrality class, the \mbox{$R_{dA}$}\xspace values at both rapidities are consistent with each other and with unity, indicating no overall modification. In the most central collisions, a clear enhancement of heavy-flavor muon production is observed at backward rapidity, whereas a suppression is seen in the forward rapidity region. The pQCD prediction considering the CNM effects shows good agreement with the forward data. In addition, another theoretical calculation, based on the EPS09s nPDF set, qualitatively reproduces the forward \mbox{$R_{dA}$}\xspace as well. However, this calculation, which considers only the nPDF modification, underestimates the difference between the forward and backward rapidity regions observed in the data, suggesting a significant role of other CNM effects beyond the modification of the parton distribution functions. The \mbox{$R_{dA}$}\xspace as a function of \mbox{$N_{\rm coll}$}\xspace for two different \mbox{$p_T$}\xspace bins shows a larger enhancement (suppression) with increasing centrality at backward (forward) rapidity. The trend seen at backward rapidity is similar to that of the heavy-flavor electron results at mid-rapidity. The comparison to the \mbox{$J/\psi$}\xspace results suggests a significant role of nuclear breakup in quarkonia production. New silicon vertex detectors (VTX/FVTX) were installed, and these systems will provide high-precision primary and secondary vertex measurements. The enhanced information will make it possible to measure charm ($D$) and bottom ($B$) production separately.
These new measurements, together with the data from upcoming runs (\mbox{Au$+$Au}\xspace--RHIC Run-14, \mbox{$p$$+$Au}\xspace--RHIC Run-15), will help improve the current understanding of hot and cold nuclear matter, as well as provide essential constraints on theoretical predictions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent years, many measurements of hadronic three-body and four-body decays of charmed mesons have been performed with Dalitz-plot amplitude analyses. Amplitudes describing $D$ meson decays into multibody final states are dominated by quasi-two-body processes, such as $D\to PP, VP, SP, AP$ and $TP$, where $P, V, S, A$ and $T$ denote pseudoscalar, vector, scalar, axial-vector and tensor mesons, respectively. Among the various $S$-, $P$- and $D$-wave intermediate resonances, the identification of the scalar mesons is rather difficult due to their broad widths and flat angular distributions. Scalar mesons with masses lower than 2 GeV can be classified into two nonets: one nonet with masses below or close to 1 GeV, including $\sigma/f_0(500)$, $f_0(980)$, $a_0(980)$ and $\kappa/K_0^*(700)$; and the other nonet with masses above 1 GeV, including $a_0(1450)$, $K^*_0(1430)$, $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$. Since the last three are all isosinglet scalars while only two of them can be accommodated in the quark model, one of the three isosinglets is expected to have a dominant scalar glueball content. In this work, we shall study the quasi-two-body $D\to SP$ decays and the three-body $D$ decays proceeding through intermediate scalar resonances. In Tables~\ref{tab:DataSP} and \ref{tab:DataD0SP} we collect all the measured branching fractions of $D\to SP\to P_1P_2P$ decays available in the Particle Data Group (PDG) \cite{PDG}. It is clear that $f_0(980)$ and the $f_0$ family, such as $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$, are observed in the three-body decays of $D^+, D^0$ and $D_s^+$, while $a_0(980)$ is seen exclusively in three-body $D^0$ decays (except for $D_s^+\to a_0^{+,0}\pi^{0,+}$). Contrary to $f_0(980)$ and $a_0(980)$, which are relatively easy to identify experimentally, the establishment of $\sigma$ and $\kappa$ is very difficult and controversial because their widths are so broad that their shapes are not clearly resonant. Nevertheless, their signals in three-body $D$ decays have been identified in $D^{+,0}\to\sigma \pi^{+,0}\to \pi^+\pi^-\pi^{+,0}$, $D^+\to \bar\kappa^0 \pi^+\to K_S\pi^0\pi^+$ and $D^+\to \bar\kappa^0 K^+\to \pi^+K^-K^+$, respectively. Because of threshold and coupled-channel effects for $f_0(980)$ and $a_0(980)$, and the very broad widths of $\sigma$ and $\kappa$, it is no longer pertinent to use the conventional Breit-Wigner parametrization to describe their line shapes. The $D \to SP$ decays and related three-body $D$ decays have been studied previously in Refs.~\cite{Kamal,Katoch,Buccella96,Fajfer,ChengSP,ElBennich,Boito,Cheng:SAT,Xie:2014tma, Dedonder:2014xpa,Loiseau:2016mdm,Dedonder:2021dmb}. In the $D \to SP$ decays, the flavor diagram of each topology has two possibilities: one with the spectator quark in the charmed meson going to the pseudoscalar meson in the final state, and the other with the spectator quark ending up in the scalar meson. We thus need two copies of each topological diagram to describe the decay processes. Many of these decays have been observed in recent years through dedicated experiments and powerful Dalitz-plot analyses of multi-body decays. We will investigate whether an extraction of the sizes and relative strong phases of these amplitudes is possible. One purpose of studying these decays is to check our understanding of the structures and properties of light even-parity scalar mesons. Another goal is to learn the final-state interaction pattern in view of the rich resonance spectrum around the $D$ meson mass range.
This work not only updates our previous study \cite{Cheng:SAT}, but also examines the finite-width effect in the three-body decays mediated by the scalar mesons. Such an effect is observed to be particularly important for decays involving $\sigma/f_0(500)$ and $\kappa/K_0^*(700)$ in the intermediate state, because their widths are broad compared to their masses. Therefore, one should be careful in the use of the narrow width approximation (NWA) to extract the $D \to SP$ two-body decays from the three-body decay rates. This paper is organized as follows. In Section~\ref{sec:status}, we review the current experimental status of how various $D \to SP$ decay branching fractions are extracted from three-body decay rates using the NWA. In Section~\ref{sec:properties}, we discuss the two-quark $q\bar q$ and tetraquark pictures of the scalar nonet near or below 1~GeV, along with the associated conundrums. The decay constants and form factors required for the subsequent numerical calculations are given in this section, too. Section~\ref{sec:flavorapp} sets up the notation and formalism of the flavor amplitude analysis, for both the quark-antiquark and tetraquark pictures. In Section~\ref{sec:facapp}, we take the factorization approach as an alternative toward analyzing these decays. We also introduce line shapes for the scalar resonances when describing various three-body decays. Section~\ref{sec:results} presents and compares the results obtained with the approaches of the previous two sections. Section~\ref{sec:finitewidth} is devoted to the study of the finite-width effect and how the NWA should be modified. We summarize our findings in Section~\ref{sec:conclusions}. \section{Experimental status \label{sec:status}} It is known that three- and four-body decays of heavy mesons provide a rich laboratory for studying the intermediate-state resonances. The Dalitz plot analysis of three-body or four-body decays of charmed mesons is a very useful technique for this purpose. We are interested in $D\to SP$ decays followed by $S\to P_1P_2$. The results of various experiments are summarized in Tables~\ref{tab:DataSP} and \ref{tab:DataD0SP}. To extract the branching fraction for a $D\to SP$ decay, it is the usual practice to use the NWA: \begin{eqnarray} \label{eq:fact} \Gamma(D\to SP\to P_1P_2P)=\Gamma(D\to SP)_{\rm NWA} {\cal B}(S\to P_1P_2) ~. \end{eqnarray} Since this relation holds only in the $\Gamma_S\to 0$ limit, we put the subscript NWA to emphasize that ${\cal B}(D\to SP)$ is obtained in this limit. Finite-width effects will be discussed in Section~\ref{sec:finitewidth}. For the branching fractions of two-body decays of scalar mesons, we shall use~\cite{PDG} \begin{eqnarray} {\cal B}(a_0(980)\to\pi\eta)=0.850\pm0.017 ~, && {\cal B}(\sigma(500)\to\pi^+\pi^-)={2\over 3} ~, \nonumber \\ {\cal B}(f_0(1500)\to\pi\pi)=0.345\pm0.022 ~, && {\cal B}(f_0(1710)\to K^+K^-)=0.292\pm0.027 ~, \\ {\cal B}(K_0^{*0}(1430)\to K^+\pi^-)={2\over 3}(0.93\pm0.10) ~, && {\cal B}(\kappa(700)\to K^+\pi^-)={2\over 3} ~, \nonumber \end{eqnarray} where we have applied the average of $\Gamma(a_0(980)\to K\overline K)/\Gamma(a_0(980)\to \pi\eta)=0.177\pm 0.024$ from the PDG \cite{PDG} to extract the branching fraction of $a_0(980)\to \pi\eta$, assuming that its width is saturated by the $K\overline K$ and $\pi\eta$ modes.
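As a concrete illustration of the extraction in Eq.~(\ref{eq:fact}), consider $D^+\to\sigma\pi^+$: with the measured product branching fraction from Table~\ref{tab:DataSP} and ${\cal B}(\sigma\to\pi^+\pi^-)=2/3$, one finds \begin{eqnarray} {\cal B}(D^+\to\sigma\pi^+)_{\rm NWA}={{\cal B}(D^+\to\sigma\pi^+;\, \sigma\to\pi^+\pi^-)\over {\cal B}(\sigma\to\pi^+\pi^-)} ={(1.38\pm0.12)\times 10^{-3}\over 2/3}=(2.07\pm0.18)\times 10^{-3} ~, \end{eqnarray} which is the value quoted in Table~\ref{tab:DataSP}.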
For $f_0(1710)$ we have used the values of $\Gamma(f_0(1710)\to \pi\pi)/\Gamma(f_0(1710)\to K\overline K)=0.23\pm0.05$ and $\Gamma(f_0(1710)\to \eta\eta)/\Gamma(f_0(1710)\to K\overline K)=0.48\pm0.15$ from the PDG, together with the assumption that its width is saturated by the $\pi\pi$, $K\overline K$ and $\eta\eta$ modes. For $S=f_0(980)$ or $a_0(980)$, we are not able to extract the branching fractions of $D\to SP$ due to the lack of information on ${\cal B}(S\to P_1P_2)$ (except for $a_0(980)\to\pi\eta$), especially for ${\cal B}(S\to K\overline K)$, where the threshold effect must be taken into account. For example, the NWA relation \begin{eqnarray} \Gamma(D^+\to f_0(980) K^+\to K^+K^-K^+)=\Gamma(D^+\to f_0(980) K^+){\cal B}(f_0(980)\to K^+K^-) \end{eqnarray} cannot be applied to extract the branching fraction of $D^+\to f_0(980) K^+$ due to the unknown ${\cal B}(f_0(980)\to K^+K^-)$. Therefore, we will calculate the branching fractions ${\cal B}(D\to SP\to P_1P_2P)$ directly and compare them with experiment (see Table~\ref{tab:DtoSP:theory} below). \begin{table}[t!] \caption{Experimental branching fractions of $(D^+,D_s^+)\to SP\to P_1P_2P$ decays. For simplicity and convenience, we have dropped the mass identification for $\sigma(500)$, $f_0(980)$, $a_0(980)$, $\kappa(700)$ and $K^*_0(1430)$. Data are taken from Ref.~\cite{PDG} unless specified otherwise. We have applied the NWA given by Eq.~(\ref{eq:fact}) to extract the branching fractions of the two-body $D$ decays, denoted by ${\cal B}(D\to SP)_{\rm NWA}$. } \label{tab:DataSP} \footnotesize{ \begin{ruledtabular} \begin{tabular}{l l l} ${\cal B}(D\to SP; S\to P_1P_2)$ & ${\cal B}(D\to SP)_{\rm NWA}$ \\ \hline ${\cal B}(D^+\to f_0\pi^+; f_0\to\pi^+\pi^-)=(1.56\pm 0.33)\times 10^{-4}$ & \\ ${\cal B}(D^+\to f_0(1370)\pi^+; f_0(1370)\to\pi^+\pi^-)=(8\pm4)\times 10^{-5}$ & $$ \\ ${\cal B}(D^+\to f_0(1500)\pi^+; f_0(1500)\to\pi^+\pi^-)=(1.1\pm0.4)\times 10^{-4}$ & ${\cal B}(D^+\to f_0(1500)\pi^+)=(4.78\pm1.77)\times 10^{-4}$ \\ ${\cal B}(D^+\to f_0(1710)\pi^+; f_0(1710)\to\pi^+\pi^-)<5\times 10^{-5}$ & ${\cal B}(D^+\to f_0(1710)\pi^+)<5.8\times 10^{-4}$ \\ ${\cal B}(D^+\to f_0K^+; f_0\to \pi^+\pi^-)=(4.4\pm 2.6)\times 10^{-5}$ & \\ ${\cal B}(D^+\to f_0K^+; f_0\to K^+K^-)=(1.23\pm 0.02)\times 10^{-5}$ \footnotemark[1] & \\ ${\cal B}(D^+\to a_0(1450)^0\pi^+; a_0^0\to K^+K^-)=(4.5^{+7.0}_{-1.8})\times 10^{-4}$ & $$ \\ ${\cal B}(D^+\to\sigma\pi^+; \sigma\to\pi^+\pi^-)=(1.38\pm0.12)\times 10^{-3}$ & ${\cal B}(D^+\to\sigma\pi^+)=({2.07\pm0.18})\times 10^{-3}$ \\ ${\cal B}(D^+\to \bar\kappa^0 \pi^+; \bar \kappa^0\to K_S\pi^0)=(6^{+5}_{-4})\times 10^{-3}$ & ${\cal B}(D^+\to \bar\kappa^0 \pi^+)=({3.6^{+3.0}_{-2.4}})\%$ \\ ${\cal B}(D^+\to \bar\kappa^0 K^+; \bar \kappa^0\to K^-\pi^+)=(6.8^{+3.5}_{-2.1})\times 10^{-4}$ & ${\cal B}(D^+\to \bar\kappa^0 K^+)=({1.0^{+0.5}_{-0.3}})\times 10^{-3}$ \\ ${\cal B}(D^+\to\overline K_0^{*0}\pi^+; \overline K_0^{*0}\to K^-\pi^+)=(1.25\pm 0.06)\%$ & ${\cal B}(D^+\to\overline K_0^{*0}\pi^+)=(2.02\pm0.24)\%$ \\ ${\cal B}(D^+\to\overline K_0^{*0}\pi^+; \overline K_0^{*0}\to K_S\pi^0)=(2.7\pm 0.9)\times 10^{-3}$ & ${\cal B}(D^+\to\overline K_0^{*0}\pi^+)=(1.74\pm0.61)\%$ \\ ${\cal B}(D^+\to\overline K_0^{*0}K^+;\overline K_0^{*0}\to K^-\pi^+)=(1.82\pm 0.35)\times 10^{-3}$ & $D^+\to\overline K_0^{*0}K^+$ prohibited on-shell \\ \hline ${\cal B}(D_s^+\to f_0\pi^+; f_0\to K^+K^-)=(1.14\pm 0.31)\%$ & \\ ${\cal B}(D_s^+\to f_0\pi^+; f_0\to \pi^0\pi^0)=(2.1\pm 0.4)\times 10^{-3}$ \footnotemark[2] & \\ ${\cal B}(D_s^+\to
S(980)\pi^+; S(980)\to K^+K^-)=(1.05\pm 0.07)\%$ \footnotemark[3], \footnotemark[4] & \\ ${\cal B}(D_s^+\to f_0(1370)\pi^+; f_0\to K^+K^-)=(7\pm 5)\times 10^{-4}$ & \\ ${\cal B}(D_s^+\to f_0(1370)\pi^+; f_0\to K^+K^-)=(7\pm 2)\times 10^{-4}$ \footnotemark[3] & \\ ${\cal B}(D_s^+\to f_0(1370)\pi^+; f_0\to \pi^0\pi^0)=(1.3\pm 0.2)\times 10^{-3}$ \footnotemark[2] & \\ ${\cal B}(D_s^+\to f_0(1710)\pi^+; f_0\to K^+K^-)=(6.6\pm 2.8)\times 10^{-4}$ & ${\cal B}(D_s^+\to f_0(1710)\pi^+)=(2.26\pm0.98)\times 10^{-3}$ \\ ${\cal B}(D_s^+\to f_0(1710)\pi^+; f_0\to K^+K^-)=(10\pm 4)\times 10^{-4}$ \footnotemark[3] & ${\cal B}(D_s^+\to f_0(1710)\pi^+)=(3.42\pm1.40)\times 10^{-3}$ \\ ${\cal B}(D_s^+\to a_0^{+,0}\pi^{0,+}; a_0^{+,0}\to \eta\pi^{+,0})=(1.46\pm0.27)\%$ \footnotemark[5] & ${\cal B}(D_s^+\to a_0^0\pi^+ +a_0^+\pi^0)=(1.72\pm0.32)\%$ \\ ${\cal B}(D_s^+\to \overline K_0^{*0}K^+; \overline K_0^{*0}\to K^-\pi^+)=(1.8\pm0.4)\times 10^{-3}$ & ${\cal B}(D_s^+\to \overline K_0^{*0}K^+)=({ 2.9\pm0.7})\times 10^{-3}$ \\ ${\cal B}(D_s^+\to \overline K_0^{*0}K^+; \overline K_0^{*0}\to K^-\pi^+)=(1.6\pm0.4)\times 10^{-3}$ \footnotemark[3] & ${\cal B}(D_s^+\to \overline K_0^{*0}K^+)=({ 2.6\pm0.7})\times 10^{-3}$ \\ ${\cal B}(D_s^+\to K_0^{*0}\pi^+; K_0^{*0}\to K^+\pi^-)=(5.0\pm3.5)\times 10^{-4}$ & ${\cal B}(D_s^+\to K_0^{*0}\pi^+)=({ 8.1\pm5.7})\times 10^{-4}$ \\ \end{tabular} \footnotetext[1]{Assuming a fit fraction of 20\% for $D^+\to f_0(980)K^+$ in $D^+\to K^+K^-K^+$ decay \cite{LHCb:D+toKKK}.} \footnotetext[2]{BESIII data taken from Ref. \cite{BESIII:Dspi+pi0pi0}.} \footnotetext[3]{BESIII data taken from Ref. \cite{BESIII:DsKKpi}.} \footnotetext[4]{$S(980)$ denotes both $f_0(980)$ and $a_0(980)$.} \footnotetext[5]{The branching fraction is assigned to be $(2.2\pm0.4)\%$ by the PDG \cite{PDG}. However, as pointed out in Ref. \cite{BESIII:Dstoa0pi}, the fraction of $D_s^+\to a_0(980)^{+(0)}\pi^{0(+)}, a_0(980)^{+(0)}\to \pi^{0(+)}\eta$ with respect to the total fraction of $D_s^+\to a_0(980)\pi,a_0(980)\to\pi\eta$ is evaluated to be 0.66. 
Consequently, the branching fraction should be multiplied by a factor of 0.66 to become $(1.46\pm0.27)\%$.} \end{ruledtabular} } \end{table} \begin{table}[t] \caption{Same as Table \ref{tab:DataSP} except for $D^0\to SP\to P_1P_2P$ decays.} \label{tab:DataD0SP} \medskip \footnotesize{ \begin{ruledtabular} \begin{tabular}{l l l} ${\cal B}(D\to SP; S\to P_1P_2)$ & ${\cal B}(D\to SP)_{\rm NWA}$ \\ \hline ${\cal B}(D^0\to f_0\pi^0; f_0\to \pi^+\pi^-)=(3.7\pm0.9)\times 10^{-5}$ & \\ ${\cal B}(D^0\to f_0\pi^0; f_0\to K^+K^-)=(3.6\pm0.6)\times 10^{-4}$ & \\ ${\cal B}(D^0\to f_0(1370)\pi^0; f_0\to \pi^+\pi^-)=(5.5\pm2.1)\times 10^{-5}$ & \\ ${\cal B}(D^0\to f_0(1500)\pi^0; f_0\to \pi^+\pi^-)=(5.8\pm1.6)\times 10^{-5}$ & ${\cal B}(D^0\to f_0(1500)\pi^0)=(2.5\pm0.7)\times 10^{-4}$ \\ ${\cal B}(D^0\to f_0(1710)\pi^0; f_0\to \pi^+\pi^-)=(4.6\pm1.6)\times 10^{-5}$ & ${\cal B}(D^0\to f_0(1710)\pi^0)=(3.7\pm1.4)\times 10^{-4}$ \\ ${\cal B}(D^0\to f_0\overline K^0; f_0\to \pi^+\pi^-)=(2.40^{+0.80}_{-0.46})\times 10^{-3}$ & \\ ${\cal B}(D^0\to f_0\overline K^0; f_0\to K^+K^-)<1.8\times 10^{-4}$ & \\ ${\cal B}(D^0\to f_0(1370)\overline K^0; f_0\to \pi^+\pi^-) =(5.6^{+1.8}_{-2.6})\times 10^{-3}$ & \\ ${\cal B}(D^0\to f_0(1370)\overline K^0; f_0\to K^+K^-)=(3.4\pm2.2)\times 10^{-4}$ & \\ ${\cal B}(D^0\to a_0^+K^-; a_0^+\to K^+\overline K^0)=(1.18\pm 0.36)\times 10^{-3}$ & \\ ${\cal B}(D^0\to a_0^+K^-; a_0^+\to K^+\overline K^0)=(3.07\pm 0.84)\times 10^{-3}$ \footnotemark[1] & \\ ${\cal B}(D^0\to a_0^-K^+; a_0^-\to K^-\overline K^0)<2.2\times 10^{-4}$ & \\ ${\cal B}(D^0\to a_0^0\overline K^0; a_0^0\to K^+K^-)=(5.8\pm 0.8)\times 10^{-3}$ & \\ ${\cal B}(D^0\to a_0^0\overline K^0; a_0^0\to K^+K^-)=(8.12\pm 1.80)\times 10^{-3}$ \footnotemark[1] & \\ ${\cal B}(D^0\to a_0^0\overline K^0; a_0^0\to \eta\pi^0)=(2.40\pm0.56)\times 10^{-2}$ & ${\cal B}(D^0\to a_0^0\overline K^0)=({ 2.83\pm0.66})\%$ \\ ${\cal B}(D^0\to a_0^-\pi^+; a_0^-\to K^-K^0)=(2.6\pm 2.8)\times 10^{-4}$ & \\ ${\cal B}(D^0\to a_0^+\pi^-; a_0^+\to K^+\overline K^0)=(1.2\pm 0.8)\times 10^{-3}$ & \\ ${\cal B}(D^0\to a_0(1450)^-\pi^+; a_0^-\to K^-K^0)=(5.0\pm 4.0)\times 10^{-5}$ & \\ ${\cal B}(D^0\to a_0(1450)^+\pi^-; a_0^+\to K^+\overline K^0)=(6.4\pm 5.0)\times 10^{-5}$ & \\ ${\cal B}(D^0\to a_0(1450)^-K^+; a_0^-\to K^-K_S)< 0.6\times 10^{-3}$ \footnotemark[1] & \\ ${\cal B}(D^0\to \sigma\pi^0; \sigma\to \pi^+\pi^-)=(1.22\pm0.22)\times 10^{-4}$ & ${\cal B}(D^0\to \sigma\pi^0)=({1.8\pm0.3})\times 10^{-4}$ \\ ${\cal B}(D^0\to K_0^{*-}\pi^+; K_0^{*-}\to \overline K^0\pi^-)=(5.34^{+0.80}_{-0.66})\times 10^{-3}$ & ${\cal B}(D^0\to K_0^{*-}\pi^+)=({ 8.6^{+1.6}_{-1.4}})\times 10^{-3}$ \\ ${\cal B}(D^0\to K_0^{*-}\pi^+; K_0^{*-}\to K^-\pi^0)=(4.8\pm 2.2)\times 10^{-3}$ & ${\cal B}(D^0\to K_0^{*-}\pi^+)=(1.55\pm0.73)\%$ \\ ${\cal B}(D^0\to \overline K_0^{*0}\pi^0; \overline K_0^{*0}\to K^-\pi^+)= (5.9^{+5.0}_{-1.6})\times 10^{-3}$ & ${\cal B}(D^0\to \overline K_0^{*0}\pi^0)=(9.5^{+8.1}_{ -2.8})\times 10^{-3}$ \\ % ${\cal B}(D^0\to K_0^{*+}\pi^-; K_0^{*+}\to K^0\pi^+)<2.8\times 10^{-5}$ & ${\cal B}(D^0\to K_0^{*+}\pi^-)<4.5\times 10^{-5}$ \\ \end{tabular} \footnotetext[1]{BESIII data taken from Ref. \cite{BESIII:D0KKKS}.} \end{ruledtabular} } \end{table} \section{Physical properties of scalar mesons \label{sec:properties}} It is known that the underlying structure of scalar mesons is not well established theoretically (see, {\it e.g.}, Refs.~\cite{Amsler,Close} for a review). 
Scalar mesons with masses lower than 2~GeV can be classified into two nonets: one nonet with masses below or close to 1~GeV, including the isoscalars $f_0(500)$ (or $\sigma$) and $f_0(980)$, the isodoublet $K_0^*(700)$ (or $\kappa$) and the isovector $a_0(980)$; and the other nonet with masses above 1~GeV, including $f_0(1370)$, $a_0(1450)$, $K^*_0(1430)$ and $f_0(1500)/f_0(1710)$. If the scalar meson states below or near 1~GeV are identified as the conventional low-lying $0^+$ $q\bar q$ nonet, then the nonet states above 1~GeV could be excited $q\bar q$ states. In the na{\"i}ve quark model, the flavor wave functions of the light scalars read \begin{eqnarray} && \sigma={1\over \sqrt{2}}(u\bar u+d\bar d) ~, \qquad\qquad~ f_0= s\bar s ~, \nonumber \\ && a_0^0={1\over\sqrt{2}}(u\bar u-d\bar d) ~, \qquad\qquad a_0^+=u\bar d ~, \qquad a_0^-=d\bar u ~, \\ && \kappa^{+}=u\bar s ~, \qquad \kappa^{0}= d\bar s ~, \qquad~ \bar \kappa^{0}=s\bar d ~,\qquad~ \kappa^{-}=s\bar u ~, \nonumber \end{eqnarray} where an ideal mixing for $f_0$ and $\sigma$ is assumed, since $f_0(980)$ is the heaviest and $\sigma$ the lightest state in the light scalar nonet. However, as summarized in Ref.~\cite{Cheng:SAT}, this simple picture encounters several serious problems: \begin{enumerate} \item It is impossible to understand the mass degeneracy between $f_0(980)$ and $a_0(980)$, the so-called ``inverted spectrum problem.'' \item The $P$-wave $0^+$ meson has one unit of orbital angular momentum, which costs an energy of around 500~MeV. Hence, it should have a mass lying above rather than below 1~GeV. \item It is difficult to explain why $\sigma$ and $\kappa$ are much broader than $f_0(980)$ and $a_0(980)$. \item The $\gamma\gamma$ widths of $a_0(980)$ and $f_0(980)$ are much smaller than na{\"i}vely expected for a $q\bar{q}$ state~\cite{bar85}. \item The radiative decay $\phi\to a_0(980)\gamma$, which cannot proceed if $a_0(980)$ is a pure $q\bar q$ state, can be nicely described by the four-quark nature of $a_0(980)$ \cite{Achasov:1987ts,Achasov:2003cn} or by the kaon loop mechanism~\cite{Schechter06}. Likewise, the observation of the radiative decay $\phi\to f_0(980)\gamma\to \pi\pi\gamma$ is also accounted for by the four-quark state of $f_0(980)$ \cite{Achasov:2003cn}. \end{enumerate} It turns out that these difficulties can be readily resolved in the tetraquark scenario, where the four-quark flavor wave functions of the light scalar mesons are symbolically given by \cite{Jaffe} \begin{eqnarray} \label{4quarkw.f.} && \sigma=u\bar u d\bar d ~, \qquad\qquad\qquad~~ f_0= \frac{1}{\sqrt2} (u\bar u+d\bar d) s\bar s ~, \nonumber \\ && a_0^0= \frac{1}{\sqrt2} (u\bar u-d\bar d) s\bar s ~, \qquad a_0^+=u\bar ds\bar s ~, \qquad a_0^-=d\bar us\bar s ~, \nonumber \\ && \kappa^+=u\bar sd\bar d ~, \qquad \kappa^0=d\bar su\bar u ~, \qquad \bar \kappa^0=s\bar du\bar u ~, \qquad \kappa^-=s\bar ud\bar d ~. \end{eqnarray} The four quarks $q^2\bar q^2$ can form an $S$-wave (rather than $P$-wave) $0^+$ meson without introducing one unit of orbital angular momentum. This four-quark description naturally explains the inverted mass spectrum of the light nonet, \footnote{However, it has been claimed recently in Ref.~\cite{Kuroda:2019jzm} that the inverse mass hierarchy can be realized in the $q\bar q$ picture through a $U(1)$ axial anomaly including explicit $SU(3)_F$ breaking. The anomaly term contributes to $a_0(980)$ with the strange quark mass and to $\kappa/K_0^*(700)$ with the up or down quark mass due to its flavor singlet nature.
The current mass of the strange quark makes the $a_0$ meson heavier than the $\kappa$ meson.} especially the mass degeneracy between $f_0(980)$ and $a_0(980)$, and accounts for the broad widths of $\sigma$ and $\kappa$ while $f_0(980)$ and $a_0(980)$ are narrow because of the suppressed phase space for their decays to the kaon pairs. Lattice calculations have confirmed that $a_0(1450)$ and $K_0^*(1430)$ are $q\bar q$ mesons, and suggested that $\sigma$, $\kappa$ and $a_0(980)$ are tetraquark mesonia \cite{Prelovsek,Mathur,Wakayama:scalar,Alexandrou:a0kappa,Alexandrou:a0}. The inverted spectrum problem can also be alleviated in the scenario where the light scalars are dynamically generated from the meson-meson interaction, with the $f_0(980)$ and the $a_0(980)$ coupling strongly to the $K\overline K$ channel with isospin 0 and 1, respectively. Indeed, the whole light scalar nonet appears naturally from properly unitarized chiral amplitudes for pseudoscalar-pseudoscalar scatterings~\cite{Oller:1997ng,Oller:1998hw}. Consequently, both $f_0(980)$ and $a_0(980)$ are good candidates for $K\overline K$ molecular states~\cite{Weinstein:1990gu}, while $\sigma$ and $\kappa$ can be considered as the bound states of $\pi\pi$ and $K\pi$, respectively.

In the na{\"i}ve two-quark model with ideal mixing for $f_0(980)$ and $\sigma(500)$, $f_0(980)$ is purely an $s\bar s$ state, while $\sigma(500)$ is an $n\bar n$ state with $n\bar n\equiv (\bar uu+\bar dd)/\sqrt{2}$. However, there also exists some experimental evidence indicating that $f_0(980)$ is not a purely $s\bar s$ state. For example, the observation of $\Gamma(J/\psi\to f_0\omega)\approx {1\over 2}\Gamma(J/\psi\to f_0\phi)$ \cite{PDG} clearly shows the existence of both non-strange and strange quark contents in $f_0(980)$. Therefore, the isoscalars $\sigma(500)$ and $f_0(980)$ must have a mixing
\begin{eqnarray} \label{eq:mixing}
|f_0(980)\rangle = |s\bar s\rangle\cos\theta+|n\bar n\rangle\sin\theta ~, \qquad |\sigma(500)\rangle = -|s\bar s\rangle\sin\theta+|n\bar n\rangle\cos\theta ~.
\end{eqnarray}
Various determinations of the mixing angle have been discussed in the literature and summarized in Refs.~\cite{CCY,Fleischer:2011au}. A recent measurement of the upper limit on the branching fraction product ${\cal B}(\overline B^0\to J/\psi f_0(980))\times{\cal B}(f_0(980)\to \pi^+\pi^-)$ by LHCb leads to $|\theta|<30^\circ$~\cite{LHCb:theta}. Likewise, in the four-quark scenario for light scalar mesons, one can also define a similar $f_0$-$\sigma$ mixing angle
\begin{eqnarray}
|f_0(980)\rangle =|n\bar ns\bar s\rangle\cos\phi +|u\bar u d\bar d\rangle\sin\phi ~, \qquad |\sigma(500)\rangle = -|n\bar ns \bar s\rangle\sin\phi+|u\bar u d\bar d\rangle\cos\phi ~.
\end{eqnarray}
It has been shown that $\phi=174.6^\circ$~\cite{Maiani}.

In reality, the light scalar mesons could have both two-quark and four-quark components. Indeed, a real hadron in the QCD language should be described by a set of Fock states, each of which has the same quantum numbers as the hadron. For example,
\begin{eqnarray}\label{eq:fockexpansion}
|a_0^+(980)\rangle &=& \psi_{u\bar d}^{a_0} |u\bar d\rangle + \psi_{u\bar dg}^{a_0} |u\bar d g\rangle + \psi_{u\bar d s\bar s}^{a_0} |u\bar d s \bar s\rangle+ \dots\,.
\end{eqnarray}
In the tetraquark model, $\psi_{u\bar d s\bar s}^{a_0} \gg \psi_{u\bar d}^{a_0}$, while it is the other way around in the two-quark model.
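To make the two mixing conventions concrete, the flavor-content fractions they imply can be evaluated directly. The short Python sketch below assumes, for illustration only, that the two-quark angle saturates the LHCb bound ($\theta=30^\circ$) and takes $\phi=174.6^\circ$ from Ref.~\cite{Maiani}; neither value is a fit result of this work.
\begin{verbatim}
import math

theta = math.radians(30.0)   # two-quark mixing angle (LHCb bound, assumed saturated)
phi   = math.radians(174.6)  # four-quark mixing angle from Maiani et al.

# two-quark picture: |f0> = |ss> cos(theta) + |nn> sin(theta)
print("f0(980): ss fraction = %.2f, nn fraction = %.2f"
      % (math.cos(theta)**2, math.sin(theta)**2))

# four-quark picture: |f0> = |nnss> cos(phi) + |uudd> sin(phi)
print("f0(980): nnss fraction = %.3f, uudd fraction = %.3f"
      % (math.cos(phi)**2, math.sin(phi)**2))
\end{verbatim}
Even with the bound saturated, the $n\bar n$ admixture of $f_0(980)$ in the two-quark picture is at most $\sin^2\theta\simeq 25\%$, while in the four-quark picture $\sin^2\phi\simeq 0.9\%$, i.e. the tetraquark mixing is nearly ideal; this is why the $f_0$-$\sigma$ mixing is neglected in the tetraquark scheme below.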
Although light scalars are predominantly tetraquark states as far as the spectrum and decays are concerned, their production in heavy meson decays and in high energy hadron collisions is probably more sensitive to the two-quark component of the scalar mesons. For example, one may wonder if the energetic $f_0(980)$ produced in $B$ decays is dominated by the four-quark configuration, as forming a fast-moving light tetraquark requires picking up two energetic quark-antiquark pairs. Since scalar mesons produced in charm decays are not energetic, they may have adequate time to form a tetraquark state.

In principle, the two-quark and four-quark descriptions of the light scalars can be discriminated in the semileptonic charm decays. For example, the ratio
\begin{eqnarray}
R={{\cal B}(D^+\to f_0\ell^+\nu)+{\cal B}(D^+ \to \sigma\ell^+\nu) \over {\cal B}(D^+\to a_0^0\ell^+\nu)}
\end{eqnarray}
is equal to 1 in the two-quark scenario and 3 in the four-quark model under flavor SU(3) symmetry~\cite{Wang:2009azc}: the $d\bar d$ pair created in the $c\to d$ transition projects onto $f_0$, $\sigma$ and $a_0^0$ with relative weights $\sin^2\theta/2$, $\cos^2\theta/2$ and $1/2$ in the two-quark picture, but $1/2$, $1$ and $1/2$ in the four-quark picture. Based on the BESIII measurements of $D^+\to a_0(980)^0e^+\nu_e$ \cite{BESIII:Dtoa0SL}, $D^+\to \sigma e^+\nu_e$ and the upper limit on $D^+\to f_0(980) e^+\nu_e$ \cite{BESIII:DtosigmaSL}, it follows that $R>2.7$ at 90\% confidence level. Hence, the BESIII results favor the SU(3) nonet tetraquark description of the $f_0(500)$, $f_0(980)$ and $a_0(980)$ produced in charmed meson decays. A detailed analysis of BESIII and CLEO data on the decays $D^+\to \pi^+\pi^- e^+\nu_e$ and $D_s^+\to \pi^+\pi^- e^+\nu_e$ in Ref. \cite{Achasov:2020qfx} also shows results in favor of the four-quark nature of the light scalar mesons $f_0(500)$ and $f_0(980)$.

The vector and scalar decay constants of the scalar meson are, respectively, defined as
\begin{eqnarray} \label{eq:Sdecayc}
\langle S(p)|\bar q_2\gamma_\mu q_1|0\rangle=f_S p_\mu ~, \qquad \langle S|\bar q_2q_1|0\rangle=m_S\bar f_S ~.
\end{eqnarray}
The neutral scalar mesons $\sigma$, $f_0$ and $a_0^0$ cannot be produced via the vector current owing to charge conjugation invariance or conservation of the vector current:
\begin{eqnarray}
f_{\sigma}=f_{f_0}=f_{a_0^0}=0 ~.
\end{eqnarray}
Applying the equation of motion to Eq.~(\ref{eq:Sdecayc}) yields
\begin{eqnarray} \label{eq:EOM}
\mu_Sf_S=\bar f_S ~, \qquad\quad{\rm with}~~\mu_S={m_S\over m_2(\mu)-m_1(\mu)} ~,
\end{eqnarray}
where $m_{2}$ and $m_{1}$ are the running current quark masses. Therefore, the vector decay constant of the scalar meson $f_S$ vanishes in the SU(3) or isospin limit. The vector decay constants of $K^*_0(1430)$ and the charged $a_0(980)$ are non-vanishing, but they are suppressed due to the small mass difference between the $s$ and $u$ quarks and between the $d$ and $u$ quarks, respectively.

The scalar decay constants $\bar f_S$ have been computed in Ref.~\cite{CCY} within the framework of QCD sum rules. For the reader's convenience, we list the scalar decay constants (in units of MeV) at $\mu=1$ GeV relevant to the present work
\begin{eqnarray}
&& \bar f_{f_0}=370\pm20, \qquad \bar f_{a_0}=365\pm20, \qquad \bar f_{\sigma}=350\pm20, \qquad \bar f_{\kappa}=340\pm20, \nonumber \\
&& \bar f_{a_0(1450)}=460\pm50, \qquad \bar f_{f_0(1500)}=490\pm50, \qquad \bar f_{K_0^*}=445\pm50.
\end{eqnarray}
From Eq.~(\ref{eq:EOM}) we obtain (in units of MeV)
\footnote{The vector decay constants of the scalar meson and its antiparticle are of opposite sign.
For example, $f_{a_0(980)^+}=-1.3\,{\rm MeV}$ and $f_{a_0(980)^-}=1.3\,{\rm MeV}$.} \begin{eqnarray} |f_{a_0(980)^\pm}|=1.3\,, \qquad |f_{a_0(1450)^\pm}|=1.1\,, \qquad |f_\kappa|=45.5\,, \qquad |f_{K^*_0(1430)}|=35.3\,. \end{eqnarray} In short, the vector decay constants of scalar mesons are either zero or very small for non-strange scalar mesons. Form factors for $D\to P,S$ transitions are defined by \cite{BSW} \begin{eqnarray} \label{eq:DSm.e.} \langle P(p')|V_\mu|D(p)\rangle &=& \left(P_\mu-{m_D^2-m_P^2\over q^2}\,q_ \mu\right) F_1^{DP}(q^2)+{m_D^2-m_P^2\over q^2}q_\mu\,F_0^{DP}(q^2) ~, \nonumber \\ \langle S(p')|A_\mu|D(p)\rangle &=& -i\Bigg[\left(P_\mu-{m_D^2-m_S^2\over q^2}\,q_ \mu\right) F_1^{DS}(q^2) +{m_D^2-m_S^2\over q^2}q_\mu\,F_0^{DS}(q^2)\Bigg] ~, \end{eqnarray} where $P_\mu=(p+p')_\mu$ and $q_\mu=(p-p')_\mu$. As shown in Ref.~\cite{CCH}, a factor of $(-i)$ is needed in the $D\to S$ transition in order for the $D\to S$ form factors to be positive. This can also be checked from heavy quark symmetry consideration \cite{CCH}. Throughout this paper, we use the 3-parameter parametrization \begin{eqnarray} \label{eq:FFpara} F(q^2)=\,{F(0)\over 1-a(q^2/m_D^2)+b(q^2/m_D^2)^2} \end{eqnarray} for $D\to S$ transitions. For hadronic $D\to SP$ decays, the relevant form factor is $F_0^{DS}(q^2)$. The parameters $F_0^{DS}(0)$, $a$ and $b$ for $D \to S$ transitions calculated in the covariant light-front quark model (CLFQM) \cite{CCH,Verma:2011yw}, covariant confined quark model (CCQM) \cite{Soni:2020sgn}, light-cone sum rules (LCSR) \cite{Shi:2017pgh,Cheng:2017fkw,Huang:2021owr} are exhibited in Table~\ref{tab:FFDtoS}. Note that the matrix element $\langle S(p')|A_\mu|D(p)\rangle$ is sometimes parametrized as \begin{eqnarray} \langle S(p')|A_\mu|D(p)\rangle &=& -i\left[F_+^{DS}(q^2)P_\mu + F_-^{DS}(q^2)q_\mu \right]. \end{eqnarray} It is easily seen that \begin{eqnarray} \label{eq:FFrel} F_1(q^2)=F_+(q^2), \qquad F_0(q^2)={q^2\over m_D^2-m_S^2} F_-(q^2)+F_+(q^2) ~, \end{eqnarray} and hence $F_1(0)=F_0(0)=F_+(0)$. It was argued in \cite{Huang:2021owr} that the relation $F_-(q^2)=-F_+(q^2)$ holds in the LCSR calculation. In \cite{Soni:2020sgn}, the $D\to S$ transition form factors are defined by \begin{eqnarray} \langle S(p)|A_\mu|D(p+q)\rangle &=& -i\left[{F'}_+(q^2)p_\mu + {F'}_-(q^2)q_\mu \right]. \end{eqnarray} They are related to $F_+(q^2)$ and $F_-(q^2)$ through the relation \begin{eqnarray} F'_+(q^2)=2F_+(q^2), \qquad F'_-(q^2)=F_+(q^2)+F_-(q^2). \end{eqnarray} \begin{table}[t] \caption{Form factors $F_0^{DS}(0)$ for $D, D_s\to f_0(980), a_0(980), a_0(1450)$ and $K_0^*(1430)$ transitions in various models. 
} \label{tab:FFDtoS}
\medskip
\begin{ruledtabular}
\begin{tabular}{l c c c c c }
Transition & CLFQM & CCQM & LCSR(I) & LCSR(II) & LCSR(III) \\
 & \cite{CCH,Verma:2011yw} & \cite{Soni:2020sgn} & \cite{Shi:2017pgh} & \cite{Cheng:2017fkw} & \cite{Huang:2021owr} \\
\hline
$D\to f_0(980)$ & $0.51^{+0.04}_{-0.05}$ \footnotemark[1] & $0.45\pm0.02$ & 0.321 & \\
$D_s^+\to f_0(980)$ & $0.52^{+0.01}_{-0.01}$ \footnotemark[2] & $0.36\pm0.02$ & & \\
$D\to a_0(980)$ \footnotemark[3] & & $0.55\pm0.02$ & & $0.88\pm0.13$ \footnotemark[4] & $0.85^{+0.10}_{-0.11}$ \\
$D\to a_0(1450)$ & $0.51^{+0.01}_{-0.02}$ & & & & $0.94^{+0.02}_{-0.03}$ \\
$D\to K_0^*(1430)$ & $0.47^{+0.02}_{-0.03}$ & \\
$D_s^+\to K_0^*(1430)$ & $0.55^{+0.02}_{-0.03}$ \\
\end{tabular}
\footnotetext[1]{For $D\to f_0^q$ transition.}
\footnotetext[2]{For $D_s^+\to f_0^s$ transition.}
\footnotetext[3]{It stands for either $D^0\to a_0(980)^-$ or $D^+\to a_0(980)^0$ transition.}
\footnotetext[4]{Use of the relation $F_+(0)=F'_+(0)/2$ has been made.}
\end{ruledtabular}
\end{table}
For the $q^2$ dependence of the form factors in various models, the parameters $a$ and $b$ are available in Refs. \cite{CCH,Verma:2011yw} and Ref. \cite{Shi:2017pgh} for CLFQM and LCSR(I), respectively. In CCQM and LCSR(II), one needs to apply Eq. (\ref{eq:FFrel}) to get the $q^2$ dependence of $F_0$. The form-factor $q^2$ dependence in the LCSR(III) calculation is shown in Fig. 3 of Ref. \cite{Huang:2021owr}.

BESIII has measured the branching fractions of both $D^0\to a_0(980)^-e^+\nu_e$ and $D^+\to a_0(980)^0e^+\nu_e$ \cite{BESIII:SLa0}. The theoretical calculations depend on the form factors $F_+(q^2)$ and $F_-(q^2)$ and their $q^2$ dependence (see e.g. Ref. \cite{Cheng:DmesonSL}). It turns out that the predicted branching fractions for $D\to a_0(980)e^+\nu_e$ in LCSR(II) \cite{Cheng:2017fkw} are too large by more than a factor of 2 compared to the BESIII experiment (see Table VI of Ref. \cite{Huang:2021owr}). Hence, this model is disfavored.

\section{Diagrammatic amplitudes \label{sec:flavorapp}}
The least model-dependent analysis of heavy meson decays can be carried out in the so-called topological diagram approach. In this diagrammatic scenario, all two-body nonleptonic weak decays of heavy mesons can be expressed in terms of six distinct quark diagrams \cite{Chau,CC86,CC87}: $T$, the external $W$-emission tree diagram; $C$, the internal $W$-emission; $E$, the $W$-exchange; $A$, the $W$-annihilation; $H$, the horizontal $W$-loop; and $V$, the vertical $W$-loop. The one-gluon exchange approximation of the $H$ graph is the so-called ``penguin diagram.'' These diagrams are classified according to the topologies of weak interactions with all strong interaction effects encoded.

The topological amplitudes for $D\to SP$ decays have been discussed in \cite{ChengSP,Cheng:SAT}. Just as in $D\to V\!P$ decays, one generally has two sets of distinct diagrams for each topology. For example, there are two external $W$-emission and two internal $W$-emission diagrams, depending on whether the emitted particle is an even-parity meson or an odd-parity one. Following the convention in \cite{ChengSP,Cheng:SAT}, we shall use the primed amplitudes $T'$ and $C'$ for the case where the emitted meson is a scalar one. For the $W$-exchange and $W$-annihilation diagrams with the final state $q_1\bar q_2$, the primed amplitude denotes that the even-parity meson contains the quark $q_1$.
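Since each topological amplitude is a complex parameter to be determined from data, the decompositions collected in Table~\ref{tab:DSP} below reduce, computationally, to linear combinations of a few complex numbers weighted by CKM factors. The following Python sketch illustrates this bookkeeping for $D^+\to f_0\pi^+$ in the tetraquark scheme; the amplitude moduli and phases are placeholders for illustration, not fitted values.
\begin{verbatim}
import cmath, math

# hypothetical topological amplitudes (complex, arbitrary units)
T  = 1.80 * cmath.exp(1j * math.radians(-186))
Cp = 0.99 * cmath.exp(1j * math.radians(120))   # C' (emitted scalar)
A  = Ap = 0.0                                   # annihilation dropped here

# CKM factors at leading order in the Wolfenstein expansion
lam = 0.225
VcdVud = -lam * (1 - lam**2 / 2)   # V_cd* V_ud
VcsVus =  lam * (1 - lam**2 / 2)   # V_cs* V_us

# scheme II decomposition of A(D+ -> f0 pi+), cf. Table of amplitudes below
amp = VcdVud * (T + Cp + A + Ap) / math.sqrt(2) \
      + math.sqrt(2) * VcsVus * Cp
print("|A(D+ -> f0 pi+)|^2 =", abs(amp)**2)
\end{verbatim}
A global fit would then minimize a $\chi^2$ built from such squared amplitudes, dressed with the appropriate phase-space and line-shape factors discussed in Sec.~\ref{sec:facapp}.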
Since $K^*_0$, $a_0(1450)$ and the light scalars $\sigma,~\kappa,~f_0(980),~a_0(980)$ fall into two different SU(3) flavor nonets, in principle one cannot apply SU(3) symmetry to relate the topological amplitudes in $D^+\to f_0(980)\pi^+$ to, for example, those in $D^+\to \overline K^{*0}_0\pi^+$.
\begin{table}[!]
\caption{Topological amplitudes of various $D\to SP$ decays. Scheme~I has $(\alpha, \beta) = (\sin\theta, \cos\theta)$, and scheme~II has $(\alpha, \beta) = (1, \sqrt2)$ for those modes with one $f_0$ and $(0,\sqrt2)$ for those modes with one $\sigma$. In scheme I, the light scalar mesons $\sigma,~\kappa,~a_0(980)$ and $f_0(980)$ are described by $q\bar q$ states, while $K^*_0$ and $a_0(1450)$ are treated as excited $q\bar q$ states. In scheme II, the light scalars are tetraquark states, while $K^*_0$ and $a_0(1450)$ are ground-state $q\bar q$ mesons. The $f_0-\sigma$ mixing angle $\theta$ in the two-quark model is defined in Eq. (\ref{eq:mixing}). The experimental branching fractions denoted by ${\cal B}_{\rm NWA}$ are taken from Tables \ref{tab:DataSP} and \ref{tab:DataD0SP}. For simplicity, we do not consider the $f_0-\sigma$ mixing in the tetraquark model as its value is close to $\pi$ \cite{Maiani}.}
\label{tab:DSP}
\begin{ruledtabular}
\begin{tabular}{l l l }
Decay & Amplitude & ${\cal B}_{\rm NWA}$ \\
\hline
$D^+\to f_0\pi^+$ & $\frac{1}{\sqrt2}\alpha V_{cd}^*V_{ud}(T+C'+A+A') +\beta V_{cs}^*V_{us} C'$ & $$ \\
\qquad $\to f_0K^+$ &$V_{cd}^*V_{us}\left[ {1\over\sqrt{2}}\alpha (T+A') + \beta A \right]$ \\
\qquad $\to a_0^+\overline K^0$ & $V_{cs}^*V_{ud}(T'+C)$ & \\
\qquad $\to a_0^0\pi^+$ & $\frac{1}{\sqrt2} V_{cd}^*V_{ud}(-T-C'-A+A')$ & $$ \\
\qquad $\to \sigma\pi^+$ & ${1\over\sqrt{2}}\beta V_{cd}^*V_{ud}(T+C'+A+A') - \alpha V_{cs}^*V_{us} C'$ & $(2.1\pm0.2)\times 10^{-3}$ \\
\qquad $\to \bar\kappa^0\pi^+$ & $V_{cs}^*V_{ud}(T+C')$ & $(3.6^{+3.0}_{-2.4})\%$ \\
\qquad $\to \bar\kappa^0K^+$ & $ V_{cs}^*V_{us}T + V_{cd}^*V_{ud}A$ & $(1.0^{+0.5}_{-0.3})\times 10^{-3}$ \\
$D^0\to f_0\pi^0$ & ${1\over 2} \alpha V_{cd}^*V_{ud}(-C+C'-E-E') +{1\over\sqrt{2}} \beta V_{cs}^*V_{us} C'$ & \\
\quad~ $\to f_0\overline K^0$ & $V_{cs}^*V_{ud}[{1\over\sqrt{2}} \alpha (C+E) + \beta E']$ & \\
\quad~ $\to a_0^+\pi^-$ & $V_{cd}^*V_{ud}(T'+E)$ & $$ \\
\quad~ $\to a_0^-\pi^+$ & $V_{cd}^*V_{ud}(T+E')$ & $$ \\
\quad~ $\to a_0^+K^-$ & $ V_{cs}^*V_{ud}(T'+E)$ & $$ \\
\quad~ $\to a_0^0\overline K^0$ & $V_{cs}^*V_{ud}(C-E)/\sqrt{2}$ & $(2.83\pm0.66)\%$ \\
\quad~ $\to a_0^-K^+$ & $ V_{cd}^*V_{us}(T+E')$ & \\
\quad~ $\to \sigma\pi^0$ & ${1\over 2}V_{cd}^*V_{ud} \beta (-C+C'-E-E') -{1\over\sqrt{2}}\alpha V_{cs}^*V_{us} C'$ & $(1.8\pm0.3)\times 10^{-4}$ \\
$D_s^+\to f_0\pi^+$ & $\frac{1}{\sqrt2} V_{cs}^*V_{ud} \left[ \sqrt2 \beta T+ \alpha (A+A') \right]$ & \\
\quad~ $\to f_0K^+$ & $V_{cs}^*V_{us}\left[\beta (T+C'+A) + {1\over \sqrt{2}}\alpha A' \right] +{1\over\sqrt{2}}V_{cd}^*V_{ud} \alpha C'$ \\
\quad~ $\to a_0^0\pi^+$ & ${1\over \sqrt{2}}V_{cs}^*V_{ud}(-A+A')$ & $(0.86\pm0.23)\%$ \footnotemark[1] \\
\hline
$D^+\to a_0(1450)^{0}\pi^+$ & ${1\over\sqrt{2}}V_{cd}^*V_{ud}(-T-C'-A+A')$ & $$ \\
\quad~ $\to \overline K_0^{*0}\pi^+$ & $V_{cs}^*V_{ud}(T+C')$ & $(1.98 \pm 0.22)\%$ \\
\quad~ $\to \overline K_0^{*0}K^+$ & $V_{cs}^*V_{us}T + V_{cd}^*V_{ud}A$ & prohibited \\
$D^0\to a_0(1450)^{+}\pi^-$ & $ V_{cd}^*V_{ud}(T'+E)$ & \\
\quad~ $\to a_0(1450)^{-}\pi^+$ & $ V_{cd}^*V_{ud}(T+E')$ & \\
\quad~ $\to a_0(1450)^{-}K^+$ & $ V_{cd}^*V_{us}(T+E')$ & \\
\quad~ $\to K_0^{*-}\pi^+$ & $V_{cs}^*V_{ud}(T+E')$ & $(8.8\pm1.5)\times 10^{-3}$
\\ \quad~ $\to \overline K_0^{*0}\pi^0$ & ${1\over\sqrt{2}}V_{cs}^*V_{ud}(C'-E')$ & $(9.5^{+8.1}_{-2.8})\times 10^{-3}$ \\
\quad~ $\to K_0^{*+}\pi^-$ & $V_{cd}^*V_{us}(T'+E)$ & $<4.5\times 10^{-5}$ \\
$D_s^+ \to K_0^{*0}\pi^+$ & $V_{cd}^*V_{ud}\,T+V_{cs}^*V_{us}\,A$ & $(8.1\pm5.7)\times 10^{-4}$ \\
\quad~ $\to \overline K_0^{*0}K^+$ & $V_{cs}^*V_{ud}(C'+A)$ & $(2.8\pm0.5)\times 10^{-3}$ \\
\end{tabular}
\footnotetext[1]{Since the decay amplitudes of $D_s^+\to a_0^+\pi^0$ and $D_s^+\to a_0^0\pi^+$ are the same except for an overall negative sign, they have the same rates.}
\end{ruledtabular}
\end{table}
In Ref. \cite{Cheng:SAT} we have presented the topological amplitude decomposition in $D\to SP$ decays in two different schemes. In scheme I, the light scalar mesons $\sigma, \kappa, a_0(980)$ and $f_0(980)$ are described by the ground-state $q\bar q$ states, while $K^*_0$ and $a_0(1450)$ are treated as excited $q\bar q$ states. In scheme II, the light scalars are tetraquark states, while $K^*_0$ and $a_0(1450)$ are ground-state $q\bar q$ mesons. The topological amplitudes for $D\to SP$ decays are listed in Table~\ref{tab:DSP}. The expressions of the topological amplitudes are the same in both schemes I and II except for the channels involving $f_0$ and $\sigma$. For example,
\begin{eqnarray} \label{eq:AmpDtof0pi}
A(D^+\to f_0\pi^+) &=& \left\{ \begin{array}{cl} {1\over\sqrt{2}}V_{cd}^*V_{ud}(T+C'+A+A')\sin\theta+V_{cs}^*V_{us}C'\cos\theta & \quad \mbox{Scheme~I} \ , \\ {1\over\sqrt{2}}V_{cd}^*V_{ud}(T+C'+A+A')+\sqrt{2}V_{cs}^*V_{us}C' & \quad \mbox{Scheme~II} \ , \end{array}\right. \nonumber \\
A(D^+\to \sigma\pi^+) &=& \left\{ \begin{array}{cl} {1\over\sqrt{2}}V_{cd}^*V_{ud}(T+C'+A+A')\cos\theta-V_{cs}^*V_{us}C'\sin\theta & \quad \mbox{Scheme~I} \ , \\ V_{cd}^*V_{ud}(T+C'+A+A') & \quad \mbox{Scheme~II} \ . \end{array}\right.
\end{eqnarray}
In our numerical estimates, we will take $\theta = 30^\circ$, saturating the measured upper bound mentioned earlier.

In Table~\ref{tab:DSP} the upper part involves only light scalar mesons ($f_0$, $a_0$, $\sigma$, and $\kappa$), whereas the lower part involves the $a_0(1450)$ and $K_0^*(1430)$ mesons in the heavier nonet representation. This division is made because the amplitudes of the same topology in these two groups have no {\it a priori} relations. In each group we have 15 unknown parameters for the 8 topological amplitudes $T,C,E,A$ and $T',C',E',A'$ (eight complex amplitudes amount to 16 real parameters, less one overall irrelevant phase). For the neutral scalar mesons $\sigma,f_0$ and $a_0^0$, we cannot set $T'=C'=0$ even though their vector decay constants vanish. As will be discussed in Sec. V.A, $T'$ and $C'$ do receive nonfactorizable contributions through vertex and spectator-scattering corrections \cite{Cheng:2006,Cheng:2013}. Nevertheless, it is na{\"i}vely expected that, for example, $|T'|\ll |T|$ and $|C'|\ll |C|$ for the charged $a_0$. However, as we shall see in Sec. V.C, a realistic calculation yields $|C'|>|C|$ instead. At any rate, we have more theory parameters than observables (6 in the upper part and 5 in the lower part of the table), precluding a fit. Since the branching fractions of $f_0\to \pi\pi$ and $(f_0, a_0)\to K\overline K$ are unknown, many of the two-body branching fractions in Table~\ref{tab:DSP} cannot be extracted from the data of three-body decays. Nevertheless, the strong couplings such as $g_{f_0\to \pi\pi}, g_{f_0\to K\bar K}, g_{a_0\to K\bar K}$ and $g_{a_0\to \eta\pi}$ have been inferred from a fit to the data. There are 17 available $D\to SP\to P_1P_2P$ modes, but there are only 14 data points related to $D\to SP$ and we have 15 parameters to fit.
Moreover, since we need to introduce appropriate energy-dependent line shapes for the scalar mesons, it is not feasible to extract the topological amplitudes from three-body decays, as the decay rate cannot be factorized into the topological amplitude squared and the phase space factor. We will come back to this point later.

It is interesting to notice that the current data already imply the importance of the $W$-exchange and $W$-annihilation amplitudes. Consider the decays $D^0\to a_0^+\pi^-\to K^+\overline K^0\pi^-$ and $D^0\to a_0^-\pi^+\to K^-K^0\pi^+$ with the two-body decay amplitudes proportional to $(T'+E)$ and $(T+E')$, respectively (see Table~\ref{tab:DSP}). If the $W$-exchange contributions were negligible, the former mode, governed by the amplitude $T'$, would be expected to have a rate smaller than the latter (cf. Table \ref{tab:DataD0SP}). Experimentally, it is the other way around. This is an indication that $E$ and $E'$ play some role.

\section{Factorization Approach \label{sec:facapp}}
The diagrammatic approach has been applied quite successfully to hadronic decays of charmed mesons into $PP$ and $V\!P$ final states \cite{RosnerPP08,RosnerVP,RosnerPP09,Cheng:Ddecay2010,Cheng:2012a,Cheng:2012b,Li:2012,Qin,Cheng:2016,Cheng:2021}. When generalized to the decay modes involving a scalar meson in the final state, it appears that the current data are still insufficient for us to fully extract the information on all amplitudes. Therefore, we take the na{\"i}ve factorization formalism as a complementary approach to estimate the rates of these decay modes. In this framework, the $W$-exchange and $W$-annihilation contributions will be neglected.

\subsection{Factorizable and nonfactorizable amplitudes}
The factorizable amplitudes for the $D\to SP$ decays read
\begin{eqnarray} \label{eq:XDSP}
X^{(D S, P)} &=& \langle P(q)| (V-A)_\mu|0\rangle \langle S(p)| (V-A)^\mu|D(p_D)\rangle, \nonumber \\
X^{(D P, S)} &=& \langle S(q)| (V-A)_\mu|0\rangle \langle P(p)| (V-A)^\mu|D(p_D)\rangle,
\end{eqnarray}
and have the expressions
\begin{eqnarray} \label{eq:XSP}
X^{(DS, P)} = -f_P(m_D^2-m_S^2) F_0^{DS}(q^2)\,, \qquad X^{(D P, S)}= f_S (m_D^2-m_P^2) F_0^{DP}(q^2)\,,
\end{eqnarray}
where use of Eqs.~(\ref{eq:Sdecayc}) and (\ref{eq:DSm.e.}) has been made. Hence,
\begin{eqnarray} \label{eq:SP_T,C}
T=- a_1(SP)f_P(m_D^2-m_S^2) F_0^{DS}(q^2), &\qquad& C=-a_2(SP)f_P(m_D^2-m_S^2) F_0^{DS}(q^2), \nonumber \\
T'= a_1(PS)f_S (m_D^2-m_P^2) F_0^{DP}(q^2), &\qquad& C'=a_2(PS)f_S (m_D^2-m_P^2) F_0^{DP}(q^2).
\end{eqnarray}
The primed amplitudes $T'$ and $C'$ vanish for the neutral scalar mesons such as $\sigma/f_0(500)$, $f_0(980)$ and $a_0(980)^0$ as they cannot be produced through the $(V-A)$ current; that is, $f_S=0$. Nevertheless, beyond the factorization approximation, contributions proportional to the scalar decay constant $\bar f_S$ of the scalar meson defined in Eq. (\ref{eq:Sdecayc}) can be produced from vertex and hard spectator-scattering corrections. It has been shown in Refs. \cite{Cheng:2006,Cheng:2013} that the nonfactorizable amplitudes can be recast to
\begin{eqnarray} \label{eq:S0P_T,C}
T'= a_1(PS)\bar f_S (m_D^2-m_P^2) F_0^{DP}(q^2), &\qquad& C'=a_2(PS)\bar f_S (m_D^2-m_P^2) F_0^{DP}(q^2),
\end{eqnarray}
for $S=\sigma/f_0(500), f_0(980)$ and $a_0(980)^0$, etc., while the expressions of $T'$ and $C'$ given in Eq.~(\ref{eq:SP_T,C}) are valid for $S=a_0^\pm, \kappa/K^*_0(700)$ and $K_0^*(1430)$, etc.
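As a concrete illustration of Eq.~(\ref{eq:SP_T,C}), the magnitude of the external $W$-emission amplitude $T(f_0\pi^+)$ can be estimated in a few lines. The Python sketch below assumes the CLFQM form factor $F_0^{Df_0}(0)=0.51$ from Table~\ref{tab:FFDtoS}, the value of $a_1(f_0\pi)$ from Table~\ref{tab:aiSP} below, $f_\pi=130$ MeV, and neglects the mild $q^2$ dependence between $q^2=0$ and $q^2=m_\pi^2$; an overall factor $G_F/\sqrt{2}$ is included, as in the numerical amplitudes quoted in Sec. V.C.
\begin{verbatim}
import math

GF   = 1.166e-5               # Fermi constant, GeV^-2
a1   = complex(1.292, 0.080)  # a1(f0 pi), cf. table of flavor operators
f_pi = 0.130                  # pion decay constant, GeV
mD, mf0 = 1.870, 0.990        # masses, GeV
F0_Df0  = 0.51                # F0^{D f0}(0), CLFQM (taken ~ F0(m_pi^2))

# T = -a1 f_pi (mD^2 - mS^2) F0^{DS}(m_pi^2), with G_F/sqrt(2) included
T = -GF / math.sqrt(2) * a1 * f_pi * (mD**2 - mf0**2) * F0_Df0
print("|T(f0 pi+)| = %.2f x 10^-6 GeV" % (abs(T) * 1e6))  # ~ 1.8
\end{verbatim}
This reproduces, within rounding, the value $|T(f_0\pi^+)|\simeq 1.8\times 10^{-6}$ GeV quoted below.

\subsection{Flavor operators}

The flavor operators $a_i(M_1M_2)$ in Eqs.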
(\ref{eq:SP_T,C}) and (\ref{eq:S0P_T,C}) are basically the Wilson coefficients in conjunction with short-distance nonfactorizable corrections such as vertex corrections and hard spectator interactions. In general, they have the expressions \cite{BBNS,BN}
\footnote{Notice that $a_1$ and $a_2$ do not receive contributions from penguin contractions.}
\begin{eqnarray} \label{eq:ai}
a_1(M_1M_2) &=& \left(c_1+{c_2\over N_c}\right)N_1(M_2) + {c_{2}\over N_c}\,{C_F\alpha_s\over 4\pi}\Big[V_1(M_2)+{4\pi^2\over N_c}H_1(M_1M_2)\Big], \nonumber \\
a_2(M_1M_2) &=& \left(c_2+{c_1\over N_c}\right)N_2(M_2) + {c_{1}\over N_c}\,{C_F\alpha_s\over 4\pi}\Big[V_2(M_2)+{4\pi^2\over N_c}H_2(M_1M_2)\Big],
\end{eqnarray}
where $c_i$ are the Wilson coefficients, $C_F=(N_c^2-1)/(2N_c)$ with $N_c=3$, $M_2$ is the emitted meson and $M_1$ shares the same spectator quark with the $D$ meson. The quantities $V_i(M_2)$ account for vertex corrections, and $H_i(M_1M_2)$ for hard spectator interactions with a hard gluon exchange between the emitted meson and the spectator quark of the $D$ meson. The explicit expressions of $V_{1,2}(M)$ and $H_{1,2}(M_1M_2)$ in the QCD factorization approach are given in \cite{Cheng:2006}. The quantities $N_i(M_2)$, which are relevant to the factorizable amplitudes, read
\begin{eqnarray} \label{eq:Ni}
N_i(P) = 1, \qquad N_i(S) = \begin{cases} 0, \quad {\rm for}~S=\sigma, f_0, a_0^0, \\ 1, \quad {\rm else.} \end{cases}
\end{eqnarray}
Results for the flavor operators $a_i(M_1M_2)$ with $M_1M_2=SP$ and $PS$ are shown in Table \ref{tab:aiSP}.
\footnote{Studies of $B\to SP$ decays in QCDF were presented in Refs. \cite{Cheng:2006,Cheng:2013}. Here we generalize these works to the $D\to SP$ decays and obtain the flavor operators given in Table \ref{tab:aiSP}.}
\begin{table}[t]
\caption{Numerical values of the flavor operators $a_{1,2}(M_1M_2)$ for $M_1M_2=SP$ and $PS$ at the scale $\mu=\overline m_c(\overline m_c)=1.3$ GeV, where use of $c_1(\mu)=1.33$ and $c_2(\mu)=-0.62$ has been made.}
\label{tab:aiSP}
\begin{center}
\begin{tabular}{ l c c | l r r}
\hline \hline
$$ & ~~$f_0(500)\pi$~~ & ~~~$\pi f_0(500)$~~~ & ~~$$ & ~~~~$K_0^*(700)\pi$ ~~~~~ & ~~~$\pi K_0^*(700)$~~~~~~ \\
\hline
$a_1$ & ~~~$1.292+0.080i$~~~ & ~~$0.033-0.056i$~~ & ~~$a_1$~~ & $1.292+0.080i$ & $1.579-0.492i$ \\
$a_2$ & $-0.527-0.172i$ & $-0.070+0.121i$ & ~~$a_2$~~ & $-0.527-0.172i$ & $-1.147+0.930i$ \\
\hline\hline
$$ & ~~$f_0(980)\pi$~~ & ~~~$\pi f_0(980)$~~~ & ~~$$~~ & ~~$f_0(980) K$~~~~~~ & $K f_0(980)$~~~~~~ \\
\hline
$a_1$ & ~~~$1.292+0.080i$~~~ & ~~$0.033-0.056i$~~ & ~~$a_1$ & $1.295+0.075i$ & $0.033+0.075i$ \\
$a_2$ & $-0.527-0.172i$ & $-0.070+0.121i$ & ~~$a_2$ & $-0.533-0.162i$ & $-0.070+0.121i$ \\
\hline\hline
$$ & ~~$a_0(980)^0\pi$~~ & ~~~$\pi a_0(980)^0$~~~ & ~~$$~~ & ~~$a_0(980)^0 K$~~~~~ & $K a_0(980)^0$~~~~ \\
\hline
$a_1$ & ~~~$1.292+0.080i$~~~ & ~~$0.037-0.066i$~~ & ~~$a_1$ & $1.295+0.075i$ & $0.037-0.066i$ \\
$a_2$ & $-0.527-0.172i$ & $-0.080+0.141i$ & ~~$a_2$ & $-0.533-0.162i$ & $-0.080+0.141i$ \\
\hline\hline
$$ & ~~$a_0(980)^\pm\pi$~~ & ~~~$\pi a_0(980)^\pm$~~~ & ~~$$~~ & ~~$a_0(980)^\pm K$~~~~~ & $K a_0(980)^\pm$~~~ \\
\hline
$a_1$ & ~~~$1.292+0.080i$~~~ & ~~$\pm(-10.04+20.03i)$~~ & ~~$a_1$ & $1.295+0.075i$ & ~~~$\pm(-10.04+20.03i)$ \\
$a_2$ & $-0.527-0.172i$ & ~~$\pm(23.89-43.14i)$ & ~~$a_2$ & $-0.533-0.162i$ & ~~~$\pm(23.89-43.14i)$ \\
\hline\hline
$$ & ~~$a_0(1450)\pi$~~ & ~~~$\pi a_0(1450)$~~~ & ~~$$~~ & ~~$K_0^*(1430)\pi$~~~~~ & $\pi K_0^*(1430)$~~~ \\
\hline
$a_1$ & ~~~$1.292+0.080i$~~~
& ~~$0.033-0.056i$~~ & ~~$a_1$~~ & $1.292+0.080i$ & ~~$1.692-0.544i$ \\
$a_2$ & $-0.527-0.172i$ & $-0.071+0.108i$ & ~~$a_2$~~ & $-0.527-0.172i$ & $-1.390+1.171i$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}
We see from Eqs.~(\ref{eq:ai}) and (\ref{eq:Ni}) that the factorizable contributions to $a_1(PS)$ and $a_2(PS)$ vanish for $S=\sigma, f_0$ and $a_0^0$. Beyond the factorization approximation, nonfactorizable contributions proportional to the decay constant $\bar f_S$ can be produced from vertex and spectator-scattering corrections \cite{Cheng:2006,Cheng:2013}. Therefore, when the strong coupling $\alpha_s$ is turned off, the nonfactorizable contributions vanish accordingly. In short, the primed amplitudes $T'$ and $C'$ are factorizable for $S=a_0^\pm, \kappa, K^*_0$, namely $\langle S|J^\mu|0\rangle\langle P|J'_\mu|D\rangle$, whereas they are nonfactorizable for $S=\sigma, f_0, a_0^0$.

Upon an inspection of Table \ref{tab:aiSP}, we see that (i) the flavor operators $a_i(PS)$ and $a_i(SP)$ are very different as the former does not receive factorizable contributions (i.e. $N_i(S)=0$), and (ii) while $a_1(SP)$ and $a_2(SP)$ are similar for all light and heavy scalar mesons, namely $a_1(SP)\approx 1.29+0.08i$ and $a_2(SP)\approx -0.53-0.17i$, $a_1(PS)$ and $a_2(PS)$ vary from the neutral to the charged ones, as shown in Table \ref{tab:aiPS}. One may wonder why the flavor operators $a_{1,2}(\pi a_0^\pm)$ are much greater than $a_{1,2}(\pi a_0^0)$. As noticed in Eqs. (\ref{eq:SP_T,C}) and (\ref{eq:S0P_T,C}), the primed amplitudes are proportional to $a_{1,2}(\pi a_0^\pm)f_{a_0^\pm}$ for the charged $a_0^\pm$ and to $a_{1,2}(\pi a_0^0)\bar f_{a_0}$ for the neutral $a_0^0$. Hence, $|a_{1,2}(\pi a_0^\pm)/a_{1,2}(\pi a_0^0)|\approx|\bar f_{a_0}/f_{a_0^\pm}|\gg 1$. We see from Table \ref{tab:aiPS} that $a_{1,2}(PS)$ become larger when the decay constants become smaller.
\begin{table}[t]
\caption{Same as Table \ref{tab:aiSP} except for the flavor operators $a_{1,2}(PS)$ with $P=\pi$. For neutral scalar mesons $\sigma,f_0,a_0^0$, the vector decay constant $f_S$ is replaced by the scalar decay constant $\bar f_S$. }
\label{tab:aiPS}
\begin{center}
\begin{tabular}{ c c c c}
\hline \hline
$S$ & ~~$f_S$ (MeV)~~ & ~~~$a_1(PS)$~~~ & ~~~$a_2(PS)$~ \\
\hline
$\sigma,f_0,a_0^0$ & $350\sim 370$ & ~~$\sim 0.035-0.060i$~~ & ~~$\sim -0.075+0.130i$~~ \\
$\bar \kappa$~~ & $45.5$ & $1.58-0.49i$ & $-1.15+0.93i$ \\
$\bar K_0^*$ & 35.3 & $1.69-0.54i$ & $-1.39+1.17i$ \\
$a_0^-$ & 1.3 & $10-20i$ & $-24+43i$ \\
\hline \hline
\end{tabular}
\end{center}
\end{table}

\subsection{Implications}
Na{\"i}vely it is expected that $|T'(\pi^-a_0^+)|\ll |T(a_0^-\pi^+)|$ because $f_{\pi}\gg f_{a_0^+}$, and $|C'(\pi^+\bar \kappa^0)|< |C(\pi^+f_0)|$ due to the fact that $f_\pi> f_\kappa$. Although we are not able to extract the topological amplitudes of $D\to SP$ from the experimental data of three-body $D\to P_1P_2P_3$ decays, we can use the theoretical calculations to see their sizes and relative phases. From Eq. (\ref{eq:SP_T,C}) we have
\begin{eqnarray}
T(f_0\pi^+) &=& -a_1(f_0\pi)f_\pi(m_D^2-m_{f_0}^2)F_0^{Df_0}(m_\pi^2), \nonumber \\
C(f_0\pi^0) &=& -a_2(f_0\pi)f_\pi(m_D^2-m_{f_0}^2)F_0^{Df_0}(m_\pi^2), \nonumber \\
T'(\pi^- a_0^+) &=& a_1(\pi a_0^+)f_{a_0^+}(m_D^2-m_{\pi}^2)F_0^{D\pi}(m_{a_0}^2), \\
C'(\pi^0 f_0) &=& a_2(\pi f_0)\bar f_{f_0}(m_D^2-m_\pi^2)F_0^{D\pi}(m_{f_0}^2), \nonumber \\
C'(\pi^+\bar\kappa^0) &=& a_2(\pi \kappa)f_{\kappa}(m_D^2-m_\pi^2)F_0^{D\pi}(m_{\kappa}^2).
\nonumber \end{eqnarray}
Using the flavor operators given in Table \ref{tab:aiSP}, the form factors $F^{DS}$ listed in Table \ref{tab:FFDtoS} and $F^{DP}(q^2)$ evaluated in the covariant confining quark model \cite{Ivanov:2019nqd}, we find numerically (in units of $10^{-6}$ GeV)
\begin{eqnarray}
&& T(f_0\pi^+)=1.80\,e^{-i 186^\circ}, \quad~ C(f_0\pi^0)=0.77\, e^{-i 18^\circ}, \quad T'(\pi^-a_0^+)=0.55\, e^{i 117^\circ}, \nonumber \\
&& C'(\pi^0 f_0)=0.99\, e^{i 120^\circ}, \quad~~ C'(\pi^+\bar\kappa^0)=1.26\, e^{i 141^\circ}.
\end{eqnarray}
For heavier scalar mesons we find
\begin{eqnarray}
&& T(K_0^{*-}\pi^+)=0.70\,e^{-i 177^\circ}, \quad T'(\pi^-K_0^{*+})=1.29\, e^{-i 18^\circ}, \qquad C'(\pi^0\bar K_0^{*0})=1.32\, e^{i 140^\circ}, \\
&& T(a_0(1450)^0\pi^+)=0.93\,e^{-i 177^\circ}, \quad T'(\pi^-a_0(1450)^+)=0.59\, e^{i 121^\circ}, \quad C'(\pi^0a_0(1450)^{0})=1.21\, e^{i 123^\circ}. \nonumber
\end{eqnarray}
In the light scalar meson sector, we have $|T|>|T'|$ and $|C|<|C'|$ rather than $|T|\gg|T'|$ and $|C|>|C'|$. For scalar mesons in the higher nonet representation, we find $|T'|>|C'|>|T|$, with $|T|$ being suppressed as the mass term $(m_D^2-m_S^2)$ becomes smaller when $S$ becomes heavier.

\subsection{Flatt\'e line shape}
To describe three-body decays we need to introduce a line shape for the scalar resonance. Normally we use the relativistic Breit-Wigner line shape to describe the scalar resonance contributions to three-body decays $D\to SP\to P_1P_2P$:
\begin{eqnarray}
T^{\rm BW}(s)={1 \over s-m_R^2+i m_R \Gamma_R(s)},
\end{eqnarray}
with
\begin{eqnarray}
\Gamma_{R}(s)=\Gamma_{R}^0\left( {q\over q_0}\right) {m_{R}\over \sqrt{s}},
\end{eqnarray}
where $q=|\vec{p}_1|=|\vec{p}_2|$ is the c.m.~momentum in the rest frame of $R$, and $q_0$ is the value of $q$ at $s=m_R^2$. However, this parametrization is not suitable for describing the decay of $f_0(980)$ or $a_0(980)$ into $K\overline K$, since the thresholds $m(K^+)+m(K^-)=987.4$ MeV and $m(K^0)+m(\bar K^0)=995.2$ MeV lie very close to the resonance mass; the threshold effect has to be taken into account. Since $f_0(980)$ couples strongly to the $K\overline K$ channel as well as to the $\pi\pi$ channel, it can be described by a coupled-channel formula, the so-called Flatt\'e line shape \cite{Flatte:1976xu}
\begin{eqnarray} \label{eq:Flattef0}
T^{\rm Flatte}_{f_0}(s)={1\over s-m_{f_0}^2+i\left[g_{f_0\to\pi\pi}^2\rho_{\pi\pi}(s)+g^2_{f_0\to K\bar K}\rho_{K\bar K}(s)\right]},
\end{eqnarray}
with the phase space factor
\begin{eqnarray}
\rho_{ab} ={1\over 16\pi}\left(1-{(m_a+m_b)^2\over s}\right)^{1/2} \left(1-{(m_a-m_b)^2\over s}\right)^{1/2},
\end{eqnarray}
so that
\begin{eqnarray}
\rho_{K\!\bar K}(s) &=& \rho_{K^+K^-}(s)+\rho_{K^0\bar K^0}(s)={1\over 16\pi}\left( \sqrt{1-(4m_{K^\pm}^2/ s)}+ \sqrt{1-(4m_{K^0}^2/ s)}\right), \nonumber \\
\rho_{\pi\pi}(s) &=& \rho_{\pi^+\pi^-}(s)+{1\over 2}\rho_{\pi^0\pi^0}(s)={1\over 16\pi}\left( \sqrt{1-(4m_{\pi^\pm}^2/ s)}+{1\over 2} \sqrt{1-(4m_{\pi^0}^2/ s)}\right),
\end{eqnarray}
and $\rho\to i\sqrt{-\rho^2}$ below threshold, i.e. for $s<4m_K^2$ in the case of $\rho_{K\bar K}$. The dimensionful coupling constants in Eq. (\ref{eq:Flattef0}) are
\begin{eqnarray}
g_{f_0\to \pi\pi}\equiv g_{f_0\to \pi^+\pi^-}=\sqrt{2}g_{f_0\to \pi^0\pi^0}, \qquad g_{f_0\to K\bar K}\equiv g_{f_0\to K^+K^-}=g_{f_0\to K^0\bar K^0}.
\end{eqnarray}
Likewise, $a_0(980)$ couples strongly to $K\overline K$ and $\eta\pi$, and we use
\begin{eqnarray}
T^{\rm Flatte}_{a_0}(s)={1 \over s-m_{a_0}^2+i\left[g_{a_0\to\eta\pi}^2\rho_{\eta\pi}(s)+g^2_{a_0\to K\bar K}\rho_{K\bar K}(s)\right]},
\end{eqnarray}
with
\begin{eqnarray}
\rho_{\eta\pi}(s) &=& {1\over 16\pi}\left(1-{(m_\eta-m_\pi)^2\over s}\right)^{1/2} \left(1-{(m_\eta+m_\pi)^2\over s}\right)^{1/2}.
\end{eqnarray}

It is important to check whether $g_{f_0\to \pi\pi}$ and $g_{f_0,a_0\to K\bar K}$ can be interpreted as the strong couplings of $f_0$ to $\pi\pi$ and $K\overline K$, respectively. Using the formula
\begin{eqnarray}
\Gamma(f_0\to\pi^+\pi^-)={p_c\over 8\pi m_{f_0}^2}g_{f_0\to \pi^+\pi^-}^2,
\end{eqnarray}
with $p_c$ being the c.m. momentum of the pion in the rest frame of $f_0$, it is easily seen that the term $g_{f_0\to\pi\pi}^2\rho_{\pi\pi}(m_{f_0}^2)$ in Eq. (\ref{eq:Flattef0}) is identical to $m_{f_0}(\Gamma(f_0\to \pi^+\pi^-)+\Gamma(f_0\to \pi^0\pi^0))$. This assures us that $g_{f_0\to\pi\pi}$ is indeed the strong coupling appearing in the matrix element $\langle\pi^+\pi^-|f_0\rangle$.

The strong couplings $g_{f_0,a_0\to K\bar K}$, $g_{f_0\to \pi\pi}$ and $g_{a_0\to \eta\pi}$ have been extracted from fits to the experimental data. In this work we shall use
\begin{eqnarray} \label{eq:couplings}
&& g_{f_0\to K\bar K}=(3.54\pm0.05)\,{\rm GeV}, \qquad~~ g_{a_0\to K\bar K}=(3.77\pm0.42)\,{\rm GeV}, \nonumber \\
&& g_{f_0\to\pi\pi}=(1.5\pm0.1)\,{\rm GeV}, \qquad\qquad~ g_{a_0\to \eta\pi}=(2.54\pm0.16)\,{\rm GeV},
\end{eqnarray}
where the values of $g_{f_0\to K\bar K}$ and $g_{f_0\to \pi\pi}$ are taken from Ref. \cite{BESIII:D0KKKS}, dominated by the Dalitz plot analysis of $e^+e^-\to \pi^0\pi^0\gamma$ performed by KLOE \cite{KLOE:f0}. The couplings $g_{a_0\to K\bar K}$ and $g_{a_0\to \eta\pi}$ are taken from the analysis of the decay $D^0\to K_S^0K^+K^-$ by BESIII \cite{BESIII:D0KKKS}.
\footnote{From the amplitude analysis of the $\chi_{c1}\to \eta\pi^+\pi^-$ decay, BESIII obtained another set of couplings: $g_{a_0\to \eta\pi}=(4.14\pm0.02)\,{\rm GeV}$ and $g_{a_0\to K\bar K}=(3.91\pm0.02)\,{\rm GeV}$ \cite{BESIII:etapipi}. However, this set of couplings is not appealing for two reasons: (a) the large coupling constant $g_{a_0\to \eta\pi}$ would yield an excessively large partial width $\Gamma_{\eta\pi}=222$ MeV, recalling that the total width of $a_0(980)$ lies in the range of 50 to 100 MeV \cite{PDG}, and (b) it is commonly believed that $a_0(980)$ couples more strongly to $K\overline K$ than to $\eta\pi$, especially in the scenario in which $a_0(980)$ is a $K\overline K$ molecular state.}
Note that the result for the coupling $g_{f_0\to\pi\pi}$ is consistent with the value of $1.33^{+0.29}_{-0.26}$ GeV extracted from Belle's measurement of the partial width of $f_0(980)\to\pi^+\pi^-$~\cite{Belle:f0}. The partial widths can be inferred from the strong couplings listed in Eq.~(\ref{eq:couplings}) as
\begin{eqnarray}
\Gamma(f_0(980)\to \pi\pi)=(65.7\pm8.8)\,{\rm MeV}, \qquad \Gamma(a_0(980)\to \eta\pi)=(85.2\pm10.7)\,{\rm MeV},
\end{eqnarray}
though they are not directly measured.
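The Flatt\'e forms above are straightforward to implement numerically. The following Python sketch is a minimal implementation, assuming the central values of the couplings in Eq.~(\ref{eq:couplings}) and $m_{f_0}=990$ MeV; for brevity it drops the sub-threshold analytic continuation $\rho\to i\sqrt{-\rho^2}$, which shifts the real part of the denominator below the $K\overline K$ threshold.
\begin{verbatim}
import math

def rho(s, ma, mb):
    # two-body phase-space factor rho_ab(s); the continuation below
    # threshold (rho -> i sqrt(-rho^2)) is omitted in this sketch
    arg = (1 - (ma + mb)**2 / s) * (1 - (ma - mb)**2 / s)
    return math.sqrt(arg) / (16 * math.pi) if arg > 0 else 0.0

def flatte_f0(s, m0=0.990, g_pipi=1.5, g_KK=3.54):
    mpi_c, mpi_0, mK_c, mK_0 = 0.1396, 0.1350, 0.4937, 0.4976
    rho_pipi = rho(s, mpi_c, mpi_c) + 0.5 * rho(s, mpi_0, mpi_0)
    rho_KK   = rho(s, mK_c, mK_c) + rho(s, mK_0, mK_0)
    return 1.0 / complex(s - m0**2, g_pipi**2 * rho_pipi + g_KK**2 * rho_KK)

# cross-check: g^2 rho_pipi(m0^2) = m0 * Gamma(f0 -> pi pi)
s0 = 0.990**2
gam = 1.5**2 * (rho(s0, 0.1396, 0.1396) + 0.5 * rho(s0, 0.1350, 0.1350)) / 0.990
print("Gamma(f0 -> pi pi) = %.0f MeV" % (gam * 1e3))   # ~ 65 MeV
\end{verbatim}
The printed width, about 65 MeV, is consistent with the $\Gamma(f_0(980)\to\pi\pi)$ value inferred above, confirming that the couplings entering the Flatt\'e denominator are indeed the physical strong couplings.

\subsection{Line shape for $\sigma/f_0(500)$ \label{sec:line shape}}

As stressed in Ref. \cite{Pelaez:2015qba}, the scalar resonance $\sigma/f_0(500)$ is very broad and cannot be described by the usual Breit-Wigner line shape. Its partial wave amplitude does not resemble a Breit-Wigner shape with a clear peak and a simultaneous steep rise in the phase.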
The mass and width of the $\sigma$ resonance are identified from the associated pole position $\sqrt{s_\sigma}$ of the partial wave amplitude in the second Riemann sheet as $\sqrt{s_\sigma}=m_\sigma-i\Gamma_\sigma/2$~\cite{Pelaez:2015qba}. We shall follow the LHCb Collaboration~\cite{Aaij:3pi_2} to use a simple pole description
\begin{eqnarray} \label{eq:T sigma}
T_\sigma(s)={1\over s-s_\sigma}={1\over s-m_\sigma^2+\Gamma_\sigma^2(s)/4+im_\sigma\Gamma_\sigma(s)},
\end{eqnarray}
with $\sqrt{s_\sigma}=m_\sigma-i\Gamma_\sigma/2$ and
\begin{eqnarray}
\Gamma_{\sigma}(s)=\Gamma_{\sigma}^0\left( {q\over q_0}\right) {m_{\sigma}\over \sqrt{s}}.
\end{eqnarray}
Using the isobar description of the $\pi^+\pi^-$ $S$-wave to fit the $B^+\to\pi^+\pi^-\pi^+$ decay data, the LHCb Collaboration found~\cite{Aaij:3pi_2}
\begin{eqnarray} \label{eq:sigmaMass}
\sqrt{s_\sigma}=(563\pm 10)-i(350\pm13)\,{\rm MeV},
\end{eqnarray}
consistent with the PDG range $\sqrt{s_\sigma}=(400-550)-i(200-350)\,{\rm MeV}$~\cite{PDG}.

In principle, we could also use a similar pole shape
\begin{eqnarray} \label{eq:T kappa}
T_\kappa(s)={1\over s-s_\kappa}={1\over s-m_\kappa^2+\Gamma_\kappa^2(s)/4+im_\kappa\Gamma_\kappa(s)}
\end{eqnarray}
to describe the broad resonance $\kappa/K_0^*(700)$ and follow \cite{Pelaez:2020uiw} to use the latest result
\begin{eqnarray} \label{eq:kappaMass}
\sqrt{s_\kappa}=(648\pm 7)-i(280\pm16)\,{\rm MeV},
\end{eqnarray}
determined from a dispersive data analysis. However, we find that this line shape, together with the above pole mass and width, would yield an unreasonably large finite-width correction to $D^+\to\bar \kappa^0\pi^+$ (see Sec. VI.B below). Hence, we will use the usual Breit-Wigner line shape for $\kappa/K_0^*(700)$ and take the Breit-Wigner mass and width~\cite{PDG}
\begin{eqnarray}
m_{K_0^*(700)}^{\rm BW}=845\pm17\,{\rm MeV}, \qquad \Gamma_{K_0^*(700)}^{\rm BW}=468\pm30\,{\rm MeV}.
\end{eqnarray}

\subsection{Three-body decays}
We take $D^+\to \sigma\pi^+\to \pi^+\pi^-\pi^+$ as an example to illustrate the calculation of the three-body rate. The two-body decay amplitude for $D^+\to \sigma(m_{12})\pi^+$, where $m_{12}$ is the invariant mass of the $\sigma$ and $s\equiv m_{12}^2=(p_1+p_2)^2$, is given by
\begin{align}
\begin{split}
A(D^+\to \sigma(m_{12})\pi^+) =& {G_F\over\sqrt{2}}V_{cd}^*V_{ud}\Big[ -a_1(\sigma\pi)f_\pi(m_D^2-s)F_0^{D\sigma}(m_\pi^2) \\
&~~~ +a_2(\pi\sigma)\bar f_\sigma(m_D^2-m_\pi^2)F_0^{D\pi}(s)\Big].
\end{split}
\end{align}
Denoting ${\cal A}_\sigma\equiv A(D^+\to\sigma\pi^+\to \pi^+(p_1)\pi^-(p_2)\pi^+(p_3))$, we have
\begin{eqnarray}
{\cal A}_\sigma= g^{\sigma\to \pi^+\pi^-} F(s_{12},m_\sigma)\,T_\sigma(s_{12})A(D^+\to \sigma(s_{12}) \pi^+)+ (s_{12}\leftrightarrow s_{23}),
\end{eqnarray}
where the $\sigma$ line shape $T_\sigma$ is given by Eq. (\ref{eq:T sigma}). When the $\sigma$ is off the mass shell, especially when $s_{12}$ approaches the upper bound $(m_D-m_\pi)^2$, it is necessary to account for the off-shell effect. For this purpose, we shall follow~\cite{Cheng:FSI} and introduce a form factor $F(s,m_R)$ parametrized as
\begin{eqnarray} \label{eq:FF for coupling}
F(s,m_R)=\left( {\Lambda^2+m_R^2 \over \Lambda^2+s}\right)^n,
\end{eqnarray}
with the cutoff $\Lambda$ chosen not far from the resonance mass,
\begin{eqnarray}
\Lambda=m_R+\beta\Lambda_{\rm QCD},
\end{eqnarray}
where the parameter $\beta$ is expected to be of order unity. We shall use $n=1$, $\Lambda_{\rm QCD}=250$ MeV and $\beta=1.0\pm0.2$ in subsequent calculations.
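The pole line shape and the off-shell form factor can be prototyped in the same way as the Flatt\'e form. The snippet below (continuing the sketch above, so \texttt{import math} is already in scope) encodes Eq.~(\ref{eq:T sigma}) with the energy-dependent width and the damping factor of Eq.~(\ref{eq:FF for coupling}); the pole parameters are the central values of Eq.~(\ref{eq:sigmaMass}), and all other inputs are the ones just quoted.
\begin{verbatim}
import math

def gamma_s(s, m, gamma0, m1, m2):
    # energy-dependent width: Gamma(s) = Gamma0 (q/q0) m/sqrt(s)
    def q(x):  # daughter momentum in the resonance rest frame
        lam = max((x - (m1 + m2)**2) * (x - (m1 - m2)**2), 0.0)
        return math.sqrt(lam) / (2 * math.sqrt(x))
    return gamma0 * (q(s) / q(m**2)) * m / math.sqrt(s)

def T_sigma(s, m=0.563, gamma0=0.700, mpi=0.1396):
    # simple pole description of the sigma
    g = gamma_s(s, m, gamma0, mpi, mpi)
    return 1.0 / complex(s - m**2 + g**2 / 4.0, m * g)

def F_offshell(s, mR, beta=1.0, lam_qcd=0.250, n=1):
    # off-shell damping with Lambda = mR + beta * Lambda_QCD
    lam = mR + beta * lam_qcd
    return ((lam**2 + mR**2) / (lam**2 + s))**n
\end{verbatim}
These two functions are the only resonance-specific ingredients entering the three-body rate assembled next.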
The decay rate then reads
\begin{align}
\begin{split}
& \Gamma(D^+\to \sigma\pi^+\to \pi^+\pi^-\pi^+) \\
&= {1\over 2}\,{1\over(2\pi)^3 32 m_D^3}\int ds_{12}\,ds_{23} \Bigg\{ {|g^{\sigma\to \pi^+\pi^-}|^2 F(s_{12},m_\sigma)^2\over (s_{12}-m^2_{\sigma}+\Gamma^2_\sigma(s_{12})/4)^2+m_{\sigma}^2\Gamma_{\sigma}^2(s_{12})} |A(D^+\to \sigma(m_{12})\pi^+)|^2 \\
&\qquad\qquad +(s_{12}\leftrightarrow s_{23})+{\rm interference} \Bigg\},
\end{split}
\end{align}
where the factor of ${1\over 2}$ accounts for the two identical $\pi^+$'s in the final state. The coupling constant $g^{\sigma\to \pi^+\pi^-}$ is determined by the relation
\begin{eqnarray}
\Gamma_{\sigma\to \pi^+\pi^-}={p_c\over 8\pi m_\sigma^2}g^2_{\sigma\to\pi^+\pi^-}.
\end{eqnarray}
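Numerically, the double integral can be evaluated with any quadrature over the Dalitz region; the sketch below (reusing \texttt{T\_sigma} and \texttt{F\_offshell} from above) uses a simple midpoint rule. The two-body amplitude and the coupling $g^{\sigma\to\pi^+\pi^-}$ are placeholders, to be supplied from the factorization expression and the width relation above; the interference between the $s_{12}$ and $s_{23}$ terms is included automatically by squaring the symmetrized amplitude.
\begin{verbatim}
mD, mpi = 1.870, 0.1396
g_sigma = 3.9   # GeV; illustrative, from Gamma(sigma -> pi+ pi-) above

def amp_two_body(s):
    # placeholder for A(D+ -> sigma(s) pi+) from factorization
    return 1.0e-6

def s23_limits(s12):
    # Dalitz-plot boundaries of s23 at fixed s12 (all daughters pions)
    E2 = math.sqrt(s12) / 2.0
    E3 = (mD**2 - s12 - mpi**2) / (2.0 * math.sqrt(s12))
    p2 = math.sqrt(max(E2**2 - mpi**2, 0.0))
    p3 = math.sqrt(max(E3**2 - mpi**2, 0.0))
    return (E2 + E3)**2 - (p2 + p3)**2, (E2 + E3)**2 - (p2 - p3)**2

def rate(n=300):
    total = 0.0
    lo12, hi12 = (2 * mpi)**2, (mD - mpi)**2
    ds = (hi12 - lo12) / n
    for i in range(n):
        s12 = lo12 + (i + 0.5) * ds
        lo23, hi23 = s23_limits(s12)
        dt = (hi23 - lo23) / n
        for j in range(n):
            s23 = lo23 + (j + 0.5) * dt
            # symmetrized amplitude: sigma in (12) plus sigma in (23)
            A = (g_sigma * F_offshell(s12, 0.563) * T_sigma(s12) * amp_two_body(s12)
                 + g_sigma * F_offshell(s23, 0.563) * T_sigma(s23) * amp_two_body(s23))
            total += abs(A)**2 * ds * dt
    return 0.5 * total / ((2 * math.pi)**3 * 32 * mD**3)
\end{verbatim}
\begin{table}[!]
\caption{Branching fractions for various $D\to SP$ decays calculated in schemes~I and II. The upper part involves only light scalar mesons ($f_0$, $a_0$, $\sigma$, and $\kappa$), whereas the lower part involves the $a_0(1450)$ and $K_0^*(1430)$ mesons in the heavier nonet representation. The theoretical calculations are done in the factorization approach with both $W$-exchange and $W$-annihilation amplitudes being neglected. In scheme~I, $K_0^*$ and $a_0(1450)$ are excited $q\bar q$ states; hence, their predictions are not presented here. The $f_0-\sigma$ mixing angle $\theta$ is taken to be $30^\circ$ for scheme~I. }
\label{tab:DSPfact}
\medskip
\footnotesize{
\begin{ruledtabular}
\begin{tabular}{l c c l}
Decay & Scheme I & Scheme II & ${\cal B}_{\rm NWA}$ \\
\hline
$D^+\to \sigma\pi^+$ & $2.6\times 10^{-3}$ & $4.6\times 10^{-3}$ & $(2.1\pm0.2)\times 10^{-3}$ \\
\qquad $\to \bar\kappa^0\pi^+$ & $6.1\%$ & $6.1\%$ & $(3.6^{+3.0}_{-2.4})\%$ \\
\qquad $\to \bar\kappa^0K^+$ & $1.1\times 10^{-3}$ & $1.1\times 10^{-3}$ & $(1.0^{+0.5}_{-0.3})\times 10^{-3}$ \\
$D^0\to a_0^0\overline K^0$ & $4.2\times 10^{-3}$ & $4.2\times 10^{-3}$ & $(2.83\pm0.66)\%$ \\
\quad~ $\to \sigma\pi^0$ & $3.2\times 10^{-5}$ & $7.8\times 10^{-5}$ & $(1.8\pm0.3)\times 10^{-4}$ \\
$D_s^+\to a_0^0\pi^+$ & 0 & 0 & $(0.86\pm0.23)\%$ \\
\hline
$D^+\to \overline K_0^{*0}\pi^+$ & & $2.19\%$ & $(1.98 \pm 0.22)\%$ \\
$D^0\to K_0^{*-}\pi^+$ & & $2.1\times 10^{-3}$ & $(8.8\pm1.5)\times 10^{-3}$ \\
\quad~ $\to \overline K_0^{*0}\pi^0$ & & $2.1\times 10^{-3}$ & $(9.5^{+8.1}_{-2.8})\times 10^{-3}$ \\
\quad~ $\to K_0^{*+}\pi^-$ & & $1.1\times 10^{-5}$ & $<4.5\times 10^{-5}$ \\
$D_s^+ \to K_0^{*0}\pi^+$ & & $2.9\times 10^{-4}$ & $(8.1\pm5.7)\times 10^{-4}$ \\
\quad~ $\to \overline K_0^{*0}K^+$ & & $3.1\times 10^{-3}$ & $(2.8\pm0.5)\times 10^{-3}$ \\
\end{tabular}
\end{ruledtabular}}
\end{table}

\section{Results and Discussion \label{sec:results}}
In Tables \ref{tab:DSPfact} and \ref{tab:DtoSP:theory} we have calculated two-body $D\to SP$ and three-body $D\to SP\to P_1P_2P$ decays, respectively, in schemes I and II using the factorization approach with $W$-exchange and $W$-annihilation being neglected. We see from Table \ref{tab:DSP} that the decay modes $D^+\to a_0^+\overline K^0$, $\bar\kappa^0\pi^+$ and $\overline K_0^{*0}\pi^+$ are free of $W$-annihilation contributions, and they are ideal for testing the validity of the factorization approach. From Table \ref{tab:DtoSP:theory} it is evident that the calculated rates of $D^+\to\bar\kappa^0\pi^+\to K_S\pi^0\pi^+$ and $D^+\to \overline K^{*0}_0\pi^+\to (K\pi)^0\pi^+$ in scheme II are in agreement with experiment. These modes are governed by the topologies $T+C'$, which interfere {\it constructively}.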
This is in contrast to the Cabibbo-favored (CF) $D^+\to \overline K^0\pi^+$ decay in the $P\!P$ sector, where $T$ and $C$ contribute {\it destructively}. For $(D^+, D^0, D_s^+)\to f_0 P; f_0\to P_1P_2$, the predictions in scheme II are improved over those in scheme I, and the remaining discrepancies presumably arise from the neglected $W$-exchange or $W$-annihilation amplitudes. This implies that the tetraquark picture for light scalars works better than the quark-antiquark scenario.

Upon an inspection of Table \ref{tab:DSPfact}, the reader may wonder (i) why the branching fractions for $D\to (f_0,\sigma)P$ decays in scheme~II are always larger than those in scheme~I except for $D^0\to f_0\pi^0$, and (ii) why the predicted branching fractions of $D^+\to \sigma\pi^+$ and $D^+\to \bar \kappa^0\pi^+$ are larger than the experimental data, while the corresponding three-body decays agree with the measurements. For (i), we see from Table~\ref{tab:DSP} and also Eq.~(\ref{eq:AmpDtof0pi}) that the $W$-emission decay amplitude involving $\sigma$ is suppressed by a factor of $\cos\theta/\sqrt{2}$ in scheme~I relative to that in scheme~II, while the $W$-emission decay amplitude involving $f_0(980)$ is suppressed by a factor of $\sin\theta$. As a consequence of our choice $\theta = 30^\circ$, the branching fractions for $D\to (f_0,\sigma)P$ in scheme~II are always larger than those in scheme~I except for $D^0\to f_0\pi^0$. For (ii), this has to do with the finite-width effects of $\sigma$ and $\kappa$, as they are both very broad. We shall see in Sec.~\ref{sec:finitewidth} that the extraction of ${\cal B}(D\to SP)$ from the data is affected by the broad widths of both $\sigma$ and $\kappa$.
\begin{table}[!]
\caption{Branching fractions of various $D\to SP\to P_1P_2P$ decays calculated in schemes~I and II. For simplicity and convenience, we have dropped the mass identification for $f_0(980)$, $a_0(980)$ and $K^*_0(1430)$. Data are taken from Tables \ref{tab:DataSP} and \ref{tab:DataD0SP}. In scheme~I, $K_0^*$ and $a_0(1450)$ are excited $q\bar q$ states; hence, their predictions are not presented here. The $f_0-\sigma$ mixing angle $\theta$ is taken to be $30^\circ$ for scheme~I.
} \label{tab:DtoSP:theory}
\vskip 0.3cm
\footnotesize{
\begin{ruledtabular}
\begin{tabular}{l c c c }
$D\to SP; S\to P_1P_2$ & Scheme I & Scheme II & Experiment \\
\hline
$D^+\to f_0\pi^+; f_0\to\pi^+\pi^-$ & $7.6\times 10^{-5}$ & $2.2\times 10^{-4}$ & $(1.56\pm 0.33)\times 10^{-4}$ \\
$D^+\to f_0K^+; f_0\to \pi^+\pi^-$ & $3.6\times 10^{-7}$ & $1.2\times 10^{-5}$ & $(4.4\pm 2.6)\times 10^{-5}$ \\
$D^+\to f_0K^+; f_0\to K^+K^-$ & $2.5\times 10^{-7}$ & $8.4\times 10^{-6}$ & $(1.23\pm 0.02)\times 10^{-5}$ \\
$D^+\to\sigma\pi^+; \sigma\to\pi^+\pi^-$ & $4.9\times 10^{-4}$ & $1.7\times 10^{-3}$ & $(1.38\pm0.12)\times 10^{-3}$ \\
$D^+\to \bar\kappa^0 \pi^+; \bar \kappa^0\to K_S\pi^0$ & $5.4\times 10^{-3}$ & $5.4\times 10^{-3}$ & $(6^{+5}_{-4})\times 10^{-3}$ \\
$D^+\to \bar\kappa^0 K^+; \bar \kappa^0\to K^-\pi^+$ & $3.7\times 10^{-4}$ & $3.7\times 10^{-4}$ & $(6.8^{+3.5}_{-2.1})\times 10^{-4}$ \\
$D^0\to f_0\pi^0; f_0\to \pi^+\pi^-$ & $1.6\times 10^{-5}$ & $1.4\times 10^{-5}$ & $(3.7\pm0.9)\times 10^{-5}$ \\
$D^0\to f_0\pi^0; f_0\to K^+K^-$ & $1.1\times 10^{-5}$ & $8.8\times 10^{-6}$ & $(3.6\pm0.6)\times 10^{-4}$ \\
$D^0\to f_0\overline K^0; f_0\to \pi^+\pi^-$ & $9.0\times 10^{-6}$ & $3.0\times 10^{-4}$ & $(2.40^{+0.80}_{-0.46})\times 10^{-3}$ \\
$D^0\to f_0\overline K^0; f_0\to K^+K^-$ & $4.3\times 10^{-6}$ & $1.4\times 10^{-4}$ & $<1.8\times 10^{-4}$ \\
$D^0\to a_0^+\pi^-; a_0^+\to K^+\overline K^0$ & $1.3\times 10^{-5}$ & $1.3\times 10^{-5}$ & $(1.2\pm 0.8)\times 10^{-3}$ \\
$D^0\to a_0^-\pi^+; a_0^-\to K^-K^0$ & $2.9\times 10^{-4}$ & $2.9\times 10^{-4}$ & $(2.6\pm 2.8)\times 10^{-4}$ \\
$D^0\to a_0^+K^-; a_0^+\to K^+\overline K^0$ & $2.2\times 10^{-4}$ & $2.2\times 10^{-4}$ & $(1.47\pm 0.33)\times 10^{-3}$ \\
$D^0\to a_0^0\overline K^0; a_0^0\to K^+K^-$ & $3.4\times 10^{-4}$ & $3.4\times 10^{-4}$ & $(6.18\pm0.73)\times 10^{-3}$ \\
$D^0\to a_0^0\overline K^0; a_0^0\to \eta\pi^0$ & $1.1\times 10^{-3}$ & $1.1\times 10^{-3}$ & $(2.40\pm0.56)\%$ \\
$D^0\to a_0^-K^+; a_0^-\to K^-K^0$ & $1.7\times 10^{-5}$ & $1.7\times 10^{-5}$ & $<2.2\times 10^{-4}$ \\
$D^0\to \sigma\pi^0; \sigma\to \pi^+\pi^-$ & $2.2\times 10^{-5}$ & $2.0\times 10^{-4}$ & $(1.22\pm0.22)\times 10^{-4}$ \\
$D_s^+\to f_0\pi^+; f_0\to K^+K^-$ & $2.5\times 10^{-3}$ & $5.1\times 10^{-3}$ & $(1.14\pm 0.31)\%$ \\
$D_s^+\to a_0^{+,0}\pi^{0,+}; a_0^{+,0}\to \eta\pi^{+,0}$ & 0 & 0 & $(1.46\pm0.27)\%$ \\
\hline
$D^+\to a_0(1450)^0\pi^+; a_0^0\to K^+K^-$ & $$ & $1.7\times 10^{-5}$ & $(4.5^{+7.0}_{-1.8})\times 10^{-4}$ \\
$D^+\to\overline K_0^{*0}\pi^+; \overline K_0^{*0}\to K^-\pi^+$ & & 1.38\% & $(1.25\pm 0.06)\%$ \\
$D^+\to\overline K_0^{*0}\pi^+; \overline K_0^{*0}\to K_S\pi^0$ & $$ & $6.0\times 10^{-3}$ & $(5.4\pm 1.8)\times 10^{-3}$ \\
$D^+\to\overline K_0^{*0}K^+;\overline K_0^{*0}\to K^-\pi^+$ & $$ & $7.6\times 10^{-5}$ & $(1.82\pm 0.35)\times 10^{-3}$ \\
$D^0\to a_0(1450)^-\pi^+; a_0^-\to K^-K^0$ & $$ & $6.1\times 10^{-6}$ & $(5.0\pm 4.0)\times 10^{-5}$ \\
$D^0\to a_0(1450)^+\pi^-; a_0^+\to K^+\overline K^0$ & $$ & $1.8\times 10^{-7}$ & $(6.4\pm 5.0)\times 10^{-5}$ \\
$D^0\to a_0(1450)^-K^+; a_0^-\to K^-K_S$ & & & $< 0.6\times 10^{-3}$ \\
$D^0\to K_0^{*-}\pi^+; K_0^{*-}\to \overline K^0\pi^-$ & $$ & $8.3\times 10^{-4}$ & $(5.34^{+0.80}_{-0.66})\times 10^{-3}$ \\
$D^0\to K_0^{*-}\pi^+; K_0^{*-}\to K^-\pi^0$ & $$ & $4.2\times 10^{-4}$ & $(4.8\pm 2.2)\times 10^{-3}$ \\
$D^0\to \overline K_0^{*0}\pi^0; \overline K_0^{*0}\to K^-\pi^+$ & $$ & $9.6\times 10^{-4}$ & $(5.9^{+5.0}_{-1.6})\times 10^{-3}$ \\
$D^0\to K_0^{*+}\pi^-;
K_0^{*+}\to K^0\pi^+$ & $$ & $5.4\times 10^{-6}$ & $<2.8\times 10^{-5}$ \\ $D_s^+\to K_0^{*0}\pi^+; K_0^{*0}\to K^+\pi^-$ & $$ & $1.3\times 10^{-4}$ & $(5.0\pm3.5)\times 10^{-4}$ \\ $D_s^+\to \overline K_0^{*0}K^+; \overline K_0^{*0}\to K^-\pi^+$ & $$ & $2.0\times 10^{-3}$ & $(1.7\pm0.3)\times 10^{-3}$ \\ \end{tabular} \end{ruledtabular} } \end{table} \subsection{$W$-annihilation amplitude \label{sec:annihilation}} In the factorization calculations presented in Tables \ref{tab:DSPfact} and \ref{tab:DtoSP:theory}, we have neglected both $W$-exchange and $W$-annihilation amplitudes. The $D_s^+\to a_0^+\pi^0+a_0^0\pi^+$ mode recently observed by BESIII \cite{BESIII:Dstoa0pi} proceeds only through the $W$-annihilation amplitudes. However, its branching fraction at a percent level is much larger than the other two $W$-annihilation channels $D_s^+\to \omega\pi^+$ and $\rho^0\pi^+$ whose branching fractions are $(1.92\pm0.30)\times 10^{-3}$ and $(1.9\pm1.2)\times 10^{-4}$, respectively \cite{PDG}. This implies that $|A(SP)| > |A(V\!P)|$. In other words, the $W$-annihilation amplitude plays a more significant role in the $SP$ sector than in the $V\!P$ one. \begin{figure}[t] \begin{center} \vspace{10pt} \includegraphics[width=100mm]{Dsa0pi_FSI.eps} \caption{Long-distance contributions to the $W$-annihilation amplitude of $D_s^+\to a_0^0\pi^+$ through final-state rescattering of $\rho\eta^{(')}\to a_0\pi$. } \label{fig:Dsa0pi_FSI} \end{center} \end{figure} \begin{figure}[t] \begin{center} \centering \subfigure[]{ \includegraphics[width=6.3cm]{Dsa0pi_pole.eps} } \hspace{0.5cm} \subfigure[]{ \includegraphics[width=6.7cm]{Dsa0pi.eps} } \vspace{0.0cm} \caption{Manifestation of Fig. \ref{fig:Dsa0pi_FSI} at the hadron level: (a) resonant contribution from the nearby resonance $\pi(1800)$ and (b) the triangle rescattering diagram. } \label{fig:Dsa0pi} \end{center} \end{figure} Consider the decay amplitude of $D_s^+\to a_0^0\pi^+$ and the $W$-annihilation contribution to $D_s^+\to f_0\pi^+$ (in scheme II) \begin{eqnarray} {\cal A}(D_s^+\to a_0^0\pi^+)={1\over\sqrt{2}}V_{cs}^*V_{ud}(-A+A'), \qquad {\cal A}(D_s^+\to f_0\pi^+)_{\rm ann}={1\over\sqrt{2}}V_{cs}^*V_{ud}(A+A'). \end{eqnarray} Following the $G$-parity argument given in Ref. \cite{Cheng:Ddecay2010}, it is obvious that the direct $W$-annihilation process through $c\bar s\to W\to u\bar d$ is allowed in $D_s^+\to f_0\pi^+$ decay but not in $D_s^+\to a_0^0\pi^+$ decay as $G(u\bar d)=-$, $G(a_0\pi)=+$ and $G(f_0\pi)=-$. This means that short-distance $W$-annihilation contributions respect the relation $A'=A$, contrary to the na{\"i}ve expectation. Hence, one needs large long-distance $W$-annihilation which yields $A'=-A$. Since $D_s^+\to\rho^+\eta$ has the largest branching fraction of $(8.9\pm0.8)\%$ among the CF $D_s^+\to VP$ decays \cite{PDG} , it is conceivable that long-distance contribution from the weak decays $D_s^+\to \rho^+\eta$ followed by the resonantlike final-state rescattering of $\rho^+\eta\to a_0^0\pi^+$ (see Fig. \ref{fig:Dsa0pi_FSI}), which has the same topology as $W$-annihilation, may explain the large $W$-annihilation rate. \footnote{The hadronic weak decays $D_s^+\to \rho^+\eta', \overline K^{*0}K^+$ and $\overline K^0K^{*+}$ followed by final-state rescattering will also contribute to $D_s^+\to a_0^0\pi^+$.} It is customary to evaluate the final-state rescattering contribution, Fig. \ref{fig:Dsa0pi_FSI}, at the hadron level manifested in Fig. \ref{fig:Dsa0pi}. One of the diagrams, namely, the triangle graph in Fig. 
\ref{fig:Dsa0pi}(b), has been evaluated recently in \cite{Hsiao:a0,Ling:a0}. It yields a major contribution to $D_s^+\to a_0^0\pi^+$ owing to the large coupling constants for $\rho^+\to\pi^+\pi^0$ and $a_0^0\to \pi^0\eta$. The graph in Fig. \ref{fig:Dsa0pi}(a) shows the resonant final-state interactions manifested by the nearby resonance $\pi(1800)$, whose strong decay to $a_0\pi$ has been seen experimentally \cite{PDG}. However, we are not able to make a quantitative statement owing to the lack of information on its partial width.

Assuming $A'\approx -A$, the annihilation amplitude extracted from the data of $D_s^+\to a_0^+\pi^0+a_0^0\pi^+$ is (in units of $10^{-6}$ GeV)
\begin{eqnarray}
|A|=0.91\pm0.12 \,.
\end{eqnarray}
Hence, the annihilation amplitude is very sizable in the $SP$ sector, $|A/T|_{SP}\sim 1/2$, contrary to its suppression $|A/T|_{PP}\sim 0.18$ in the $P\!P$ sector \cite{Cheng:2019ggx} and $|A_V/T_P|_{VP}\sim 0.07$ in the $VP$ sector \cite{Cheng:2021yrn}.

\subsection{Finite Width Effects \label{sec:finitewidth}}
The finite-width effect is accounted for by the quantity $\eta_R$ defined by \cite{Cheng:2020mna,Cheng:2020iwk}
\begin{eqnarray} \label{eq:eta}
\eta_{_R}\equiv \frac{\Gamma(D\to RP_3\to P_1P_2P_3)_{\Gamma_R\to 0}}{\Gamma(D\to RP_3\to P_1P_2P_3)}=\frac{\Gamma(D\to RP_3){\cal B}(R\to P_1P_2)}{\Gamma(D\to RP_3\to P_1P_2P_3)}=1+\delta ~,
\end{eqnarray}
so that the deviation of $\eta_{_R}$ from unity measures the degree of departure from the narrow width approximation (NWA) when the resonance width is finite. It is na{\"i}vely expected that the correction $\delta$ will be of order $\Gamma_R/m_R$. It is calculable theoretically, but depends on the line shape of the resonance and on the approach used to describe the weak hadronic decays, such as QCD factorization or perturbative QCD. Using the branching fractions of two-body and three-body $D$ decays calculated in Tables \ref{tab:DSPfact} and \ref{tab:DtoSP:theory}, respectively, in scheme II, the resultant $\eta_R$ parameters for the scalar resonances $\sigma, \kappa$ and $K_0^*$ produced in three-body $D$ decays are summarized in Table~\ref{tab:eta}. We only consider the $D^+$ decays, as the three-body modes listed in Table~\ref{tab:eta} are not contaminated by the $W$-annihilation amplitude and hence the calculated finite-width effects are more trustworthy. We have also checked explicitly that $\eta_R\to 1$ in the narrow width limit, as it should. The $\eta_R$ parameters for various resonances produced in three-body $B$ decays have been evaluated in \cite{Cheng:2020mna,Cheng:2020iwk}. Our results for the $\eta_R$'s in Table~\ref{tab:eta} have features similar to the values $\eta_{\sigma/f_0(500)}=2.15\pm0.05$ and $\eta_{K_0^*(1430)}=0.83\pm0.04$ obtained in $B$ decays.
\begin{table}[t]
\caption{A summary of the $\eta_R$ parameter for scalar resonances produced in the three-body $D$ decays. The mass and width of $\sigma/f_0(500)$ are taken from Eq. (\ref{eq:sigmaMass}).
} \vskip 0.15cm
\label{tab:eta}
\footnotesize{
\begin{ruledtabular}
\begin{tabular}{ l l c c c l }
Resonance~~~ & ~$D\to Rh_3\to h_1h_2h_3$ ~~~ & ~$\Gamma_R$ (MeV)~\cite{PDG}~~ & ~$m_R$ (MeV)~\cite{PDG} & $\Gamma_R/m_R$ & ~~~$\eta_R$ \\
\hline
$\sigma/f_0(500)$ & $D^+\to \sigma\pi^+\to \pi^+\pi^-\pi^+$ & ~$700\pm26$~~ & ~$563\pm10$~~ & $1.243\pm0.051$ & ~~1.850 \\
$\kappa/K_0^*(700)$ & $D^+\to \bar\kappa^0\pi^+\to K_S^0\pi^0\pi^+$ & ~$468\pm30$~~ & $845\pm17$ & $0.554\pm0.037$ & ~~1.873 \\
$K_0^*(1430)$ & $D^+\to \overline K_0^{*0}\pi^+\to K^-\pi^+\pi^+$ & ~$270\pm80$~~ & ~$1425\pm50$~~ & $0.19\pm0.06$ & ~~0.985 \\
\end{tabular}
\end{ruledtabular} }
\end{table}
Note that {\it a priori} we do not know if the deviation of $\eta_R$ from unity is positive or negative. In general, it depends on the line shape, mass and width of the resonance. As alluded to above, the mass and width have a more dominant effect than the line shape in the case of $\kappa(700)$. As another example, we found in Refs.~\cite{Cheng:2020mna,Cheng:2020iwk} that $\eta_\rho>1$ for the Breit-Wigner line shape and $\eta_\rho<1$ when the Gounaris-Sakurai model \cite{Gounaris:1968mw} is used to describe the line shape of the broad $\rho(770)$ resonance. To our knowledge, there is no good argument favoring one line shape over the other. Therefore, $\eta_{K_0^*(1430)}=0.985<1$, for example, is the result of our particular line-shape choice.

When the resonance is sufficiently broad, it is necessary to take into account the finite-width effects characterized by the parameter $\eta_R$. Explicitly \cite{Cheng:2020mna,Cheng:2020iwk},
\begin{eqnarray}
{\cal B}(D\to RP)=\eta_R{\cal B}(D\to RP)_{\rm NWA}=\eta_R{{\cal B}(D\to RP_3\to P_1P_2P_3)_{\rm expt}\over {\cal B}(R\to P_1P_2)_{\rm expt}} ~.
\end{eqnarray}
Therefore, the experimental branching fractions ${\cal B}(D\to R P)_{\rm NWA}$ for $D^+\to\sigma \pi^+, \bar\kappa^0 \pi^+$ and $\overline K_0^{*0} \pi^+$ decays in Tables \ref{tab:DataSP} and \ref{tab:DSPfact} should receive the following corrections:
\begin{eqnarray}
{\cal B}(D^+\to\sigma \pi^+): && (2.1\pm0.2)\times 10^{-3}\to (3.8\pm0.3)\times 10^{-3}, \nonumber \\
{\cal B}(D^+\to\bar\kappa^0 \pi^+): && (3.6^{+3.0}_{-2.4})\% \qquad\quad ~\to (6.7^{+5.6}_{-4.5})\%, \\
{\cal B}(D^+\to\overline K_0^{*0} \pi^+): && (1.98\pm0.22)\% \quad ~~\to (1.94\pm0.22)\%. \nonumber
\end{eqnarray}
From Table \ref{tab:DSPfact}, it is evident that the agreement between theory and experiment is substantially improved for $D^+\to\sigma \pi^+$ and $D^+\to \bar\kappa^0 \pi^+$.

If we employ the pole mass and width, $m_\kappa=648\pm7$ MeV and $\Gamma_\kappa=560\pm 32$ MeV, respectively, for $\kappa/K_0^*(700)$ and the pole line shape given in Eq. (\ref{eq:T kappa}), we will be led to the results ${\cal B}(D^+\to \bar \kappa^0\pi^+)=8.10\%$, ${\cal B}(D^+\to \bar \kappa^0\pi^+\to K_S^0\pi^0\pi^+)=1.62\times 10^{-3}$ and $\eta_\kappa=8.34$. This implies that the finite-width correction would be unreasonably large and is thus unlikely, as alluded to at the end of Sec.~\ref{sec:line shape}. However, if the Breit-Wigner mass and width are used instead, we get $\eta_\kappa=1.92$ for the pole line shape, which is a more reasonable result. This implies that in this case it is the mass and width, rather than the line shape, that govern the finite-width correction. For the case of $f_0(500)$, one may wonder what the correction will be if the Breit-Wigner line shape is used.
For the case of $f_0(500)$, one may wonder what the correction would be if the Breit-Wigner line shape were used. According to the PDG \cite{PDG}, the Breit-Wigner mass and width of $f_0(500)$ lie in the wide ranges of $400-800$~MeV and $100-800$~MeV, respectively. As a result, it is quite difficult to pin down a specific set of parameters and thereby determine the finite-width correction. By contrast, LHCb has determined its pole mass and width with reasonable accuracy using the pole line shape [see Eq. (\ref{eq:sigmaMass})]. It turns out that the pole mass and width fall within the above allowed ranges of the Breit-Wigner mass and width. Therefore, it is more sensible to use the pole mass and width for calculations with either line shape.

\section{Conclusions \label{sec:conclusions}}

In this work we have examined the quasi-two-body $D\to SP$ decays and the three-body $D$ decays proceeding through intermediate scalar resonances. Our main results are:
\begin{itemize}
\item In the $D\to SP_3\to P_1P_2P_3$ decays, we cannot extract the two-body branching fractions ${\cal B}(D\to SP)$ for $S=f_0(980)$ and $a_0(980)$ due to the lack of information on ${\cal B}(S\to P_1P_2)$ (except for $a_0(980)\to\pi\eta$). For $S=\kappa/K_0^*(700)$ and $\sigma/f_0(500)$, the extracted two-body branching fractions are subject to large finite-width effects owing to their broad widths. Hence, for light scalars it is more sensible to study ${\cal B}(D\to SP\to P_1P_2P)$ directly and compare with experiment.
\item We have considered the two-quark (scheme I) and four-quark (scheme II) descriptions of the light scalar mesons with masses below or close to 1 GeV. Recent BESIII measurements of semileptonic charm decays favor the SU(3) nonet tetraquark description of the $f_0(500)$, $f_0(980)$ and $a_0(980)$ produced in charmed meson decays. In Table \ref{tab:DtoSP:theory} we have calculated $D\to SP_3\to P_1P_2P_3$ in schemes I and II. It is evident that scheme II agrees better with experiment for decays such as $D^+\to f_0\pi^+$ followed by $f_0\to \pi^+\pi^-$ and $D^+\to f_0K^+$ followed by $f_0\to \pi^+\pi^-$ or $f_0\to K^+K^-$. This again favors the tetraquark structure for light scalars. The predicted rates for $D^0\to f_0 P, a_0 P$ are generally smaller than the experimental data by one order of magnitude, presumably implying the importance of $W$-exchange.
\item The three-body decay modes $D^+\to \bar \kappa^0 (\to K_S\pi^0) \pi^+$, $D^+\to \overline K_0^* (\to K^-\pi^+) \pi^+$ and $D^+\to \overline K_0^* (\to K_S\pi^0) \pi^+$ are ideal for testing the validity of the factorization approach, as they are free of $W$-annihilation contributions. The $T$ and $C'$ amplitudes contribute constructively, contrary to the Cabibbo-allowed $D^+\to \overline K^0\pi^+$ decay, where the interference between external and internal $W$-emission is destructive.
\item Denoting by $T'$ and $C'$ the amplitudes in which the emitted meson is a scalar meson, one na{\"i}vely expects that $T'=C'=0$ for the neutral scalars $\sigma, f_0$ and $a_0^0$, $|T'|\ll|T|$ and $|C'|\ll|C|$ for the charged $a_0$, and $|T'|<|T|$ and $|C'|<|C|$ for the $\kappa$ and $K_0^*(1430)$. Beyond the factorization approximation, contributions proportional to the scalar decay constant $\bar f_S$ can be produced from vertex and hard spectator-scattering corrections for the above-mentioned neutral scalars.
\item We have studied the flavor operators $a_{1,2}(M_1M_2)$ for $M_1M_2=SP$ and $PS$ within the framework of QCD factorization. Notice that $a_i(PS)$ and $a_i(SP)$ are very different, as the former does not receive factorizable contributions.
While $a_{1,2}(SP)$ are similar for all light and heavy scalar mesons, $a_1(PS)$ and $a_2(PS)$ vary from the neutral to the charged ones, as shown in Table \ref{tab:aiPS}. The flavor operators $a_{1,2}(\pi a_0^\pm)$ are much greater than $a_{1,2}(\pi a_0^0)$. In general, $a_{1,2}(PS)$ become larger when the vector decay constants become smaller.
\item For $f_0(980)$ and $a_0(980)$, we use the Flatt\'e line shape to describe both of them, so as to take into account the threshold and coupled-channel effects. For the very broad $\sigma/f_0(500)$, we follow LHCb and employ a simple pole description.
\item The annihilation amplitude inferred from the measurement of $D_s^+\to a_0^{+,0}\pi^{0,+}\to \eta\pi^{+,0}\pi^{0,+}$ is given by $|A|=(0.91\pm0.12)\times 10^{-6}\,{\rm GeV}$. It is very sizable in the $SP$ sector, $|A/T|_{SP}\sim 1/2$, contrary to its suppression in the $P\!P$ sector with $|A/T|_{PP}\sim 0.18$.
\item Since $\sigma$ and $\kappa$ are very broad, we have considered their finite-width effects, characterized by the parameter $\eta_S$, whose deviation from unity measures the degree of departure from the NWA when the resonance width is finite. We find $\eta_{\sigma}$ and $\eta_{\kappa}$ to be of order $1.85 - 1.87$. The experimental branching fractions ${\cal B}(D^+\to\sigma\pi^+)$ and ${\cal B}(D^+\to\bar\kappa^0 \pi^+)$ should then read $(3.8\pm0.3)\times 10^{-3}$ and $(6.7^{+5.6}_{-4.5})\%$, respectively.
\item For each scalar nonet (the lighter and the heavier one) we have 15 unknown parameters for the 8 topological amplitudes $T,C,E,A$ and $T',C',E',A'$, but only 14 independent data points to fit. Moreover, since we need to introduce appropriate energy-dependent line shapes for the scalar mesons, it is not feasible to extract the topological amplitudes from three-body decays, as the decay rates cannot be factorized into a topological amplitude squared times a phase-space factor.
\end{itemize}

\section*{Acknowledgments}

This research was supported in part by the Ministry of Science and Technology of R.O.C. under Grant Nos.~MOST-107-2119-M-001-034, MOST-110-2112-M-001-025 and MOST-108-2112-M-002-005-MY3, by the National Natural Science Foundation of China under Grant No. 11347030, and by the Program of Science and Technology Innovation Talents in Universities of Henan Province under Grant No. 14HASTIT037.
\vskip 2.5cm
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sect:intro}
Machine learning systems are nowadays extensively adopted in computer security applications, such as network intrusion and malware detection, as they have achieved remarkable performance even against the increasing complexity of modern attacks~\cite{AafDuYin13,LinNeuPla15,PenGatSarMol12}. More recently, learning-based techniques based on static analysis proved to be especially effective at detecting Android malware, which constitutes one of the major threats in mobile security. In particular, these approaches showed great accuracy even when traditional code-concealing techniques (such as static obfuscation) are employed~\cite{rieck14-drebin,chen16-asiaccs,demontis17-tdsc,scalas19-cose}.
Despite the successful results reported by such approaches, the problem of detecting malware created to fool learning-based systems is still far from being solved. The robustness of machine-learning models is challenged by the creation of the so-called \emph{adversarial examples}, {i.e.}\xspace, malicious files that receive fine-grained modifications designed to deceive the learning-based algorithms~\cite{biggio18-pr,biggio13-ecml,szegedy14-iclr,goodfellow15-iclr}. In particular, recent work concerning Android malware demonstrated that specific changes to the contents of malicious Android applications might suffice to change their classification ({e.g.}\xspace, from malicious to benign)~\cite{demontis17-tdsc,calleja18}. The main characteristic of these attacks is their \emph{sparsity}: only a few changes to the whole feature set suffice to make them effective. Such changes may consist of, {e.g.}\xspace, the injection of unused permissions or parts of unreachable/unused executable code. For example, adding a component that is loaded when the application is started (through a keyword called \texttt{LAUNCHER}) can significantly influence the classifier's decision~\cite{melis2018explaining}.
One of the many reasons why such attacks are so effective is that classifiers typically assign significant relevance to a limited number of features (a phenomenon that has also been demonstrated in other applications, such as email spam filtering). As a possible countermeasure, research showed that classifiers that avoid overemphasizing specific features, weighting them more evenly, can be more robust against such attacks~\cite{kolcz09,biggio10-ijmlc,demontis17-tdsc}. Simple metrics characterizing this behavior were proposed to identify and select more robust algorithms, especially in the context of linear classifiers, where feature weights can be used as a direct measure of a feature's relevance to each decision~\cite{demontis16-spr,demontis17-tdsc,demontis19-usenix}.
In parallel, the ability to understand a classifier's behavior by looking at the input gradient, {i.e.}\xspace, the feature weights in the case of linear classifiers, was also explored by multiple works in the field of explainable machine learning~\cite{baehrens10-jmlr,shrikumar2016just,sundararajan2017axiomatic,Adadi2018}. In particular, it became of interest to figure out whether the information provided by these gradient-based methods can also be employed to understand (and improve) the robustness of learning-based systems against attacks~\cite{chen2019robust}.
In this paper, we investigate the possible correlations between gradient-based explanations, {i.e.}\xspace, attributions, and the classifiers' robustness to adversarial evasion attacks on an Android malware detection case study.
We first provide a description of learning-based systems for Android malware detection (Section~\ref{sect:android}) and their adversarial vulnerabilities (Section~\ref{sect:advandroid}). Then, motivated by the intuition that classifiers whose attributions are more evenly distributed should also be more robust, as they rely on a broader set of features for their decisions, we propose and empirically validate a few synthetic metrics that correlate the \emph{evenness} of gradient-based explanations with the \emph{adversarial robustness}, a new measure we introduce to compactly represent a classifier's robustness to adversarial attacks of increasing strength (Section~\ref{sect:evenness}). We assess our findings on \textrm{Drebin}\xspace, a popular learning-based detector for Android (Section~\ref{sect:exp}). Our investigation unveils that, under some circumstances, there is a clear relationship between the distribution of gradient-based explanations and the adversarial robustness of Android malware detectors. After a brief overview of related work on adversarial attacks and explainable machine learning (Section~\ref{sect:relwork}), we conclude the paper with a discussion on how our findings can pave the way towards the development of more efficient mechanisms both to evaluate adversarial robustness and to defend against adversarial Android malware examples (Section~\ref{sect:conclusions}).

\section{Android Malware Detection} \label{sect:android}
Here we provide some background on the structure of Android applications, and then we describe \textrm{Drebin}\xspace~\cite{rieck14-drebin}, the Android malware detector that we consider in our case study.

\subsection{Background on Android}
Android applications are compressed in \texttt{apk} files, {i.e.}\xspace, archives that contain the following elements: \emph{(a)} the \texttt{AndroidManifest.xml} file; \emph{(b)} one or more \texttt{classes.dex} files; \emph{(c)} resource and asset files, such as native libraries or images; \emph{(d)} additional \texttt{xml} files that define the application layout. Since \textrm{Drebin}\xspace only analyzes the \texttt{AndroidManifest.xml} and the \texttt{classes.dex} files, we briefly describe them below.
\myparagraph{Android Manifest (\texttt{manifest}\xspace).} The basic information about the Android application is held in the \texttt{AndroidManifest.xml}, including its package name and the supported API levels, together with the declaration of its \emph{components}, {i.e.}\xspace, parts of code that perform specific actions. For example, one component might be associated with a screen visualized by the user (\emph{activity}) or with the execution of background tasks (\emph{services}). App components can also perform actions (through \emph{receivers}) on the occurrence of specific events, {e.g.}\xspace, a change in the device's connectivity status (\texttt{CONNECTIVITY\_CHANGE}) or the opening of an application (\texttt{LAUNCHER}). The \texttt{manifest}\xspace also contains the list of \emph{hardware components} and \emph{permissions} requested by the application to work ({e.g.}\xspace, Internet access).
\myparagraph{Dex bytecode (\texttt{dexcode}\xspace).} The \texttt{classes.dex} file embeds the compiled source code of the application, including all the user-implemented methods and classes; the bytecode can be executed by the Dalvik Virtual Machine (until Android 4.4) or the Android Runtime (ART).
The \texttt{classes.dex} file may contain specific API calls that can access sensitive resources such as personal contacts (\emph{suspicious calls}). Additionally, it contains all system-related, \emph{restricted API calls} that require specific permissions ({e.g.}\xspace, writing to the device's storage). Finally, this file can contain references to \emph{network addresses} that might be contacted by the application.

\subsection{Drebin}
The majority of the approaches for Android malware detection employ static and dynamic analyses that extract information such as permissions, communications through Inter-Component Communication (ICC), system- and user-implemented API calls, and so forth~\cite{rieck14-drebin,lindorfer15-ieee,chen16-asiaccs,scalas19-cose,cai19-tifs}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{figs/system-arch.pdf}
\caption{A schematic representation (\cite{demontis17-tdsc}) of \textrm{Drebin}\xspace. First, applications are represented as binary vectors in a $\con d$-dimensional feature space. A linear classifier is then trained on an available set of malware and benign applications, assigning a weight to each feature. During classification, unseen applications are scored by the classifier by summing up the weights of the present features: if $f(\vct x) \geq 0$, they are classified as malware. \textrm{Drebin}\xspace also explains each decision by reporting the most suspicious (or benign) features present in the app, along with the weight assigned to them by the linear classifier~\cite{rieck14-drebin}.}
\label{fig:system-arch}
\end{figure*}
\textrm{Drebin}\xspace is among the most popular static detection approaches: it detects Android malware through the static analysis of Android applications. In a first phase (training), it employs a set of benign and malicious apps provided by the user to determine the features that will be used for detection (meaning that the feature set will be strictly dependent on the training data). Such features are then embedded into a \emph{sparse}, high-dimensional vector space. Then, after the training of a linear machine-learning model, the system can classify previously-unseen apps. An overview of the system architecture is given in Figure~\ref{fig:system-arch}, and discussed in more detail below.
\begin{table}[tp]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ ll|ll }
\toprule
\multicolumn{2}{ c| }{\texttt{manifest}\xspace} & \multicolumn{2}{ c }{\texttt{dexcode}\xspace} \\
\midrule
$S_{1}$ & Hardware components & $S_{5}$ & Restricted API calls\\
$S_{2}$ & Requested permissions & $S_{6}$ & Used permissions \\
$S_{3}$ & Application components & $S_{7}$ & Suspicious API calls \\
$S_{4}$ & Filtered intents & $S_{8}$ & Network addresses\\
\bottomrule
\end{tabular}}
\caption{Overview of feature sets.}
\label{tab:feature_sets}
\end{table}
\myparagraph{Feature extraction.} First, \textrm{Drebin}\xspace statically analyzes a set of $\con n$ training Android applications to construct a suitable feature space. All features extracted by \textrm{Drebin}\xspace are presented as \emph{strings} and organized in 8 different feature sets, as listed in Table~\ref{tab:feature_sets}. Android applications are then mapped onto the feature space as follows. Let us assume that an app is represented as an object $\vct z \in \set Z$, where $\set Z$ is the abstract space of all \texttt{apk} files.
We denote with $\Phi : \set Z \mapsto \set X$ a function that maps an \texttt{apk} file $\vct z$ to a $\con d$-dimensional feature vector $\vct x = ( x^{1}, \ldots, x^{\con d} )^{\ensuremath{\top}} \in \set X=\{0,1\}^{\con d}$, where each feature is set to 1 (0) if the corresponding \emph{string} is present (absent) in the \texttt{apk} file $\vct z$. An application encoded in the feature space may thus look like the following:
{ \centering \resizebox{\linewidth}{!} { \begin{minipage}{\linewidth} \centering \begin{align} \nonumber \small \vct x = \Phi( \vct z) \mapsto \begin{pmatrix} \cdots \\ \small 0\\ \small 1\\ \cdots \\ \small 1\\ \small 0\\ \cdots\\ \end{pmatrix} \begin{array}{ll} \cdots & \multirow{4}{*}{\hspace{-1mm}\bigg \} $S_2$ }\\ \texttt{\small permission::SEND\_SMS} \\ \texttt{\small permission::READ\_SMS}\\ \cdots & \multirow{4}{*}{\hspace{-1mm}\bigg \} $S_5$ }\\ \texttt{\small api\_call::getDeviceId}\\ \texttt{\small api\_call::getSubscriberId}\\ \cdots & \\ \end{array} \end{align} \end{minipage} } }
\vspace{1em}
\myparagraph{Learning and Classification.} \textrm{Drebin}\xspace uses a linear Support Vector Machine (SVM) to perform detection. It can be expressed in terms of a linear function $f : \set X \mapsto \mathbb R$, {i.e.}\xspace, $f(\vct x) = \vct w^{\ensuremath{\top}}\vct x + b$, where $\vct w \in \mathbb R^{\con d}$ denotes the vector of \emph{feature weights}, and $b \in \mathbb R$ is the so-called \emph{bias}. These parameters, optimized during training, identify a hyperplane that separates the two classes in the feature space. During classification, unseen apps are classified as malware if $f(\vct x) \geq 0$, and as benign otherwise. In this work, we also consider other linear and nonlinear algorithms to learn the classification function $f(\vct x)$.
\myparagraph{Explanation.} \textrm{Drebin}\xspace explains its decisions by reporting, for any given application, the most influential features, {i.e.}\xspace, the ones that are present in the given application and are assigned the highest absolute weights by the classifier. Since \textrm{Drebin}\xspace is a linear classifier, the feature relevance values it reports correspond exactly to its feature weights. For instance, in Figure~\ref{fig:system-arch} it is possible to see that \textrm{Drebin}\xspace correctly identifies the sample as malware, since it connects to a suspicious URL and uses SMS as a side-channel for communication. In this work, we use different state-of-the-art explainability methods to measure feature relevance and evaluate whether, and to which extent, the distribution of relevance values reveals any interesting insight into adversarial robustness.
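To make the pipeline described above concrete, the following minimal sketch embeds toy apps into a binary feature space and scores them with a linear model (the vocabulary and the weights are hand-picked for illustration; \textrm{Drebin}\xspace learns both from training data):
\begin{verbatim}
import numpy as np

def fit_vocabulary(apps):
    # Fix the feature space on the training data: one dimension per string.
    return {s: i for i, s in enumerate(sorted(set().union(*apps)))}

def embed(app, vocab):
    # Map an app (a set of strings) to a binary vector x in {0,1}^d;
    # strings unseen at training time are simply ignored.
    x = np.zeros(len(vocab))
    for s in app:
        if s in vocab:
            x[vocab[s]] = 1.0
    return x

apps = [{"permission::SEND_SMS", "api_call::getDeviceId"},
        {"permission::INTERNET"}]
vocab = fit_vocabulary(apps)
x = embed(apps[0], vocab)
w, b = np.ones(len(vocab)), -1.5   # toy weights; Drebin learns them via SVM
print("malware" if w @ x + b >= 0 else "benign")
\end{verbatim}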
\section{Adversarial Android Malware} \label{sect:advandroid}
Machine learning algorithms are known to be vulnerable to adversarial examples, and the ones used for Android malware detection are no exception. The vulnerability of these systems was demonstrated in~\cite{demontis17-tdsc,grosse17-esorics,demontis19-usenix}, and a defense mechanism was proposed in~\cite{demontis17-tdsc}. In this section, we first explain how an attacker can construct Android malware able to fool a classifier (Drebin), so that it is recognized as benign. Then, considering the system called SecSVM~\cite{demontis17-tdsc} as a case study, we explain how machine learning systems can be strengthened against this attack.

\subsection{Attacking Android Malware Detection}
The goal of creating adversarial Android malware that evades detection can be formulated as an optimization problem, as detailed below. This problem is constrained to ensure that the solution is a functional and realizable malware sample, {i.e.}\xspace, that the feature changes suggested by the attack algorithm are feasible and can be implemented as practical manipulations of the actual \texttt{apk} input file.
\myparagraph{Problem Formulation.} As explained in the previous section, Drebin is a binary classifier trained on Boolean features. To have a malware sample $\vct z$ misclassified as benign, the attacker should modify its feature vector $\vct x$ so as to decrease the classifier score $f(\vct x)$. The number of features considered by Drebin is quite large (more than one million); however, the attacker can reasonably change only a few of them (\emph{sparse attack}) to preserve the malicious functionality of the application. The attacker thus has an $\ell_1$-norm constraint on the number of features that can be modified. The feature vector of the adversarial application can be computed by solving the following optimization problem:
\begin{align}
\label{eq:evasion} \operatornamewithlimits{\arg\,\min}_{\vct x^{\prime}}& \quad f(\vct x^{\prime}) \\
\label{eq:evasion-constr} \rm s. t. & \quad \| \vct x - \vct x^{\prime} \|_1 \leq \varepsilon \\
\label{eq:evasion-box} & \quad \vct{x}_{\rm lb} \preceq \vct{x}^{\prime} \preceq \vct{x}_{\rm ub} \\
\label{eq:discrete-constr} & \quad \vct{x}^{\prime} \in \{0,1\}^{\con d} \quad ,
\end{align}
where Eq.~\eqref{eq:evasion-constr} is the $\ell_1$ distance constraint between the original sample $\vct x$ and the modified (adversarial) one $\vct x^{\prime}$, Eq.~\eqref{eq:evasion-box} is a box constraint that enforces the feature values of the adversarial malware to stay within some lower and upper bounds, and Eq.~\eqref{eq:discrete-constr} enforces the attack to find a Boolean solution. The aforementioned problem can be solved with gradient-based optimization techniques, {e.g.}\xspace, Projected Gradient Descent (PGD), as described in Algorithm~\ref{alg:evasion}~\cite{biggio13-ecml,melis17-vipar,demontis19-usenix}. At each step, this algorithm projects the feature values of the adversarial sample onto the constraints (Eqs.~\ref{eq:evasion-constr}-\ref{eq:evasion-box}), including binarization in $\{0, 1\}$.
\begin{algorithm}[t]
\caption{PGD-based attack on Android malware.}
\label{alg:evasion}
\textbf{Input:} $\vct x$, the input malware; $\varepsilon$, the number of features which can be modified; $\eta$, the step size; $\Pi$, a projection operator on the constraints \eqref{eq:evasion-constr} and \eqref{eq:evasion-box}; $t>0$, a small number to ensure convergence.\\
\textbf{Output:} $\vct x^\prime$, the adversarial (perturbed) malware.
\begin{algorithmic}[1]
\STATE{$\vct x^\prime \gets \vct x$}
\REPEAT
\STATE{$\vct x^\star \gets \vct x^\prime$}
\STATE{$\vct x^\prime \gets \Pi(\vct x^\star - \eta \cdot \nabla f(\vct x^\star))$}
\UNTIL{$|f(\vct x^\prime) - f(\vct x^\star)| \leq t$}
\STATE{\textbf{return:} $\vct x^\prime$}
\end{algorithmic}
\end{algorithm}
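For Boolean features and feature addition only (the setting discussed next), each PGD step essentially flips the absent features whose gradient components are the most negative. The following minimal sketch implements this greedy coordinate-wise simplification of Algorithm~\ref{alg:evasion}, assuming callables \texttt{f} and \texttt{grad\_f} that expose the classifier score and its input gradient:
\begin{verbatim}
import numpy as np

def evade_by_addition(x0, f, grad_f, eps=10):
    # Greedy sparse evasion via feature addition on Boolean features:
    # at each step, flip the absent feature whose gradient component is
    # the most negative, i.e., the one that decreases f(x) the most.
    x = x0.astype(float).copy()
    for _ in range(eps):                 # at most eps feature additions
        g = grad_f(x).copy()
        g[x == 1] = np.inf               # only absent features can be added
        j = int(np.argmin(g))
        if g[j] >= 0:                    # no addition lowers the score further
            break
        x[j] = 1.0
        if f(x) < 0:                     # scored as benign: evasion achieved
            break
    return x
\end{verbatim}
A real attack would follow Algorithm~\ref{alg:evasion} more closely; this simplified variant only serves to illustrate why classifiers that concentrate their weights on a few features are easy to evade with very few changes.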
\myparagraph{Feature Addition.} To create malware able to fool the classifier, an attacker may, in theory, both add and remove features from the original application. However, in practice, removing features is a non-trivial operation that can easily compromise the malicious functionalities of the application. Feature addition is a safer operation, especially when the injected features belong to the \texttt{manifest}\xspace; for example, adding permissions does not influence any existing application functionality. When the features depend on the \texttt{dexcode}\xspace, it is possible to add them safely by introducing information that is not actively executed, {e.g.}\xspace, by adding code after \texttt{return} instructions (\emph{dead code}) or methods that are never called by any \texttt{invoke} type instruction ({i.e.}\xspace, the ones that indicate a method call). Therefore, in this work, we only consider feature addition. To find a solution that does not require removing features from the original application, the attacker can simply set $\vct x_{\rm lb} = \vct x$ in Eq.~\eqref{eq:evasion-box}.
However, it is worth mentioning that this injection could easily be made ineffective by simply removing all the features extracted from code lines that are never executed. In this way, the attacker is forced to change the executed code, which is more difficult, as it requires considering the following additional and stricter constraints. Firstly, the attacker should avoid breaking the application functionalities. Secondly, they should avoid introducing possible artifacts or undesired functionalities, which may influence the semantics of the original program. Injecting a large number of features may therefore be difficult and not always feasible.

\subsection{SecSVM: Defending against Adversarial Android Malware} \label{subsect:secsvm}
In~\cite{demontis17-tdsc}, the authors showed that the sparse evasion attack described above is able to fool Drebin while requiring the injection of only a negligible number of features, and they proposed a robust counterpart of that classifier. The underlying idea behind their countermeasure is to enforce the classifier to learn more evenly-distributed feature weights, since this requires the attacker to manipulate more features to evade the classifier. To this end, they added a box constraint on the weights $\vct w$ of a linear SVM, obtaining the following learning algorithm ({Sec-SVM}\xspace):
\begin{eqnarray}
\label{eq:sec-svm}
\min_{\vct w, b} && \tfrac{1}{2}\vct {w}^{\ensuremath{\top}} \vct w + C \textstyle \sum_{i=1}^{\con n} \max \left( 0, 1-y_{i}f(\vct x_{i}) \right) \\
{\rm s. t.} && w^{\rm lb}_{k} \leq w_{k} \leq w^{\rm ub}_{k} \, , \, k = 1, \ldots, \con d \quad , \nonumber
\end{eqnarray}
where the lower and upper bounds on $\vct w$ are defined by the vectors $\vct w^{\rm lb} = (w^{\rm lb}_{1}, \ldots, w^{\rm lb}_{\con d})$ and $\vct w^{\rm ub} = (w^{\rm ub}_{1}, \ldots, w^{\rm ub}_{\con d})$, which are application dependent. Eq.~\eqref{eq:sec-svm} can be easily optimized using a constrained variant of the Stochastic Gradient Descent (SGD) technique, as described in~\cite{demontis17-tdsc}.
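A minimal sketch of such a constrained SGD procedure is given below, assuming uniform weight bounds $w^{\rm ub}_{k} = -w^{\rm lb}_{k} = w_{\rm bnd}$ and labels $y_i \in \{-1,+1\}$; the actual implementation described in~\cite{demontis17-tdsc} differs in several details:
\begin{verbatim}
import numpy as np

def sec_svm_sgd(X, y, C=1.0, w_bnd=0.25, lr=0.01, epochs=10, seed=0):
    # Hinge-loss SVM trained by SGD, with the weights projected onto the
    # box [-w_bnd, w_bnd] after every update (uniform bounds for simplicity;
    # the regularizer gradient is applied at each step as a simplification).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            g_w, g_b = w.copy(), 0.0          # gradient of (1/2)||w||^2
            if y[i] * (X[i] @ w + b) < 1:     # hinge-loss subgradient
                g_w += -C * y[i] * X[i]
                g_b = -C * y[i]
            w -= lr * g_w
            b -= lr * g_b
            np.clip(w, -w_bnd, w_bnd, out=w)  # box constraint on the weights
    return w, b
\end{verbatim}
The projection step is what forces the learned weights to spread more evenly across features, which is exactly the property our evenness metrics are designed to capture.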
\section{Do Gradient-based Explanations Help to Understand Adversarial Robustness?} \label{sect:evenness}
In this work, we investigate whether gradient-based attribution methods used to explain classifiers' decisions provide useful information about the robustness of Android malware detectors against sparse attacks. Our intuition is that classifiers whose attributions are evenly distributed rely upon a broad set of features instead of overemphasizing only a few of them. Therefore, they should be more robust against sparse attacks, in which the attacker can change only a few features, so that those changes have a negligible impact on the classifier's decision function.
To verify our intuition, we present an empirical analysis whose procedure is illustrated in Figure~\ref{fig:evenness-arch} and described below. Firstly, we perform a security evaluation of the tested classifier, obtaining a compact measure we call~\emph{adversarial robustness} (see Section~\ref{sect:adv-robustness}), which represents its robustness to adversarial attacks under an increasing number of added features $\varepsilon$. Then, we compute the attributions for each benign and manipulated malware sample $\vct x$ using a chosen gradient-based explanation technique (see Section~\ref{subsect:expltech}), obtaining the relevance vectors $\vct r$. For each of those, we look for a compact metric that encapsulates the degree of~\emph{evenness} of the attributions (see Section~\ref{subsect:evenness-metrics}). Finally, by comparing this value with the adversarial robustness, we assess the connection between the attributions' evenness and the robustness to adversarial evasion attacks. In Section~\ref{sect:exp}, we present the results of our analysis on \textrm{Drebin}\xspace, a popular learning-based detector for Android, providing empirical evidence for our intuition.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{figs/evenness-arch.pdf}
\caption{Schematic representation of the analysis employed to verify the correlation between explanation evenness and adversarial robustness. First, for each malware sample in the test set, we create its adversarial counterpart. Then, for each of those adversarial applications, we evaluate: (1)~a measure of the classifier's robustness against it (\emph{adversarial robustness}); (2)~the evenness of the application's attributions (\emph{explanation evenness}). Finally, we assess the correlation between them.}
\label{fig:evenness-arch}
\end{figure*}

\subsection{Adversarial Robustness} \label{subsect:adv-robust} \label{sect:adv-robustness}
We define the robustness to the evasion samples crafted by injecting a fixed number of features $\varepsilon$ as:
\begin{equation}
R(\set D_{\varepsilon}, f) = \frac{1}{n}\sum_{i=1}^{n} e^{ - \ell_i} \quad ,
\end{equation}
where $\ell_i = \ell(y_i, f(\vct x_i))$ is the adversarial loss attained by the classifier $f$ on the data points in $\set D_\varepsilon = \{\vct x_i, y_i \}_{i=1}^n$, containing the $\varepsilon$-sized adversarial samples optimized with Algorithm~\ref{alg:evasion}. We finally define the adversarial robustness $\set R$ of a classifier $f$ as the average of $R(\set D_\varepsilon, f)$ over different values of $\varepsilon$:
\begin{equation}
\set R = \mathbb E_\varepsilon \{ R(\set D_\varepsilon, f) \} \quad .
\end{equation}
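Given the per-sample adversarial losses attained at each budget $\varepsilon$, both quantities are straightforward to compute; a minimal sketch:
\begin{verbatim}
import numpy as np

def adversarial_robustness(losses_per_eps):
    # losses_per_eps maps each budget eps to the array of adversarial
    # losses l_i attained on the eps-sized adversarial examples.
    R_eps = {eps: float(np.mean(np.exp(-np.asarray(l))))
             for eps, l in losses_per_eps.items()}
    return np.mean(list(R_eps.values())), R_eps  # (average R, per-eps R)
\end{verbatim}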
\subsection{Gradient-based Explanation Methods} \label{subsect:expltech}
In our analysis, we consider gradient-based attribution methods, where the \emph{attribution} is the contribution of each input feature to the prediction of a specific sample. A positive (negative) attribution value indicates that the classifier considers the corresponding feature as peculiar to the malicious (benign) samples. In the following, we review the three gradient-based techniques considered in this work.
\myparagraph{{Gradient}\xspace.} The simplest method to obtain the attributions is to compute the gradient of the discriminant function $f$ with respect to the input sample $\vct x$. For image recognition models, it corresponds to the saliency map of the image~\cite{baehrens10-jmlr}. The attribution of the $i$\textsuperscript{th} feature is computed as:
\begin{align}
\text{Gradient}_i(\vct x) := \frac{\partial f(\vct x)}{\partial x_i} \quad .
\end{align}
\myparagraph{{Gradient*Input}\xspace.} This technique was proposed in~\cite{shrikumar2016just} and utilized in one of our previous works~\cite{melis2018explaining} to identify the most influential features for an Android malware detector trained on sparse data. As we showed in that paper, this approach is more suitable than the previously proposed ones when the feature vectors are sparse. The previously proposed approaches~\cite{baehrens10-jmlr,ribeiro16} tended to assign relevance to features whose corresponding components are \emph{not} present in the considered application, thus making the corresponding predictions challenging to interpret. To overcome this issue, this technique leverages the notion of \emph{directional derivative}. Given the input point $\vct x$, it projects the gradient $\nabla f(\vct x)$ onto $\vct x$ to ensure that only the non-null features are considered relevant for the decision. More formally, the $i$\textsuperscript{th} attribution is computed as:
\begin{align}
\text{Gradient*Input}_i(\vct x) := \frac{\partial f(\vct x)}{\partial x_i} \cdot x_i \quad .
\end{align}
\myparagraph{{Integrated Gradients}\xspace.} Sundararajan et al.~\cite{sundararajan2017axiomatic} identified two axioms that attribution methods should satisfy: \emph{implementation invariance} and \emph{sensitivity}. According to the first, the attributions should always be identical for two functionally equivalent networks, {e.g.}\xspace, they should be invariant to differences in the training hyperparameters that lead the networks to learn the same function. The second axiom is satisfied if every input that is predicted differently from a baseline (a reference vector that models a neutral input, {e.g.}\xspace, a black image) and that differs from the baseline in only one feature receives, for that feature, a non-zero attribution. In the same paper, the authors proposed a gradient-based explanation called Integrated Gradients that satisfies both axioms. This method first considers the straight-line path from the baseline to the input sample and computes the gradients at all points along the path; it then obtains the attributions by accumulating those gradients. The attribution along the $i$\textsuperscript{th} dimension for an input $\vct x$ and baseline $\vct x^{\prime}$ is defined as:
\begin{equation} \label{eq:integrads}
\begin{aligned}
&\text{IntegratedGrads}_{i}(\vct x) :=\\&\qquad\left(x_{i}-x_{i}^{\prime}\right) \cdot \int_{\alpha=0}^{1} \frac{\partial f\left(\vct x^{\prime}+\alpha \cdot\left(\vct x-\vct x^{\prime}\right)\right)}{\partial x_{i}} d \alpha \quad .
\end{aligned}
\end{equation}
To efficiently approximate the previous integral, one can sum the gradients computed at $p$ fixed intervals along the path joining $\vct x^{\prime}$ and the input $\vct x$:
\begin{equation} \label{eq:integrads-approx}
\begin{aligned}
&\text{IntegratedGrads}_{i}^{\text{approx}}(\vct x) :=\\&\qquad\left(x_{i}-x_{i}^{\prime}\right) \cdot \sum_{k=1}^{ p} \frac{\partial f\left(\vct x^{\prime}+\frac{k}{ p} \cdot\left(\vct x-\vct x^{\prime}\right)\right)}{\partial x_{i}} \cdot \frac{1}{ p} \quad .
\end{aligned}
\end{equation}
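A minimal sketch of this Riemann-sum approximation, assuming a callable \texttt{grad\_f} that returns $\nabla f$ at a given point:
\begin{verbatim}
import numpy as np

def integrated_gradients(grad_f, x, baseline=None, p=50):
    # Riemann approximation with p steps along the straight path from
    # the baseline x' to the input x.
    x = np.asarray(x, dtype=float)
    x0 = np.zeros_like(x) if baseline is None else np.asarray(baseline,
                                                              dtype=float)
    grads = [grad_f(x0 + k / p * (x - x0)) for k in range(1, p + 1)]
    return (x - x0) * np.mean(grads, axis=0)
\end{verbatim}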
For linear classifiers, where $\partial f / \partial x_{i} = w_i$, this method is equivalent to {Gradient*Input}\xspace if $\vct x^{\prime} = \vct 0$ is used as the baseline, which is a well-suited choice in many applications~\cite{sundararajan2017axiomatic}. Therefore, in this particular case, the {Gradient*Input}\xspace method also satisfies the aforementioned axioms.

\subsection{Explanation Evenness Metrics} \label{subsect:evenness-metrics}
To compute the evenness of the attributions, we consider the two metrics described below. The first was proposed in~\cite{kolcz09, biggio10-ijmlc}. To compute it, one first defines a function $F(\vct r, k)$ which, given a relevance vector $\vct r$, computes the ratio of the sum of the $k$ highest absolute relevance values to the sum of all absolute relevance values, for $k = 1,2,\ldots,m$:
\begin{center}
\begin{align}
F(\vct r,k) = \frac{\sum_{i=1}^k |r_{(i)}|}{\sum_{j=1}^{ m} |r_{(j)}|} \quad , \nonumber
\end{align}
\end{center}
where $r_{(1)}, r_{(2)}, \ldots, r_{( m)}$ denote the relevance values sorted in descending order of their absolute values, {i.e.}\xspace, $|r_{(1)}| \geq |r_{(2)}| \geq \ldots \geq |r_{( m)}|$, and $m$ is the number of considered relevance values ($m\leq d$). This function essentially computes the evenness of the distribution of the relevance among the features. The evenest relevance distribution (the one where all values are equal) corresponds to $F(\vct r, k) = k/m$, whereas the most uneven one is attained when only one relevance value differs from zero; in this case, $F(\vct r, k) = 1$ for each $k$. To avoid the dependence on $k$ and to obtain a single scalar value, the evenness is computed as:
\begin{align}
\ensuremath{{\set E_1}}\xspace(\vct r) = \frac{2}{ m - 1}\left[ m - \sum_{k=1}^{ m} F(\vct r, k)\right] \quad .
\label{eq:rel-evenn}
\end{align}
The range of $\ensuremath{{\set E_1}}\xspace$ is $[0, 1]$: $\ensuremath{{\set E_1}}\xspace = 0$ and $\ensuremath{{\set E_1}}\xspace = 1$ correspond, respectively, to the most uneven and the most even relevance vector.
The second metric we consider is the one proposed in~\cite{demontis16-spr}, based on the ratio between the $\ell_1$ and $\ell_\infty$ norms:
\begin{align}
\ensuremath{{\set E_2}}\xspace(\vct r) = \frac{1}{m} \cdot \frac{\|\vct r\|_1}{\|\vct r\|_\infty} \quad .
\label{eq:rel-evenn-spr}
\end{align}
To have a broader perspective on the attributions' evenness, we compute these metrics on multiple samples and average the results. More formally, we define the \emph{explanation evenness} as:
\begin{align}
E=\frac{1}{n} \sum_{i=1}^n \set E (\vct r^i) \quad ,
\label{eq:rel-evenn-global}
\end{align}
where $\vct r^i$, with $i=1, 2, \ldots, n$, is the relevance vector computed on each sample of a test dataset $\set D = \{\vct x_i, y_i \}_{i=1}^n$, and $\set E$ can be either $\ensuremath{{\set E_1}}\xspace$ or $\ensuremath{{\set E_2}}\xspace$. In the following, we denote with \ensuremath{E_1}\xspace (\ensuremath{E_2}\xspace) the averaged evenness computed considering the per-sample metric $\ensuremath{{\set E_1}}\xspace$ ($\ensuremath{{\set E_2}}\xspace$).
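Both metrics are easily computed from a relevance vector, as in the following sketch (the sanity checks confirm that a uniform vector is maximally even and a one-hot vector maximally uneven):
\begin{verbatim}
import numpy as np

def evenness_E1(r):
    # E1: based on the cumulative share F(r, k) of the k largest
    # absolute relevance values, Eq. (rel-evenn).
    a = np.sort(np.abs(np.asarray(r, dtype=float)))[::-1]
    m = a.size
    F = np.cumsum(a) / a.sum()              # F(r, k) for k = 1, ..., m
    return 2.0 / (m - 1) * (m - F.sum())

def evenness_E2(r):
    # E2: normalized ratio between the l1 and l-infinity norms.
    r = np.asarray(r, dtype=float)
    return np.linalg.norm(r, 1) / (r.size * np.linalg.norm(r, np.inf))

print(evenness_E1(np.ones(100)), evenness_E2(np.ones(100)))      # 1.0 1.0
print(evenness_E1(np.eye(100)[0]), evenness_E2(np.eye(100)[0]))  # 0.0 0.01
\end{verbatim}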
\section{Experimental Analysis} \label{sect:exp}
In this section, we empirically evaluate whether the measures introduced in Section~\ref{sect:evenness} can be used to estimate the robustness of classifiers against sparse evasion attacks. After detailing our experimental setup (Section~\ref{subsect:setup}), we show the classifiers' detection performance, both under normal conditions and under attack (Section~\ref{subsect:perfres}). In our evaluation, we focus on the feature-addition attack setting (see Section~\ref{sect:advandroid}), as such attacks are typically the easiest for the adversary to accomplish. We use \texttt{secml} as a framework to implement the classification systems, explanation techniques, and attack algorithms~\cite{melis2019secml}. Finally, we assess the relationship of the proposed evenness metrics with adversarial robustness and detection rate (Section~\ref{subsect:correlsres}).

\subsection{Experimental Setup} \label{subsect:setup}
\begin{figure*}[t]
\centering
\includegraphics[width=.445\textwidth,trim={0 0 3cm 0},clip]{figs/roc_curve_mean_drebin.pdf}
\includegraphics[width=.523\textwidth,trim={.7cm 0 0 0},clip]{figs/eva_drebin_updated_031115.pdf}\\
\vspace{-1em}
\caption{(left) Mean ROC curves for the tested classifiers on the \textsl{Drebin}\xspace data. (right) White-box evasion attacks on the \textsl{Drebin}\xspace data: detection rate at 1\% false positive rate against an increasing number of added features $\varepsilon$. Despite providing a slightly lower detection rate than the other tested classifiers, the {Sec-SVM}\xspace requires, on average, more than 25 new features to be added to the original apps before it is fooled by the attacker.}
\label{fig:res-drebin}
\end{figure*}
\myparagraph{Dataset.} We use the \textsl{Drebin}\xspace dataset~\cite{rieck14-drebin}, consisting of $121,329$ benign applications and $5,615$ malicious samples, labeled using VirusTotal: a sample is labeled as malicious if it is detected by at least five anti-virus scanners, and as benign otherwise.
\myparagraph{Training-validation-test splits.} We average our results over 5 runs. In each run, we randomly select 60,000 apps from the \textsl{Drebin}\xspace data to train the learning algorithms, and we use the remaining apps for testing.
\myparagraph{Classifiers.} We compare the standard \textsl{Drebin}\xspace implementation based on a linear Support Vector Machine ({SVM}\xspace) against the \textit{secured} linear SVM from~\cite{demontis17-tdsc} ({Sec-SVM}\xspace), an SVM with the RBF kernel ({SVM-RBF}\xspace), a logistic regression ({logistic}\xspace), and a ridge regression ({ridge}\xspace).
\myparagraph{Parameter setting.} Using a 5-fold cross-validation procedure, we optimize the parameters of each classifier to maximize the detection rate ({i.e.}\xspace, the fraction of detected malware) at $1\%$ false-positive rate ({i.e.}\xspace, the fraction of legitimate applications misclassified as malware). In particular, we optimize $C \in \{10^{-2}, 10^{-1}, \ldots, 10^{2}\}$ for both linear and non-linear SVMs and {logistic}\xspace, the kernel parameter $\gamma \in \{10^{-4}, 10^{-3}, \ldots, 10^{2}\}$ for the {SVM-RBF}\xspace, and the parameter $\alpha \in \{10^{-2}, 10^{-1}, \ldots, 10^{2}\}$ for {ridge}\xspace. For {Sec-SVM}\xspace, we optimize the parameters $-\vct w^{\rm lb} = \vct w^{\rm ub} \in \{0.1, 0.25, 0.5\}$ and $C \in \{10^{-2}, 10^{-1}, \ldots, 10^{2}\}$. When similar detection rates ($\pm 1\%$) are obtained for different hyperparameter configurations, we select the most regularized configuration, as more regularized classifiers are expected to be more robust under attack~\cite{demontis19-usenix}.
The typical values of the aforementioned hyperparameters found after cross-validation are $C=0.1$ for {SVM}\xspace, $\alpha = 10$ for {ridge}\xspace, $C=1$ for {logistic}\xspace, $C=1$ and $w=0.25$ for {Sec-SVM}\xspace, and $C=10$ and $\gamma=0.01$ for {SVM-RBF}\xspace.
\myparagraph{Attribution computation.} We compute the attributions on $1,000$ malware samples randomly chosen from the \textsl{Drebin}\xspace test set. We take $\vct x^{\prime} = \vct 0$ as the baseline for {Integrated Gradients}\xspace, and we compute the attributions with respect to the malware class. As a result, positive (negative) relevance values in our analysis denote malicious (benign) behavior. Given the high sparsity ratio of the \textsl{Drebin}\xspace dataset, we use $m = 1,000$ to compute the explanation evenness metrics.

\subsection{Experimental Results} \label{subsect:perfres}
We first evaluate the performance under normal conditions; the resulting Receiver Operating Characteristic (ROC) curve with the detection rate of each classifier, averaged over the 5 repetitions, is reported on the left side of Figure~\ref{fig:res-drebin}. We then perform a white-box evasion attack against each classifier, aiming to have $1000$ malware samples randomly chosen from the \textsl{Drebin}\xspace dataset misclassified as benign. The results are shown on the right side of Figure~\ref{fig:res-drebin}, which reports the variation of the detection rate as the number of modified features $\varepsilon$ increases. We can notice how the {Sec-SVM}\xspace classifier (described in Section~\ref{subsect:secsvm}) provides a slightly worse detection rate compared to the other classifiers but is particularly robust against adversarial evasion attacks.
\begin{table}[t] \centering \begin{adjustbox}{width=.24\textwidth,valign=t} \begin{tabular}{cp{6.12cm}r} \multicolumn{3}{c}{ {SVM-RBF}\xspace ($\ensuremath{{\set E_1}}\xspace = 46.24\%$, $\ensuremath{{\set E_2}}\xspace = 22.47\%$, $\varepsilon_{\rm min} = 6$)}\\ \toprule \textbf{Set} & \textbf{Feature Name} & \multicolumn{1}{c}{$\vct r$ (\%)} \\ \midrule S2 & \cellcolor{red!50} SEND\_SMS \rule{0pt}{9pt} & 10.35 \\ S7 & \cellcolor{red!50} \makecell[tl]{android/telephony/TelephonyManager\\;-\ensuremath{>}getNetworkOperator} & 10.05 \\ S4 & \cellcolor{NavyBlue!40} LAUNCHER & -8.89 \\ S5 & \cellcolor{NavyBlue!40} \makecell[tl]{android/os/PowerManager\$WakeLock\\;-\ensuremath{>}release} & -8.01 \\ S2 & \cellcolor{red!30} READ\_PHONE\_STATE & 5.03 \\ S2 & \cellcolor{NavyBlue!30} RECEIVE\_SMS & -5.00 \\ S3 & \cellcolor{red!20} c2dm.C2DMBroadcastReceiver & 4.56 \\ S2 & \cellcolor{red!20} READ\_SMS & 3.52 \\ S4 & \cellcolor{red!20} DATA\_SMS\_RECEIVED & 3.50 \\ S5 & \cellcolor{NavyBlue!20} \makecell[tl]{android/app/NotificationManager\\;-\ensuremath{>}notify} & -3.49 \\ \bottomrule \end{tabular} \end{adjustbox}% \begin{adjustbox}{width=.24\textwidth,valign=t} \begin{tabular}{cp{6.12cm}r} \multicolumn{3}{c}{ {Sec-SVM}\xspace ($\ensuremath{{\set E_1}}\xspace = 73.04\%$, $\ensuremath{{\set E_2}}\xspace = 66.24\%$, $\varepsilon_{\rm min} = 31$)}\\ \toprule \textbf{Set} & \textbf{Feature Name} & \multicolumn{1}{c}{$\vct r$ (\%)} \\ \midrule S2 & \cellcolor{red!20} READ\_PHONE\_STATE \rule{0pt}{10pt} & 3.51 \\ S7 & \cellcolor{red!20} \makecell[tl]{android/telephony/TelephonyManager\\;-\ensuremath{>}getNetworkOperator} & 3.51 \\ S2 & \cellcolor{red!20} SEND\_SMS \rule{0pt}{9pt} & 3.51 \\ S3 & \cellcolor{red!20} c2dm.C2DMBroadcastReceiver \rule{0pt}{11pt} & 3.51 \\ S2 & \cellcolor{red!20}
INTERNET\rule{0pt}{10pt} & 3.44 \\ S3 & \cellcolor{red!20} com.software.application.ShowLink \rule{0pt}{12pt} & 3.39 \\ S3 & \cellcolor{red!20} com.software.application.Main \rule{0pt}{12pt} & 3.39 \\ S3 & \cellcolor{red!20} com.software.application.Notificator \rule{0pt}{12pt} & 3.39 \\ S3 & \cellcolor{red!20} com.software.application.Checker \rule{0pt}{12pt} & 3.39 \\ S3 & \cellcolor{red!20} com.software.application.OffertActivity \rule{0pt}{12pt} & 3.39 \\ \bottomrule \end{tabular} \end{adjustbox}\\\vspace{.5em}% \begin{adjustbox}{width=.24\textwidth,valign=t} \begin{tabular}{cp{6.12cm}r} \multicolumn{3}{c}{ {SVM-RBF}\xspace ($\ensuremath{{\set E_1}}\xspace = 60.74\%$, $\ensuremath{{\set E_2}}\xspace = 25.84\%$, $\varepsilon_{\rm min} = 31$)}\\ \toprule \textbf{Set} & \textbf{Feature Name} & \multicolumn{1}{c}{$\vct r$ (\%)} \\ \midrule S4 & \cellcolor{NavyBlue!10} LAUNCHER \rule{0pt}{8pt} & -1.89 \\ S7 & \cellcolor{red!10} android/net/Uri;-\ensuremath{>}fromFile \rule{0pt}{8pt} & 1.34 \\ S5 & \cellcolor{NavyBlue!10} \makecell[tl]{android/os/PowerManager\$WakeLock\\;-\ensuremath{>}release} & -1.25 \\ S2 & \cellcolor{red!10} INSTALL\_SHORTCUT \rule{0pt}{8pt} & 1.23 \\ S7 & \cellcolor{NavyBlue!10} \makecell[tl]{android/telephony/SmsMessage\\;-\ensuremath{>}getDisplayMessageBody} & -1.21 \\ S7 & \cellcolor{NavyBlue!10} \makecell[tl]{android/telephony/SmsMessage\\;-\ensuremath{>}getTimestampMillis} & -1.20 \\ S2 & \cellcolor{NavyBlue!10} SET\_ORIENTATION \rule{0pt}{8pt} & -1.20 \\ S2 & \cellcolor{red!10} ACCESS\_WIFI\_STATE \rule{0pt}{8pt} & 1.15 \\ S4 & \cellcolor{red!10} BOOT\_COMPLETED \rule{0pt}{8pt}& 1.08 \\ S5 & \cellcolor{NavyBlue!10} android/media/MediaPlayer;-\ensuremath{>}start \rule{0pt}{8pt} & -1.06 \\ \bottomrule \end{tabular} \end{adjustbox}% \begin{adjustbox}{width=.24\textwidth,valign=t} \begin{tabular}{cp{6.12cm}r} \multicolumn{3}{c}{ {Sec-SVM}\xspace ($\ensuremath{{\set E_1}}\xspace = 63.14\%$, $\ensuremath{{\set E_2}}\xspace = 52.70 \%$, $\varepsilon_{\rm min} = 39$)}\\ \toprule \textbf{Set} & \textbf{Feature Name} & \multicolumn{1}{c}{$\vct r$ (\%)} \\ \midrule S2 & \cellcolor{red!5} ACCESS\_NETWORK\_STATE & 0.93 \\ S2 & \cellcolor{red!5} READ\_PHONE\_STATE & 0.93 \\ S6 & \cellcolor{red!5} READ\_HISTORY\_BOOKMARKS & 0.93 \\ S7 & \cellcolor{NavyBlue!5} \makecell[tl]{android/telephony/TelephonyManager\\;-\ensuremath{>}getNetworkOperatorName} & -0.93 \\ S6 & \cellcolor{NavyBlue!5} ACCESS\_NETWORK\_STATE & -0.93 \\ S7 & \cellcolor{red!5} android/telephony/SmsMessage;-\ensuremath{>}getDisplayOriginatingAddress & 0.93 \\ S7 & \cellcolor{red!5} \makecell[tl]{android/telephony/TelephonyManager\\;-\ensuremath{>}getNetworkOperator} & 0.93 \\ S7 & \cellcolor{NavyBlue!5} android/net/Uri;-\ensuremath{>}getEncodedPath & -0.93 \\ S2 & \cellcolor{NavyBlue!5} SET\_ORIENTATION & -0.93 \\ S7 & \cellcolor{red!5} java/lang/reflect/Method;-\ensuremath{>}invoke & 0.93 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Top-10 influential features and corresponding {Gradient*Input}\xspace relevance ($\%$) for a malware of the \texttt{FakeInstaller} family (top) and a malware of the \texttt{Plankton} family (bottom). 
Notice that the minimum number of features to add, $\varepsilon_{\rm min}$, required to evade the classifiers increases with the evenness metrics \ensuremath{{\set E_1}}\xspace and \ensuremath{{\set E_2}}\xspace.}
\label{tab:local-ranks}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.245\textwidth,trim={0 .7cm 0 0},clip]{figs/correlation_crossentropy/average_gradient_evenness2_seceval_correlation_score_mean-ce.pdf}
\includegraphics[width=0.218\textwidth,trim={.7cm .7cm 0 0},clip]{figs/correlation_crossentropy/average_gradient_evenness3_seceval_correlation_score_mean-ce.pdf}\\
\includegraphics[width=0.245\textwidth,trim={0 .7cm 0 0},clip]{figs/correlation_crossentropy/average_feature-relevance_evenness2_seceval_correlation_score_mean-ce.pdf}
\includegraphics[width=0.218\textwidth,trim={.7cm .7cm 0 0},clip]{figs/correlation_crossentropy/average_feature-relevance_evenness3_seceval_correlation_score_mean-ce.pdf}\\
\includegraphics[width=0.245\textwidth,trim={0 0 0 0},clip]{figs/correlation_crossentropy/average_integrated-gradients_evenness2_seceval_correlation_score_mean-ce_1.pdf}
\includegraphics[width=0.218\textwidth,trim={.7cm 0 0 0},clip]{figs/correlation_crossentropy/average_integrated-gradients_evenness3_seceval_correlation_score_mean-ce_1.pdf}\\
\vspace{-1em}
\caption{Evaluation of the adversarial robustness $\set R$ against the evenness metrics \ensuremath{{\set E_1}}\xspace (left) and \ensuremath{{\set E_2}}\xspace (right) for the different gradient-based explanation techniques, computed on $1000$ samples of the test set (only $100$ samples are shown).}
\label{fig:mscore_scatter}
\end{figure}

\subsection{Is adversarial robustness correlated with explanation evenness?} \label{subsect:correlsres}
We now investigate the connection between adversarial robustness and the evenness of gradient-based explanations. We start with two illustrative examples. Table~\ref{tab:local-ranks} shows the top-10 influential features for two malware samples\footnote{MD5: f8bcbd48f44ce973036fac0bce68a5d5 (\texttt{FakeInstaller}) and eb1f454ea622a8d2713918b590241a7e (\texttt{Plankton}).} of the \texttt{FakeInstaller} and \texttt{Plankton} families, reported for the {SVM-RBF}\xspace and {Sec-SVM}\xspace algorithms, and obtained through the {Gradient*Input}\xspace technique. All the classifiers correctly label the samples as malware.
Looking at the features of the first sample, the \texttt{FakeInstaller} malware, we can observe how both classifiers identify the cellular- and SMS-related features, {e.g.}\xspace, the \texttt{GetNetworkOperator()} method or the \texttt{SEND\_SMS} permission, as highly relevant. This is consistent with the actual behavior of the malware sample, since its goal is to send SMS messages to premium-rate numbers. A first aspect to point out about the relevance values is their relative magnitude, expressed as a percentage in Table~\ref{tab:local-ranks}. In particular, we can observe that the top-10 relevance values for {SVM-RBF}\xspace vary, regardless of their signs, from $3.49\%$ to $10.35\%$, while for {Sec-SVM}\xspace the top values lie in the $3.39\%$--$3.51\%$ range. This suggests that {SVM-RBF}\xspace assigned high prominence to a few features; conversely, {Sec-SVM}\xspace distributed the relevance values more evenly.
This behavior is captured more concisely by the synthetic evenness measures $\ensuremath{{\set E_1}}\xspace$ (Eq.~\eqref{eq:rel-evenn}) and $\ensuremath{{\set E_2}}\xspace$ (Eq.~\eqref{eq:rel-evenn-spr}) reported in Table~\ref{tab:local-ranks}, which show higher values for {Sec-SVM}\xspace. Table~\ref{tab:local-ranks} also shows the $\varepsilon_{\rm min}$ value, {i.e.}\xspace, the minimum number of features to add to the malware to evade the classifier. We can notice how the $\varepsilon_{\rm min}$ parameter is strictly related to the evenness of the distribution, since higher values of $\ensuremath{{\set E_1}}\xspace$ and $\ensuremath{{\set E_2}}\xspace$ correspond to higher values of $\varepsilon_{\rm min}$, {i.e.}\xspace, a higher effort for the attacker to accomplish her goal. In particular, it is possible to identify a clear difference between the behavior of {SVM-RBF}\xspace and {Sec-SVM}\xspace: the difference in their evenness metrics, which causes the $\varepsilon_{\rm min}$ values to be quite different as well, indicates that, for this prediction, {SVM-RBF}\xspace is considerably more susceptible to a possible attack than {Sec-SVM}\xspace. Conversely, for the second sample, the attributions (regardless of the sign) and the evenness metrics present similar values, and such behavior is also reflected in the associated $\varepsilon_{\rm min}$ values. In this case, the relevance values are more evenly distributed, which indicates that the evasion is more difficult.
We now correlate the evenness metrics with the \emph{adversarial robustness} $\set R$, introduced in Section~\ref{subsect:adv-robust}. Figure~\ref{fig:mscore_scatter} shows the relationship between this value and the evenness metrics for $100$ samples chosen from the test set, reported for each explainability technique. From this broader view, we can see how the evenness values calculated on top of the {Gradient*Input}\xspace and {Integrated Gradients}\xspace explanations present a significant connection to the adversarial robustness. This does not hold for the {Gradient}\xspace technique, specifically for the linear classifiers, whose dots in Figure~\ref{fig:mscore_scatter} are perfectly vertically aligned. This is caused by the gradient being constant across all samples, which implies constant values for the evenness metrics as well. In order to assess the statistical significance of this plot, we also compute the associated correlation values with three different metrics: Pearson (P), Spearman Rank (S), and Kendall's Tau (K). They are shown in Table~\ref{tab:mscore_correlation}.
Finally, we inquire whether the connection between the evenness metrics and the detection performance of a classifier can provide a global assessment of its robustness. Figure~\ref{fig:er_scatter} shows the correlation between the explanation evenness and the mean detection rate under attack, calculated for $\varepsilon$ in the range $[1,50]$. Similarly to the previous test, the {Gradient*Input}\xspace and {Integrated Gradients}\xspace explanations present a significant connection to the adversarial robustness in most cases, while the {Gradient}\xspace technique does so to a lesser extent.
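These coefficients can be computed directly with \texttt{scipy.stats}; a minimal sketch, assuming two arrays holding the per-sample adversarial robustness and evenness values:
\begin{verbatim}
import numpy as np
from scipy import stats

def robustness_evenness_correlation(R, E):
    # R: per-sample adversarial robustness; E: matching evenness values.
    R, E = np.asarray(R), np.asarray(E)
    return {"pearson":  stats.pearsonr(R, E),    # (coefficient, p-value)
            "spearman": stats.spearmanr(R, E),   # rank correlation
            "kendall":  stats.kendalltau(R, E)}  # ordinal association
\end{verbatim}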
\begin{table}[t] \centering \begin{adjustbox}{width=\columnwidth} \begin{tabular}{r c cc | cc | cc} & & \multicolumn{2}{c|}{\textbf{{Gradient}\xspace}} & \multicolumn{2}{c|}{\textbf{{Gradient*Input}\xspace}} & \multicolumn{2}{c}{\textbf{Int. Gradients}} \\ \cmidrule{3-8} & & $\ensuremath{{\set E_1}}\xspace$ & $\ensuremath{{\set E_2}}\xspace$ & $\ensuremath{{\set E_1}}\xspace$ & $\ensuremath{{\set E_2}}\xspace$ & $\ensuremath{{\set E_1}}\xspace$ & $\ensuremath{{\set E_2}}\xspace$ \\ \toprule \textbf{{logistic}\xspace} & \makecell[tl]{P\\S\\K} & & & \makecell[tl]{ 0.67, $<$1e-5\\0.67, $<$1e-5\\0.51, $<$1e-5} & \makecell[tl]{ 0.75, $<$1e-5\\0.72, $<$1e-5\\0.54, $<$1e-5} & \makecell[tl]{ 0.67, $<$1e-5\\0.67, $<$1e-5\\0.51, $<$1e-5} & \makecell[tl]{ 0.75, $<$1e-5\\0.72, $<$1e-5\\0.54, $<$1e-5} \\ \midrule \textbf{{ridge}\xspace} & \makecell[tl]{P\\S\\K} & & & \makecell[tl]{ 0.48, $<$1e-5\\0.58, $<$1e-5\\0.41, $<$1e-5} & \makecell[tl]{ 0.56, $<$1e-5\\0.67, $<$1e-5\\0.49, $<$1e-5} & \makecell[tl]{ 0.48, $<$1e-5\\0.58, $<$1e-5\\0.41, $<$1e-5} & \makecell[tl]{ 0.56, $<$1e-5\\0.67, $<$1e-5\\0.49, $<$1e-5} \\ \midrule \textbf{{SVM}\xspace} & \makecell[tl]{P\\S\\K} & & & \makecell[tl]{ 0.68, $<$1e-5\\0.66, $<$1e-5\\0.49, $<$1e-5} & \makecell[tl]{ 0.70, $<$1e-5\\0.73, $<$1e-5\\0.54, $<$1e-5} & \makecell[tl]{ 0.68, $<$1e-5\\0.66, $<$1e-5\\0.49, $<$1e-5} & \makecell[tl]{ 0.70, $<$1e-5\\0.73, $<$1e-5\\0.54, $<$1e-5} \\ \midrule \textbf{{SVM-RBF}\xspace} & \makecell[tl]{P\\S\\K} & \makecell[tl]{ 0.03, ~0.769\\0.46, $<$1e-5\\0.34, $<$1e-5} & \makecell[tl]{ 0.46, $<$1e-5\\0.70, $<$1e-5\\0.51, $<$1e-5} & \makecell[tl]{ 0.82, $<$1e-5\\0.94, $<$1e-5\\0.81, $<$1e-5} & \makecell[tl]{ 0.82, $<$1e-5\\0.94, $<$1e-5\\0.80, $<$1e-5} & \makecell[tl]{ 0.89, $<$1e-5\\0.93, $<$1e-5\\0.78, $<$1e-5} & \makecell[tl]{ 0.91, $<$1e-5\\0.93, $<$1e-5\\0.77, $<$1e-5} \\ \midrule \textbf{{Sec-SVM}\xspace} & \makecell[tl]{P\\S\\K} & & & \makecell[tl]{ 0.73, $<$1e-5\\0.76, $<$1e-5\\0.62, $<$1e-5} & \makecell[tl]{ 0.76, $<$1e-5\\0.78, $<$1e-5\\0.67, $<$1e-5} & \makecell[tl]{ 0.73, $<$1e-5\\0.76, $<$1e-5\\0.62, $<$1e-5} & \makecell[tl]{ 0.76, $<$1e-5\\0.78, $<$1e-5\\0.67, $<$1e-5} \\ \bottomrule \end{tabular} \end{adjustbox}
\caption{Correlation between the adversarial robustness $\set R$ and the evenness metrics $\ensuremath{{\set E_1}}\xspace$ and $\ensuremath{{\set E_2}}\xspace$. Pearson (P), Spearman Rank (S), Kendall's Tau (K) coefficients along with corresponding $p$-values.
The linear classifiers lack a correlation value for the {Gradient}\xspace technique since the evenness is constant (the gradient being constant as well), thus resulting in an undefined correlation.}
\label{tab:mscore_correlation}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.245\textwidth,trim={0 0 0 0},clip]{figs/correlation/average_gradient_evenness2_seceval_correlation_er_mean.pdf}
\includegraphics[width=0.219\textwidth,trim={.7cm 0 0 0},clip]{figs/correlation/average_gradient_evenness3_seceval_correlation_er_mean.pdf}\\
\includegraphics[width=0.245\textwidth,trim={0 0 0 0},clip]{figs/correlation/average_feature-relevance_evenness2_seceval_correlation_er_mean.pdf}
\includegraphics[width=0.219\textwidth,trim={.7cm 0 0 0},clip]{figs/correlation/average_feature-relevance_evenness3_seceval_correlation_er_mean.pdf}\\
\includegraphics[width=0.245\textwidth,trim={0 0 0 0},clip]{figs/correlation/average_integrated-gradients_evenness2_seceval_correlation_er_mean.pdf}
\includegraphics[width=0.219\textwidth,trim={.7cm 0 0 0},clip]{figs/correlation/average_integrated-gradients_evenness3_seceval_correlation_er_mean.pdf}\\
\vspace{-1em}
\caption{Evaluation of the evenness metrics $E_1$ (left) and $E_2$ (right) against the detection rate (at 1\% FPR) for the different gradient-based explanation techniques, computed on the \textsl{Drebin}\xspace dataset.}
\label{fig:er_scatter}
\end{figure}

\section{Related Work} \label{sect:relwork}

\subsection{Adversarial attacks} \label{subsect:advmal}
According to a recent survey by Biggio et al.~\cite{biggio18-pr}, several works have questioned the security of machine learning since $2004$. Two pioneering works were proposed by Dalvi et al.~\cite{dalvi04-kdd} in 2004 and by Lowd and Meek~\cite{lowd05-kdd} in 2005. Those works, considering linear classifiers employed to perform spam filtering, demonstrated that an attacker could easily deceive the classifier at test time (\emph{evasion} attacks) by performing a limited number of carefully-crafted changes to an email. Subsequent works~\cite{barreno06-asiaccs,barreno10, biggio14-tkde} proposed attacker models and frameworks that are still used to study the security of learning-based systems, also against training-time (\emph{poisoning}) attacks. The first gradient-based poisoning~\cite{biggio12-icml} and evasion~\cite{biggio13-ecml} attacks were proposed by Biggio et al. in 2012 and 2013, respectively. Notably, in~\cite{biggio13-ecml} the authors also introduced two important concepts that are still heavily used in the adversarial field, namely \emph{high-confidence} adversarial examples and the use of a \emph{surrogate} model. This work anticipated the discovery of the so-called \emph{adversarial examples} against deep neural networks~\cite{szegedy14-iclr,goodfellow15-iclr}.
The vulnerability to evasion attacks was then studied especially on learning systems designed to detect malware samples (for example, on PDF files~\cite{maiorca19-csur,srndic14}), thus raising serious concerns about their usability in adversarial environments. In particular, for Android malware detectors, Demontis et al.~\cite{demontis17-tdsc} demonstrated that linear models trained on the (static) features extracted by \textrm{Drebin}\xspace can be easily evaded through gradient descent-based approaches that perform a fine-grained injection of information (a more advanced injection approach that directly operates on the Dalvik bytecode has been proposed by Yang et al.~\cite{yang17-acsac}).
Grosse et al.~\cite{grosse17-esorics} have also attained a significant evasion rate against a neural network trained on the \textrm{Drebin}\xspace feature set. Although the adversarial robustness of Android detectors other than~\cite{rieck14-drebin} has not been fully explored, it is evident that relying on information that can be easily injected or modified increases the probability that the attacker attains a successful evasion. \subsection{Explainability} \label{subsect:relexplain} Following the rise of black-box models in the last decade, explainability has become a hot research topic. It can be leveraged to achieve multiple goals, from justifying each prediction (the \emph{right of explanation} required by the European General Data Protection Regulation (GDPR)~\cite{goodman16-gdpr}) to discovering new knowledge and causal relations. Explainability has become increasingly popular in security as well, as providing a proper explanation of predictions can help to secure systems against adversarial attacks. Several approaches to interpretability have been proposed, with particular attention to \emph{post-hoc} explanations for black-box models. In the following, we briefly describe the prominent explainability methodologies proposed in this sense. In 2016, Ribeiro et al.~\cite{ribeiro16} proposed LIME, a model-agnostic technique that provides local explanations by generating small perturbations of the input sample, thus obtaining the explanations from a linear model fitted on the perturbed space. Lundberg and Lee~\cite{Lundberg2017} unified different techniques, including LIME, under the name of SHAP, leveraging cooperative game theory results to identify theoretically-sound explanation methods and provide a feature importance for each prediction. Koh and Liang~\cite{koh17-icml} showed that, using a gradient-based technique called \emph{influence functions}, which is well known in the field of robust statistics, it is possible to associate each input sample with the training samples (\emph{prototypes}) that are most responsible for its prediction. The theory behind their technique holds only for classifiers with differentiable loss functions; however, the authors empirically showed that it also provides sensible prototypes for classifiers with non-differentiable losses if computed on a smoothed counterpart. Finally, Guo et al.~\cite{Guo2018} proposed LEMNA, a method specifically designed for security tasks: it is optimized for RNN and MLP networks and highlights feature dependencies ({e.g.}\xspace, for binary code analysis). We recommend the recent survey by Guidotti et al.~\cite{guidotti18-acm} for a more detailed description. \section{Conclusions and Future Work} \label{sect:conclusions} In this paper, we empirically evaluated the correlation between multiple gradient-based explanation techniques and the \emph{adversarial robustness} of different linear and non-linear classifiers against sparse evasion attacks. To this end, we leveraged two synthetic measures of the \emph{explanation evenness}, whose main advantage is that they do not require any computationally-expensive attack simulations. Thus, they may be used by system designers and engineers to choose, among a plethora of different models, the one that is most resilient against sparse attacks.
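As a minimal illustration of how the correlations reported in Table~\ref{tab:mscore_correlation} can be computed, the following Python sketch evaluates the three coefficients with \texttt{scipy.stats}; the two arrays are hypothetical stand-ins for per-sample robustness and evenness values, not our actual data.
\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical stand-ins for the per-sample robustness R and evenness E1.
rng = np.random.default_rng(0)
evenness = rng.uniform(0.0, 1.0, size=1000)
robustness = 0.7 * evenness + 0.1 * rng.normal(size=1000)

for name, fn in [("Pearson", stats.pearsonr),
                 ("Spearman", stats.spearmanr),
                 ("Kendall", stats.kendalltau)]:
    coeff, pvalue = fn(evenness, robustness)
    print(f"{name}: {coeff:.2f}, p = {pvalue:.1e}")
\end{verbatim}
For a linear classifier the evenness values would be constant; in that degenerate case all three coefficients are undefined (recent SciPy versions emit a constant-input warning and return NaN), which is why the corresponding entries in the table are empty.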
Since we validated the proposed synthetic vulnerability measures by considering only the \textrm{Drebin}\xspace malware detector as a case study, we plan to inspect other malware detectors as well as other application domains. Moreover, as the proposed metrics may be used to estimate the robustness only against sparse evasion attacks, an interesting research direction would be to devise a similar measure that can estimate the robustness when the attack is subject to different application constraints. It would also be interesting to assess whether our vulnerability measures can be successfully applied when the attacker does not know the classifier parameters or when the model is not differentiable; in that case, a surrogate classifier would be used to explain the original, unknown model function. Finally, another interesting research avenue is to modify the objective functions used to train the considered machine learning models by adding a penalty term inversely proportional to the proposed evenness metrics, in order to encourage the classifier to learn more evenly-distributed relevance scores and, consequently, improve its robustness. \section*{Acknowledgments} This work has been partly supported by the PRIN 2017 project RexLearn (grant no.~2017TWNMH2), funded by the Italian Ministry of Education, University and Research, and by BMK, BMDW, and the Province of Upper Austria in the frame of the COMET Programme managed by FFG in the COMET Module S3AI. \bibliographystyle{elsarticle-num-names}
\section{Introduction} \subsection{Motivation} The gravitational collapse of a massive stellar core to a neutron star or a black hole and the associated supernova explosion are among the most important and interesting events in the universe. From an observational viewpoint, they are among the most energetic events in astrophysics, producing a wide variety of observable signatures, namely, electromagnetic radiation, neutrinos, and gravitational radiation. Most of the energy liberated in the collapse is eventually carried away from the system by neutrinos. The total energy of the emitted neutrinos is $\approx GM_{\rm NS}^{2}/R_{\rm NS} \sim 0.1 M_{\rm NS}c^{2}$ $\sim$ several times $10^{53}$ ergs, where $M_{\rm NS}$ and $R_{\rm NS}$ are the mass and radius of the neutron star. Observations of gravitational collapse by neutrino detectors will provide important information about the deep interior of the core, because neutrinos can propagate from the central regions of the stellar core almost freely owing to their small cross-sections with matter. Electromagnetic radiation, by contrast, interacts strongly with matter and thus carries information only from the lower-density regions near the surface of the star. Bursts of neutrinos were first detected simultaneously by the Kamiokande\cite{Hirata87} and Irvine-Michigan-Brookhaven\cite{Bionta87} facilities from the supernova SN1987A, which occurred on February 23, 1987 in the Large Magellanic Cloud (for a review, see Ref.~\citen{Arnett89}). Future detections of neutrinos will provide a direct clue for revealing the physical ingredients of the supernova explosion mechanism. Gravitational wave astronomy will start in this decade. The first generation of ground-based interferometric detectors (LIGO\cite{LIGO}, VIRGO\cite{VIRGO}, GEO600\cite{GEO600}, TAMA300\cite{TAMA300}) are now conducting scientific searches for gravitational waves. Stellar core collapse is one of the important sources for these observatories. Observations of gravitational collapse by gravitational-wave detectors will provide unique information, complementary to that derived from electromagnetic and neutrino detectors, because gravitational waves can propagate from the innermost regions of a progenitor star to the detectors without attenuation by matter. Combining the signatures of neutrinos and gravitational waves will provide much information about the processes of core collapse and, ultimately, the physics that governs stellar core collapse. To obtain physically valuable information from these observations, it is necessary to connect the observed data with the physics behind them. For this purpose, numerical simulation is the unique approach. However, simulating stellar core collapse is a challenging problem because a rich diversity of physics has to be taken into account. All four known forces in nature are involved and play important roles during the collapse. General relativistic gravity plays an essential role in the formation of a black hole and a neutron star. The weak interactions govern the energy and lepton-number losses of the system. In particular, neutrinos transport most of the energy released during the collapse to the outside of the system. The electromagnetic and strong interactions determine the composition of the collapsing matter and the thermodynamical properties of the dense matter. Strong magnetic fields, if they are present, would modify the dynamics of the collapse, the subsequent supernova explosion, and the evolution of the proto-neutron star.
Due to these complexities, the explosion mechanism of core-collapse supernovae has not been fully understood despite elaborate efforts over the past forty years\cite{Kotake06,Janka07a,Bethe90}. Recent numerical studies\cite{Rampp00,Mezza01,Thompson03,RJ02,Lieben01,Sumi05} have clarified that, under the assumption of spherical symmetry, the explosion does not succeed for iron core collapse with the currently most elaborate input physics (neutrino interactions, neutrino transfer, and equations of state of dense matter) on the basis of the standard ``neutrino heating mechanism''\cite{Bethe90} (but see Ref.~\citen{Kitaura06} for successful explosion in O-Ne-Mg core collapse). To increase the neutrino-heating efficiency, a wide variety of multi-dimensional effects have been explored (for recent reviews, see e.g., Refs.~\citen{Janka07a}, \citen{Kotake06} and also Refs.~\citen{MJ09} and \citen{Burrows06a} for simulations where successful explosions are obtained). However, it has not been completely clarified yet whether the increase of the heating efficiency due to such multi-dimensional effects suffices to yield a successful explosion, because the explosion energy resulting from these works, $\sim 10^{50}$ ergs, is too low. Similarly, accurate predictions of gravitational waveforms are still hampered by the fact that reliable estimates of waveforms require both a general relativistic treatment\cite{Dimm02} and appropriate treatments of microphysics such as the nuclear equation of state (EOS), electron capture, and neutrino emission and transfer. Indeed, previous estimates of waveforms have relied either on Newtonian simulations including microphysics to some extent\cite{Monch91,Burrows96,MJ97,Zwerg97,Kotake,Ott04,Muller04,Fryer04}, or on general relativistic simulations with simplified microphysics\cite{Dimm02,SS,Sekiguchi05,Duran05,Shibata06}. Recently, gravitational waveforms emitted in rotating core collapse were derived by multidimensional simulations in general relativistic frameworks\cite{Ott07,Dimm07} adopting a finite-temperature nuclear EOS\cite{Shen98} and electron capture. In their studies, however, the electron capture rate was not calculated in a self-consistent manner. Instead, they adopted a simplified prescription proposed in Ref.~\citen{Lieb05}, which is based on the result of a spherically symmetric simulation. However, it is not clear whether this treatment is justified for non-spherical collapse or not. Moreover, they did not take account of the emission processes of neutrinos. More sophisticated simulations including microphysics are required to make accurate predictions of gravitational waveforms. The gravitational collapse of a massive star is also the primary mechanism of black hole formation. Understanding the process of black hole formation is one of the most important issues in the theory of stellar core collapse. A wide variety of recent observations have shown that black holes actually exist in the universe (e.g., see Ref.~\citen{Rees03}), and so far, about 20 stellar-mass black holes for which the mass is determined within a fairly small error have been observed in binary systems of our Galaxy and the Large Magellanic Clouds\cite{McC06}. The formation of a black hole through gravitational collapse is a highly nonlinear and dynamical phenomenon, and therefore, numerical simulation in full general relativity is the unique approach to this problem.
In spherical symmetry, fully general relativistic simulations of stellar core collapse to a black hole have been performed in a state-of-the-art manner, i.e., employing realistic EOSs, implementing microphysical processes, and solving the Boltzmann transfer of neutrinos\cite{Sumi06,Nakazato}. In the multidimensional case, by contrast, only simulations with simplified microphysics have been performed\cite{Sekiguchi05,Sekiguchi07,Liu}. Because multidimensional effects such as rotation and convection are likely to play an important role, multidimensional simulations in full general relativity employing a realistic EOS and detailed microphysics are necessary for clarifying the formation process of black holes. Furthermore, recent observations\cite{980425,030329,060218} have found spectroscopic connections between several SNe and long gamma-ray bursts (GRBs) and clarified that some long GRBs are associated with the collapse of massive stars. Supported by these observations, the collapsar model\cite{Collapsar} is currently one of the promising models for the central engine of GRBs. In this model, one assumes that the central engine of long GRBs is composed of a rotating black hole and a hot, massive accretion disk. Such a system may be formed as a result of the collapse of a rapidly rotating massive core. In this model, one of the promising processes of energy deposition to form a GRB fireball is the pair annihilation of neutrinos emitted from the hot, massive disk ($ \nu_{e} + \bar{\nu}_{e} \rightarrow e^{-} + e^{+} $). The collapsar model requires the progenitor core to be rotating rapidly enough that a massive accretion disk can be formed around the black hole. Recent general relativistic numerical analyses have shown that if a progenitor of the collapse is massive and the angular momentum is large enough, a black hole surrounded by a massive disk will be formed\cite{Shibata02,Sekiguchi04,Sekiguchi07}. However, the formation mechanism of such a system has not been clarified in detail. These facts also enhance the importance of exploring stellar core collapse to a black hole taking microphysical processes into account. As reviewed above, multidimensional simulations of stellar collapse in full general relativity including microphysics are currently among the most-demanded subjects in theoretical astrophysics. However, there has been no multidimensional code in full general relativity that self-consistently includes microphysics such as a realistic EOS, electron capture, and neutrino emission. There have only existed fully general relativistic codes in spherical symmetry\cite{Yamada99,Lieb04,Sumi05} or Newtonian codes in multiple dimensions\cite{Rampp00,Mezza01,Thompson03}. We have developed, for the first time, a fully general relativistic multidimensional code including a finite-temperature nuclear EOS, a self-consistent treatment of electron capture, and a simplified treatment of neutrino emission. In this code, in contrast with the previous ones\cite{Ott07,Dimm07}, the electron capture rate is treated in a self-consistent manner and the neutrino cooling is taken into account for the first time. Because it is not currently feasible to fully solve the neutrino transfer equations in the framework of general relativity in multiple dimensions, owing to restrictions on computational resources, it is reasonable to adopt some approximation for the transfer equations at present.
In this paper, the so-called neutrino leakage scheme is adopted as an approximate treatment of neutrino cooling, and a general relativistic version of the leakage scheme is developed. \subsection{The leakage schemes} The neutrino leakage schemes\cite{EP81,vRL81,vanRiper82,BLH82,RCK84,BCK85,Cooperstein88,Kotake03} as an approximate method for neutrino cooling have a well-established history (e.g.\ Ref.~\citen{Cooperstein88}). The basic concept of the original neutrino leakage schemes\cite{EP81,vRL81} is to treat the following two regions of the system separately: one is the region where the diffusion timescale of neutrinos is longer than the dynamical timescale, and hence, neutrinos are 'trapped' (neutrino-trapped region); the other is the region where the diffusion timescale is shorter than the dynamical timescale, and hence, neutrinos stream freely out of the system (free-streaming region). The idea of treating the diffusion region separately has been applied to more advanced methods for neutrino transfer (see e.g., Ref.~\citen{Ott08} and references therein). Electron neutrinos and electron anti-neutrinos in the neutrino-trapped region are assumed to be in the $\beta$-equilibrium state, and the {\it net} local rates of lepton-number and energy exchange with matter are set to zero there. To treat the diffusive emission of neutrinos leaking out of the neutrino-trapped region, simple phenomenological source terms based on the diffusion theory are introduced\cite{EP81,vRL81}. In the free-streaming region, on the other hand, it is assumed that neutrinos escape from the system without interacting with matter. Therefore, neutrinos carry away lepton number and energy according to the local weak-interaction rates. Note that the neutrino fractions are not solved for in the original version of the leakage scheme: only the total lepton fraction (from which the neutrino fractions are calculated under the $\beta$-equilibrium condition) is necessary in the neutrino-trapped region, and the neutrino fractions are set to zero in the free-streaming region. As a result, the neutrino quantities and the electron fraction are discontinuous at the boundary between the neutrino-trapped and free-streaming regions. In the previous studies\cite{EP81,vRL81,vanRiper82,BLH82,RCK84,Kotake03}, this boundary was given by hand as a single 'neutrino-trapping' density ($\rho_{\rm trap}$), without calculating the optical depths of neutrinos. However, the density at which neutrino trapping occurs in fact depends strongly on the neutrino energy ($E_{\nu}$) as\cite{Bethe1990} $\rho_{\rm trap} \propto E_{\nu}^{\ -3}$, and hence, there are different neutrino-trapping densities for different neutrino energies. In the previous leakage schemes\cite{EP81,vRL81,vanRiper82,BLH82,Kotake03}, on the other hand, all neutrinos were emitted at a single moment irrespective of their energy. Consequently, in the case of the so-called neutrino burst emission (e.g., Ref.~\citen{Bethe1990}), for example, the duration over which the neutrinos are emitted was shortened and the peak luminosity at the burst was overestimated\cite{vanRiper82,Kotake03,PhD}. The dependence of the neutrino-trapping densities and the neutrino diffusion rates on the neutrino energies is approximately taken into account in recent simulations of mergers of binary neutron stars\cite{RJS96,RL03}.
However, the lepton-number conservation equations for neutrinos are not solved there\cite{RJS96}, although they are important for estimating the phase-space blocking due to neutrinos. {\it Transfer equations} of neutrinos are not solved in the leakage schemes. Therefore, the leakage schemes cannot treat {\it non-local} interactions between neutrinos and matter. For example, the so-called neutrino heating\cite{BW85} and the neutrino pair annihilation cannot be treated in the leakage scheme. Nevertheless, we believe the detailed general relativistic leakage scheme presented in this paper to be a valuable approach, because even this approximate approach makes it possible to incorporate the effects of neutrinos semi-quantitatively, as shown in this paper. Also, the neutrino leakage scheme is an appropriate method for studying a number of phenomena for which the neutrino heating and neutrino transfer are expected not to be very important, e.g., prompt formation of a black hole and compact binary mergers. Both of these phenomena are targets of the present code. A first attempt towards a general relativistic leakage scheme was made in a previous study\cite{PhD}. In that study, it was not the regions of the system but the energy-momentum tensor of neutrinos that was decomposed into two parts: the 'trapped-neutrino' and 'streaming-neutrino' parts. However, the source terms of the hydrodynamic and lepton-number-conservation equations were determined using the single neutrino-trapping density, as in the case of the previous leakage schemes. In this paper, we develop a new code implementing the microphysical processes in the general relativistic framework based on the previous study\cite{PhD}. As an application of the code, we perform simulations of stellar core collapse. Many improved ingredients are incorporated in the present code: (1) The dependence of the neutrino diffusion rates on the neutrino energies is approximately taken into account following the recent study\cite{RL03} with detailed cross sections, instead of adopting the single neutrino-trapping density (see Appendix C). (2) The lepton-number conservation equations for neutrinos are solved to calculate self-consistently the chemical potentials of neutrinos. Then, the blocking effects due to the presence of neutrinos and the $\beta$-equilibrium condition can be taken into account more accurately (see \S \ref{GRleak}). (3) A stable explicit method for solving the equations of hydrodynamics, the lepton-number conservations, and the neutrinos is developed. Such a special procedure is necessary because the characteristic timescale of the weak-interaction processes (hereafter referred to as the WP timescale, $t_{\rm wp} \sim \vert Y_{e}/\dot{Y}_{e} \vert $) is much shorter than the dynamical timescale $t_{\rm dyn}$ in hot, dense matter regions\cite{Bruenn85,RL03}. Note that in the previous leakage schemes\cite{EP81,vRL81,vanRiper82,BLH82,Kotake03}, the $\beta$-equilibrium was assumed to be achieved in such regions (i.e. $\dot{Y}_{e} = 0$), and no such special treatments were required. See \S \ref{Difficulty} for further discussions and \S \ref{GRleak} for details of the method. (4) The electron capture rate is calculated in a detailed manner\cite{FFN85}, including effects of the so-called thermal unblocking\cite{CW84} (see Appendix A). The paper is organized as follows. First, the issues in implementing weak interactions and neutrino cooling in fully general relativistic simulations are briefly summarized in \S \ref{Difficulty}.
Then, the framework of the general relativistic leakage scheme is described in detail in \S~\ref{GRleak}. In \S~\ref{S_EOS}, the EOSs employed in this paper are described in some detail. The basic equations and numerical methods of the simulations are described in \S~\ref{S_Numerical}. Numerical results obtained in this paper are shown in \S~\ref{S_Results}. We devote \S~\ref{S_Summary} to a summary and discussions. In the appendices, details of the microphysics adopted in the present paper are summarized for convenience. Throughout the paper, geometrical units $c=G=1$ are used unless otherwise stated. \section{Issues in implementation of weak interactions and neutrino cooling in fully general relativistic simulation}\label{Difficulty} Because the characteristic timescale of the weak-interaction processes (the WP timescale $t_{\rm wp} \sim \vert Y_{e}/\dot{Y}_{e} \vert $) is much shorter than the dynamical timescale $t_{\rm dyn}$ in hot dense matter\cite{Bruenn85,RL03}, {\it explicit} numerical treatments of the weak interactions are computationally expensive in simple methods, as noted in the previous pioneering work by Bruenn\cite{Bruenn85}: a very short timestep ($\Delta t$ $<$ $t_{\rm wp} \ll t_{\rm dyn}$) would be required to solve the equations explicitly. The {\it net} rates of lepton-number and energy exchange between matter and neutrinos may not be large, and consequently, an {\it effective} timescale may not be as short as the dynamical timescale. However, this does not immediately imply that one can solve the equations explicitly without employing any prescription. For example, the achievement of $\beta$-equilibrium, where $\dot{Y}_{e}=0$, is the consequence of the cancellation of two very {\it large} weak interaction processes (the electron and the electron-neutrino captures, see Eq. (\ref{sourceYe})) and of the action of the phase-space blocking. Note that the weak interaction processes depend enormously both on the temperature and on the lepton chemical potentials. Therefore, a small error in the evaluation of the temperature, or a small deviation from the $\beta$-equilibrium due to a small error in the calculation of the lepton chemical potentials, will result in huge errors. Then, stiff source terms appear and an explicit numerical evolution often becomes unstable. Indeed, we found that a straightforward, explicit solution of the equations did not work. In the remainder of this section, we describe the issues of implementing weak interactions and neutrino cooling into the hydrodynamic equations in conservative schemes in fully general relativistic simulations. First, we illustrate that in the Newtonian framework, the equations may be solved implicitly in a relatively simple manner\cite{BW82,Bruenn85,MBHLSR87,MB93,RJ02,Livne04,Buras06,Burrows07,MJ09} (see also Refs.~\citen{MM99} and \citen{Ott08} and references therein). The equations of hydrodynamics, lepton-number conservation, and neutrino processes are schematically written as \begin{eqnarray} \dot{\rho } &=& 0 , \\ \dot{v_{i} } &=& S_{v_{i} }(\rho, Y_{e}, T, Q_{\nu}) , \\ \dot{Y_{e} } &=& S_{Y_{e} }(\rho, Y_{e}, T, Q_{\nu}) , \\ \dot{e } &=& S_{e }(\rho, Y_{e}, T, Q_{\nu}) , \\ \dot{Q_{\nu}} &=& S_{Q_{\nu}}(\rho, Y_{e}, T, Q_{\nu}) , \end{eqnarray} where $\rho$ is the rest-mass density, $v_{i}$ is the velocity, $Y_{e}$ is the electron fraction, $e$ is the (internal) energy of the matter, $T$ is the temperature, and $Q_{\nu}$ stands for the relevant neutrino quantities. We here omit the transport terms.
The $S$'s on the right-hand side stand for the relevant source terms. Comparing the quantities on the left-hand side with the argument quantities in the source terms, only the relation between $e$ and $T$ is nontrivial. Usually, the EOSs employed in simulations are tabulated, and a one-dimensional search over the EOS table is required to solve for the temperature. Due to the relatively simple relations between the quantities to be evolved and the argument quantities, the above equations may be solved implicitly in a straightforward (although complicated) manner. In the relativistic framework, the situation becomes much more complicated in conservative schemes, because the Lorentz factor ($\Gamma$) is coupled with the rest-mass density and the energy density (see Eqs. (\ref{continuS}) and (\ref{eneS}), where $w \equiv \alpha u^{t}$ is used instead of $\Gamma$), and because the specific enthalpy ($h = h(\rho,Y_{e},T)$) is coupled with the momentum (see Eq. (\ref{momS})). It should be noted that the previous fully general relativistic works in spherical symmetry\cite{Yamada99,Lieb04} are based on the so-called Misner-Sharp coordinates\cite{MS64}. There are no such complicated couplings in these {\it Lagrangian} coordinates. Accordingly, the equations may be solved essentially in the same manner as in the Newtonian framework. Because no such simple Lagrangian coordinates are known in the multidimensional case, the complicated couplings inevitably appear in the relativistic framework. Omitting the factors associated with the geometric variables (which are usually updated before solving the hydrodynamics equations) and the transport terms, the equations to be solved in the general relativistic framework are schematically written as \begin{eqnarray} \dot{\rho_{*}}(\rho,\Gamma) &=& 0 , \label{rhoEq} \\ \dot{\hat{u}}_{i}(u_{i},h) = \dot{\hat{u}}_{i}(u_{i},\rho,Y_{e},T) &=& S_{\hat{u}_{i}}(\rho, Y_{e}, T, Q_{\nu}, \Gamma) , \\ \dot{Y_{e} } &=& S_{Y_{e} }(\rho, Y_{e}, T, Q_{\nu}, \Gamma) , \\ \dot{\hat{e}}(\rho, Y_{e}, T, \Gamma) &=& S_{\hat{e}}(\rho, Y_{e}, T, Q_{\nu}, \Gamma) , \\ \dot{Q_{\nu}} &=& S_{Q_{\nu}}(\rho, Y_{e}, T, Q_{\nu}, \Gamma) , \end{eqnarray} where $\rho_{*}$ is a weighted density, $\hat{u}_{\alpha}$ is a weighted four-velocity, and $\hat{e}$ is a weighted energy density (see \S~\ref{BasicEq} for the definitions of these variables). The Lorentz factor is calculated by solving the normalization condition $u^{\alpha}u_{\alpha}=-1$, which is a rather complicated nonlinear equation schematically written as \begin{equation} f_{\rm normalization}(\hat{u_{i}}, \Gamma) = f_{\rm normalization}(u_{i}, \rho, Y_{e}, T, \Gamma) = 0. \label{nomEq} \end{equation} The accurate calculation of the Lorentz factor, i.e., the accurate solution of the normalization condition, is very important in numerical relativistic hydrodynamics. Now, it is obvious that the argument quantities in the source terms are not simply related to the evolved quantities on the left-hand side of Eqs. (\ref{rhoEq})--(\ref{nomEq}). Solving the equations implicitly is not as straightforward as in the Newtonian case, and no successful formulations have been developed. Moreover, it is not clear whether a convergent solution can be {\it stably} obtained numerically, because they are simultaneous nonlinear equations.
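The severity of the stiffness discussed above can be illustrated with a toy relaxation model. The following Python sketch (the timescales and the equilibrium value are purely illustrative) integrates $\dot{Y}_{e} = -(Y_{e}-Y_{e}^{\beta})/t_{\rm wp}$ with an explicit Euler method:
\begin{verbatim}
# Toy model of the stiffness: Y_e relaxes toward a beta-equilibrium
# value on the WP timescale t_wp (all numbers are illustrative).
t_wp, t_dyn = 1.0e-9, 1.0e-3   # t_wp << t_dyn
y_eq = 0.3

def euler_step(y, dt):
    return y + dt * (-(y - y_eq) / t_wp)

print(euler_step(0.45, t_dyn))  # dt ~ t_dyn: ~ -1.5e5, violently unstable

y = 0.45
for _ in range(10):
    y = euler_step(y, 0.5 * t_wp)  # dt < 2 t_wp: stable ...
print(y)                           # ... ~ 0.300, but far too expensive
\end{verbatim}
Resolving $t_{\rm wp}$ explicitly is thus hopeless for a hydrodynamical simulation, which further motivates the reformulation described below.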
In light of these difficulties, it may not be a poor choice to adopt an alternative approach in which the equations are solved {\it explicitly} with some approximations, as described in the next section\footnote{It should be noted that implicit schemes are also approximate, because the short WP timescale associated with the weak interactions is not fully resolved.}. \section{General relativistic neutrino leakage scheme}\label{GRleak} In this section, we describe a method for approximately solving the hydrodynamic equations coupled with neutrino radiation in an explicit manner. As described in the previous section, because of the relation $t_{\rm wp} \ll t_{\rm dyn}$ in the hot dense matter regions, the source terms in the equations become too {\it stiff} for the equations to be solved explicitly in a straightforward manner. The characteristic timescale of the leakage of neutrinos from the system, $t_{\rm leak}$, by contrast, is much longer than $t_{\rm wp}$ in the hot dense matter region. Rather, $t_{\rm leak} \sim L/c \sim t_{\rm dyn}$, where $L$ is the characteristic length scale of the system. On the other hand, $t_{\rm leak}$ is comparable to $t_{\rm wp}$ in the free-streaming regions, but there $t_{\rm wp}$ is longer than or comparable with $t_{\rm dyn}$. All these facts imply that it is not the WP timescale but the leakage timescale that directly determines the evolution of the system. Using this fact, we approximate some of the original equations and reformulate them so that the source terms are characterized by the leakage timescale $t_{\rm leak}$. \subsection{Decomposition of neutrino energy-momentum tensor}\label{EnergyMomentum} The basic equations of general relativistic hydrodynamics with neutrinos are \begin{equation} \nabla_{\alpha}(T^{\rm Total})^{\alpha}_{\beta} = \nabla_{\alpha}\left[(T^{\rm F})^{\alpha}_{\beta} + (T^{\nu})^{\alpha}_{\beta} \right] = 0, \label{eqtot} \end{equation} where $(T^{\rm Total})_{\alpha \beta}$ is the total energy-momentum tensor, and $(T^{\rm F})_{\alpha \beta}$ and $(T^{\nu})_{\alpha \beta}$ are the energy-momentum tensors of the fluid and the neutrinos, respectively. Equation (\ref{eqtot}) can be written as \begin{eqnarray} \nabla_{\alpha}(T^{{\rm F}})^{\alpha}_{\beta} &=& Q_{\beta}, \label{T_Eq1} \\ \nabla_{\alpha}(T^{\nu})^{\alpha}_{\beta} &=& -Q_{\beta} \label{T_Eq2}, \end{eqnarray} where the source term $Q_{\alpha}$ is regarded as the local production rate of neutrinos through the weak processes. Now, the problem is that the source term $Q_{\alpha}$ becomes too stiff to solve explicitly in hot dense matter regions where $t_{\rm wp} \ll t_{\rm dyn}$. To overcome this situation, the following procedures are adopted. First, it is assumed that the energy-momentum tensor of the neutrinos can be decomposed into 'trapped-neutrino' ($(T^{\nu,{\rm T}})_{\alpha\beta}$) and 'streaming-neutrino' ($(T^{\nu,{\rm S}})_{\alpha\beta}$) parts as \cite{PhD} \begin{equation} (T^{\nu})_{\alpha\beta} = (T^{\nu,{\rm T}})_{\alpha\beta} + (T^{\nu,{\rm S}})_{\alpha\beta} . \label{nudecompose} \end{equation} Here, the trapped neutrinos phenomenologically represent neutrinos which interact sufficiently frequently with matter and are thermalized. The streaming-neutrino part, on the other hand, describes a phenomenological flow of neutrinos streaming out of the system \cite{PhD} (see also Ref.~\citen{Lieb09}, where a more sophisticated method in terms of the distribution function is adopted in the Newtonian framework).
Second, the locally produced neutrinos are assumed to {\it leak out} to become the streaming-neutrinos with a leakage rate $Q^{\rm leak}_{\alpha}$: \begin{equation} \nabla_{\beta}(T^{\nu,{\rm S}})^{\beta}_{\alpha} = Q^{\rm leak}_{\alpha}. \label{T_Eq_nuS} \end{equation} Then, the equation of the trapped-neutrino part is \begin{equation} \nabla_{\beta}(T^{\nu,{\rm T}})^{\beta}_{\alpha} = Q_{\alpha} - Q^{\rm leak}_{\alpha}. \label{T_Eq_nuT} \end{equation} Third, the trapped-neutrino part is combined with the fluid part as \begin{equation} T_{\alpha\beta} \equiv (T^{\rm F})_{\alpha\beta} + (T^{\nu,{\rm T}})_{\alpha\beta}, \end{equation} and Eqs. (\ref{T_Eq1}) and (\ref{T_Eq_nuT}) are combined to give \begin{equation} \nabla_{\beta}T^{\beta}_{\alpha} = -Q^{\rm leak}_{\alpha} \label{T_Eq_M}. \end{equation} Thus, the equations to be solved are changed from Eqs. (\ref{T_Eq1}) and (\ref{T_Eq2}) to Eqs. (\ref{T_Eq_M}) and (\ref{T_Eq_nuS}). Note that the new equations include only the source term $Q^{\rm leak}_{\alpha}$, which is characterized by the leakage timescale $t_{\rm leak}$. The definition of $Q^{\rm leak}_{\alpha}$ will be given in \S~\ref{leakagerate}. The energy-momentum tensor of the fluid and trapped-neutrino parts ($T_{\alpha \beta}$) is treated as that of a perfect fluid, \begin{equation} T_{\alpha\beta} = (\rho + \rho \varepsilon + P) u_{\alpha}u_{\beta} + P g_{\alpha\beta}, \label{T_fluid} \end{equation} where $\rho$ and $u^{\alpha}$ are the rest-mass density and the 4-velocity. The specific internal energy density ($\varepsilon$) and the pressure ($P$) are the sums of contributions from the baryons (free protons, free neutrons, $\alpha$-particles, and heavy nuclei), the leptons (electrons, positrons, and {\it trapped neutrinos}), and the photons: \begin{eqnarray} P &=& P_{B} + P_{e} + P_{\nu} + P_{ph}, \\ \varepsilon &=& \varepsilon_{B} + \varepsilon_{e} + \varepsilon_{\nu} + \varepsilon_{ph} , \end{eqnarray} where the subscripts '$B$', '$e$', '$ph$', and '$\nu$' denote the components of the baryons, electrons and positrons, photons, and trapped neutrinos, respectively. The streaming-neutrino part, on the other hand, is set to the general form \begin{equation} (T^{\nu,{\rm S}})_{\alpha\beta}= E n_{\alpha}n_{\beta} + F_{\alpha}n_{\beta} + F_{\beta}n_{\alpha} + P_{\alpha\beta}, \label{T_neutrino} \end{equation} where $F_{\alpha}n^{\alpha}=P_{\alpha \beta}n^{\alpha}=0$. In order to close the system, we need an explicit expression for $P_{\alpha \beta}$. In this paper, we adopt the simple form $P_{\alpha \beta}=\chi E \gamma_{\alpha \beta}$ with $\chi = 1/3$. This approximation may work well in high-density regions but will break down in low-density regions. However, this will not affect the dynamics, because the total amount of streaming-neutrinos emitted in low-density regions will be small. Of course, a more sophisticated treatment will be necessary in a future study. \subsection{The lepton-number conservation equations}\label{Lepton} The conservation equations of the lepton fractions are written schematically as \begin{eqnarray} &&\!\! \frac{d Y_{e}}{dt} = -\gamma_{e} , \label{dYe} \\ &&\!\! \frac{d Y_{\nu e}}{dt} = \gamma_{\nu e}, \label{dYnu} \\ &&\!\! \frac{d Y_{\bar{\nu} e}}{dt} = \gamma_{\bar{\nu} e}, \label{dYna} \\ &&\!\!
\frac{d Y_{\nu x}}{dt} = \gamma_{\nu x}, \label{dYno} \end{eqnarray} where $Y_{e}$, $Y_{\nu e}$, $Y_{\bar{\nu} e}$, and $Y_{\nu x}$ denote the electron fraction, the electron neutrino fraction, the electron anti-neutrino fraction, and the $\mu$ and $\tau$ neutrino and anti-neutrino fractions, respectively. We note that in the previous simulations based on the leakage schemes \cite{EP81,vRL81,Kotake03,RJS96}, the neutrino fractions were not solved for. The source terms of the neutrino fractions can be written, on the basis of the present leakage scheme, as \begin{eqnarray} &&\!\! \gamma_{\nu e} = \gamma_{\nu e}^{\rm local} - \gamma_{\nu e}^{\rm leak}, \\ &&\!\! \gamma_{\bar{\nu} e} = \gamma_{\bar{\nu} e}^{\rm local} - \gamma_{\bar{\nu} e}^{\rm leak}, \\ &&\!\! \gamma_{\nu x} = \gamma_{\nu x}^{\rm local} - \gamma_{\nu x}^{\rm leak}, \end{eqnarray} where the $\gamma^{\rm local}$'s and $\gamma^{\rm leak}$'s are the local production and leakage rates of each neutrino species, respectively (see \S~\ref{leakagerate}). Note that only the trapped neutrinos contribute to the neutrino fractions. Assuming that the trapped neutrinos are thermalized and that their distribution function is the equilibrium Fermi-Dirac one, the chemical potentials of the neutrinos can be calculated from the neutrino fractions. Then the thermodynamical quantities of the neutrinos can also be calculated. The source term of the electron fraction conservation is \begin{equation} \gamma_{e} = \gamma_{\nu e}^{\rm local} - \gamma_{\bar{\nu} e}^{\rm local}.\label{sourceYe} \end{equation} Because the $\gamma^{\rm local}_{\nu}$'s are characterized by the WP timescale $t_{\rm wp}$, some procedure is necessary to solve the lepton conservation equations explicitly. We propose the following simple procedure to solve the equations stably. First, in each timestep $n$, the conservation equation of the {\it total} lepton fraction ($Y_{l}=Y_{e}-Y_{\nu e}+Y_{\bar{\nu} e}$), \begin{eqnarray} &&\!\! \frac{d Y_{l}}{dt} = -\gamma_{l}, \label{dYl} \end{eqnarray} is solved together with the conservation equation of $Y_{\nu x}$, Eq. (\ref{dYno}), in advance of solving the whole set of lepton conservation equations (Eqs. (\ref{dYe})--(\ref{dYno})). Note that the source term $\gamma_{l} = \gamma_{\nu e}^{\rm leak} - \gamma_{\bar{\nu} e}^{\rm leak}$ is characterized by the leakage timescale $t_{\rm leak}$, so that this equation can be solved explicitly on the hydrodynamic timescale. Then, assuming that the $\beta$-equilibrium is achieved, the values of the lepton fractions in the $\beta$-equilibrium ($Y_{e}^{\beta}$, $Y_{\nu e}^{\beta}$, and $Y_{\bar{\nu} e}^{\beta}$) are calculated from the evolved $Y_{l}$. Second, regarding $Y_{\nu e}^{\beta}$ and $Y_{\bar{\nu} e}^{\beta}$ as the maximum allowed values of the neutrino fractions in the next timestep $n+1$, the source terms are limited so that the $Y_{\nu}$'s in the timestep $n+1$ cannot exceed the $Y_{\nu}^{\beta}$'s. Then, the whole set of lepton conservation equations (Eqs. (\ref{dYe})--(\ref{dYno})) is solved explicitly using these limiters. Third, the following conditions are checked: \begin{eqnarray} \mu_{p}+\mu_{e} < \mu_{n}+\mu_{\nu e} , \\ \mu_{n}-\mu_{e} < \mu_{p}+\mu_{\bar{\nu} e}, \end{eqnarray} where $\mu_{p}$, $\mu_{n}$, $\mu_{e}$, $\mu_{\nu e}$, and $\mu_{\bar{\nu} e}$ are the chemical potentials of protons, neutrons, electrons, electron neutrinos, and electron anti-neutrinos, respectively.
If both conditions are satisfied, the lepton fractions in the timestep $n+1$ are set to their $\beta$-equilibrium values, $Y_{e}^{\beta}$, $Y_{\nu e}^{\beta}$, and $Y_{\bar{\nu} e}^{\beta}$. On the other hand, if either or both conditions are not satisfied, the lepton fractions in the timestep $n+1$ are set to those obtained by solving the whole set of lepton-number conservation equations. A limiter for the evolution of $Y_{\nu x}$ may also be necessary in the case where the pair processes are dominant, for example, in simulations of the collapse of a population III stellar core. In this case, the value of $Y_{\nu x}$ at the pair equilibrium (i.e. at $\mu_{\nu x}=0$), $Y_{\nu x}^{\rm pair}$, may be used to limit the source term. \subsection{Definition of leakage rates}\label{leakagerate} In this subsection, the definitions of the leakage rates $Q_{\alpha}^{\rm leak}$ and $\gamma_{\nu}^{\rm leak}$ are presented. Because $Q^{\rm leak}_{\nu}$ may be regarded as the emissivity of neutrinos measured in the {\it fluid rest frame}, $Q^{\rm leak}_{\alpha}$ is defined as \cite{SSR07} \begin{equation} Q^{\rm leak}_{\alpha} = Q^{\rm leak}_{\nu}u_{\alpha}. \label{leakage_source_Q} \end{equation} The leakage rates $Q^{\rm leak}_{\nu}$ and $\gamma^{\rm leak}_{\nu}$ are assumed to satisfy the following properties: \begin{enumerate} \item The leakage rates approach the local rates $Q_{\nu}^{\rm local}$ and $\gamma_{\nu}^{\rm local}$ in the low-density, transparent region. \item The leakage rates approach the diffusion rates $Q_{\nu}^{\rm diff}$ and $\gamma_{\nu}^{\rm diff}$ in the high-density, opaque region. \item The above two limits are connected smoothly. \end{enumerate} Here, the local rates can be calculated based on the theory of weak interactions, and the diffusion rates can be determined based on the diffusion theory (see the appendices for the definitions of the local and diffusion rates adopted in this paper). There are several possible prescriptions to satisfy requirement (iii) \cite{RJS96,RL03}. In this paper, the leakage rates are defined by \begin{eqnarray} &&\!\! Q_{\nu}^{\rm leak}= (1-e^{-b\tau_{\nu}}) Q_{\nu}^{\rm diff} + e^{-b\tau_{\nu}} Q_{\nu}^{\rm local}, \label{Q_leak} \\ &&\!\! \gamma_{\nu}^{\rm leak}= (1-e^{-b\tau_{\nu}}) \gamma_{\nu}^{\rm diff} + e^{-b\tau_{\nu}} \gamma_{\nu}^{\rm local}, \label{g_leak} \end{eqnarray} where $\tau_{\nu}$ is the optical depth of neutrinos and $b$ is a parameter which is typically set as $b^{-1}=2/3$. The optical depth can be computed from the cross sections in the standard manner \cite{RJS96,RL03}. In the present implementation, it is not necessary to artificially divide the system into neutrino-trapped and free-streaming regions by a single neutrino-trapping density. Therefore, there is no discontinuous boundary such as the one that existed in the previous leakage schemes \cite{EP81,vRL81,Kotake03}. As the local production reactions of neutrinos, the electron and positron captures\cite{FFN85} ($\gamma_{\nu e}^{\rm ec}$ and $\gamma_{\bar{\nu} e}^{\rm pc}$), the electron-positron pair annihilation\cite{CHB86} ($\gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm pair}$ for electron-type neutrinos and $\gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm pair}$ for the other types), the plasmon decays\cite{RJS96} ($\gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm plas}$ and $\gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm plas}$), and the Bremsstrahlung processes\cite{BRT06} ($\gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm Brems}$ and $\gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm Brems}$) are considered in this paper.
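The blending prescription of Eqs. (\ref{Q_leak}) and (\ref{g_leak}) is straightforward to implement; a minimal Python sketch (a single energy bin; the function name and arguments are illustrative, not those of our actual code) reads:
\begin{verbatim}
import numpy as np

def leakage_rate(q_local, q_diff, tau_nu, b=1.5):  # b = (2/3)^{-1}
    # Smooth interpolation between the free-streaming (local) and
    # diffusion limits, controlled by the optical depth tau_nu.
    weight = np.exp(-b * tau_nu)
    return (1.0 - weight) * q_diff + weight * q_local

print(leakage_rate(1.0, 0.1, tau_nu=0.01))  # transparent: close to local
print(leakage_rate(1.0, 0.1, tau_nu=10.0))  # opaque: close to diffusion
\end{verbatim}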
Given these reactions, the local rates for the neutrino fractions are \begin{eqnarray} && \gamma_{\nu e}^{\rm local} = \gamma_{\nu e}^{\rm ec} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm pair} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm plas} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm Brems}, \label{gnlocal}\\ && \gamma_{\bar{\nu} e}^{\rm local} = \gamma_{\bar{\nu} e}^{\rm pc} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm pair} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm plas} + \gamma_{\nu_{e} \bar{\nu}_{e}}^{\rm Brems}, \label{galocal}\\ && \gamma_{\nu x}^{\rm local} = \gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm pair} + \gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm plas} + \gamma_{\nu_{x} \bar{\nu}_{x}}^{\rm Brems}. \label{gxlocal} \end{eqnarray} Similarly, the local neutrino energy emission rate $Q_{\nu}^{\rm local}$ is given by \begin{eqnarray} Q_{\nu}^{\rm local} = Q_{\nu e}^{\rm ec} + Q_{\bar{\nu} e}^{\rm pc} &+& 2\,(Q_{\nu_{e} \bar{\nu}_{e}}^{\rm pair} + Q_{\nu_{e} \bar{\nu}_{e}}^{\rm plas} + Q_{\nu_{e} \bar{\nu}_{e}}^{\rm Brems}) \nonumber \\ &+& 4\,(Q_{\nu_{x} \bar{\nu}_{x}}^{\rm pair} + Q_{\nu_{x} \bar{\nu}_{x}}^{\rm plas} + Q_{\nu_{x} \bar{\nu}_{x}}^{\rm Brems})\ . \label{Qlocal} \end{eqnarray} The explicit forms of the local rates in Eqs. (\ref{gnlocal})--(\ref{Qlocal}) can be found in Appendices A and B. We follow the recent work by Rosswog and Liebend\"orfer\cite{RL03} for the diffusive neutrino emission rates $\gamma_{\nu}^{\rm diff}$ and $Q_{\nu}^{\rm diff}$ in Eqs. (\ref{Q_leak}) and (\ref{g_leak}). The explicit forms of $\gamma_{\nu}^{\rm diff}$ and $Q_{\nu}^{\rm diff}$ are presented in Appendix C. \section{Equation of state}\label{S_EOS} In this section, we summarize the details of the EOSs adopted in our current code. \subsection{Baryons} \label{EOS_Baryon} In the present version of our code, we employ the EOS by Shen et al.\cite{Shen98}, which is derived from the relativistic mean field theory\cite{RMF} based on the relativistic Br\"uckner-Hartree-Fock theory\cite{RBHF}. The so-called parameter set TM1\cite{RMF} is adopted to reproduce the characteristic properties of heavy nuclei. The maximum mass of a cold spherical neutron star in this EOS, $\approx 2.2 M_{\odot}$\cite{Shen98}, is much larger than the canonical neutron star mass of $\approx 1.4M_{\odot}$. The framework of the relativistic mean field theory is extended with the Thomas-Fermi spherical cell model approximation to describe not only homogeneous matter but also inhomogeneous matter. The thermodynamical quantities of dense matter at various sets of $(\rho, Y_{p}, T)$ are calculated to construct the numerical data table for the simulation. The table covers a wide range of density, $10^{5.1}$--$10^{15.4}$ g/cm$^{3}$, electron fraction, $0.0$--$0.56$, and temperature, $0$--$100$ MeV, as required for supernova simulations. It should be noted that causality is guaranteed to be satisfied in this framework, whereas the sound velocity sometimes exceeds the velocity of light in non-relativistic frameworks, e.g., in the EOS by Lattimer and Swesty\cite{LS91}. This is one of the benefits of the relativistic EOS. Although we employ the nuclear EOS by Shen et al. in this work, it is easy to replace the EOS. In the future we plan to implement other EOSs, such as a hyperonic matter EOS\cite{Ishizuka}. Because the table of the original EOS by Shen et al. does not include the thermodynamical quantities of the leptons (electrons, positrons, and neutrinos if necessary) and photons, one has to include them in the table consistently.
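Since all thermodynamical quantities are tabulated on a grid in $(\rho, Y_{p}, T)$, evaluating the EOS during a simulation reduces to a table lookup with interpolation. A minimal Python sketch of such a lookup (trilinear interpolation on logarithmic axes; the grid spacing and the random placeholder data are illustrative assumptions, not the actual table) is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative axes spanning the quoted table ranges.
log_rho = np.linspace(5.1, 15.4, 104)   # log10(rho / g cm^-3)
y_p     = np.linspace(0.0, 0.56, 57)
log_T   = np.linspace(-2.0, 2.0, 41)    # log10(T / MeV); T = 0 needs care

# Placeholder data; in practice this is read from the Shen et al. table.
P_table = np.random.default_rng(0).random((104, 57, 41))

pressure = RegularGridInterpolator((log_rho, y_p, log_T), P_table)
print(pressure([[12.0, 0.3, 0.5]]))  # rho=1e12 g/cc, Y_p=0.3, T~3.2 MeV
\end{verbatim}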
\subsection{Electrons and Positrons} To calculate the pressure and the internal energy of the electrons and positrons consistently, the charge neutrality condition $Y_{p} = Y_{e}$ should be solved to determine the electron chemical potential $\mu_{e}$ for each value of the baryon rest-mass density $\rho$ and the temperature $T$ in the EOS table. Namely, it is required to solve the equation \begin{equation} n_{e}(\mu_{e},T) \equiv n_{-} - n_{+} = \frac{\rho Y_{e}}{m_{u}} \label{n_to_mu} \end{equation} in terms of $\mu_{e}$ for given values of $\rho$, $T$, and $Y_{e}\ (= Y_{p})$. Here, $m_{u} = 931.49432$ MeV is the atomic mass unit, and $n_{-}$ and $n_{+}$ are the total number densities (i.e., including the electron-positron pairs) of the electrons and positrons, respectively. Assuming that the electrons obey the Fermi-Dirac distribution (which is derived under the assumption of thermodynamic equilibrium), the number density ($n_{-}$), the pressure ($P_{-}$), and the internal energy density ($u_{-}$) of the electrons are written as\cite{Cox} \begin{eqnarray} n_{-} &=& \frac{1}{\pi^{2} \hbar^{3}} \int_{0}^{\infty} \frac{p^{2}dp} {\exp \left[ -\eta_{e} + \tilde{\epsilon}/k_{B} T\right] + 1}, \\ P_{-} &=& \frac{1}{3\pi^{2} \hbar^{3}} \int_{0}^{\infty} \frac{p^{3}(\partial \tilde{\epsilon}/\partial p)dp} {\exp \left[ -\eta_{e} + \tilde{\epsilon}/k_{B} T\right] + 1}, \\ u_{-} &=& \frac{1}{\pi^{2} \hbar^{3}}\int_{0}^{\infty} \frac{p^{2}\tilde{\epsilon} dp} {\exp \left[ -\eta_{e} + \tilde{\epsilon}/k_{B} T \right] + 1}. \end{eqnarray} Here $\hbar$, $k_{B}$, and $\eta_{e} \equiv \mu_{e}/k_{B}T$ are the reduced Planck constant, the Boltzmann constant, and the so-called degeneracy parameter, and $\tilde{\epsilon}(p) = \sqrt{m_{e}^{2}c^{4} + p^{2}c^{2}} - m_{e}c^{2}$ is the kinetic energy of a free electron. If we choose the zero point of our energy scale for electrons at $\tilde{\epsilon} = 0$, we have to assign a total energy of $\tilde{\epsilon}_{+} = \sqrt{m_{e}^{2}c^{4} + p^{2}c^{2}} + m_{e}c^{2}$ to a free positron\cite{Cox}. Thus, the number density ($n_{+}$), the pressure ($P_{+}$), and the internal energy density ($u_{+}$) of the positrons are given by\cite{Cox} \begin{eqnarray} n_{+} &=& \frac{1}{\pi^{2} \hbar^{3}}\int_{0}^{\infty} \frac{p^{2}dp}{\exp \left[ -\eta_{+} + \tilde{\epsilon}_{+}/k_{B} T\right] + 1}, \\ P_{+} &=& \frac{1}{3\pi^{2} \hbar^{3}}\int_{0}^{\infty} \frac{p^{3}(\partial \tilde{\epsilon}_{+}/\partial p)dp} {\exp \left[ -\eta_{+} + \tilde{\epsilon}_{+}/k_{B} T\right] + 1}, \\ u_{+} &=& \frac{1}{\pi^{2} \hbar^{3}}\int_{0}^{\infty} \frac{p^{2}(\tilde{\epsilon}+2m_{e}c^{2})dp} {\exp \left[ -\eta_{+} + \tilde{\epsilon}_{+}/k_{B} T\right] + 1}, \end{eqnarray} where $\eta_{+} = -\eta_{e}$ is the degeneracy parameter of the positrons. \subsection{Photons} The pressure and the specific internal energy density of photons are given by \begin{eqnarray} P_{r} = \frac{a_{r}T^{4}}{3},\ \ \varepsilon_{r} = \frac{a_{r}T^{4}}{\rho}, \end{eqnarray} where $a_{r} = (\pi^{2}k_{B}^{4})/(15c^{3}\hbar^{3})$ is the radiation constant and $c$ is the velocity of light. \subsection{Trapped neutrinos} In this paper, the trapped neutrinos are assumed to interact with matter sufficiently frequently that they are thermalized. Therefore, they are described as ideal Fermi gases at the matter temperature. Then, from the neutrino fractions $Y_{\nu}$, the chemical potentials of the neutrinos are calculated by solving \begin{equation} Y_{\nu} = Y_{\nu}(\mu_{\nu}, T).
\end{equation} Using the chemical potentials $\mu_{\nu}$ and the matter temperature, the pressure and the internal energy of the trapped neutrinos are calculated in the same manner as for the electrons. \subsection{The sound velocity} In the high-resolution shock-capturing scheme for hydrodynamics, we in general need to evaluate the sound velocity $c_{s}$, \begin{equation} c_{s}^{\,2} = \frac{1}{h}\left[ \left.\frac{\partial P}{\partial \rho}\right|_{\epsilon} +\frac{P}{\rho^{2}} \left.\frac{\partial P}{\partial \epsilon}\right|_{\rho} \right]. \label{defcs} \end{equation} The derivatives of the pressure are calculated as \begin{eqnarray} \left.\frac{\partial P}{\partial \rho}\right|_{\epsilon} &=& \sum_{i=B,e,r,\nu} \left[ \left.\frac{\partial P_{i}}{\partial \rho}\right|_{T} -\left.\frac{\partial P_{i}}{\partial T }\right|_{\rho} \left( \sum_{j=B,e,r,\nu} \left.\frac{\partial \epsilon_{j}}{\partial \rho}\right|_{T} \right) \left( \sum_{k=B,e,r,\nu} \left.\frac{\partial \epsilon_{k}}{\partial T}\right|_{\rho} \right)^{-1} \right], \label{Prho} \\ \left.\frac{\partial P}{\partial \epsilon}\right|_{\rho} &=& \left(\sum_{i=B,e,r,\nu} \left.\frac{\partial P_{i}}{\partial T}\right|_{\rho} \right) \left(\sum_{j=B,e,r,\nu} \left.\frac{\partial \epsilon_{j}}{\partial T}\right|_{\rho} \right)^{-1} , \label{Peps} \end{eqnarray} where '$B$', '$e$', '$r$', and '$\nu$' in the sums denote the baryon, electron, photon, and neutrino quantities, respectively. The derivatives for the baryon parts are evaluated by taking finite differences of the table data, while the derivatives for the electron parts can be evaluated semi-analytically. Because an EOS table for baryons in general contains phase-transition regions, and the EOS may moreover contain some non-smooth, spiky structures, careful treatment is necessary when evaluating the derivatives of the thermodynamical quantities. In the present EOS table, the derivatives are carefully evaluated so that there are no spiky behaviors in the resulting sound velocities. \section{Basic equations and Numerical methods} \label{S_Numerical} \subsection{Einstein's equation and gauge conditions} The standard variables in the 3+1 decomposition are the three-dimensional metric $\gamma_{ij}$ and the extrinsic curvature $K_{ij}$ on the three-dimensional hypersurface\cite{York79}, defined by \begin{eqnarray} \gamma_{\mu\nu} &\equiv& g_{\mu\nu} + n_{\mu}n_{\nu}, \\ K_{\mu\nu} &\equiv& - \frac{1}{2} \hbox{${\cal L}\llap{ --\,}$} _{n} \gamma_{\mu\nu}, \end{eqnarray} where $g_{\mu\nu}$ is the spacetime metric, $n_{\mu}$ is the unit normal to the three-dimensional hypersurface, and $\hbox{${\cal L}\llap{ --\,}$} _{n}$ is the Lie derivative with respect to the unit normal $n^{\mu}$. Then we can write the line element in the form \begin{equation} ds^{2} = - \alpha^{2} dt^{2} + \gamma_{ij}(dx^{i}+\beta^{i}dt) (dx^{j}+\beta^{j}dt), \end{equation} where $\alpha$ and $\beta^{i}$ are the lapse function and the shift vector, which describe the gauge degrees of freedom. In the BSSN reformulation\cite{SN95,BS98}, the spatial metric $\gamma_{ij}$ is conformally decomposed as $\gamma _{ij} = e^{ 4\phi}\tilde{\gamma}_{ij}$, where the condition $\det (\tilde{\gamma}_{ij}) = 1$ is imposed on the conformal metric $\tilde{\gamma}_{ij}$. From this condition, the conformal factor is written as $\phi = \frac{1}{12}\ln \gamma$, where $ \gamma \equiv \det(\gamma_{ij})$.
The extrinsic curvature $K_{ij}$ is decomposed into the trace part $K$ and the traceless part $A_{ij}$ as $K_{ij} = A_{ij} + (1/3)\gamma_{ij}K$. The traceless part $A_{ij}$ is conformally decomposed as $A_{ij} = e^{4\phi}\tilde{A}_{ij}$. Thus, the fundamental quantities for the evolution equations are now split into $\phi$, $\tilde{\gamma}_{ij}$, $K$, and $\tilde{A}_{ij}$. Furthermore, the auxiliary variable $ F_{i} \equiv \delta^{jk}\partial_{k}\tilde{\gamma}_{ij} $ is introduced in the BSSN reformulation\cite{SN95}. The basic equations to be solved are \begin{eqnarray} && \left(\partial_{t} - \beta^{k}\partial_{k} \right) \phi = \frac{1}{6}\left(-\alpha K + \partial_{k} \beta^{k} \right) , \label{phidevelopP}\\ && \left(\partial_{t} - \beta^{k}\partial_{k} \right)\tilde{\gamma}_{ij} = -2 \alpha \tilde{A}_{ij} + \tilde{\gamma}_{ik}\partial_{j}\beta^{k} + \tilde{\gamma}_{jk}\partial_{i}\beta^{k} - \frac{2}{3}\tilde{\gamma}_{ij} \partial_{k}\beta^{k}, \label{tilGamdevelopP}\\ && \left(\partial_{t} - \beta^{k}\partial_{k} \right) K = - D^{k}D_{k} \alpha + \alpha \left[ \tilde{A}_{ij}\tilde{A}^{ij} + \frac{1}{3}K^{2} \right] + 4 \pi \alpha \left( \rho_{h} + S \right) , \label{trKdevelopP}\\ && \left(\partial_{t} - \beta^{k}\partial_{k} \right)\tilde{A}_{ij} = e^{-4\phi}\left[ \alpha \left(R_{ij} - \frac{1}{3} e^{4\phi} \tilde{\gamma}_{ij} R \right) - \left(D_{i}D_{j} \alpha -\frac{1}{3}e^{4\phi} \tilde{\gamma}_{ij} D^{k}D_{k}\alpha \right)\right] \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \alpha \left( K \tilde{A}_{ij} - 2 \tilde{A}_{ik} \tilde{A}^{k}_{\ j} \right) + \tilde{A}_{ik}\partial_{j}\beta^{k} + \tilde{A}_{jk}\partial_{i}\beta^{k} - \frac{2}{3} \tilde{A}_{ij} \partial_{k}\beta^{k} \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - 8 \pi \alpha \left( e^{-4\phi} S_{ij} - \frac{1}{3}\tilde{\gamma}_{ij}S \right), \label{AdevelopP} \\ && \left( \partial_{t} - \beta^{k}\partial_{k} \right)F_{i} = -16\pi \alpha j_{i} \nonumber \\ && \ \ \ \ \ \ \ \ \ \ + 2\alpha \left\{ f^{kj}\partial_{j}\tilde{A}_{ik} + \tilde{A}_{ik} \partial_{j}f^{kj} - \frac{1}{2}\tilde{A}^{jl}\partial_{i}h_{jl} + 6 \tilde{A}^{k}_{\ i}\partial_{k}\phi - \frac{2}{3}\partial_{i}K \right\} \nonumber \\ && \ \ \ \ \ \ \ \ \ \ + \delta^{jk} \left\{ -2 \tilde{A}_{ij} \partial_{k} \alpha + \left(\partial_{k}\beta^{l}\right)\partial_{l}h_{ij} \right. \nonumber \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left. + \partial _{k} \left( \tilde{\gamma}_{il}\partial_{j}\beta^{l} + \tilde{\gamma}_{jl}\partial_{i}\beta^{l} -\frac{2}{3}\tilde{\gamma}_{ij} \partial_{l}\beta^{l} \right) \right\}, \label{FdevelopP} \end{eqnarray} where $R$, $R_{ij}$, and $D_{i}$ are the Ricci scalar, the Ricci tensor, and the covariant derivative associated with the three-dimensional metric $\gamma_{ij}$, respectively. The matter source terms, $\rho_{h} \equiv (T^{\rm Total})^{\alpha \beta} n_{\alpha}n_{\beta}$, $j_{i} \equiv -(T^{\rm Total})^{\alpha \beta} \gamma_{i\alpha}n_{\beta}$, and $S_{ij} \equiv (T^{\rm Total})^{\alpha \beta} \gamma_{i\alpha}\gamma_{j \beta}$, are the projections of the stress-energy tensor with respect to $n^{\mu}$ and $\gamma_{\mu\nu}$, and $S \equiv \gamma_{ij}S^{ij}$. We assume axial symmetry of the spacetime, and the so-called Cartoon method\cite{Cartoon,Shibata03} is adopted to avoid problems around the coordinate singularities of the cylindrical coordinates.
Except for this, the numerical schemes for solving Einstein's equation are essentially the same as those in Ref.~\citen{BNS1}. We use a 4th-order finite difference scheme in the spatial directions and a 3rd-order Runge-Kutta scheme for the time integration. The advection terms such as $\beta^{i}\partial_{i}\phi$ are evaluated with a 4th-order upwind scheme. As the gauge condition for the lapse, we use the so-called $1+\log$ slicing\cite{AB01}: \begin{equation} (\partial_{t} - \hbox{${\cal L}\llap{ --\,}$} _{\beta})\alpha = -2K \alpha. \end{equation} It is known that the $1+\log$ slicing enables long-term evolutions of neutron stars and has a strong singularity avoidance property in black hole spacetimes. The shift vector is determined by solving the following dynamical equation\cite{DG}: \begin{equation} \partial_{t}\beta^{k} = \tilde{\gamma}^{kl} (F_{l} + \Delta t \partial_{t} F_{l}). \label{Dynbeta} \end{equation} Here, the second term on the right-hand side is necessary for numerical stability\cite{DG}. \subsection{The hydrodynamic equations in the leakage scheme}\label{BasicEq} The basic equations of general relativistic hydrodynamics in our leakage scheme are the continuity equation, the lepton-number conservation equations, and the local conservation equation of the energy-momentum. We assume axial symmetry of the spacetime, and the hydrodynamics equations are solved in the cylindrical coordinates $(\varpi, \varphi, z)$, where $\varpi = \sqrt{x^{2}+y^{2}}$. In the axisymmetric case, the hydrodynamics equations should be written in cylindrical coordinates. In the Cartoon method\cite{Cartoon,Shibata03}, on the other hand, Einstein's equations are solved in the $y=0$ plane, for which $x=\varpi$, $u_{\varpi} = u_{x}$, $u_{\varphi} = x u_{y}$, and similar relations hold for the other vector and tensor quantities. Taking these facts into account, the hydrodynamic equations may be written using Cartesian coordinates by replacing $(\varpi, \varphi)$ with $(x,y)$. In the following, we write down the explicit forms of the equations for convenience. Numerical tests of the basic parts of the code for solving the hydrodynamics equations were extensively performed in Ref.~\citen{Shibata03}. The equations are solved using the third-order high-resolution central scheme of Kurganov and Tadmor\cite{KT00,PhD}. \subsubsection{Continuity and lepton-number conservation equations} The continuity equation for the baryon rest mass is \begin{equation} \nabla_{\alpha}(\rho u^{\alpha}) = 0 \label{conti}. \end{equation} As fundamental variables for the numerical simulations, the following quantities are introduced: $\rho_{\ast} \equiv \rho w e^{6\phi}$ and $v^{i} \equiv u^{i}/u^{t}$, where $ w \equiv \alpha u^{t}$. Then, the continuity equation is written as \begin{equation} \partial_{t}(\rho_{\ast}) + \frac{1}{x}\partial_{x}(x \rho_{\ast}v^{x}) + \partial_{z}(\rho_{\ast}v^{z}) = 0 . \label{continuS} \end{equation} Using the continuity equation, the lepton-number conservation equations (\ref{dYe})--(\ref{dYno}) are written as \begin{equation} \partial_{t}(\rho_{\ast}Y_{L}) + \frac{1}{x}\partial_{x}(x \rho_{\ast}Y_{L}v^{x}) + \partial_{z}(\rho_{\ast}Y_{L}v^{z}) = \rho_{*}\gamma_{L}, \label{e-Y} \end{equation} where $Y_{L}$ and $\gamma_{L}$ are abbreviated expressions for the lepton fractions and the corresponding source terms.
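For concreteness, a minimal Python sketch of an explicit conservative update of Eq. (\ref{continuS}) on a uniform $(x, z)$ grid is given below. It uses simple centered differences, which in practice must be supplemented by the reconstruction and dissipation of a high-resolution method such as the Kurganov-Tadmor scheme actually employed; all names are illustrative.
\begin{verbatim}
import numpy as np

def update_continuity(rho_s, vx, vz, x, dx, dz, dt):
    # d_t(rho_*) + (1/x) d_x(x rho_* v^x) + d_z(rho_* v^z) = 0,
    # discretized with centered differences in the grid interior.
    fx = x[:, None] * rho_s * vx      # radial flux   x rho_* v^x
    fz = rho_s * vz                   # vertical flux   rho_* v^z
    div = np.zeros_like(rho_s)
    div[1:-1, :] = (fx[2:, :] - fx[:-2, :]) / (2.0 * dx * x[1:-1, None])
    div[:, 1:-1] += (fz[:, 2:] - fz[:, :-2]) / (2.0 * dz)
    return rho_s - dt * div           # boundaries left untouched here
\end{verbatim}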
\subsubsection{Energy-momentum conservation} As fundamental variables for numerical simulations, we define the quantities $\hat{u}_{i} \equiv hu_{i}$ and $\hat{e} \equiv hw - P(\rho w)^{-1}$. Then, the Euler equation $\gamma_{i}^{\alpha} \nabla_{\beta} T^{\beta}_{\ \alpha} = - \gamma_{i}^{\alpha} Q^{\rm leak}_{\alpha}$ and the energy equation $n^{\alpha}\nabla_{\beta}T^{\beta}_{\ \alpha} =-n^{\alpha}Q^{\rm leak}_{\alpha}$ can be written as \begin{eqnarray} &&\!\!\!\! \partial_{t}(\rho_{\ast} \hat{u}_{A}) + \frac{1}{x}\partial_{x} \left[x \left\{ \rho_{\ast} \hat{u}_{A} v^{x} + P\alpha e^{6\phi}\delta^{x}_{\ A} \right\} \right] + \partial_{z} \left[ \rho_{\ast} \hat{u}_{A} v^{z} + P\alpha e^{6\phi}\delta^{z}_{\ A} \right] \nonumber \\ &&\!\!\!\! \ \ \ \ \ \ \ \ \ \ = - \rho_{\ast}\left[ w h \partial_{A}\alpha - \hat{u}_{i}\partial_{A}\beta^{i} + \frac{\alpha e^{-4\phi}}{2wh} \hat{u}_{k}\hat{u}_{l}\partial_{A}\tilde{\gamma}^{kl} - \frac{2\alpha h (w^{2} -1)}{w} \partial_{A}\phi \; \right] \nonumber \\ &&\!\!\!\! \ \ \ \ \ \ \ \ \ \ \ \ \ + P\partial_{A}(\alpha e^{6\phi}) + \frac{(\rho_{*}\hat{u}_{y}v^{y} + P\alpha e^{6\phi}) \delta^{x}_{A}}{x} - \alpha e^{6\phi} Q^{\rm leak}_{A}, \label{momS} \\ &&\!\!\!\! \partial_{t}\left( \rho_{\ast}\hat{u}_{y}\right) + \frac{1}{x^{2}} \partial_{x}\left( x^{2} \rho_{\ast}\hat{u}_{y}v^{x} \right) + \partial_{z}\left( \rho_{\ast}\hat{u}_{y}v^{z} \right) = -\alpha e^{6\phi} Q^{\rm leak}_{y}, \\ &&\!\!\!\! \partial_{t}(\rho_{\ast}\hat{e}) + \frac{1}{x} \partial_{x}\left[ x \left\{ \rho_{\ast}v^{x}\hat{e} + P e^{6\phi}(v^{x}+\beta^{x}) \right\} \right] + \partial_{z}\left[ \rho_{\ast}v^{z}\hat{e} + P e^{6\phi}(v^{z}+\beta^{z}) \right] \nonumber \\ &&\!\!\!\! \ \ \ \ \ \ \ \ \ \ \ = \alpha e^{6\phi} PK + \frac{\rho_{\ast}}{u^{t}h} \hat{u}_{k}\hat{u}_{l}K^{kl} - \rho_{\ast}\hat{u}_{i}\gamma^{ij}D_{j}\alpha - \alpha e^{6\phi} Q^{\rm leak}_{\alpha}n^{\alpha}, \label{eneS} \end{eqnarray} where the subscript $A$ denotes the $x$ or $z$ component. The evolution equation of the streaming neutrinos, $\nabla_{\beta}(T^{\nu,{\rm S}})^{\beta}_{\ \alpha} = Q^{\rm leak}_{\alpha}$, gives \begin{eqnarray} &&\!\!\!\!\!\!\!\! \partial_{t}(\hat{E}) + \frac{1}{x}\partial_{x}\left[ x (\alpha \hat{F}^{x} - \beta^{x}\hat{E}) \right] + \partial_{z}\left[ (\alpha \hat{F}^{z} - \beta^{z}\hat{E}) \right] \nonumber \\ &&\!\! \ \ \ \ \ \ \ \ = \frac{\alpha \hat{E} K}{3} - \hat{F}^{k}\partial_{k}\alpha +\alpha e^{6\phi} Q^{\rm leak}_{\alpha}n^{\alpha} , \\ &&\!\!\!\!\!\!\!\! \partial_{t}(\hat{F}_{A}) + \frac{1}{x} \partial_{x}\left[ x \left(\frac{1}{3}\alpha \hat{E} \delta^{x}_{A} - \beta^{x}\hat{F}_{A} \right) \right] + \partial_{z}\left[ \left(\frac{1}{3}\alpha \hat{E} \delta^{z}_{A}-\beta^{z}\hat{F}_{A} \right)\right] \nonumber \\ &&\!\! \ \ \ \ \ \ \ \ = -\hat{E}\partial_{A}\alpha + \hat{F}_{k}\partial_{A}\beta^{k} + 2 \alpha \hat{E}\partial_{A}\phi + \frac{(\hat{E}/3 - \hat{F}_{y}\beta^{y})\delta^{x}_{A}}{x} + \alpha e^{6\phi} Q^{\rm leak}_{A}, \\ &&\!\!\!\!\!\!\!\! \partial_{t}(\hat{F}_{y}) - \frac{1}{x^{2}} \partial_{x}\left[ x^{2} \beta^{x}\hat{F}_{y} \right] - \partial_{z}\left[ \beta^{z}\hat{F}_{y} \right] = \alpha e^{6\phi} Q^{\rm leak}_{y}, \end{eqnarray} where $\hat{E} = e^{6\phi}E$ and $\hat{F}_{i} = e^{6\phi}F_{i}$, and the subscript $A$ again denotes the $x$ or $z$ component. The closure relation $P_{\alpha \beta} = E\gamma_{\alpha \beta}/3$ has also been substituted.
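For later reference, the primitive-to-conserved map implied by the definitions above can be summarized as follows (a minimal sketch; the inverse map is the subject of the next subsection):
\begin{verbatim}
import numpy as np

def prim_to_cons(rho, u_i, P, eps, phi, gamma_inv):
    """Map primitives to the conserved set (rho_*, hat_u_i, hat_e).

    rho : rest-mass density;  u_i : covariant spatial 4-velocity;
    P, eps : pressure and specific internal energy (from the EOS table);
    phi : conformal factor;  gamma_inv : inverse 3-metric gamma^{ij}.
    """
    h = 1.0 + eps + P / rho                    # specific enthalpy
    w = np.sqrt(1.0 + gamma_inv @ u_i @ u_i)   # Lorentz factor w = alpha u^t
    rho_star = rho * w * np.exp(6.0 * phi)
    return rho_star, h * u_i, h * w - P / (rho * w)
\end{verbatim}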
\subsection{Recovery of ($\rho$, $Y_{e}$/$Y_{l}$, $T$)} \label{Reconst} The quantities evolved numerically in the relativistic hydrodynamics are the conserved quantities ($\rho_{*}$, $\hat{u}_{i}$, $\hat{e}$) and the lepton fraction $Y_{e}$ or $Y_{l}$. The argument variables, ($\rho$, ($Y_{e}$ or $Y_{l}$), $T$), of the EOS table, together with the Lorentz factor $w = \sqrt{1+\gamma^{ij}u_{i}u_{j}}$, should be calculated from the conserved quantities at each time slice. Note that the electron fraction ($Y_{e}$) or the lepton fraction ($Y_{l}$) is readily given by the numerical evolution at each time slice, whereas $\rho$, $u_{i}$, and $T$ are not. This fact requires us to find an efficient method for determining $w$. \subsubsection{Non-$\beta$-equilibrium case} In the case that the $\beta$-equilibrium condition is not satisfied, the argument quantities ($\rho$, $Y_{e}$, $T$) can be reconstructed from the conserved quantities in the following straightforward manner. \begin{enumerate} \item Give a trial value of $w$, referred to as $\tilde{w}$. Then, one obtains a trial value of the rest-mass density from $\tilde{\rho} = \rho_{*}/(\tilde{w} e^{6\phi})$. \item A trial value of the temperature, $\tilde{T}$, can be obtained by solving the following equation: \begin{equation} \hat{e} = \left(1+\tilde{\varepsilon} + \frac{\tilde{P}}{\tilde{\rho}}\right)\tilde{w} - \frac{\tilde{P}}{\tilde{\rho}\tilde{w}} \equiv \tilde{e}(\tilde{\rho}, Y_{e}, \tilde{T}). \end{equation} Here, a one-dimensional search over the EOS table is required to obtain $\tilde{T}$. \item The next trial value of $w$ is given by $\tilde{w} = \sqrt{1+e^{-4\phi}\tilde{\gamma}^{ij}\hat{u}_{i}\hat{u}_{j}\tilde{h}^{-2}}$, where the specific enthalpy was calculated as $\tilde{h} = \tilde{h}(\tilde{\rho}, Y_{e}, \tilde{T})$ in step 2. \item Repeat the procedures (1)--(3) until a required degree of convergence is achieved. Convergent solutions of the temperature and $w$ are typically obtained within 10 iterations. \end{enumerate} \subsubsection{The $\beta$-equilibrium case} On the other hand, in the case that the $\beta$-equilibrium condition is satisfied, one has to reconstruct the argument quantities ($\rho, Y_{e}, T$) from the conserved quantities and $Y_{l}$, under the assumption of $\beta$-equilibrium. In this case, a two-dimensional recovery $(Y_{l}, \hat{e}) \ \Longrightarrow \ (Y_{e}, T)$ would be required for a given value of $\tilde{w}$. A serious problem is that in this case there may be more than one combination of ($Y_{e}$, $T$) which gives the same values of $Y_{l}$ and $\hat{e}$. Therefore, we have to adopt a different method to recover ($\rho, Y_{e}, T$). Under the assumption of $\beta$-equilibrium, the electron fraction is related to the total lepton fraction: $Y_{e} = Y_{e}(\rho, Y_{l}, T)$. Using this relation, the EOS table can be rewritten in terms of the argument variables ($\rho$, $Y_{l}$, $T$). Then, the same strategy as in the non-$\beta$-equilibrium case can be adopted. Namely, \begin{enumerate} \item Give a trial value $\tilde{w}$. Then one obtains a trial value of the rest-mass density. \item A trial value of the temperature can be obtained by solving $ \hat{e} = \tilde{e}(\tilde{\rho}, Y_{l}, \tilde{T})$, with a one-dimensional search over the EOS table. \item The next trial value of $w$ is given by $\tilde{w} = \sqrt{1+e^{-4\phi}\tilde{\gamma}^{ij}\hat{u}_{i}\hat{u}_{j}\tilde{h}^{-2}}$. \item Repeat the procedures (1)--(3) until a required degree of convergence is achieved. The electron fraction is then given as $Y_{e} = Y_{e}(\rho, Y_{l}, T)$ from the (new) EOS table. \end{enumerate}
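The iteration just described is simple to implement; below is a minimal sketch of the non-$\beta$-equilibrium loop, where \texttt{eos\_solve\_T} and \texttt{eos\_enthalpy} are hypothetical stand-ins for the one-dimensional searches over the EOS table:
\begin{verbatim}
import numpy as np

def recover_primitives(rho_star, hat_u, hat_e, Ye, phi, tilde_gamma_inv,
                       eos_solve_T, eos_enthalpy, tol=1e-10, max_iter=50):
    """Fixed-point iteration for (rho, T, w), steps (1)-(4) above."""
    u2 = np.exp(-4.0 * phi) * (tilde_gamma_inv @ hat_u @ hat_u)
    w = 1.0                                       # initial trial value
    for _ in range(max_iter):
        rho = rho_star / (w * np.exp(6.0 * phi))  # step (1)
        T = eos_solve_T(rho, Ye, hat_e, w)        # step (2): 1D table search
        h = eos_enthalpy(rho, Ye, T)
        w_new = np.sqrt(1.0 + u2 / h**2)          # step (3)
        if abs(w_new - w) < tol * w:              # step (4): convergence
            return rho, T, w_new
        w = w_new
    raise RuntimeError("recovery did not converge")
\end{verbatim}
The $\beta$-equilibrium version is identical except that $Y_{l}$ replaces $Y_{e}$ as the table argument.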
In the case of a simplified or analytic EOS, the Newton-Raphson method may be applied to recover the primitive variables. In the case of a tabulated EOS, by contrast, the Newton-Raphson method may not be a good approach, because it requires derivatives of thermodynamical quantities, which in general cannot be calculated precisely from a table by finite differencing. \subsection{Grid Setting}\label{Grid} In numerical simulations, we adopt a nonuniform grid, in which the grid spacing is increased as \begin{equation} d x_{j+1} = (1 + \delta) d x_{j}, \ \ \ \ d z_{l+1} = (1 + \delta) d z_{l} \end{equation} where $d x_{j} \equiv x_{j+1} - x_{j}$, $d z_{l} \equiv z_{l+1} - z_{l}$, and $\delta$ is a constant. In addition, a regridding technique\cite{Shibata02,Sekiguchi05} is adopted to assign a sufficiently large number of grid points inside the collapsing core, saving the CPU time efficiently. The regridding is carried out whenever the characteristic radius of the collapsing star decreases by a factor of 2--3. At each regridding, the minimum grid spacing is decreased by a factor of $\sim 2$ while the geometrical factor $\delta$ is unchanged (see Table \ref{regrid}). All the quantities on the new grid are calculated using the fifth-order Lagrange interpolation. To avoid discarding the matter in the outer region, we also increase the grid number at each regridding. For the regridding, we define a relativistic gravitational potential $\Phi_c \equiv 1 -\alpha_c~ (\Phi_c>0)$, where $\alpha_c$ is the central value of the lapse function. Because $\Phi_c$ is approximately proportional to $M/R$, where $M$ and $R$ are the characteristic mass and radius of the core, $\Phi_c^{-1}$ can be used as a measure of the characteristic length scale of the stellar core for the regridding. In Table \ref{regrid}, we summarize the regridding parameters of each level of the grid. \begin{table}[t] \begin{center} \begin{tabular}{c|c|ccccc} \hline Model & & $\Phi_{c} \le 0.0125 $ & $ 0.0125 \le \Phi_{c} \le 0.025 $ & $ 0.025 \le \Phi_{c} \le 0.05 $ & $ 0.05 \le \Phi_{c} \le 0.1 $ & $\Phi_{c} \ge 0.1$ \\ \hline S15 & $\Delta x_{0}$ & 3.26 & 1.60 & 0.820 & 0.414 & 0.217 \\ & $\delta$ & 0.002 & 0.002 & 0.002 & 0.002 & 0.002 \\ & $N$ & 444 & 668 & 924 & 1212 & 1532 \\ & $L$ (km)& 2330 & 2239 & 2188 & 2124 & 2103 \\ \hline S15 & $\Delta x_{0}$ & 5.10 & 2.90 & 1.44 & 0.760 & 0.396 \\ (low) & $\delta$ & 0.002 & 0.00215 & 0.0023 & 0.00245 & 0.0026 \\ & $N$ & 316 & 444 & 636 & 828 & 1020 \\ & $L$ (km)& 2244 & 2151 & 2073 & 2043 & 2000 \\ \hline \end{tabular} \end{center} \caption{Summary of the regridding procedure. The values of the minimum grid spacing $\Delta x_{0}$ (in units of km), the non-uniform-grid factor $\delta$, the grid number $N$, and the location of the outer boundary $L$ (in units of km) for each range of $\Phi_{c} = 1 -\alpha_c$ are listed.}\label{regrid} \end{table} \section{Results} \label{S_Results} As a test problem, we perform a collapse simulation of a spherical presupernova core and compare the results with those of the previous studies, to check the validity of the present code. Most of the following results are not novel astrophysically, but they are novel in the sense that stellar core collapse can be followed by a {\it multidimensional fully general relativistic simulation taking account of microphysical processes}.
In \S~\ref{bounce_shock}, \S~\ref{shock_stall}, and \S~\ref{convective_activities}, we first review the basic features of the collapse dynamics and the shock formation, the stall of the shock, and the convective activities. Then we compare our results with those of the previous studies in \S~\ref{comparison}. \subsection{Initial condition}\label{init_con} In this paper, we adopt a recent presupernova model of a massive star by Woosley, Heger, and Weaver\cite{WHW02}: the $15M_{\odot}$ model with solar metallicity (hereafter the S15 model). We follow the dynamical evolution of the central part, which constitutes the Fe core and the inner part of the Si-shell. We read in the density, the electron fraction, the temperature, and the velocity ($v_{i}$) of the original initial data and derive the other thermodynamical quantities using the EOS table. Note that the procedure of remapping the original initial data onto the grid adopted in the numerical simulations is coordinate-dependent in general relativity. In this paper, we read in the original data {\it as a function of the coordinate radius}. In this case, the baryon rest mass of the core is slightly larger than the original one, because it is defined by \begin{equation} M_{*} = \int \rho_{*} d^{3}x = \int \rho (w e^{6\phi}) d^{3}x, \label{restM} \end{equation} where $w e^{6\phi} > 1$. \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{rho-alp.eps} \end{center} \caption{Evolution of the central density $\rho_{c}$ (upper panel) and the central value of the lapse function $\alpha_{c}$ (lower panel). The solid curves are the results for the finer grid resolution and the dotted curves are the results for the coarser grid resolution.}\label{rho-alp} \end{figure} \subsection{Core bounce and shock formation}\label{bounce_shock} Figure \ref{rho-alp} displays the time evolution of the central rest-mass density $\rho_{c}$ and the central value of the lapse function. This figure shows that stellar core collapse to a neutron star can be divided into three phases: the infall phase, the bounce phase, and the quasi-static evolution phase (see Refs.~\citen{Monch91} and \citen{Zwerg97} for the case of rotational collapse). The general features of the collapse are as follows. The infall phase sets in due to the gravitational instability of the iron core, triggered by the sudden softening of the EOS, which is associated primarily with the electron capture and partially with the photo-dissociation of the heavy nuclei. The collapse in an early phase proceeds almost homologously. However, the collapse in the central region is accelerated with time, because the electron capture reduces the degeneracy pressure of the electrons, which provides the main part of the total pressure. Furthermore, the neutrino emission associated with the electron capture reduces the thermal pressure of the core. The inner part of the core, which collapses nearly homologously with a subsonic infall velocity, constitutes the inner core. On the other hand, the outer region, in which the infall velocity is supersonic, constitutes the outer core. The collapse proceeds until the central part of the iron core reaches the nuclear density ($\sim 2\times 10^{14}$ g/cm$^{3}$), and then the inner core experiences the bounce. Because of its large inertia and the large kinetic energy induced by the infall, the inner core overshoots its hypothetical equilibrium state. The stored internal energy of the inner core at the maximum compression is released through strong pressure waves generated inside the inner core.
The pressure waves propagate from the center to the outer region until they reach the sonic point located at the edge of the inner core. Because the sound cones tilt inward beyond the sonic point, the pressure disturbance cannot propagate further, and a shock is formed just inside the sonic point. The shock wave thus formed at the edge of the inner core propagates outward. After this phase, the proto-neutron star experiences the quasi-static evolution phase. In this phase, the central value of the density (the lapse function) increases (decreases) gradually, because the matter in the outer region falls onto the proto-neutron star and because neutrinos are emitted, carrying away energy and lepton number from the proto-neutron star. Figure \ref{lepb} shows the radial profiles of the lepton fractions in the equatorial plane at the bounce. The central values of the electron, the electron-neutrino, and the total lepton fractions are $\approx 0.32$, 0.05, and 0.37, respectively. The electron-anti-neutrino fraction is almost zero throughout the core, because only a very small number of positrons exists due to the high degree of electron degeneracy. \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{leptonb2.eps} \end{center} \caption{The radial profiles of the electron, $\nu_{e}$-, $\bar{\nu}_{e}$-, and total lepton fractions at the bounce. The results for the finer grid resolution (solid curves) and for the coarser grid resolution (dotted curves) are shown together. The two results are almost identical. }\label{lepb} \end{figure} \subsection{Neutrino bursts and stall of shock}\label{shock_stall} \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{lum3.eps} \end{center} \caption{Time evolution of the neutrino luminosities. The results for the finer grid resolution (solid curves) and for the coarser grid resolution (dashed curves) are shown together. The two results are approximately identical until the convective phase sets in, whereas a small disagreement is found in the convective phase.}\label{nlum} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=1.1]{qalx3.eps} \end{center} \caption{The radial profiles of the infall velocity, the density, the entropy per baryon, and the total lepton fraction at bounce, and at 2 ms and 6 ms after the bounce. The results for the finer grid resolution (solid curves) and for the coarser grid resolution (dotted curves) are shown together; they are approximately identical. }\label{qalx} \end{figure} As the shock wave propagates outward, the kinetic energy of the infalling matter is converted into thermal energy behind the shock. The conversion rate of the infall kinetic energy may be estimated approximately as \begin{eqnarray} L_{\rm heat} &\sim& 4\pi R_{s}^{2} (\rho_{\rm infall} v_{\rm infall}^{3}/2) \nonumber \\ &\sim& 1.4 \times 10^{53}\, {\rm ergs/s}\, \left(\frac{R_{s}}{100\,{\rm km}}\right)^{2} \left(\frac{\rho_{\rm infall}}{10^{9}\,{\rm g/cm}^{3}}\right) \left(\frac{v_{\rm infall}}{0.2 c}\right)^{3}, \label{shockp} \end{eqnarray} where $R_{s}$ and $\rho_{\rm infall}$ are the radius of the shock wave and the density of the infalling matter, and here we restore the speed of light ($c$). We assume that all the kinetic energy is converted into thermal energy. At the same time, the shock wave suffers energy loss through the photo-dissociation of iron into $\alpha$-particles and free nucleons.
The energy lost to the dissociation is\cite{ST83} \begin{equation} \epsilon_{\rm diss} \sim 1.5 \times 10^{51} {\rm ~ergs ~per~} 0.1M_{\odot}. \end{equation} Thus, the energy loss rate due to the photo-dissociation is \begin{equation} L_{\rm diss} \sim \dot{M}_{\rm shock} \epsilon_{\rm diss} \sim 1.1 \times 10^{53}\, {\rm ergs/s}\, \left(\frac{R_{s}}{100\,{\rm km}}\right)^{2} \left(\frac{\rho_{\rm infall}}{10^{9}\,{\rm g/cm}^{3}}\right) \left(\frac{v_{\rm infall}}{0.2 c}\right), \label{dissp} \end{equation} where $\dot{M}_{\rm shock} \sim 4\pi R_{s}^{2}\rho_{\rm infall}v_{\rm infall}$ is the mass-accretion rate onto the shock front. The ratio of $L_{\rm heat}$ to $L_{\rm diss}$ is \begin{equation} \frac{L_{\rm heat}}{L_{\rm diss}} \approx 1.2 \left( \frac{v_{\rm infall}}{0.2c} \right)^{2}. \end{equation} Therefore the energy loss rate by the photo-dissociation will eventually overcome the hydrodynamic power, because the infall velocity, which is $\approx (GM_{\rm ic}/R_{s})^{1/2}$, decreases as the shock wave propagates outward. Furthermore, when the shock wave crosses the neutrino-sphere, spiky burst emissions of neutrinos, the so-called neutrino bursts, occur: neutrinos in the hot post-shock region are copiously emitted without interacting with the core matter. Figure \ref{nlum} shows the neutrino luminosity as a function of time, calculated by\cite{PhD,SSR07} \begin{equation} L_{\nu} = \int \alpha e^{6\phi} u_{t} \dot{Q}_{\nu} d^{3}x . \end{equation} The peak luminosity is $L_{\nu _{e}} \approx 4.5 \times 10^{53}$ ergs/s. This neutrino burst significantly reduces the thermal energy of the shock. Consequently, the shock wave stalls at $\approx 80$ km soon after the neutrino burst. The peak luminosity and the shock-stall radius agree approximately with those of the previous one-dimensional fully general relativistic study\cite{Lieb05b}. When the shock wave stalls, negative gradients of the entropy per baryon and the total lepton (electron) fraction appear, because neutrinos carry away both energy and lepton number. Figure \ref{qalx} shows the radial profiles of the infall velocity, the density, the entropy per baryon, and the total lepton fraction in the equatorial plane. This figure clearly shows that negative gradients of the entropy per baryon and the total lepton fraction are formed above the neutrino-sphere. As we shall see in \S~\ref{convective_activities}, such configurations are known to be unstable against convection, known as the proto-neutron star convection. \subsection{Convective activities}\label{convective_activities} \begin{figure}[t] \begin{center} \includegraphics[scale=1.1]{con2l.eps} \end{center} \caption{Snapshots of the contours of the density (top left panels), the electron fraction $Y_{e}$ (top right panels), the entropy per baryon (bottom left panels), and the local neutrino energy emission rate (bottom right panels) in the $x$-$z$ plane at selected time slices. }\label{con} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=1.2]{convG2l.eps} \end{center} \caption{Snapshots of the contours of the gradients associated with the entropy per baryon, $(\partial P / \partial s)_{Y_{l},\rho} (ds/dr) $ (right panels), and with the lepton fraction, $(\partial P / \partial Y_{l})_{s,\rho} (dY_{l}/dr) $ (left panels), in the $x$-$z$ plane at selected time slices. }\label{convG} \end{figure} Let us investigate the stability of the envelope of the proto-neutron star following Lattimer and Mazurek\cite{LM81}.
We consider the following parameter: \begin{equation} N^{2} \equiv \frac{g_{\rm eff}}{\rho} \left[ \left(\frac{d \rho}{d r}\right)_{\rm amb} -\left(\frac{d \rho}{d r}\right)_{\rm blob} \right], \label{VB0} \end{equation} where $g_{\rm eff}$ is the effective gravitational acceleration, defined to be positive in the negative radial direction, the subscript `amb' refers to the ambient core structure, and `blob' denotes a blob element undergoing an isolated displacement. The condition $N^{2} <0$ implies that the structure is unstable to convective overturn (e.g. Ref.~\citen{LM81}). Assuming that the fluid elements maintain pressure equilibrium with their surroundings, we have \begin{equation} \left(\frac{d \rho}{d r}\right)_{\rm blob} = \left(\frac{d \rho}{d P}\right)_{\rm blob} \left(\frac{dP}{dr}\right)_{\rm amb}. \end{equation} Using this relation, Eq. (\ref{VB0}) is written as \begin{equation} N^{2} = \frac{g_{\rm eff}}{\rho} \left(\frac{d \rho}{d P}\right)_{\rm blob} \left[ \left(\frac{d P}{d \rho}\right)_{\rm blob} \left(\frac{d \rho}{d r}\right)_{\rm amb} -\left(\frac{d P}{d r}\right)_{\rm amb} \right]. \end{equation} We also assume that the blob elements do not interact with the ambient matter either thermally or chemically, i.e., $ds = dY_{l} = 0$ for the blob, so that \begin{equation} \left(\frac{dP}{d\rho}\right)_{\rm blob} = \left(\frac{\partial P}{\partial \rho} \right)_{s,Y_{l}}. \end{equation} Because the pressure is a function of the entropy per baryon, the density, and the lepton fraction, $(d P/d r)_{\rm amb}$ can then be rewritten to give\cite{LM81} \begin{equation} N^{2} = \frac{g_{\rm eff}}{\rho} \left(\frac{\partial \rho}{\partial P}\right)_{s, Y_{l}} \left[ \left(\frac{\partial P}{\partial s}\right)_{\rho, Y_{l}} \left(\frac{d s}{d r}\right)_{\rm amb} +\left(\frac{\partial P}{\partial Y_{l}}\right)_{\rho, s} \left(\frac{d Y_{l}}{d r}\right)_{\rm amb} \right] \label{VB1}. \end{equation} Equation (\ref{VB1}) shows that when the pressure derivatives of a given EOS, $(\partial P/\partial s)_{\rho, Y_{l}}$ and $(\partial P/\partial Y_{l})_{\rho, s}$, are positive, configurations with negative gradients of the entropy and $Y_{l}$ ($N^{2} < 0$) are unstable. (Note that in the above treatment we have ignored the dissipative effects of the energy and lepton transport by neutrinos.) Thus, the negative gradients of the entropy per baryon and the total lepton fraction formed above the neutrino-sphere lead to convective overturn (the proto-neutron star convection).
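The criterion of Eq. (\ref{VB1}) can be evaluated directly from radial profiles; below is a minimal sketch, assuming the three thermodynamic derivatives are available from the EOS table (the interface is ours):
\begin{verbatim}
import numpy as np

def brunt_vaisala_sq(r, rho, s, Yl, g_eff, drho_dP, dP_ds, dP_dYl):
    """N^2 of Eq. (VB1) on a radial grid.

    r, rho, s, Yl : profiles of radius, density, entropy per baryon,
    and lepton fraction;  g_eff : effective gravitational acceleration
    (positive);  drho_dP, dP_ds, dP_dYl : the EOS derivatives
    (d rho/dP)_{s,Yl}, (dP/ds)_{rho,Yl}, (dP/dYl)_{rho,s}.
    N^2 < 0 signals instability to convective overturn.
    """
    ds_dr = np.gradient(s, r)
    dYl_dr = np.gradient(Yl, r)
    return (g_eff / rho) * drho_dP * (dP_ds * ds_dr + dP_dYl * dYl_dr)
\end{verbatim}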
Indeed, convection occurs in our simulation. Figure \ref{con} shows the contours of the density, the electron fraction, the entropy per baryon, and the neutrino energy-emission rate. Convective motions are activated at about 8 ms after the bounce in the region located above the neutrino-sphere, where the gradients of the entropy per baryon and $Y_{l}$ are imprinted (see Fig. \ref{qalx}). At about 10 ms after the bounce, the lepton-rich, hot blobs rise to form `fingers' (see the top left panel in Fig. \ref{con}). Note that the neutrino energy emission rate in these fingers is relatively higher than that in the other regions. This is responsible for the small hump seen in the time evolution of the neutrino luminosity (see Fig. \ref{nlum}). Subsequently, the hot fingers expand to form `mushroom structures' and push the surface of the stalled shock (see the top right panel in Fig. \ref{con}). At the same time, the lepton-poor, colder matter sinks down to the proto-neutron star ($r \lesssim 20$ km). The entropy per baryon just behind the shock increases to $s \gtrsim 10 k_{B}$, and the stalled shock gradually moves outward to reach $r\approx 200$ km. As hot, lepton-rich matter is dug out from the region below the neutrino-sphere, the neutrino luminosity is enhanced (see Fig. \ref{nlum}). However, the energy released in the convective overturn is not sufficient to keep pushing the shock wave, and eventually the shock stalls and turns into a standing accretion shock (bottom two panels of Fig. \ref{con}). All these features qualitatively agree with the previous multidimensional Newtonian simulations\cite{BF93,Herant94,BHF95,KJM96,MJ97}. A more detailed comparison with the previous simulations is given in \S~\ref{comparison}. It is interesting to investigate which gradient (entropy per baryon or lepton fraction) is more responsible for the convection. To see this, we calculate the gradients associated with the entropy per baryon, $(\partial P / \partial s)_{Y_{l},\rho} (ds/dr) $ (right panels in Fig. \ref{convG}), and with the lepton fraction, $(\partial P / \partial Y_{l})_{s,\rho} (dY_{l}/dr)$ (left panels in Fig. \ref{convG}). This figure clearly shows that the negative gradient of the entropy per baryon is more important for the convection activated promptly after the bounce. \subsection{Comparison with the previous studies}\label{comparison} To check the validity of the code, the results presented in \S~\ref{bounce_shock}, \S~\ref{shock_stall}, and \S~\ref{convective_activities} are compared with those of the previous simulations. \subsubsection{Comparison of the results before the convection sets in} We first compare our results with those of the state-of-the-art one-dimensional (1D) simulations in full general relativity\cite{Lieben01,Lieb04,Lieb05b,Sumi05}, in which the 1D general relativistic Boltzmann equation is solved for the neutrino transfer with the relevant weak interaction processes. Because neutrino heating processes ($\nu_{e}+n \rightarrow p+e^{-}$ and $\bar{\nu}_{e}+p\rightarrow n+e^{+}$) are not included in the present implementation, while multidimensional effects such as convection cannot be followed in the one-dimensional reference simulations, we pay particular attention to comparing the results during the collapse and in the early phase ($\sim 10$ ms) after the bounce (see the results in \S~\ref{bounce_shock} and \S~\ref{shock_stall}). Our radial profiles of the lepton fractions at the bounce (see Fig. \ref{lepb}) agree approximately, or are at least consistent, with those of the previous simulations, implying that our code can correctly follow the collapse until the bounce. Also, the radial profiles of the infall velocity, the density, and the entropy per baryon just after the bounce show good agreement with the previous studies. No such good agreement was reported for the previous simulations\cite{Kotake03,PhD}, in which simple leakage schemes based on a single neutrino-trapping density were adopted. Quantitatively, the negative gradients of the entropy per baryon and the lepton fraction are a little steeper in the present simulation than those in the 1D full Boltzmann simulations. The reason may be partly that the {\it transfer} of lepton number and energy is not fully solved in the present leakage scheme. Except for this small quantitative difference, the two results agree well.
In validating a scheme for the neutrino cooling, the agreement of the neutrino luminosities with those of a 1D full Boltzmann simulation should be checked in particular, because the luminosities depend both on the implementation of the weak interactions (especially the electron capture in the present case) and on the treatment of the neutrino cooling (the detailed leakage scheme). Accurate computation of the neutrino luminosities is also required for astrophysical applications, because neutrinos, as the main cooling source, carry away most of the energy liberated during the collapse and can be a primary observable. Our results, in particular the duration and the peak luminosity of the neutrino bursts, agree approximately with those of the previous simulations. Again, no such good agreement was reported for the previous simulations\cite{Kotake03,PhD}. The shock-stall radius is $\approx 80$ km. This value is consistent with (although slightly smaller than) that of Liebend\"orfer et al. \cite{Lieb05b} ($R_{\rm stall} \approx 85$ km) and smaller than that of Sumiyoshi et al.\cite{Sumi05} ($R_{\rm stall}\approx 100$ km). This is likely because neutrino heating is not taken into account in our leakage scheme. To summarize, the results of the present simulation agree qualitatively well, and quantitatively approximately, with those of the previous 1D Boltzmann simulations. Approximately correct results can thus be obtained with a computationally inexpensive scheme, without solving the Boltzmann equation. The present code may therefore be applied, as a first step, to other multidimensional simulations such as rotating stellar collapse to a black hole and mergers of compact binaries. \subsubsection{Comparison of the results after the convection sets in} In this section, we compare our results in the convective phase with those of the two-dimensional (2D) Newtonian simulations\cite{BF93,Herant94,BHF95,KJM96,MJ97,Mezza98a,Dessart06,Buras06}, in which a wide variety of approximations were adopted for the treatment of neutrinos. In the present simulation, we have found both vigorous convective activities (the proto-neutron star convection) and an enhancement of the neutrino luminosities due to the convection. These features agree approximately with those of the previous 2D simulations with a fluid-like treatment of neutrinos\cite{Herant94} and with a radial ray-by-ray, gray flux-limited diffusion approximation of the neutrino transfer\cite{BF93,BHF95,KJM96}. In a spherically symmetric, gray flux-limited diffusion scheme\cite{Mezza98a}, by contrast, only mildly active convection was found and no enhancement of the neutrino luminosities was observed. Note that the transport of energy and lepton number by neutrinos can flatten the negative gradients of entropy and lepton fraction, and as a result the convection will be suppressed. In purely hydrodynamic simulations without neutrino processes\cite{MJ97,Mezza98a} (using a postbounce core obtained in 1D simulations with neutrinos), the proto-neutron star convection is strongly activated. In the radial ray-by-ray simulations\cite{BF93,BHF95,KJM96}, the transfer of neutrinos in the angular direction is not taken into account and the stabilizing effect is underestimated, resulting in proto-neutron star convection with an enhancement of the neutrino luminosities.
In the spherically symmetric simulation\cite{Mezza98a}, the transfer of neutrinos in the angular direction is assumed to occur fast enough to make the neutrino distribution function spherically symmetric, and consequently the stabilizing effect is overestimated. Recently, Buras et al.\cite{Buras06} performed simulations with a modified ray-by-ray, multi-group scheme in which some of the lateral components are included, and found that the proto-neutron star convection indeed sets in but has minor effects on the enhancement of the neutrino luminosities. Dessart et al.\cite{Dessart06} performed simulations employing a 2D multi-group flux-limited diffusion scheme and found results similar to those of Buras et al. Thus, although the proto-neutron star convection indeed occurs, its influence on enhancing the neutrino luminosities may be minor. The strong convective activities and the enhancement of the neutrino luminosities found in the present simulation should therefore be regarded as maximal. Note that it is in intermediate regions ($\tau_{\nu} \sim 1$) that the stabilizing effect due to the neutrino transfer works efficiently: in higher-density regions with $\tau_{\nu} \gg 1$, neutrinos cannot efficiently transport the energy and the lepton number due to the large opacities; in lower-density regions with $\tau_{\nu} \ll 1$, on the other hand, neutrinos carry away the energy and the lepton number without interacting with the matter. Therefore a careful and detailed treatment of the neutrino transfer is required to clarify the degree of the stabilizing effect and of the convection, although such a computationally expensive simulation is beyond the scope of this paper. The present result that the proto-neutron star convection occurs qualitatively agrees with the recent simulations with detailed neutrino transfer\cite{Dessart06,Buras06}. If simulations are performed keeping in mind that the stabilizing effect due to the neutrino transfer is not taken into account in the present scheme, the present code will be acceptable for exploring rotating stellar collapse to a black hole and mergers of compact binaries. \subsection{Gravitational radiation}\label{cGW} \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{gw_HL.eps} \end{center} \caption{Gravitational wave quadrupole amplitude $A_{2}$ due to the prompt convection as a function of the post-bounce time $t_{b}$. The results for the finer grid resolution (solid curve) and for the coarser grid resolution (dotted curves) are shown together. }\label{gw} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{fgw_hl.eps} \end{center} \caption{The frequency spectra of the characteristic gravitational-wave strain due to the prompt convection. The results for the finer grid resolution (solid curve) and for the coarser grid resolution (dotted curves) are shown together. }\label{fgw} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=1.2]{convN2l.eps} \end{center} \caption{Snapshots of the contours of $\sqrt{-N^{2}}/2\pi$ in the $x$-$z$ plane at selected time slices. }\label{convN} \end{figure} Associated with the convective motions, gravitational waves are emitted. The gravitational waveforms are computed using a quadrupole formula\cite{SS03}.
In the quadrupole formula, only the $+$-mode of gravitational waves, with $l=2$ and $m=0$, is nonzero in an axisymmetric spacetime; it is written as \begin{equation} h_+^{\rm quad} = {\ddot I_{zz}(t_{\rm ret}) - \ddot I_{xx}(t_{\rm ret}) \over r}\sin^2\theta \equiv \frac{A_{2}(t)}{r}\sin^{2}\theta, \label{quadr} \end{equation} where $I_{ij}$ denotes a quadrupole moment, $\ddot I_{ij}$ its second time derivative, and $t_{\rm ret}$ a retarded time. In a fully general relativistic and dynamical spacetime, there is no unique definition of the quadrupole moment, nor of $\ddot I_{ij}$. Following Shibata and Sekiguchi\cite{SS03}, we choose the simplest definition of the form \begin{equation} I_{ij} = \int \rho_* x^i x^j d^3x. \end{equation} Then, using the continuity equation, the first time derivative can be written as \begin{equation} \dot I_{ij} = \int \rho_* (v^i x^j +x^i v^j)d^3x, \end{equation} and $\ddot I_{ij}$ is computed by finite differencing of the numerical result for $\dot I_{ij}$. In the following, we present $A_{2}$, which provides the amplitude of a given mode measured by an observer located in the most optimistic direction (in the equatorial plane). We also calculate the characteristic gravitational-wave strain\cite{FH98}, \begin{equation} h_{\rm char}(f) \equiv \sqrt{ \frac{2}{\pi^{2}} \frac{G}{c^{3}}\frac{1}{D^{2}}\frac{dE}{df}}, \label{hchar} \end{equation} where \begin{equation} \frac{dE}{df} = \frac{8\pi^{2}}{15}\frac{c^{3}}{G} f^{2} \left| \tilde{A}_{2}(f) \right|^{2} \end{equation} is the energy power spectrum of the gravitational radiation and \begin{equation} \tilde{A}_{2}(f) = \int A_{2}(t) e^{2\pi i f t} dt. \end{equation} Figure \ref{gw} shows $A_{2}(t)$. Because the system is initially spherically symmetric, no gravitational radiation is emitted before the onset of the convection. When the proto-neutron star convection sets in at $\approx 10$ ms after the bounce, gravitational waves start to be emitted. The peak amplitudes are $A_{2} \sim 100$ cm. After the peak is reached, gravitational waves generated by the smaller-scale convective motions are emitted with $A_{2} \approx 50$ cm. Figure \ref{fgw} shows the spectra of $h_{\rm char}$ due to the convective motions. In contrast to the spectra due to the core bounce (e.g. Refs.~\citen{Dimm02} and \citen{Sekiguchi05}), there is no dominant peak frequency in the power spectrum. Instead, several maxima are present in the frequency range $100$--$1000$ Hz. Note that for gravitational waves due to the core bounce, the characteristic peak frequency is associated with the bounce timescale of the core. The effective amplitude of gravitational waves observed in the most optimistic direction is $h_{\rm char} \approx$ 6--$8 \times 10^{-21}$ for an event at a distance of 10 kpc, which is as large as that emitted at the bounce in rotating core collapse\cite{Dimm02}. To check that the gravitational waves indeed originate from the convective motions, we calculate the frequency $\sqrt{-N^{2}}/2\pi$ (see Eq. (\ref{VB0})) as shown in Fig. \ref{convN}. This frequency is in good agreement with the gravitational-wave frequency, implying that the gravitational waves are indeed due to the convective activities.
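The characteristic strain defined above can be computed from a sampled $A_{2}(t)$ with a discrete Fourier transform; the following is a minimal sketch in cgs units, where the discrete approximation to Eq. (\ref{hchar}) is our own normalization choice:
\begin{verbatim}
import numpy as np

G = 6.674e-8      # gravitational constant (cgs)
c = 2.998e10      # speed of light (cgs)

def characteristic_strain(t, A2, D):
    """h_char(f) from a uniformly sampled A2(t) (cm), source distance D (cm).

    Returns the frequency array f (Hz) and h_char(f).
    """
    dt = t[1] - t[0]
    f = np.fft.rfftfreq(len(t), dt)
    A2_f = np.fft.rfft(A2) * dt  # |A2_f| approximates |int A2 e^{2 pi i f t} dt|
    dE_df = (8.0 * np.pi**2 / 15.0) * (c**3 / G) * f**2 * np.abs(A2_f)**2
    return f, np.sqrt((2.0 / np.pi**2) * (G / c**3) * dE_df) / D
\end{verbatim}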
M\"uller and Janka\cite{MJ97} investigated gravitational waves due to the convective motion inside the proto-neutron star; it is interesting to compare our results with theirs. They adopted a post-bounce model of Hillebrandt\cite{Hilleb87}, put an inner boundary at a radius $r_{\rm in} = 15$ km, and assumed hydrostatic equilibrium there. They did not include neutrino transfer, while a sophisticated EOS was adopted. They found qualitatively similar results to ours. According to their results, the maximum amplitude of the quadrupole mode is $A_{2} \approx 100$ cm, which agrees well with our results. The spectrum of the gravitational-wave strain has several maxima for $f=50$--500 Hz, with the maximum value of $h_{\rm char} \approx 3\times 10^{-21}$. The peaks in $h_{\rm char}$ are distributed at higher frequencies in our results, probably due to general relativistic effects. We note that a similar general relativistic effect is observed for gravitational waves at the bounce phase\cite{Dimm02}. These facts show that fully general relativistic simulations are necessary for deriving quantitatively correct spectra of gravitational waves. \subsection{Numerical accuracy} \begin{figure}[t] \begin{center} \includegraphics[scale=1.1]{ham-mass2.eps} \end{center} \caption{Evolution of the averaged violation of the Hamiltonian constraint (upper panel) and baryon mass conservation (lower panel).} \label{ham-mass-b} \end{figure} In Figs. \ref{rho-alp}--\ref{nlum} we show the results both for the finer resolution (solid curves) and for the coarser resolution (dashed curves). The radial profiles for the two resolutions are almost identical, showing that convergent results are obtained in the present simulation (see Fig. \ref{qalx}). In the time evolution of the neutrino luminosities (see Fig. \ref{nlum}), the two results are almost identical before the convective activities set in. In the later phase, on the other hand, the two results show slight disagreement. Because convection and turbulence can occur on arbitrarily small length scales, smaller-scale convective and turbulent motions are captured at the finer grid resolution. However, the influence of the grid resolution on the neutrino luminosities is minor, because the convection and turbulence are strongly activated in the region above the neutrino sphere (see the contours of the electron fraction and the entropy in Fig. \ref{con}), whereas most of the neutrinos are emitted from the region inside the neutrino sphere (see the contour of the local neutrino energy emission rate in Fig. \ref{con}). The effect of the grid resolution can be seen in the gravitational waves. In Figs. \ref{gw} and \ref{fgw} we show the quadrupole mode $A_{2}(t)$ and the characteristic strain $h_{\rm char}(f)$ both for the finer resolution (solid curves) and for the coarser resolution (dashed curves). After the formation of the lepton-rich, hot fingers at $\approx 10$ ms after the bounce (see \S~\ref{convective_activities}), convective activities set in. Then, the disagreement of $A_{2}(t)$ between the finer and the coarser grid resolutions becomes noticeable (see Fig. \ref{gw}). The characteristic peaks of $h_{\rm char}(f)$ at higher frequencies ($f\sim 200$--500 Hz) are also more prominent (see Fig. \ref{fgw}). This is likely because the smaller-scale turbulent motions are captured at the finer grid resolution.
To check the accuracy of our numerical results, we calculate the violation of the Hamiltonian constraint, which is written as \begin{equation} H=-8 \psi^{-5} \biggl[ \tilde \Delta \psi - {\psi \over 8}\tilde R +2\pi \rho_{h} \psi^5 +{\psi^5 \over 8}\tilde A_{ij}\tilde A^{ij}-{\psi^ 5 \over 12}K^2\biggr], \end{equation} where $\psi \equiv e^{\phi}$, and $\tilde \Delta$ denotes the Laplacian with respect to $\tilde \gamma_{ij}$. In this paper, the averaged violation is defined according to\cite{Shibata03} \begin{equation} {\rm ERROR}={1 \over M_*} \int \rho_* |V| d^3x, \end{equation} where $M_{*}$ is the baryon rest mass of the core (see Eq. (\ref{restM})) and \begin{equation} V={\displaystyle \tilde \Delta \psi - {\psi \over 8}\tilde R +2\pi \rho_{h} \psi^5 +{\psi^5 \over 8}\tilde A_{ij}\tilde A^{ij}-{\psi^ 5 \over 12}K^2 \over \displaystyle |\tilde \Delta \psi| + \Big|{\psi \over 8}\tilde R \Big| +2\pi \rho_{h} \psi^5 +{\psi^5 \over 8}\tilde A_{ij}\tilde A^{ij}+{\psi^ 5 \over 12}K^2}. \end{equation} Namely, we use $\rho_*$ as a weight factor for the average. This weight factor is introduced to monitor whether the main bodies of the system (the proto-neutron star and the inner core), in which we are interested, are accurately computed or not. We display the time evolution of the Hamiltonian-constraint violation and the conservation of the baryon mass of the system in Fig. \ref{ham-mass-b}. Several discontinuous changes in the Hamiltonian-constraint violation and in the baryon-mass conservation originate from the regridding procedures, in which some matter in the outer region is discarded. Before the bounce, the baryon mass is well conserved and the Hamiltonian-constraint violation is as small as $\sim 10^{-4}$. After the bounce, the violation of the baryon-mass conservation and of the Hamiltonian constraint is enhanced due to the existence of shock waves, at which the hydrodynamic scheme becomes essentially first order. The convergence of the baryon-mass conservation and of the Hamiltonian-constraint violation also becomes worse in the convective phase. However, the degree of violation of the Hamiltonian constraint and of the baryon-mass conservation is small, and we believe that the numerical results obtained in this paper are reliable. \section{Summary and Discussion} \label{S_Summary} \subsection{Summary} In this paper, we present a fully general relativistic hydrodynamic code in which a finite-temperature EOS and neutrino cooling are implemented for the first time. Because the characteristic timescale of the weak interaction processes, $t_{\rm wp} \sim \vert Y_{e}/\dot{Y}_{e} \vert$ (WP timescale), is much shorter than the dynamical timescale $t_{\rm dyn}$ in hot dense matter, stiff source terms appear in the equations. In general, an implicit scheme may be required to solve them \cite{Bruenn85}. However, it is not clear whether implicit schemes work in the relativistic framework: the Lorentz factor is coupled with the rest-mass density and the energy density, the specific enthalpy is coupled with the momentum, and due to these couplings it is not straightforward to recover the primitive variables and the Lorentz factor from the conserved quantities. Taking account of these facts, we proposed an explicit method to solve all the equations, noting that the characteristic timescale of the neutrino leakage from the system, $t_{\rm leak}$, is much longer than $t_{\rm wp}$ and is comparable to $t_{\rm dyn}$.
By decomposing the energy-momentum tensor of neutrinos into the trapped-neutrino and the streaming-neutrino parts, the hydrodynamic equations can be rewritten so that the source terms are characterized by the leakage timescale $t_{\rm leak}$ (see Eqs. (\ref{T_Eq_M}) and (\ref{T_Eq_nuS})). The lepton-number conservation equations, on the other hand, include source terms characterized by the WP timescale. Taking account of these facts, {\it limiters} for the stiff source terms are introduced to solve the lepton-number conservation equations explicitly (see \S~\ref{Lepton}). In numerical relativistic hydrodynamics, the primitive variables and the Lorentz factor must be calculated from the conserved quantities; in this paper, we developed a robust and stable procedure for this (\S~\ref{Reconst}). To check the validity of the numerical code, we performed a simulation of spherical stellar core collapse. As the initial condition, we adopted the $15M_{\odot}$ spherical model with solar metallicity computed by Woosley et al.\cite{WHW02} After the shock formation and propagation, the shock wave suffers a severe reduction of its energy due to the neutrino burst emission when it passes the neutrino-sphere, and it stalls soon after passing through the neutrino-sphere. The neutrino burst leaves negative gradients of the entropy and $Y_{l}$ above the neutrino-sphere. Because such a configuration is convectively unstable, vigorous convective motions are induced. All these properties agree qualitatively with those of the recent 2D Newtonian simulations\cite{Dessart06,Buras06}. We also compared our results with those of the previous simulations. Before the convection sets in, we compared our results with those of the state-of-the-art 1D Boltzmann simulations in full general relativity\cite{Lieben01,Lieb04,Lieb05,Sumi05}. As shown in this paper, the radial structure of the core and the neutrino luminosities agree qualitatively well with those in their simulations. Quantitatively, they also agree approximately with the previous results. After the convection sets in, we compared our results with those of the 2D Newtonian simulations\cite{BF93,Herant94,BHF95,KJM96,MJ97,Mezza98a,Dessart06,Buras06}. Our result that the proto-neutron star convection occurs agrees qualitatively with that of the previous simulations\cite{BF93,Herant94,BHF95,KJM96,MJ97,Dessart06,Buras06}. However, the quantitative properties show disagreement, because the transfer of neutrinos is not fully solved in the present scheme. Note that the transport of energy and lepton number by neutrinos can flatten the negative gradients of entropy and lepton fraction, thereby suppressing the convection. Therefore the convective activities obtained in the present simulation should be regarded as maximal. If we keep in mind the above facts and note the good agreement of the radial structure and the neutrino luminosities, the present implementation can be applied to simulations of rotating core collapse to a black hole and mergers of binary neutron stars as a first step towards more sophisticated models. A detailed treatment of the neutrino transfer is required to determine the degree of the stabilizing effect, but this is far beyond the scope of this paper. Gravitational waves emitted by the convective motions are also calculated. The gravitational-wave amplitude is $\approx 3 \times 10^{-21}$ for an event at a distance of $10$ kpc.
Reflecting the contributions of multi-scale eddies with characteristic overturn timescales of 1--10 ms, the energy power spectrum shows several maxima distributed in $f\approx 100$--1000 Hz. We compared our results with those of M\"uller and Janka\cite{MJ97}, in which a similar calculation (but in Newtonian gravity) was performed. The maximum amplitude of gravitational waves in our results agrees well with that of M\"uller and Janka. The several maxima in the energy power spectrum are distributed at the higher-frequency side in our results due to general relativistic effects, showing that fully general relativistic simulations are necessary for the accurate calculation of gravitational-wave spectra. \subsection{Discussions} Because the present implementation of the microphysics is simple and explicit, it has the advantage that the individual microphysical processes can be easily improved and made more sophisticated. For example, the treatment of neutrino emission via electron capture can be refined as follows. To precisely calculate the electron capture rate, complete information on the parent and daughter nuclei is required. In the EOSs currently available, however, a representative single-nucleus average for the true ensemble of heavy nuclei is adopted. The representative is usually the most abundant nucleus. The problem in evaluating the capture rate is that the nuclei which cause the largest changes in $Y_{e}$ are neither the most abundant nuclei nor the nuclei with the largest rates, but the combination of the two. In fact, the most abundant nuclei tend to have small rates because they are more stable than others, and the fraction of the most reactive nuclei tends to be small\cite{AFWH94,Janka07a}. Assuming that nuclear statistical equilibrium (NSE) is achieved, the electron capture rates under the NSE ensemble of heavy nuclei may be calculated for a given set of ($\rho$, $Y_{e}$, $T$). Such a numerical rate table can be easily employed in the present implementation. Also, the neutrino cross sections can be improved. As summarized in Ref.~\citen{Horowitz}, there are many corrections to the neutrino opacities. Note that small changes in the opacities may result in much larger changes in the neutrino luminosities, because the neutrino energy emission rates depend strongly on the temperature, and the temperature at the last scattering surface ($\tau_{\nu}\sim \sigma T^{2} \sim 1$) changes as $T \sim \sigma^{-1/2}$. Although the correction terms are in general very complicated, it is straightforward to include them in our code. Note that the corrections become more important for higher neutrino energies. Therefore, the correction terms might play a crucial role in the collapse of a population III stellar core and the formation of a black hole, in which very high temperatures ($T>100$ MeV) will be achieved. A study to explore the importance of these corrections in the case of black hole formation is ongoing. As briefly described in the introduction, one of the main drawbacks of the present implementation of the neutrino cooling is that the {\it transfer} of neutrinos is not solved. Although {\it fully} solving the transfer equations of neutrinos is far beyond the scope of this paper, there is still much room for improvement in the treatment of the neutrino cooling. For example, the relativistic moment formalism\cite{AS72,Thorne81}, in particular the so-called M1 closure formalism, may be adopted.
For this purpose, a more sophisticated treatment of the closure relation for $P_{\alpha \beta}$ is required. We plan to implement a relativistic M1 closure formalism for the neutrino transfer in the near future. To conclude, the present implementation of the microphysics in a fully general relativistic, multidimensional code works well and has a wide variety of applications. We are now in a position to perform simulations of stellar core collapse to a black hole and mergers of compact stellar binaries including microphysical processes. Fruitful scientific results will be reported in the near future. \section*{Acknowledgments} The author thanks M. Shibata for valuable discussions and a careful reading of the manuscript, and L. Rezzolla and K. Sumiyoshi for valuable discussions. He also thanks T. Shiromizu and T. Fukushige for their generous help. He thanks S. E. Woosley, A. Heger, and T. A. Weaver for providing the presupernova model used as the initial condition in the present simulation. Numerical computations were performed on the NEC SX-9 at the data analysis center of NAOJ and on the NEC SX-8 at YITP in Kyoto University. This work is partly supported by Grants-in-Aid of the Japanese Ministry of Education, Science, Culture, and Sports (21018008, 21105511).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} The use of vacuum windows is ubiquitous throughout science and engineering. In certain applications there is a need to implement two vacuum windows in series. Examples include balloon-borne or space-based applications in which one prefers to use a thicker window for the larger differential pressure on the ground, and a thinner window when the payload is above Earth's atmosphere. If the experiment is to be reusable, the vacuum window mechanism must be reversible, that is, the atmospheric pressure seal must be resealable. Ground testing and calibration of experiments that are sensitive to electromagnetic radiation require that both vacuum windows in the series be transparent to the desired wavelengths. The choice of vacuum window material depends, among other factors, on the wavelength of use. Many such applications require windows with low loss, low reflection, and low emission. Useful materials at millimeter wavelengths include polyethylene and polypropylene. They are readily available in a large range of thicknesses and sizes, and have low cost. They have a relatively low index of refraction, making the fabrication of anti-reflection coating less of a challenge compared to higher-index materials. Ultra high molecular weight polyethylene (UHMWPE) and polypropylene have loss tangents of $8\times10^{-5}$ and $4\times 10^{-4}$, respectively, among the lowest in this waveband~\cite{Cardiffabsorption, lamb96}. In this paper we describe a vacuum window that we developed for the balloon-borne E and B Experiment (EBEX). It consisted of two polyethylene vacuum windows in series, one of them removable and resealable. We call the apparatus the \ac{DWM}. In Section~\ref{sec:ebex} we describe the requirements that led to the development of the \ac{DWM}, and in Section~\ref{sec:dwm_design} we discuss the specifics of the design of the \ac{DWM} as well as the testing performed and the results found. \section{EBEX} \label{sec:ebex} \ac{EBEX} was a stratospheric balloon-borne experiment designed to measure the polarization of the cosmic microwave background radiation~\cite{ebexpaper1,ebexpaper3}. It consisted of an ambient temperature telescope focusing light into a cryogenic receiver. The receiver had an array of nearly a thousand bolometric transition edge sensors operating at a temperature of 0.25~K. The experiment had three frequency bands centered on 150, 250 and 410~GHz and collected data in a flight circumnavigating Antarctica in January 2013. The optical design determined the 300~mm open diameter of the receiver's vacuum window. Below the window we placed reflective filters to reject high frequency radiation. Space constraints dictated that these filters be placed no farther than 10~mm below the vacuum window, limiting the maximum bowing acceptable for the window material before damaging the filters. We considered several materials for the vacuum window including alumina, silicon, sapphire, Zotefoam, polypropylene, and varieties of polyethylene. We chose \ac{UHMWPE} because it has low loss and a relatively low index, and our collaborators had already developed a broad-band anti-reflection coating for it; because it is not fragile; and because it is readily available in many sizes and thicknesses. We measured the central deflection of a 300~mm diameter \ac{UHMWPE} window under atmospheric differential pressure for a number of thicknesses. We found that a minimum thickness of 12.7~mm was needed to keep the deflection below 10~mm.
Although total absorption with this thickness is only 0.5\%, 1.1\%, and 3.0\% at the three EBEX frequency bands, the emission of such a room temperature window would have represented 8\%, 11\%, and 14\% of the total optical load absorbed by the detectors at 150, 250, and 410~GHz, respectively. The temperature at float altitude is close to room temperature. To make the optical load and resultant photon noise from the window negligible compared to other sources, we decided to use two windows in series: a removable thick window for ground operations, and, below it, a thinner window only for float altitude. A comparison between the optical load for the two thicknesses is given in Table~\ref{tab:loading}. We use the words `above' and `below' to indicate relative positions closer to the higher and lower pressures, respectively. \begin{table}[h] \begin{tabular}{| c | c | c |} \hline Band & 12.7 mm window & 1 mm window \\ \hline 150 GHz & 8 \% & 0.1 \%\\ \hline 250 GHz & 11 \% & 1 \%\\ \hline 410 GHz & 14 \% & 1 \%\\ \hline \end{tabular} \caption{ Calculated fractional in-band optical loading on the EBEX detectors due to a single window of either 12.7~mm or 1~mm thickness in the optical path.} \label{tab:loading} \end{table} \section{Double Window Design} \label{sec:dwm_design} \subsection{Overview} The \ac{DWM} consisted of a permanent 1~mm \ac{UHMWPE}, anti-reflection coated window, which we call the `thin window'. On the ambient pressure side of the thin window we placed a movable plate with two apertures. One 300~mm aperture had a 12.7~mm thick \ac{UHMWPE}, anti-reflection coated `thick window'; the other 380~mm aperture was open. Figure~\ref{fig:cross_section} shows a cross-section of the construction and Figure~\ref{fig:dwm_sketch_pic} shows a solid model of the entire apparatus. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{./figure1} \caption{Cross-section view of the \ac{DWM}. The thick window was movable in the $y$ direction, exposing an open aperture above the thin window (see Figure~\ref{fig:dwm_sketch_pic}). It was also resealable. The cavity between the windows was connected with a tube and valve (not shown) to the receiver cavity. \label{fig:cross_section} } \end{figure} A copper tube with a valve connected the receiver cavity to the volume immediately above the thin window. On the ground the thick window was always positioned above the thinner one. When the receiver was evacuated we also evacuated the chamber between the two windows. Before the payload was launched we closed the valve connecting the receiver and intra-window cavity. When the payload reached float altitude we actuated a motor to move the two-aperture plate and place the open aperture in position above the thin window. Before flight termination the motor was actuated again to move the thick window back into its ground-operations position. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{./figure2} \caption{Model of the \ac{DWM} assembly. It consisted of a plate with two apertures (blue), a drive mechanism (brown), and six spring-loaded rollers, each with two springs (yellow). The mechanism gave the capability to either have the thick window seal the receiver, the configuration shown, or move the sliding plate in the $+y$ direction and replace the thick window with an open aperture.
\label{fig:dwm_sketch_pic} } \end{figure} \subsection{Thin Window} The 3636~kg total suspended weight below the 963,000~m$^{3}$ helium balloon gave a high likelihood of a float altitude above 33~km and therefore an ambient pressure below $\sim$6~torr. Figure~\ref{fig:thin_wind_defl} shows our measurement of the central deflection of a 300~mm \ac{UHMWPE} window as a function of window thickness for several differential pressures of 4~torr and above. The differential pressures span equivalent altitudes between 29 and 35~km~\cite{standardatmosphere}. We chose a thickness of 1~mm because it gave negligible additional emission, and even at an altitude as low as 28~km its maximum deflection was only 3~mm, giving ample space margin from the infrared filters below it. Two flat aluminum rings with inside (outside) diameter of 310 (368)~mm were glued to the top and bottom of the thin window with Miller Stephenson Epoxy 907 and bolted against an o-ring situated in a standard o-ring groove. \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{./figure3} \caption{Central deflection versus window thickness for different pressure differentials. A 1.0 mm window was chosen.} \label{fig:thin_wind_defl} \end{figure} \subsection{Movable, Resealable Window} The \ac{DWM} mechanism had two main elements: motion and seal. (We followed a design similar to one initially implemented by the XPER experiment~\cite{staggs96}.) The motion part of the \ac{DWM} consisted of a stepper motor driving a horizontal shaft with two worms, one on either side of the moving plate; see Figure~\ref{fig:dwm_drive}. The worms coupled to worm gears on two vertical shafts that also had spur gears. The spur gears coupled to racks mounted on either side of the moving plate. The plate rode between six pairs of 300-series stainless steel rollers, three pairs on each side of the plate. A gear system was used due to its simplicity, compactness, and high output torque, as well as its tolerance of low temperatures, such as the $-50^\circ$C that may be experienced during the ascent phase of the flight. The racks were mounted on the sides of the sliding plate, rather than on the top or bottom, because the seal/reseal function was provided through motion in the $z$ direction (see Figures~\ref{fig:cross_section} and~\ref{fig:dwm_sketch_pic}). We used two symmetric spur gear and worm/worm gear systems on either side of the sliding plate to ensure proper movement. One of the worm/worm gear systems was right-handed and the other was left-handed due to the mirror symmetry of the sliding plate. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{./figure4} \caption{The \ac{DWM} drive mechanism consisted of a stepper motor (green) driving a horizontal shaft (cyan) with two worms (brown); each worm coupled to a worm gear on a vertical shaft (cyan) whose spur gear drove a rack (magenta). \label{fig:dwm_drive} } \end{figure} The thick window was permanently clamped directly onto a Buna-N o-ring on the movable plate. We also used an o-ring to facilitate the seal between the moving plate and the stationary surface of the \ac{DWM}; both were made of aluminum. A relatively high hardness, Shore A~75, Buna-N o-ring was mounted onto the stationary surface. The movable vacuum seal required two functions: sealing at fixed, known locations, and sliding between these locations without damaging the o-ring. Importantly, no vacuum sealing function was required during the motion.
To achieve these functions we provided for small $z$ displacements of the moving plate by means of triangular notches on the bottom side of the moving plate; see Figure~\ref{fig:dwm_sl_plate}. The positions of the notches matched the positions of the bottom rollers in the seal positions. In these positions the effectively thinner plate would move down and seal against the o-ring. Sealing force was provided by springs that forced the top rollers in the direction of the bottom rollers. During motion, the plate rolled out of the notches against the force of the springs, thus moving away from the o-ring seal. \begin{figure}[htp] \centering \includegraphics[width=0.48\textwidth]{./figure5} \caption[\ac{DWM} sliding plate]{The \ac{DWM} sliding plate with triangular notches highlighted on the bottom surface.} \label{fig:dwm_sl_plate} \end{figure} The triangular notches in the sliding plate made an angle of 26.7$^\circ$ relative to the track face and were 3.3~mm deep. The geometry of the notches was chosen to give clearance for the sliding plate to travel above the o-ring when moving and to compress the o-ring when the 25.4~mm diameter rollers were seated in the notches. The o-ring was seated in a dovetail groove which set its maximum linear compression to be 0.71 mm, 20\% of the 3.53 mm diameter. This compression and the o-ring hardness required a compressive force per unit length of 1.2 to 6.6~N/mm for proper vacuum sealing and o-ring durability~\cite{parkerhandbook}. We chose a value of 2.9~N/mm, and with the 410~mm nominal diameter o-ring, this gave a total required compressive force of 3700~N. In the ground configuration, a force of 13300~N was exerted uniformly on the o-ring due to the pressure differential across the thick window; however, at float altitude, the pressure differential drops, providing only 180~N of force. Furthermore, upon resealing at the end of flight, there is no pressure differential across the thick window at all. To provide the required force during flight, 12 springs with a spring constant of 41.3~N/mm were compressed 7.5~mm each by engaging the compression screws by hand, which then pressed on the sliding plate and o-ring via the rollers located above the sliding plate; see Figure~\ref{fig:dwm_sketch_pic}. We calculated that a force of 2300~N was required to move the rollers out of the notches. Given the 12.7 mm radius of the spur gears and the 60:1 worm/worm gear ratio, this required a minimum motor torque of 0.49~Nm. However, worm/worm gear efficiency depends on the coefficient of friction between the two surfaces as \begin{equation} \mathrm{Efficiency} = \frac{\tan{\gamma}\,(1 - \mu \tan{\gamma})}{\mu + \tan{\gamma}}, \end{equation} where $\gamma=4.8^\circ$ was the worm lead angle~\cite{wormgear_eff}. We assumed that the kinetic coefficient of friction $\mu$ between the cast iron worm gear and steel worm was 0.2~\cite{CoeffFrict} when unlubricated, which gives an efficiency of 29\%. In this case the minimum required torque is 1.69~Nm. We used Dow Corning Molykote dry lubricant to decrease the friction between the components and protect the open gearing from oxidation. With the dry lubricant we expected a coefficient of friction between 0.02 and 0.06~\cite{CoeffFrict}, and thus an increase in the worm/worm gear efficiency to between 58\% and 81\%, and a reduction in the required motor torque to between 0.84 and 0.60~Nm, respectively.
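As a cross-check, the force and torque budget above reduces to a few lines of arithmetic. The following sketch reproduces the quoted numbers from the values given in the text:
\begin{verbatim}
import numpy as np

# O-ring sealing force and spring preload (values from the text)
seal_force = 2.9 * np.pi * 410.0        # 2.9 N/mm on a 410 mm diameter
spring_force = 12 * 41.3 * 7.5          # 12 springs, 41.3 N/mm, 7.5 mm
print("sealing force %.0f N, spring preload %.0f N"
      % (seal_force, spring_force))     # ~3700 N each

# Motor torque through the gear train
f_unseat = 2300.0                       # force to unseat the rollers [N]
torque_ideal = f_unseat * 12.7e-3 / 60  # spur radius 12.7 mm, 60:1 worm
print("ideal motor torque %.2f Nm" % torque_ideal)   # ~0.49 Nm

def worm_efficiency(mu, lead_angle_deg=4.8):
    # Worm/worm-gear efficiency for friction coefficient mu
    t = np.tan(np.radians(lead_angle_deg))
    return t * (1.0 - mu * t) / (mu + t)

for mu in (0.2, 0.06, 0.02):            # unlubricated; dry-lubricant range
    eff = worm_efficiency(mu)
    print("mu = %.2f: efficiency %2.0f%%, torque %.2f Nm"
          % (mu, 100 * eff, torque_ideal / eff))
\end{verbatim}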
A stepper motor with a stall torque of 3.88 Nm (Kollmorgen model M093-LE14) was chosen to give more than a factor of two safety margin for the unlubricated case. The \ac{DWM} was operated at low speeds, taking 3.3 minutes to move the 419~mm between the thick-window and open-aperture positions. At this speed the stepper motor operated in its high-torque regime. Since the \ac{DWM} was operated only twice during flight, its operation time had negligible effect on total observation time. We monitored the position of the sliding plate with two electrical limit switches that were located at either end of the travel and were depressed when the sliding plate was in the thick window or open aperture positions. The state of these switches was continuously read out as analog voltages. Aluminum components were used extensively and all steel components were light-weighted. In total, the \ac{DWM} had a mass of $22$ kg, including the windows, vacuum valve, bellows, and stepper motor. Military specification Buna-N o-rings rated to $-54^\circ$C were used for their thermal properties and abrasion durability. \section{Testing and In-Flight Operation} \label{sec:dwm_testing} We tested the \ac{DWM} under vacuum and at flight-like temperatures in an environmental chamber at the Columbia Scientific Balloon Facility in Palestine, TX. Tests were conducted twice, once in the summer of 2011 and again in the summer of 2012. For testing purposes the \ac{DWM} was mounted on a fixture that simulated the receiver vacuum. We monitored the intra-window cavity pressure to check the integrity of the thick window's vacuum seal and reseal. We placed temperature sensors on several key components of the \ac{DWM} including the stepper motor, sliding plate, and the plate that simulated the receiver, and we had an ambient chamber temperature sensor. We also read out the position switches and chamber pressure continuously. The simulated receiver and intra-window cavities were evacuated to a pressure of a few torr when the apparatus was outside the environmental chamber. The valve connecting the two cavities was closed to separate them, as would be done pre-flight. The \ac{DWM} testing apparatus was then placed in the chamber. The chamber was cooled to approximately $-45^\circ$C, which is near the temperature experienced during initial ascent of the payload. The chamber and \ac{DWM} were then allowed to warm up and the following cycle was tested $6$ and $10$ times for the two testing periods, respectively: pump the chamber down to between $2$ and $10$ torr of pressure, move the sliding plate to the open aperture position, move the sliding plate to the thick window position, and backfill the chamber to $\sim 100$ torr with N$_2$ gas. The data showed robust motion of the moving plate throughout the necessary range. It also showed proper resealing of the intra-window cavity. EBEX was launched from the balloon facility at McMurdo Station in Antarctica at 00:30 GMT December 29, 2012. The payload achieved an altitude of approximately 36 km about 5 hours later. The payload conducted science operations until 06:00 GMT January 9, 2013, a duration of 10.83 days, at which point the liquid helium cryogen expired. \begin{figure}[htpb] \centering \includegraphics[width=0.48\textwidth]{./figure6} \caption{\ac{DWM} position encoders' signals during opening (upper) and closing (lower). High voltage indicates `true' for the closed (solid) and open (dashed) signals.} \label{fig:dwm_flight} \end{figure} We operated the \ac{DWM} twice during the flight.
Three hours after launch, at an altitude of 36.6 km and a \ac{DWM} temperature of 15$^\circ$C, we removed the thick window. The second time was approximately two hours after the liquid helium expired, at an altitude of 36.4 km and a \ac{DWM} temperature of 39$^\circ$C, when we repositioned the thick window above the thin window in preparation for flight termination. The position encoder signals during these times are shown in Figure~\ref{fig:dwm_flight}. These monitors indicate proper opening and closing of the window. Visual inspection post-flight showed the thick window in nominal position above the thin window. Both windows and the fragile filters below them were recovered intact post-flight, indicating that there was no major leak past the thick window either before it was opened or after it was resealed. Had there been a major leak, the thin window would have ruptured, or bowed sufficiently to tear the filters mounted below it. Nominal cryogenic operation of the receiver throughout the flight, and the fact that the liquid helium hold time was commensurate with pre-flight predictions, indicated nominal gas pressure inside the cryostat and therefore the absence of gas leaks through the thin window. We conclude that the \ac{DWM} performed successfully. \begin{acknowledgments} Support for the development and flight of the EBEX instrument was provided by NASA grants NNX12AD50G, NNX13AE49G, NNX08AG40G, and NNG05GE62G, and by NSF grants AST-0705134 and ANT-0944513. Zilic acknowledges support from the Minnesota Space Grant Consortium. We are grateful to Suzanne Staggs for providing the original XPER window upon which our design was based. We thank Xin Zhi Tan for help with figures. \end{acknowledgments} \begin{acronym} \acro{ACS}{attitude control system} \acro{ADC}{analog-to-digital converters} \acro{ADS}{attitude determination software} \acro{AHWP}{achromatic half-wave plate} \acro{AMC}{Advanced Motion Controls} \acro{ARC}{anti-reflection coating} \acro{ATA}{advanced technology attachment} \acro{BRC}{bolometer readout crates} \acro{BLAST}{Balloon-borne Large-Aperture Submillimeter Telescope} \acro{CANbus}{controller area network bus} \acro{CMB}{cosmic microwave background} \acro{CMM}{coordinate measurement machine} \acro{CSBF}{Columbia Scientific Balloon Facility} \acro{CCD}{charge coupled device} \acro{DAC}{digital-to-analog converters} \acro{DASI}{Degree~Angular~Scale~Interferometer} \acro{dGPS}{differential global positioning system} \acro{DfMUX}{digital~frequency~domain~multiplexer} \acro{DLFOV}{diffraction limited field of view} \acro{DSP}{digital signal processing} \acro{DWM}{double window mechanism} \acro{EBEX}{E~and~B~Experiment} \acro{EBEX2013}{EBEX2013} \acro{ELIS}{EBEX low inductance striplines} \acro{EP1}{EBEX Paper 1} \acro{EP2}{EBEX Paper 2} \acro{EP3}{EBEX Paper 3} \acro{ETC}{EBEX test cryostat} \acro{FDM}{frequency domain multiplexing} \acro{FPGA}{field programmable gate array} \acro{FCP}{flight control program} \acro{FOV}{field of view} \acro{FWHM}{full width half maximum} \acro{GPS}{global positioning system} \acro{HPE}{high-pass edge} \acro{HWP}{half-wave plate} \acro{IA}{integrated attitude} \acro{IP}{instrumental polarization} \acro{JSON}{JavaScript Object Notation} \acro{LDB}{long duration balloon} \acro{LED}{light emitting diode} \acro{LCS}{liquid cooling system} \acro{LC}{inductor and capacitor} \acro{LPE}{low-pass edge} \acro{MLR}{multilayer reflective} \acro{MAXIMA}{Millimeter~Anisotropy~eXperiment~IMaging~Array} \acro{NASA}{National Aeronautics and Space Administration} \acro{NDF}{neutral density
filter} \acro{PCB}{printed circuit board} \acro{PE}{polyethylene} \acro{PME}{polarization modulation efficiency} \acro{PSF}{point spread function} \acro{PV}{pressure vessel} \acro{PWM}{pulse width modulation} \acro{RMS}{root mean square} \acro{SLR}{single layer reflective} \acro{SMB}{superconducting magnetic bearing} \acro{SQUID}{superconducting quantum interference device} \acro{SQL}{structured query language} \acro{STARS}{star tracking attitude reconstruction software} \acro{TES}{transition edge sensor} \acro{TDRSS}{tracking and data relay satellites} \acro{TM}{transformation matrix} \acro{UHMWPE}{ultra high molecular weight polyethylene} \end{acronym} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The near-Earth Asteroid (196256) 2003 EH1 (hereafter 2003 EH1) was discovered on UT 2003 March 6 in the course of the Lowell Observatory Near-Earth-Object Search (LONEOS) \citep{S03}. Dynamical studies show that the asteroid is associated with, and is presumed to be the parent body of, the Quadrantid meteoroid stream \citep{Jennis04,Wi04,WB05,Ba08,Jo11,Abedin15}. The orbit has semimajor axis ${\it a}$ = 3.126\,AU, eccentricity ${\it e}$ = 0.619, and inclination ${\it i}$ = $70^{\circ}.8$ (NASA/JPL HORIZON). The Tisserand parameter with respect to Jupiter, ${\it T_{\rm J}}$ = 2.063, is consistent with the dynamical classification of 2003 EH1 as a Jupiter-family comet (JFC), although no activity has yet been reported. A straightforward interpretation is that 2003 EH1 is a dormant or weakly active comet \citep{Koten06,Ba08,Bo10,Tanc14}. Dynamical studies of the recent ($<$10$^4$ yr) evolution of the orbit of 2003 EH1 under the action of planetary perturbations are suggestive in this regard. The semimajor axis lies close to the 2:1 mean-motion resonance with Jupiter at 3.27 AU, causing strong orbital variations that drive 2003 EH1 into a sun-approaching dynamical state \citep{WB05,Nes13b,FJ14}. Numerical integrations show that the perihelion distance has increased approximately linearly with time from 0.2 AU 1000 years ago to the present-day value of 1.2 AU. The minimum $q$ $\sim$ 0.12\,AU (${\it e}$ $\sim$ 0.96) occurred only $\sim$1500\,yr ago \citep{Nes13b,FJ14}. As a result, it is reasonable to expect that the surface layers should have been devolatilized at the high temperatures reached near past perihelia, leading to the present, apparently inert state. The Quadrantid meteor shower was first reported in 1835 \citep{Q39}. The shower has a very short duration in its core activity (Earth crosses the core stream in $\sim$0.5 day) superimposed on a broader, long-lived background activity (crossing time $\sim$4 days), suggesting that young and old meteoroid streams coexist \citep[][and references therein]{WB05}. The width of a meteor stream increases with age, as a result of the progressive influence of planetary perturbations. The small width of the Quadrantid core stream indicates ejection ages of only $\sim$200--500 years \citep{Jennis04,Wi04,WB05,Abedin15}, and there is some suggestion that the first reports of meteoroid stream activity coincide with the formation of the stream. On the other hand, the broader background stream implies larger ages of perhaps $\sim$3,500 years or more \citep{Oh95,WB05,KN07,Oh08}. Comet 96P/Machholz is also suspected to form part of the ``Quadrantid complex'', possibly releasing meteoroids between 2,000 and 5,000 years ago~\citep{McI90, BaO92, Go92, JJ93,WB05}. Comet 96P/Machholz currently has a small-perihelion orbit ($\it a$ = 3.034\,AU, $\it e$ = 0.959, $\it i$ = $58^{\circ}.312$ and $\it q$ = 0.124\,AU from NASA/JPL HORIZON) substantially different from that of 2003 EH1. Despite these differences, the rapid dynamical evolution shows that it is possible that 2003 EH1 is a split fragment of 96P/Machholz or that both were released from a now defunct precursor body~\citep[together defining the Machholz complex:][]{SC05}. One or both of these bodies can be the parents of the Quadrantid meteoroids~\citep{KN07,Ba08,Nes13a,Nes13b,Nes14}. The short lifetime of the Quadrantid stream suggests that 2003 EH1 could still be active, particularly when in the small-perihelion orbital state.
In this paper we report the first measurements of the physical properties of 2003 EH1, including colors, limits on coma activity, size, mass loss rate, fractional active area, and rotational period, and we further discuss the possible relation of this body to the Quadrantid stream and complex. \section{Observations} We observed on the nights of UT 2013 August 8, 9 and 12 using the Kitt Peak National Observatory 2.1\,m diameter telescope (hereafter, KPNO\,2.1) in Arizona and on October 2 at the Keck-I 10\,m diameter telescope at the top of Mauna Kea, Hawaii. The KPNO\,2.1 employed a STA3 4000 $\times$ 2600 pixel charge-coupled device (CCD) camera at the f/7.5 Cassegrain focus. We used 2$\times$2 binning, giving an image scale of $0\arcsec.298$ ${\rm pixel}^{-1}$ and a field of view (FOV) of approximately $9^{'}.6\times$6$^{'}$.7. On Keck-I, the Low Resolution Imaging Spectrometer (LRIS) camera \citep{Oke1995} was used to image the object. The LRIS camera has two separate channels having red and blue optimized CCD imagers separated by a dichroic filter. One is a red-side detector having a mosaic of two LBNL 2000 $\times$ 4000 pixel CCDs \citep{Roc10} and the other is a blue-side detector having a mosaic of two 2K$\times$4K Marconi CCDs, both with imaging scale 0.$^{''}$135 ${\rm pixel}^{-1}$. The field of view in both modes of operation is $6^{'}.0 \times$ 7$^{'}$.8. For imaging data, both telescopes were tracked non-sidereally to follow the motion of 2003 EH1. On KPNO\,2.1, images were taken through the Johnson-Kron-Cousins $\textit{BVRI}$-filter system. On Keck-I, images in the $\it R$-filter were recorded using the red-side detector of LRIS. The images were flattened by subtracting a bias image and dividing by a bias-subtracted flat-field image constructed, for each filter, using artificial illumination of the inside of each dome. Photometric calibrations were obtained using standard stars from~\cite{Landolt1992}, including SA113-163, SA113-337, SA113-265 and SA92-412. The full width at half-maximum (FWHM) measured on 2003 EH1 varied from $\sim$0.8\arcsec~to 1.5\arcsec. The sky was photometric on the nights of UT 2013 August 9, 12 and October 2. Data obtained under slightly non-photometric conditions on August 8 were photometrically calibrated using field stars observed on a photometric night. An observational log is given in Table~\ref{logEH1}. \section{Results} \label{Obs} Object 2003 EH1 appeared point-like in all image data (see Figure \ref{image}). Photometry was performed using synthetic circular apertures projected onto the sky. The photometric aperture radius used was twice the FWHM in the image ($\sim$1.6\arcsec~to 3.0\arcsec) and the sky background was determined within a concentric annulus having projected inner and outer radii of 6.6\arcsec~and 13.2\arcsec, respectively. Photometric results are listed in Tables~\ref{colorEH1} and \ref{lightphot}. \subsection{Colors} \label{ColorC} The weighted mean colors of 2003 EH1 are ${\it B-V}$ = 0.69$\pm$0.01, ${\it V-R}$ = 0.39$\pm$0.01 and ${\it R-I}$ = 0.38$\pm$0.01 from N=16 measurements (see Table~\ref{colorEH1}). Figures~\ref{VRBV} and~\ref{RIVR} show ${\it V-R}$ vs.~${\it B-V}$ and ${\it R-I}$ vs.~${\it V-R}$ respectively, together with the Tholen taxonomy classes~\citep{Th84} from \cite{Da03}. The V-R data of 2003 EH1 together with those of various small body populations and the solar color are summarized in Table~\ref{distcolor}.
We also list the normalized reflectivity slope, ${\it S'}$ [\%\,(1000\,\AA)$^{-1}$], measured in the V-R region~\citep{LuuJewitt1990}. The optical colors of 2003 EH1 are similar to, but slightly redder than, those of the Sun (Table~\ref{colorEH1}), being most taxonomically compatible with those of C-type asteroids (Figs.~\ref{VRBV} and~\ref{RIVR}). The V-R color (0.39$\pm$0.01) is similar to the weighted mean color of 96P/Machholz (${\it V - R}$ = 0.40$\pm$0.03, from \cite{Lica00} and \cite{Me04}). Table~\ref{distcolor} indicates that 2003 EH1 has a spectral slope less red than those of dead comets, cometary nuclei, Jupiter Trojans and Damocloids, many of which are spectrally classified as D-type asteroids~\citep{JewittLuu1990, Fi94, Jewitt2002, Jewitt2004, J05, Fornasier2007, Karlsson2009}. On the other hand, 2003 EH1 has a nearly neutral spectral slope, as do many main belt comets~\citep[MBCs:][]{HJ06} (see Table~\ref{distcolor}). We note that the colors and ${\it S'}$ of 2003 EH1 are markedly less red than the average colors of cometary nuclei~\citep{Jewitt2002, Lamy2004}. This could be a result of past thermal processing when the object had a perihelion far inside Earth's orbit. Indeed, the weighted mean color of 8 near-Sun asteroids having perihelion distances $\le$0.25 AU (subsolar temperatures $\ge$800 K) is V-R = 0.36$\pm$0.01 \citep{Jewitt2013}, consistent with the color of 2003 EH1. We conclude that the colors of 2003 EH1 are broadly consistent with those measured in dead cometary nuclei, presumably as a result of mantling by now-extinct activity. \subsection{Surface Brightness} \label{SBS} Here we search for evidence of a coma, which would indicate ongoing mass loss from 2003 EH1. We compared the measured surface brightness profile with the profiles of a nearby field star and of a seeing-convolution model. Since the non-sidereal motion of 2003 EH1 makes the images of background stars appear trailed in the data, the one-dimensional surface brightness profiles were examined using the procedures of \cite{Luu1992}. To determine the profile, we used two $\it R$-band images taken using the Keck-I telescope on UT 2013 October 2 (Table\,\ref{logEH1}), without any background contamination. The Keck signal-to-noise ratio (S/N $\geq$ 70--140) is greater than that of the KPNO\,2.1 data (S/N $\simeq$ 20--30). Each image was rotated to bring the direction of the projected motion of 2003 EH1 to the horizontal, shifted to align the images using fifth-order polynomial interpolation, then combined into a single image (total integration time of 360\,sec). The resulting image of 2003 EH1 has a FWHM of 0.86\arcsec, compatible with the seeing in the individual images used to make the composite. The seeing profile was determined from the point spread function (PSF) of a field star, measured perpendicular to the trail direction, and was convolved with ``nucleus plus coma'' comet models. In the model images, each of 100 $\times$ 100 pixels, the nucleus was represented as a ``point source'' located at the central pixel embedded in a circularly symmetric coma of varying activity levels. The surface brightness is assumed to decrease inversely with distance from the nucleus, as expected for steady-state, isotropic expansion of a coma. The principal parameter, $\eta$, is equal to the ratio of the cross section of the coma to that of the nucleus, with $\eta$ = 0 corresponding to a bare nucleus and $\eta$ = 1 to nucleus and coma having the same cross sections within the projected photometry aperture \citep{Luu1992}.
The flux density of each pixel in the coma is given by ${\it K}$/${\it r}$, where ${\it K}$ is a constant of proportionality and ${\it r}$ is the distance from the nucleus in the plane of the sky. Figure~\ref{2D} shows surface brightness profiles of 2003 EH1, the field star (solid line) and seeing-convolution models with coma levels of $\eta$ = 0.03, 0.05, 0.10 (dotted lines). All profiles are normalized to unity at the center for comparison. The surface brightness profiles of 2003 EH1 and the field star were measured in the direction perpendicular to the motion of the asteroid. Each profile, after sky background subtraction, was averaged along the rows over the width of the asteroid or the field star. The normalized profiles of the asteroid and the field star are indistinguishable. From the figure we set an upper limit on the coma level of $\eta$ $\lesssim$ 0.025 $\pm$ 0.007. A limit to near-nucleus coma can also be set on the basis of simple aperture photometry \citep{Jewitt1984}. Observations set a limit to the surface brightness, $\Sigma(\phi)$ mag\,${\rm arcsec^{-2}}$, at angular distance $\phi\arcsec$ from the image center. If the coma is in steady-state production (i.e.~the surface brightness varies with the inverse of the distance from the nucleus), then $m_c(\phi)$, the total magnitude of the coma inside radius $\phi$, is given by \cite{Jewitt1984} as \begin{equation} m_c(\phi) = \Sigma (\phi) -2.5 {\rm log} (2\pi \phi^2). \label{Jewitt1984} \end{equation} From Figure~\ref{2D}, we can be confident that an upper limit to the coma surface brightness at $\phi$ = 3$\arcsec$ is $\Sigma(3'')$ $\sim$ 27\,mag\,${\rm arcsec^{-2}}$. Substitution into Equation (\ref{Jewitt1984}) gives $m_{\rm c}$($3.0''$) = 22.6\,magnitude, which is 2.7\,mag (a factor of $\sim$12) fainter than the total magnitude of 19.9\,mag in the $R$-band. Therefore, we conclude that the coma contributes $\leq$ 0.08 of the measured flux within a 3$\arcsec$ radius circle. This is consistent with, but less stringent than, the limit deduced from the profile-fitting model. \subsection{Size and Active Fractional Area} \label{sizeS} To derive the size of 2003 EH1, we used results of the ${\it R}$-band photometry taken on the nights of UT 2013 August 9 and 12 from KPNO\,2.1 (Table\,\ref{colorEH1}) and those taken on UT 2013 October 2 from Keck-I (${\it R}$ = 20.21$\pm$0.01\,mag and 20.26$\pm$0.02\,mag). The apparent red magnitude $m_{\rm R}$ was corrected to the absolute red magnitude, $m_{\rm R}(1,1,0)$, using \begin{equation} m_R (1,1,0) = m_R - 5\,{\rm log}(R\, \Delta) -\beta \alpha, \label{R} \end{equation} where ${\it R}$ and $\Delta$ are the heliocentric and geocentric distances (both in AU), $\alpha$ (deg) is the phase angle (Sun--asteroid--observer), and $\beta$ is the linear phase coefficient (mag deg$^{-1}$). We took $\beta$ = 0.04 mag\,deg$^{-1}$, which is compatible with values measured for JFC nuclei \citep{Lamy2004}. We used the absolute red magnitude, $m_{\rm R}(1,1,0)$, to calculate the effective object radius in meters, $r_e$, using~\cite{Russell1916} \begin{equation} r_{\rm e} = \frac{1.496 \times 10^{11}}{\sqrt{p_{R}}}10^{0.2(R_\odot - m_R (1,1,0))}, \label{re} \end{equation} where $R_\odot = -27.1$ is the apparent red magnitude of the Sun \citep{Cox2000}. We adopt the typical value of geometric albedo, $p_{v}(\approx p_{R})$ = 0.04, from the visible and thermal (mid-infrared) measurements of JFC nuclei \citep{Lamy2004, Fe13}.
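Both the aperture-photometry coma limit and the size conversion are simple enough to verify numerically. A minimal sketch, using the values quoted in this section (the absolute magnitude adopted in the next paragraph is used for the radius):
\begin{verbatim}
import math

# Coma limit from Equation (\ref{Jewitt1984})
Sigma, phi = 27.0, 3.0                      # mag arcsec^-2 at 3 arcsec
m_coma = Sigma - 2.5 * math.log10(2.0 * math.pi * phi ** 2)
flux_ratio = 10.0 ** (-0.4 * (m_coma - 19.9))  # vs. total R magnitude
print("m_c = %.1f mag, coma/total flux <= %.2f" % (m_coma, flux_ratio))

# Effective radius from Equation (\ref{re}), with 1 AU = 1.496e11 m
def effective_radius_m(mR110, p_R=0.04, R_sun=-27.1):
    return 1.496e11 / math.sqrt(p_R) * 10.0 ** (0.2 * (R_sun - mR110))

print("r_e = %.0f m" % effective_radius_m(15.82))   # ~1950 m
\end{verbatim}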
For the averaged absolute red magnitude $m_{\rm R}(1,1,0)$ = 15.82$\pm$0.17\,mag, Equation\,(\ref{re}) gives $r_e$ = 1950$\pm$150 m, which we approximate as $r_e$ = 2.0$\pm$0.2 km. The mass of the nucleus, represented by a sphere of this radius with assumed bulk density $\rho$ = 2000 kg m$^{-3}$ (the density of the Quadrantid meteoroids \citep{BaK09}), is $M_n \sim$ 6$\times$10$^{13}$ kg. This is comparable to, although several times larger than, the estimated stream mass of (1 to 2)$\times$10$^{13}$ kg. The asteroid 2003 EH1 shows a point-like surface brightness profile. Here we estimate the maximum allowable coma activity. Assuming that water ice still exists and occupies part of the object's surface, we estimate limits both to the ongoing mass-loss rate and to the fractional active area on the surface. The approximate rate of isotropic dust ejection from the object is expressed as a function of the parameter $\eta$ \citep{Luu1992}: \begin{equation} \frac{dM}{dt} = \frac{1.0 \times 10^{-3} \pi \rho \bar{a} \eta r_{\rm e}^2}{\theta R^{1/2}\Delta} \label{massloss} \end{equation} where $\rho$ = 2000 ${\rm kg\,m^{-3}}$ is the assumed bulk density determined by the Quadrantid meteoroids~\citep{BaK09}, $\bar{a}$ = 0.5$\times$$10^{-6}$\,m is the assumed mean grain radius, $r_{\rm e}$ = 1950$\pm$150\,m is the effective radius of 2003 EH1, $\theta$ is the reference photometry aperture radius of 30 pixels $(4.05'')$, and ${\it R}$ = 2.139\,AU and ${\it \Delta}$ = 2.038\,AU are given in Table\,\ref{logEH1}. The estimated limit to the mass loss rate is $dM / dt$ $\lesssim$ 2.5$\times$10$^{-2}$ $\,{\rm kg\,s^{-1}}$ for $\eta$\,$\lesssim$\,0.025$\pm$0.007. This $dM / dt$ is converted into the fraction of active area on the nucleus surface, $f_A$, using \cite{Luu1992}: \begin{equation} f_A = \frac{dM / dt}{4 \pi r^2_{e}\, \mu \,dm/dt}, \label{fraction} \end{equation} where $dm/dt$ is the specific sublimation mass loss rate of water in ${\rm kg\,m^{-2}\,s^{-1}}$ and $\mu$ = 1 is the assumed dust-to-gas mass ratio \citep{Greenberg1998,Luu1992}. (A value $\mu$ = 4$\pm$2 was measured in a recent encounter with JFC 67P/Churyumov-Gerasimenko~\citep[][]{R15}.) The $dm/dt$ is calculated from the energy-balance equation \begin{equation} \frac{S_\odot (1-A)}{R^2} = \chi [\epsilon \sigma T^4 + L(T) dm/dt], \label{ebala} \end{equation} where $S_\odot$ = 1365 W ${\rm m^{-2}}$ is the solar constant, ${\it R}$ (in AU) is the heliocentric distance, $\epsilon$ = 0.9 is the wavelength-averaged emissivity, $\sigma$ = 5.67 $\times$ $10^{-8}$ W\,${\rm m^{-2}}$\,${\rm K^{-4}}$ is the Stefan-Boltzmann constant and $T$ (in K) is the equilibrium temperature. The quantity $A$ is the Bond albedo, defined by $A$ = $p_v\,q$ = 0.012, where $p_v$ = 0.04 \citep{Lamy2004, Fe13} and $q \sim$ 0.3 is the phase integral determined from cometary nuclei and Jupiter Trojan asteroids \citep{Fernandez2003, Buratti2004}. The latent heat of sublimation for water at temperature $T$ (in K) is given by $L(T) = (2.875 \times 10^6) - (1.111 \times 10^{3})\,T$ in J ${\rm kg^{-1}}$, taking the polynomial fit to the thermodynamic data in \cite{Delsemme1971}. The dimensionless parameter $\chi$ represents the ratio of the effective cross-section for emission of thermal radiation from the nucleus to that for absorption of solar power. The lowest value, $\chi$ = 1, corresponds to subsolar ice on a non-rotating object, while the highest value, $\chi$ = 4, corresponds to an isothermal, spherical nucleus. For comet-like objects, the night-side thermal radiation is negligible (i.e.
day-side emission only) due to the low thermal diffusivity of the surface layers, suggesting that the intermediate value, $\chi$ = 2, is appropriate, providing a maximum active fractional area and a minimum specific mass loss rate \citep{Fe13, LJ15}. However, since we are interested in obtaining a limit to $f_A$, we assume the lowest possible surface temperatures (corresponding to the isothermal case, $\chi$ = 4) and find $dm/dt$ = 7.5$\times$10$^{-6}$ kg m$^{-2}$ s$^{-1}$ and $T$ = 180 K at $R$ = 2.139 AU, by Equation (\ref{ebala}). To supply 2.5$\times$10$^{-2}$ kg s$^{-1}$ would require an exposed patch of ice on the surface having an area of 3300 m$^2$, corresponding to $f_A \lesssim$ 10$^{-4}$ by Equation (\ref{fraction}). This fraction is smaller by an order of magnitude than is characteristic of even low activity JFC nuclei \citep{AHearn1995}. \subsection{Rotational Period and Shape} \label{ps} To search for the rotation period of 2003 EH1, we used a spectral analysis technique that employs the Discrete Fourier Transform (DFT) algorithm \citep[][]{Lomb1976,Scargle1982} on the relative ${\it R}$-band time-series photometric data (Table~\ref{lightphot}). The DFT analysis evaluates the spectral power as a function of angular frequency from the quality of the fit to the data at each frequency; the frequency of maximum power marks the most significant, and hence most convincing, solution for the periodicity. The light curve is presumed to be two-peaked, as seen in most small bodies in the Solar System, implying an elongated body shape. The fitting solution for the two-peaked rotational period is $P_{\rm rot}$ = 12.650\,hr. The uncertainty on the period is computed using the equation given by \cite{GF85} \begin{equation} \frac{\Delta f}{f} = \left[ \frac{0.0256}{(fT)^4} + \frac{0.5625 \sigma^2}{n (fT)^2 A^2} \right]^{1/2}, \label{err} \end{equation} where $\Delta f$ is the root-mean-square error, $f$ is the number of cycles per day (24\,hrs), $T$ is the observing period (in days), $A$ is the signal amplitude, $n$ is the number of measurements and $\sigma^2$ is the variance of the data. Substituting $f$ = 1.8972 (= 24\,hr/$P_{\rm rot}$), $T$ = 4.2299, $A$ = 0.44\,mag, $n$ = 205 and $\sigma^2$ = 0.0025, we obtain $\Delta f/f$ $\sim$ 0.26\%; that is, the uncertainty on the period is $\pm$0.033\,hr. The phased light curve with this period, $P_{\rm rot}$ = 12.650$\pm$0.033\,hr, is shown in Figure~\ref{LC}. The fitted model for the light curve gives a maximum photometric range for 2003 EH1 of $\Delta m_{\rm R}$ = 0.44 $\pm$ 0.01, which provides a lower limit to the intrinsic axis ratio, ${\it a/b}$, between long axis ${\it a}$ and short axis ${\it b}$. Assuming the object's rotational axis is perpendicular to our line of sight, the ratio is expressed as $a/b = 10^{0.4 \Delta m_{\rm R}}$. We find ${\it a/b}$ = 1.50 $\pm$ 0.01. In practice, this is a lower limit to $a/b$ because the rotation axis may not be perpendicular to the line of sight. Our observations of 2003 EH1 are consistent with the shapes of typical cometary nuclei, which tend to be elongated (${\it a/b} \ge$ 1.5 \citep{Jewitt2004}) relative to asteroids of comparable size. The slow rotation and modest $a/b$ do not present any threat to the rotational stability of 2003 EH1 for bulk densities $>$100 kg m$^{-3}$, even assuming zero tensile strength. Non-central outgassing (mass loss) can generate torques that change the angular momentum of the nucleus and which can drive an object into an excited rotational energy state.
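The period search described above can be reproduced with standard tools. The sketch below is illustrative only: it applies the Lomb-Scargle periodogram (the method underlying our DFT analysis) to synthetic data standing in for the photometry of Table~\ref{lightphot}, and then evaluates the fractional uncertainty of Equation~(\ref{err}).
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for the R-band time series: a double-peaked
# lightcurve with the period and range found in the text.
rng = np.random.default_rng(1)
P_rot = 12.650 / 24.0                      # rotation period [day]
t = np.sort(rng.uniform(0.0, 4.23, 205))   # 4.23-day observing window
mag = 0.22 * np.sin(4.0 * np.pi * t / P_rot) + rng.normal(0, 0.05, t.size)

# A double-peaked lightcurve drives the periodogram at half the
# rotation period, so the two-peaked solution is twice the best period.
freq, power = LombScargle(t, mag, 0.05).autopower(maximum_frequency=20.0)
print("P_rot ~ %.2f hr" % (24.0 * 2.0 / freq[np.argmax(power)]))

# Fractional period uncertainty, Equation (err) of Gilliland & Fisher
f, T, A, n, var = 1.8972, 4.2299, 0.44, 205, 0.0025
df_f = np.sqrt(0.0256 / (f * T) ** 4
               + 0.5625 * var / (n * (f * T) ** 2 * A ** 2))
print("dP/P ~ %.2f%%" % (100.0 * df_f))    # ~0.26%, i.e. +/-0.033 hr
\end{verbatim}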
We estimated the timescale for rotational excitation of 2003 EH1 assuming continuous mass loss at the maximum rate allowed by our data and using the formalism described in \cite{Jewitt1997}. With values of the dimensionless moment arm for the torque in the range 10$^{-3}$ to 10$^{-1}$, we obtain excitation timescales in the range 10$^5$ to 10$^7$ yr. These are long compared to the few $\times$10$^4$ yr active lifetimes of JFC comets \citep{LevisonDuncan1997}, suggesting that rotational excitation of 2003 EH1 is unlikely, at least given the present activity state. \subsection{Mantle Formation} Rubble mantles in comets consist of refractory blocks that are large enough not to be ejected by outgassing drag forces against the gravity of the nucleus, although cohesion also likely plays a role. The timescale for growth of a cohesionless rubble mantle in the presence of a sublimating ice surface is given by Jewitt (2002). From Figure 5 of that paper, we read that the mantling time for a 2 km nucleus between 1 and 5 AU from the Sun is in the range 0.3 $\lesssim \tau \lesssim$ 100 yr. Even the upper limit to this timescale is short compared to the timescale of the dynamical evolution of 2003 EH1, showing that mantle formation is likely and explaining the very low (or absent) present-day mass loss. Given that 2003 EH1 has followed a complicated and rapidly changing dynamical path, including recent close passages by the Sun, it is likely that the existing rubble mantle reflects depletion of near-surface volatiles that occurred at higher temperatures than those that now prevail. The timescale for heat to conduct across the radius of the nucleus, $r_e$, is of order $\tau_h \sim r_e^2 / \kappa$. With $r_e$ = 2 km and thermal diffusivity $\kappa$ = 10$^{-8}$ to 10$^{-7}$ m$^2$ s$^{-1}$ (as appropriate for a porous dielectric material), we find $\tau_h \sim$ 10$^6$ to 10$^7$ yr. This $\tau_h$ exceeds the dynamical lifetime of JFC comets, $\tau_{\rm JFC}$ $\sim$ 10$^5$ yr \citep[][]{LevisonDuncan1994}, by one or more orders of magnitude, showing that heat from the Sun would not reach the deep interior of the asteroid during the time spent in the inner solar system. Therefore, we conclude that it is very plausible that 2003 EH1 retains volatiles in its deep interior, but that it is inactive during most of its orbit owing to the recent (and probably recurring) formation of a rubble mantle. \section{Discussion} \label{discuss} As noted earlier, the Quadrantid core stream is estimated from dynamical spreading to be 200 to 500 years in age \citep{Jennis04, WB05, Abedin15}. Steady mass loss at the maximum rate allowed by the optical data, namely 2.5 $\times$ $10^{-2}$ kg ${\rm s^{-1}}$, would deliver only about (1.6--3.9)$\times$ $10^8$ kg in 200--500 yr, even if this rate were sustained all around the orbit (which itself seems unlikely). For comparison, the total mass of the meteoroids in the Quadrantid core stream is estimated to be about $10^{13}$\,kg~\citep{jennis06}, which has been updated from earlier estimates of $\leq$ $10^{11-12}$\,kg \citep{HM89, jenn94,Jennies97}. We conclude that the current production rates from 2003 EH1 are about five orders of magnitude too small to supply the mass of the core Quadrantid stream. This result is perhaps not surprising, given the current mismatch between the orbits of 2003 EH1 and the Quadrantid stream \citep{WB05}. Could the core stream meteoroids have been released from 2003 EH1 a few centuries ago, when the perihelion was substantially smaller?
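Answering this requires re-solving the energy balance of Equation~(\ref{ebala}) at smaller heliocentric distances. A minimal solver sketch is given below; the water-ice vapor pressure law used to close the system is an assumption (we adopt the standard expression of Fanale \& Salvail 1984), and the sketch reproduces the $dm/dt$ = 7.5$\times$10$^{-6}$ kg m$^{-2}$ s$^{-1}$, $T$ = 180 K solution quoted above.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

S0, sigma_sb, k_B = 1365.0, 5.67e-8, 1.381e-23
m_h2o, eps, A = 2.99e-26, 0.9, 0.012

def L_sub(T):                  # latent heat of water ice [J/kg]
    return 2.875e6 - 1.111e3 * T

def dm_dt(T):
    # Sublimation flux [kg m^-2 s^-1] from kinetic theory, with the
    # water-ice vapor pressure P = 3.56e12 exp(-6141.667/T) Pa
    # (Fanale & Salvail 1984) -- an assumed, standard choice.
    P = 3.56e12 * np.exp(-6141.667 / T)
    return P * np.sqrt(m_h2o / (2.0 * np.pi * k_B * T))

def solve(R_au, chi):
    # Solve S(1-A)/R^2 = chi [eps sigma T^4 + L(T) dm/dt] for T
    f = lambda T: (S0 * (1.0 - A) / R_au ** 2
                   - chi * (eps * sigma_sb * T ** 4 + L_sub(T) * dm_dt(T)))
    T = brentq(f, 50.0, 400.0)
    return T, dm_dt(T)

T, Z = solve(2.139, chi=4)     # isothermal case used above
print("R = 2.139 AU, chi = 4: T = %.0f K, dm/dt = %.1e" % (T, Z))
for R in (0.9, 0.7):           # perihelia of a few centuries ago
    T, Z = solve(R, chi=2)     # day-side (hemispheric) emission
    print("R = %.1f AU, chi = 2: T = %.0f K, dm/dt = %.1e" % (R, T, Z))
\end{verbatim}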
For example, 200 to 500 years ago, the perihelion distance was $\sim$0.7 to 0.9 AU \citep{Jennis04,WB05}. We solved Equation~(\ref{ebala}) to find hemispherically averaged specific mass loss rates of (2.8--4.9)$\times$10$^{-4}$ kg m$^{-2}$ s$^{-1}$ at these distances, only 2 to 3 times larger than at 1.2 AU. Thus, perihelion variations alone are not sufficient to account for the mass of the Quadrantids. Within the context of the equilibrium sublimation model, only by changing the active fraction, $f_A$, can the production rates and the stream mass be reconciled. For example, setting $dm/dt$ = 4.9$\times$10$^{-4}$ kg m$^{-2}$ s$^{-1}$ and $f_A$ = 1 in Equation (\ref{fraction}), we find that the stream mass could be supplied by equilibrium sublimation in $\sim$30 years. We consider it more likely that the injection of mass into the meteoroid stream occurred out of equilibrium, perhaps by a volatile-driven process related to cometary outbursts or break-ups, triggered by deep penetration of conducted heat into the ice-rich interior of this body. Intense solar heating can also cause dust production through thermal fracture and desiccation. For example, asteroid (3200) Phaethon, the parent body of the Geminid meteoroid stream, has shown recurrent activity around its perihelion, ${\it q} \sim 0.14$\,AU \citep{JL10,LJ13,Jetal13}, where the surface temperature reaches 750\,K $\le$ $\it T$ $\le$ 1100\,K~\citep{Oh09}. Phaethon is essentially a ``rock comet'' and the activity is caused by the production of small dust particles with radii $\sim$1\,$\micron$ due to thermal fracture and decomposition cracking of hydrated minerals (not sublimation of ice). Since 2003 EH1 recently possessed similarly small perihelia~\citep[][]{Nes13b,FJ14}, thermal fracture and surface desiccation may likewise be expected. At its smallest perihelion, $\it q$ $\sim$ 0.12\,AU, we estimate surface temperatures of 800\,K $\le$ $\it T$ $\le$ 1200\,K on 2003 EH1. However, as on (3200) Phaethon, the particles produced this way should be of micron size and swept from the nucleus by solar radiation pressure \citep{Jetal13,J15IV}, so that they do not contribute to the meteoroid streams of either body. Spectroscopic measurements of the Na content of the meteoroid streams are also suggestive of thermal processing of the parent bodies. The Geminid meteoroids show extreme diversity in their Na abundance, from strong depletion to near sun-like Na content \citep{Hv73,kasuga2005,Jiri2005}. Presumably, this compositional diversity reflects different degrees of thermal modification of Phaethon (or perhaps of a larger precursor body)~\citep{kasuga2006,Ohtsuka2006,jewitt2006,Ohtsuka2008,KJ08,K09,Oh09, CB09}. For the Quadrantid meteoroids, the measured line intensity ratios show that Na is less depleted than in the majority of Geminid meteoroids~\citep[][]{Koten06,Bo10}. This may imply less thermal modification of 2003 EH1, even though it recently had perihelion distances smaller than Phaethon's. Alternatively, the Quadrantid meteoroids could have been released from sub-surface regions of 2003 EH1 deeper than a thermal skin depth, thereby escaping the most severe thermal effects~\citep[][]{Koten06}. \section{Summary} Optical observations of the suggested Quadrantid stream parent 2003 EH1 lead to the following results. \begin{enumerate} \item The absolute red magnitude, ${\it m}_{\rm R}$(1,1,0) = 15.82$\pm$0.17\,mag, corresponds to an effective radius of $r_{\rm e}$ = 2.0$\pm$0.2\,km assuming a red geometric albedo $p_{\rm R}$ = 0.04.
The ratio of the nucleus mass to the Quadrantid stream mass is $\sim$3 to 6, although uncertainty remains because both masses are approximate. \item The surface brightness profile is point-like, limiting the fractional light scattered by a steady-state, near-nucleus coma to $\leq$ 2.5\%. The maximum mass loss rate deduced from a model fitted to the profile is $\sim$2.5$\times$10$^{-2}$ kg s$^{-1}$. Water ice can occupy a fraction of the surface no larger than $f_A <$ 10$^{-4}$. \item The two-peaked rotational light curve has period $P_{\rm rot}$ = 12.650$\pm$0.033\,hr. The photometric range, $\Delta m_{\rm R}$ = 0.44$\pm$0.01, indicates a minimum axis ratio of 1.50 $\pm$ 0.01. \item The optical colors (${\it B-V}$ = 0.69$\pm$0.01, ${\it V-R}$ = 0.39$\pm$0.01, and ${\it R-I}$ = 0.38$\pm$0.01) are slightly redder than those of the Sun and consistent with the mean colors of dead or dormant cometary nuclei. \item Current dust production from 2003 EH1 is orders of magnitude too small to supply the mass of the Quadrantid core meteoroid stream in the 200--500 year dynamical lifetime. If 2003 EH1 is the source of the Quadrantids, we infer that mass must be delivered episodically, not in steady-state. \end{enumerate} \acknowledgments We acknowledge Lusine Kamikyan for assistance with the observations at the KPNO\,2.1 telescope. TK is grateful for support for this work provided by Tomoko Arai and Takafumi Matsui, in collaboration with the International Space Station METEOR project at the Planetary Exploration Research Center, Chiba Institute of Technology. TK also thanks the National Astronomical Observatory of Japan and the Young Researcher Overseas Visit Program (2013) of The Graduate University for Advanced Studies for financial support. DJ appreciates support of this work by NASA's Solar System Observations program. We thank Dianne Harmer, Beatrice Mueller, Anna Daniel and other members of the KPNO\,2.1 telescope team for their help in planning and scheduling the observations. We thank Joel Aycock for operating the Keck telescope and the anonymous referee for comments. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. \clearpage \input{logEH1} \clearpage \input{colorEH1_ap4} \clearpage \input{lc_ap4} \clearpage \input{Slope} \clearpage \begin{figure*}[htbp] \epsscale{1} \plotone{Fig1w4025.eps} \caption{ The $\it R$-band image of 2003 EH1 taken with Keck-I 10\,m on UT 2013 October 2. The image has a total integration time of 360\,s. The frame size is 40$^{''}$ $\times$ 25$^{''}$. No coma or tail is visible; the object has a FWHM of $0.86^{''}$.} \label{image} \end{figure*} \clearpage \begin{figure*}[htbp] \epsscale{1.} \plotone{VRBV.eps} \caption{Weighted mean colors of 2003 EH1 (blue circle) in the ${\it V-R}$ vs. ${\it B-V}$ plane, together with the Tholen taxonomic classifications~\citep{Th84} as tabulated by~\cite{Da03}. The color of the Sun (red circle) is also plotted. The uncertainty of ${\it B-V}$ for 2003 EH1 is smaller than the plotted circle. } \label{VRBV} \end{figure*} \clearpage \begin{figure*}[htbp] \epsscale{1.} \plotone{RIVR.eps} \caption{The same as Figure~\ref{VRBV} but in the ${\it R-I}$ vs.
${\it V-R}$ color plane.} \label{RIVR} \end{figure*} \clearpage \begin{figure*}[htbp] \epsscale{1} \plotone{2D.eps} \caption{Normalized $\it R$-band surface brightness profiles of 2003 EH1, the field star, and seeing-convolution models having coma levels of $\eta$ = 0.03, 0.05 and 0.10. The profiles are normalized such that unity corresponds to a surface brightness of $\Sigma$ = 21.3\,mag arcsec$^{-2}$ for the asteroid. } \label{2D} \end{figure*} \clearpage \begin{figure*}[htbp] \epsscale{1.} \plotone{Relap4.eps} \caption{${\it R}$-band photometry of 2003 EH1 observed on UT 2013 August 8, 9 and 12, phased to the two-peaked period $P_{\rm rot}$ = 12.650$\pm$0.033\,hr. The dotted curve shows the fit, which has a maximum photometric range of $\Delta m_{\rm R}$ = 0.44 $\pm$ 0.01\,mag.} \label{LC} \end{figure*} \clearpage \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{introduction} Many models of inflation predict that gravitational waves were produced along with density perturbations in the early universe~\cite{1982PhLB..115..189R,1983PhLB..125..445F,1984NuPhB.244..541A,1988PhRvD..37.2078A,1993PhRvL..70.2371G}. While the density perturbations have been detected and mapped out very precisely, the gravitational waves have yet to be detected. Perhaps our best hope is to observe the unique pattern of polarization -- so-called B-modes -- that gravitational waves produce when photons in a warped radiation field scatter off free electrons~\cite{Seljak:1996gy,Kamionkowski:1996zd,Baumann:2008aq,Dodelson:2009kq}. One exciting feature of this signal is that it was produced at two distinct epochs: first, during recombination, when the scattering rate starts to tail off and photons can travel longer distances, and then after reionization at much later times. These two epochs are imprinted in the angular spectrum: reionization on large scales (multipoles $l\sim 2$--10) and recombination on smaller scales ($l\sim50$--100). Hence primordial gravitational waves leave the distinct signature of a double-humped peak in the power spectrum of polarization B-modes. Detection, however, is far from guaranteed. Among the hurdles that must be overcome are: (i) the unknown amplitude of the signal~\cite{Lyth:1996im}, (ii) the high sensitivity~\cite{Bock:2006yf} needed to detect even the largest signal possible, and (iii) foreground contamination~\cite{Dunkley:2008am}. Even if the first two of these are overcome, and a B-mode detection is claimed, distinguishing a putative signal from foregrounds is likely to remain a significant problem. Here, I explore one possible way to test a future B-mode detection against foreground contamination: supplementing the polarization B-mode signal with an all-sky, deep weak lensing signal. The idea is simple: gravitational waves also leave an imprint on the cosmic distortion tensor that governs the propagation of light over long distances, and hence the observed shapes of galaxies. Density perturbations produce {\it scalar shear} in the distortion tensor, which was first detected in 2000~\cite{vanWaerbeke:2000rm,Wittman:2000tc,Kaiser:2000if,Bacon:2000sy} and has since been measured over larger scales by many experiments (see, e.g., \cite{Hoekstra:2008db} for a review), but gravitational waves produce a different type of distortion~\cite{Stebbins:1996wx}, sometimes called the pseudo-scalar shear, or the {\it curl} mode of shear, the exact analogue of the B-mode of the polarization field. A number of studies~\cite{Dodelson:2003bv,Sarkar:2008ii} have shown that the power spectrum of this mode, even in the most optimistic scenarios (large amplitude, all-sky survey with low shape noise), will be difficult to detect except perhaps at the lowest multipoles. Here I point out that the situation is not quite as grim as these studies suggest. First, even a moderate signal-to-noise independent detection in an arena with completely different systematics would significantly increase our confidence in a putative CMB detection. Further, the cross-correlation between the CMB and lensing B-modes is non-zero, and this helps the search for a detection. The main purpose of this paper is to calculate the extent to which this is true; that is, the extent to which the same modes which produce the CMB B-mode reionization signal are responsible for the B-mode in a lensing experiment.
This is an interesting academic question in its own right: two very different phenomena owe their existence to the same spectrum of gravitational waves, and it is interesting to quantify how correlated they are. Sections II and III present the formulae for the auto-correlations of the B-modes in these two different arenas. Since I am interested only in the large scale feature, the focus in \S{II} is on the reionization signal, and, as far as I can tell, the final formula for this signal in Eqs. \Ec{clbint} and \Ec{clbfin} has not been presented elsewhere. In \S{IV}, I calculate the cross-correlation between the two and show that it is non-negligible. Finally, \S{V} addresses the more general question of how much additional information can be gleaned from a lensing survey, taking into account the cross-correlation. Some of the details of the calculations are relegated to appendices. There is one scenario for which the probe introduced here would be of tremendous utility. Over a decade ago, cosmologists studied gravitational wave production in open inflation models~\cite{GarciaBellido:1997hy,Tanaka:1997kq,Linde:1999wv,Hertog:1999kg,Hawking:2000ee} and concluded that the amplitude would be boosted near the horizon. This would lead to a much different spectrum than the double-humped peak which modulates the scale-invariant standard inflation spectrum. Recently, there have been suggestions~\cite{Freivogel:2005vv,Susskind:2007pv} that inflation was preceded by an earlier epoch of vacuum domination such that our Universe was produced via a Coleman-De Luccia tunneling process~\cite{Coleman:1980aw} and is therefore open, albeit with a small value of the curvature density. Even with the small curvature expected in our Universe, though, the large scale gravitational wave spectrum might be different than the simple scale-invariant one generally assumed. A test -- such as the B-mode of cosmic shear -- sensitive to only the largest moments seems tailor-made to probe this class of models. \section{B-modes of Polarization} The CMB radiation is described by a $2\times2$ intensity tensor (see, e.g., Ref.~\cite{1997PhRvD..55.1830Z,Cabella:2004mk}), the traceless, symmetric part of which consists of two fields $Q(\hat n)$ and $U(\hat n)$. It is useful to decompose $Q$ and $U$ into E/B modes, where the B-mode multipole moments are given by \begin{equation} B_{lm} = \frac{i}{2}\int d^2n \left[ \big({}_2Y^*_{lm}(\hat n)\big) \big(Q+iU)(\hat n) - \big({}_{-2}Y^*_{lm}(\hat n)\big) \big(Q-iU)(\hat n) \right]\eql{defB} \end{equation} where the ${}_{\pm2}Y_{lm}$ are the spin-2 spherical harmonics. The polarization fields $Q(\hat n)$ and $U(\hat n)$ are induced by Thomson scattering in the presence of a quadrupole distribution, so the polarization produced after reionization is \begin{equation} \left(Q\pm iU\right)(\hat n) = \frac{3}{5\sqrt{6}} \int_{0}^{D_{\rm reion}} dD\, \dot\tau(\eta) \sum_{m''=-2}^2\big({}_{\pm2}Y_{2m''}(\hat n)\big) \Theta_{2m''}(\vec x=D\hat n;\eta),\eql{qpmu} \end{equation} where $\eta$ is the conformal time associated with the comoving distance $D$ (in the flat universe we will assume throughout, $\eta=\eta_0-D$). That is, one integrates along the line of sight out to a spherical shell a distance $D_{\rm reion}$ away (see Fig.
\rf{circ}, adapted from Ref.~\cite{Dvorkin:2007jp}), weighting by the scattering rate $\dot\tau=n_e\sigma_T a$ (where $n_e$ is the free electron number density, $\sigma_T$ the Thomson cross-section, and $a$ the scale factor), and the different components of the quadrupole $\Theta_{2m}$. Eq.~\Ec{qpmu} relates the polarization field produced after reionization to the quadrupole of the radiation field, no matter what the origin of that quadrupole. The quadrupole produced by gravitational waves (GW) is of course linearly related to the tensor perturbation $h_{ij}(\vec x,\eta)$ with~(e.g., Ref. \cite{2009AIPC.1132...86C}) \begin{equation} \Theta_{2m}(\vec x,\eta) = \int d^2 n'\, Y^*_{2m}(\hat n') \int_{0}^{\Delta D} d(\Delta D') \left[ \frac{-1}{2} \hat n'^i \hat n'^j\dot h_{ij}(\vec x+\vec x';\eta')\right]\eql{quad} \end{equation} where $\Delta D \equiv D_*-D$ is the distance between the point of interest and the surface of last scattering, $\dot h \equiv \partial h/\partial \eta'$, and $\vec x'\equiv \Delta D'\hat n'$ identifies a point within a sphere of radius $\Delta D$ centered at $\vec x$ (see Fig. \rf{circ}). To be clear, the argument $\eta'$ in the tensor perturbation denotes the conformal time when the GW (which travels at the speed of light) was at position $\vec x+\vec x'$. That is, it was a distance $\Delta D'$ away from the point $\vec x = D\hat n$. This corresponds to conformal time $\eta'=\eta-\Delta D'=\eta_0-(D+\Delta D')$. \Sfig{circ}{The double integral which determines the B-mode signal due to reionization. The outer integral (\ec{qpmu}) is along the line of sight $D\hat n$ out to the time/distance of reionization, $D_{\rm reion}$. The quadrupole in the integrand is determined by an integral along $\Delta D'\hat n'$ (\ec{quad}) out to the surface of last scattering (a distance $\Delta D$ away) to capture the contribution of the gravitational waves.} The tensor field is the sum of two independent modes in Fourier space: \begin{equation} h_{ij}(\vec x;\eta) = \int \frac{d^3k}{(2\pi)^3} e^{i\vec k\cdot\vec x} T(k,\eta) \sum_{\alpha=+,\times} \tilde h^{(\alpha)}(\vec k) \epsilon^{(\alpha)}_{ij}(\vec k) \end{equation} where $T(k,\eta)$ captures the evolution of the GWs when they enter the horizon. In a matter dominated universe, $T(k,\eta)=3j_1(k\eta)/(k\eta)$, but here we use the exact solution for the transfer function, which differs slightly at late times due to dark energy domination. The orientation of the two modes depends on the direction of the $\hat k$ vector. If $\hat k$ is chosen to lie along the $z$-axis, then \begin{equation} \epsilon^{(+)}_{ij}= \left(\matrix{1 &0 &0\cr 0 & -1 &0\cr 0 &0&0}\right) \qquad\qquad \epsilon^{(\times)}_{ij}= \left(\matrix{0 &1 &0\cr 1 & 0 &0\cr 0 &0&0}\right) . \end{equation} More generally, they are transverse, traceless matrices normalized so that ${\rm Tr}\,[\epsilon^{(\alpha)} \epsilon^{(\beta)}] = 2\delta_{\alpha\beta}$.
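For general $\hat k$ the polarization tensors can be built from any orthonormal pair perpendicular to $\hat k$ (the choice of pair fixes the $+/\times$ decomposition up to a rotation). A minimal numerical sketch, verifying the transverse, traceless, and normalization conditions:
\begin{verbatim}
import numpy as np

def gw_polarization_tensors(khat):
    # Build (e1, e2) orthonormal and perpendicular to khat, then
    # eps_plus = e1 e1 - e2 e2 and eps_cross = e1 e2 + e2 e1.
    khat = np.asarray(khat, float) / np.linalg.norm(khat)
    trial = np.array([1.0, 0.0, 0.0])
    if abs(khat @ trial) > 0.9:
        trial = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(khat, trial); e1 /= np.linalg.norm(e1)
    e2 = np.cross(khat, e1)
    return (np.outer(e1, e1) - np.outer(e2, e2),
            np.outer(e1, e2) + np.outer(e2, e1))

khat = np.array([0.3, -0.5, 0.8])
for name, e in zip(("plus", "cross"), gw_polarization_tensors(khat)):
    print(name, "trace", round(np.trace(e), 12),
          "Tr[e e]", round(np.trace(e @ e), 12),
          "transverse", np.allclose(e @ khat / np.linalg.norm(khat), 0))
\end{verbatim}
For $\hat k$ along the $z$-axis with $e_1 = \hat x$ and $e_2 = \hat y$, this construction reduces to the matrices displayed above.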
Armed with these results, we can write \begin{equation} B_{lm} = \int\frac{d^3k}{(2\pi)^3} \sum_{\alpha=+,\times} \tilde h^{(\alpha)}(\vec k) T^{P,(\alpha)}_{lm}(\vec k)\eql{blm} \end{equation} where the transfer function for the B-modes in polarization is \begin{equation} T^{P,(\alpha)}_{lm}(\vec k) = -i\sqrt{\frac{3}{800}} \int_{0}^{D_{\rm reion}} dD\, \dot\tau \sum_{m''=-2}^2 I_{lmm''}(\vec k,D) \int_{0}^{\Delta D} d(\Delta D') \, \dot T(k,\eta') J^{(\alpha)}_{m''}(\vec k,\Delta D') ;\eql{tran} \end{equation} \begin{equation} J^{(\alpha)}_{m''}(\vec k,\Delta D') \equiv \epsilon_{ij}^{(\alpha)}(\vec k) \int d^2n'\, Y^*_{2m''}(\hat n') \hat n'^i \hat n'^j e^{i\vec k\cdot \hat n' \Delta D'} \eql{defj}; \end{equation} and \begin{equation} I_{lmm''}(\vec k,D) \equiv \int d^2n\, e^{i\vec k\cdot \hat n D} \left[ \big({}_{2}Y^*_{lm}(\hat n)\big) \big({}_{2}Y_{2m''}(\hat n)\big)- \big({}_{-2}Y^*_{lm}(\hat n)\big)\big({}_{-2}Y_{2m''}(\hat n)\big)\right].\eql{defi} \end{equation} The transfer function is difficult to compute for general $\vec k$ but simplifies considerably when $\vec k$ lies along the $\hat z$-axis. Fortunately, the auto- and cross-spectra that can be observed are rotationally invariant so in the end we will need only this simple case. The calculation is presented in Appendix~\ref{app:pol} with the result that \begin{eqnarray} T^{P,(+)}_{lm}(k\hat z) &=& i^{l+1}\sqrt{\frac{9\pi}{4(2l+1)}} \left[\delta_{m,2}-\delta_{m,-2}\right] \int_{0}^{D_{\rm reion}} dD\, \dot\tau \left[ (l+2) j_{l-1}(kD) -(l-1) j_{l+1}(kD) \right] \nonumber\\ &&\times \int_{0}^{\Delta D} d(\Delta D') \, \dot T(k,\eta') \frac{j_2(k\Delta D')}{(k\Delta D')^2} .\eql{tranfin} \end{eqnarray} The $m=2$ component is plotted for the lowest moments in Fig.~\rf{tlbb} for $z_{\rm reion}=10$. \Sfig{tlbb}{The $m=2$ moment of the B-mode transfer function in polarization for $l=2-6$ as a function of wavenumber $k$. Here reionization is assumed to occur instantaneously at $z=10$.} We will use \ec{tranfin} to compute the cross-spectra with lensing. But, as long as we have it, we can first use it to derive a simple formula for the auto-spectrum of polarization B-modes due to reionization. Of course this spectrum is by now a standard feature of freely available codes which compute temperature and polarization two-point functions~\cite{cmbfast,camb}, but it is nice to capture the physics in a simple semi-analytic formula. The amplitudes of the two GW modes are drawn from Gaussian distributions with the same power spectrum: \begin{equation} \langle \tilde h^{(\alpha)}(\vec k) \tilde h^{(\beta)\dagger}(\vec k')\rangle =(2\pi)^3 \delta_{\alpha\beta}\delta^3(\vec k-\vec k') P_h(k).\eql{ph} \end{equation} In standard slow roll inflation, the power spectrum is nearly scale invariant: $k^3 P_h(k)/(2\pi^2) =(4/\pi) (H_I/m_{\rm pl})^2$, where $H_I$ is the expansion rate during inflation.
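In the matter-dominated limit the time derivative entering the line-of-sight integrals has a closed form: using $j_1'(x) = j_0(x) - 2j_1(x)/x$, one finds $\dot T(k,\eta) = -3k\,j_2(k\eta)/(k\eta)$. A short numerical sketch confirming this (the exact transfer function used for the figures differs at late times, as noted above):
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def T_md(k, eta):
    # Matter-era tensor transfer function T = 3 j1(k eta)/(k eta)
    x = k * eta
    return 3.0 * spherical_jn(1, x) / x

def Tdot_md(k, eta):
    # d T/d eta = -3 k j2(k eta)/(k eta), from j1' = j0 - 2 j1/x
    x = k * eta
    return -3.0 * k * spherical_jn(2, x) / x

k, eta, h = 0.05, 8000.0, 1.0      # [Mpc^-1], [Mpc]; illustrative values
numeric = (T_md(k, eta + h) - T_md(k, eta - h)) / (2.0 * h)
print(Tdot_md(k, eta), numeric)    # the two derivatives agree
\end{verbatim}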
Squaring \ec{blm} and taking the expectation value using \ec{ph} then leads to the polarization B-mode spectrum
\begin{eqnarray} C_l^{PP} &=& \frac{1}{2l+1}\sum_{m=-l}^l \langle \big\vert B_{lm} \big\vert^2 \rangle \nonumber\\ &=& \frac{1}{2l+1}\int\frac{d^3k}{(2\pi)^3} P_h(k) W^{PP}_l(k),\eql{clbint} \end{eqnarray}
where
\begin{eqnarray} W^{PP}_l(k) &\equiv& \sum_{\alpha}\sum_{m=-l}^l\left\vert T_{lm}^{P,(\alpha)}(\vec k)\right\vert^2\nonumber\\ &=& \frac{9\pi}{2l+1} \Bigg\vert\int_{0}^{D_{\rm reion}} dD\, \dot\tau\left(\eta_0-D\right) \nonumber\\ &&\times \left[ (l+2) j_{l-1}(kD) -(l-1) j_{l+1}(kD) \right] \int_{0}^{\Delta D} d(\Delta D') \, \dot T(k,\eta') \frac{j_2(k\Delta D')}{(k\Delta D')^2}\Bigg\vert^2. \eql{clbfin} \end{eqnarray}
\section{B-mode of Cosmic Shear} The deformation tensor which describes propagation of light through the inhomogeneous universe can also be written as a $2\times2$ matrix, and its traceless, symmetric part can also be characterized by two fields: the two components of shear $\gamma_1$ and $\gamma_2$ replacing $Q$ and $U$. Thus, the moments of the B-mode of cosmic shear can be defined just as in \ec{defB}. There is a difference between the full deformation matrix and the full intensity matrix. In addition to $Q$ and $U$, the polarization matrix contains a piece describing the intensity $I$ and circular polarization $V$. Neither $I$ nor $V$ is related to $Q$ and $U$ (in cosmology, $I$ of course describes the CMB temperature anisotropies while $V$ is thought to vanish because Thomson scattering induces no circular polarization). The analogous quantities in the lensing deformation tensor are the convergence $\kappa$ and the rotation $\omega$. These {\it are} related to the two components of the shear, with $\kappa$ equal to the $E$-mode and $\omega$ to the $B$-mode. Stebbins~\cite{Stebbins:1996wx} first examined the structure of this matrix and derived a number of useful relations among its components. Starting from the equivalent of \ec{defB} (with $\gamma_1$ and $\gamma_2$ replacing $Q$ and $U$), one arrives at a simple expression for the $B$-mode moments in terms of the rotation field
\begin{equation} B_{lm} = \int d^2n Y_{lm}^*(\hat n) \omega(\hat n). \end{equation}
The rotation receives no contributions from scalar perturbations, so it is non-zero only if tensor modes are present (at first order; see Ref.~\cite{Sarkar:2008ii} for second order scalar effects). Explicitly, the moments of the B-mode of the shear field due to GW's are~\cite{Dodelson:2003bv}
\begin{equation} B_{lm} = \int \frac{d^3k}{(2\pi)^3} \sum_{\alpha=+,\times} \tilde h^{(\alpha)}(\vec k) T^{L,(\alpha)}_{lm}(\vec k) \end{equation}
with the lensing transfer function defined as
\begin{equation} T^{L,(\alpha)}_{lm}(\vec k) \equiv \frac{1}{2} \int d^2n Y^*_{lm}(\hat n) \epsilon_{ijk} \hat n^i \hat n^p \epsilon^{(\alpha)}_{kp}(\hat k) k_j \int_0^{D_s} dD \, e^{i\vec k\cdot \hat n D} T(k,\eta_0-D) \end{equation}
where here $\epsilon_{ijk}$ is the 3D Levi-Civita symbol as opposed to the polarization tensor $\epsilon^{(\alpha)}_{kp}(\hat k)$. The integral here is out to the source galaxies, all assumed to be at distance $D_s$. Again the integrals over angles simplify considerably when $\hat k$ is chosen to lie along the $z$-axis.
Appendix~\ref{applen} contains the details leading to
\begin{equation} T^{L,(+)}_{lm}(k\hat z) = i^{l+1}\left[\delta_{m,2}-\delta_{m,-2}\right] \sqrt{2l+1} \frac{ \sqrt{\pi(l+2)(l+1)l(l-1)}}{2}k \int_0^{D_s} dD \, T(k,\eta_0-D) \frac{j_l(kD)}{(kD)^2} .\eql{tranlen} \end{equation}
Fig.~\rf{tlww} shows the transfer functions for the lowest multipoles, assuming all galaxies are at redshift $z_s=1$. The amplitudes here are important: the transfer function for $l=2$ is largest, reflecting the fact that the signal will be largest on the largest scales. \Sfig{tlww}{The $m=2$ moment of the B-mode lensing transfer function for $l=2-6$. Sources are all assumed to be at $z=1$.} \section{Cross-Correlation} We can now collect the window functions for the two auto-spectra and the cross-spectrum and compute the correlation coefficient. Explicitly, in addition to \ec{clbfin}, we have
\begin{eqnarray} W^{LL}_l(k) &=& \pi(2l+1) (l+2)(l+1)l(l-1) \bigg\vert k \int_0^{D_s} dD \, T(k,\eta_0-D) \frac{j_l(kD)}{(kD)^2}\bigg\vert^2\nonumber\\ W^{PL}_l(k) &=& 6\pi \sqrt{(l+2)(l+1)l(l-1)}k \int_0^{D_s} dD' \, T(k,\eta_0-D') \frac{j_l(kD')}{(kD')^2}\int_{0}^{D_{\rm reion}} dD\, \nonumber\\ &&\times \dot\tau\left(\eta_0-D\right) \left[ (l+2) j_{l-1}(kD) -(l-1) j_{l+1}(kD) \right] \int_{0}^{\Delta D} d(\Delta D') \, \dot T(k,\eta') \frac{j_2(k\Delta D')}{(k\Delta D')^2} . \end{eqnarray}
The correlation coefficients which express the degree to which the moments are correlated are defined as
\begin{equation} \alpha_l \equiv \frac{ C^{PL}_l}{\sqrt{C^{PP}_l C^{LL}_l} } \end{equation}
where each $C_l$ is an integral over the power spectrum modulated by the window function $W_l$. The window functions for the lowest moments of the cross-spectra are plotted in Fig.~\rf{wlcross} when the background galaxies in the lensing survey are all assumed to be at $z=1$ and reionization takes place instantaneously at $z_{\rm reion}=10$. As is clear, the $l=2$ moment has a negative correlation coefficient and is particularly sensitive to modes of precisely the size of the horizon. The lowest correlation coefficients -- again when $z_{\rm source}=1$ and $z_{\rm reion}=10$ -- are: $(\alpha_2,\alpha_3,\alpha_4,\alpha_5,\alpha_6)=(-0.32,0.10,0.31,-0.09,-0.20)$. \Sfig{wlcross}{Cross-correlation window function for lowest moments when lensing galaxies are all at $z=1$ and reionization occurs instantaneously at $z_{\rm reion}=10$.} A fascinating aspect of the cross-correlation is that, because of the oscillations in the gravitational waves, the correlation coefficient is sensitive to the redshift of the source galaxies in the lensing survey. Fig.~\rf{zs} shows this dependence. In principle, one could imagine exploiting this dependence by weighting each galaxy by some factor to maximize the signal to noise extraction, but I do not pursue this possibility here. \Sfig{zs}{Cross-correlation coefficient of CMB and weak lensing B-modes for the lowest two moments as a function of the redshift of the galaxies in the lensing survey.} \section{Significance} How much would a lensing survey sensitive to the lowest moments add to our knowledge about the primordial gravitational waves? And how does the cross-correlation affect this answer? As political pollsters have discovered, the answers to these questions depend sensitively on how they are worded.
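For reference, once the windows above are tabulated on a $k$-grid, each spectrum entering $\alpha_l$ is a single quadrature; a schematic sketch, with hypothetical placeholder windows standing in for the real expressions:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def C_l(l, k, P_h, W):
    """C_l = [1/(2l+1)] Int d^3k/(2 pi)^3 P_h(k) W_l(k), isotropic integrand."""
    return trapezoid(k**2 * P_h * W, k) / (2.0 * np.pi**2 * (2 * l + 1))

def alpha_l(l, k, P_h, W_PP, W_LL, W_PL):
    return C_l(l, k, P_h, W_PL) / np.sqrt(C_l(l, k, P_h, W_PP) *
                                          C_l(l, k, P_h, W_LL))

k = np.linspace(1e-3, 0.2, 4000)
P_h = 1.0 / k**3                         # scale-invariant spectrum
W_PP = np.exp(-((k - 0.02) / 0.010)**2)  # placeholder windows only
W_LL = np.exp(-((k - 0.03) / 0.015)**2)
W_PL = -0.4 * np.sqrt(W_PP * W_LL)       # cross window; sign can be negative
print(alpha_l(2, k, P_h, W_PP, W_LL, W_PL))
\end{verbatim}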
For example, a simple question one might ask is how much tighter the constraints on the gravitational wave amplitude would become if the information from an all-sky lensing survey were added to that from a CMB polarization experiment. A quantitative answer to this can be obtained with the Fisher formalism. Consider two data points, the $l=2,m=2$ moment of the CMB B-mode and the same moment of the lensing B-mode. Normalize each so that the expected contribution to the variance from noise is unity. Also, allow for one free parameter, the normalization of the GW amplitude, $A$, with ``true'' value equal to 1. Finally, call the ratio of the signal variance to the noise variance $\lambda$ for each probe. Then the $2\times2$ covariance matrix is
\begin{equation} C = \left( \matrix{ A\lambda_P + 1 & \alpha A\sqrt{\lambda_P\lambda_L}\cr \alpha A\sqrt{\lambda_P\lambda_L} & A\lambda_L+1} \right). \end{equation}
Here $\alpha$ is the correlation coefficient computed in the previous section. The $_{11}$ entry contains contributions from both signal ($A\lambda_P$) and noise (1). The signal contribution to the variance is $\lambda_P\equiv C^{PP}_{l=2}/N^{PP}_{l=2}$ where $C^{PP}$ is the power computed in \S{II} and $N^{PP}$ is the noise variance which of course depends on the experiment. Here and throughout this section, I will focus only on the lowest $l=2$ moment since lensing contributions fall off rapidly with $l$. For orientation, current bounds on the gravitational wave amplitude from the CMB restrict $C^{PP}_{l=2}$ to be of order $(100\,{\rm nK})^2$ while future experiments~\cite{Bock:2009xw} aim for noise as low as $N^{PP}\sim (1 \,{\rm nK})^2$, so thoughts of $\lambda_P$ as large as $\sim 10^4$ are not crazy. Our goal is to estimate the uncertainty on the one free parameter $A$. The off-diagonal elements are free from noise so depend only on the cross-correlation of the two signals. Starting from this covariance matrix, and including all five $l=2$ moments, the standard formula for the Fisher matrix from a Gaussian process leads to a fractional error on $A$ of
\begin{equation} \frac{\Delta A}{A} = \sqrt{\frac{2}{5}} \frac{1+\lambda_L + \lambda_P+ \lambda_P\lambda_L(1-\alpha^2)} {\left[ 2\lambda_P^2\lambda_L^2(1-\alpha^2)^2 + 2(1-\alpha^2)\lambda_P\lambda_L(\lambda_P+\lambda_L) + \lambda_L^2+\lambda_P^2+2\alpha^2\lambda_P\lambda_L\right]^{1/2}}. \end{equation}
The factor of $\sqrt{5}$ in the denominator here comes from summing all five $l=2$ harmonics. In the limit that $\lambda_P$ is large (corresponding to high signal to noise detection in the CMB), Fig.~\rf{fisher} shows how the constraints on the amplitude depend on the correlation coefficient and the lensing signal to noise. When the lensing signal is very small ($\lambda_L=0.1$) the error is simply equal to $\sqrt{2/5}=0.63$, the minimum possible error due to the cosmic variance of the $l=2$ CMB mode. If the signal to noise of lensing is higher, then the fractional error can go down, in principle as low as $\sqrt{1/5}$ since the amount of information doubles. Fig.~\rf{fisher} illustrates that correlations {\it degrade} the extraction of the amplitude. This makes sense: when trying to measure a variance one wants as many independent numbers as possible. If $\alpha=1$, then the two numbers (B-modes from CMB polarization and from lensing) are not independent so the error on the variance increases.
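These limits can be read off numerically from the expression for $\Delta A/A$; a minimal sketch:
\begin{verbatim}
import numpy as np

def frac_error_A(lam_P, lam_L, alpha):
    """Fisher fractional error on A from the formula above (five l=2 modes)."""
    one = 1.0 - alpha**2
    num = 1.0 + lam_L + lam_P + lam_P * lam_L * one
    den = np.sqrt(2 * lam_P**2 * lam_L**2 * one**2
                  + 2 * one * lam_P * lam_L * (lam_P + lam_L)
                  + lam_L**2 + lam_P**2 + 2 * alpha**2 * lam_P * lam_L)
    return np.sqrt(0.4) * num / den

print(frac_error_A(1e4, 0.1, 0.0))   # ~0.63 = sqrt(2/5): lensing adds nothing
print(frac_error_A(10., 10., 0.0))   # ~0.49: independent lensing info helps
print(frac_error_A(10., 10., 0.99))  # ~0.66: strong correlation degrades the error
\end{verbatim}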
As $\alpha\rightarrow 1$, the constraint on $A$ reverts to the $\sqrt{2/5}$ limit that would be obtained without the additional lensing information. The conclusion from this exercise is that lensing information would help very little in the effort to pin down the gravitational wave amplitude. At best -- information only from $l=2$ and large signal to noise from both sets of experiments -- the reduction in the error would be only a factor of $\sqrt{2}$. Most likely, if GW are detected, the $l>2$ moments in the CMB will be very important while those higher moments will be undetectable in lensing. So the gain from a lensing survey would be diluted significantly. The effect studied here -- cross-correlations between the two signals -- serves to further dilute the impact of lensing, as the information would be redundant and hence useless in constraining the GW amplitude. However, the results of the previous section suggest that $\alpha^2$ is likely to be small enough so that the dilution would be minimal. If the lensing signal were detected, it would provide -- for the most part -- independent information about the GW amplitude. \Sfig{fisher}{Constraints on the gravitational wave amplitude as a function of the cross-correlation coefficient if the signal in the CMB experiment is large. The signal to noise in the lensing survey is $\lambda_L$. For large values of $\lambda_L$, the error on the gravitational wave amplitude goes down by $\sqrt{2}$ unless correlations between the two sets of measurements are large ($\alpha\rightarrow 1$).} Another way to probe the importance of lensing and cross-correlations is to focus not on parameter determination ($\Delta A$) but rather on firming up the case for detection. To show the difference between these two sets of questions, consider a simple example: temperature anisotropies in the $l=2$ mode as measured by WMAP~\cite{wmap}. The expected value of $l(l+1)C_l/(2\pi)=3C_2/\pi$ in the standard $\Lambda$CDM model is $928 (\mu K)^2$, while the measurement error contributes only $0.0124 (\mu K)^2$, so the value of $\lambda$ in this case [the ratio of cosmic variance to measurement error] is equal to $928/0.0124\simeq 75,000$. The Fisher one-sigma error on the fractional amplitude is $\sqrt{2/5}(1+1/\lambda)$, essentially equal to the 0.63 cosmic variance limit. This large error hides the extreme non-Gaussianity of the likelihood function. Indeed, even using $l=2$ only, the statistical probability that there is no signal is infinitesimally small. To quantify this, consider the ratio of the likelihoods for two different models: (i) the best fit $\Lambda$CDM model with $3C_2/\pi\simeq 928 (\mu K)^2$ and (ii) a model where $C_2=0$ with no signal. Since the measured value of $3C_2/\pi$ is $201 (\mu K)^2$, the ratio of these two likelihoods -- using only $l=2$ data -- is
\begin{equation} \frac{\mathcal{L}_1}{\mathcal{L}_2} = \frac{1}{(1+\lambda)^{5/2}} \exp\left\{ -\frac{5}{2} \Big[ \frac{201/0.0124}{1+\lambda} - \frac{201}{0.0124} \Big]\right\} \end{equation}
of order $e^{40,000}$! Therefore, while the measured $l=2$ moments give very little information about the amplitude of the anisotropies, they weigh in very heavily on the question of whether or not anisotropies have been detected. Returning to B-mode detection from the CMB, instead of using the one-sigma Fisher error, we compute the ratio of the likelihood for detection $\mathcal{L}(A=1)$ vs. the likelihood of no detection $\mathcal{L}(A=0)$. This likelihood ratio will vary depending on the data.
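The mock-sky exercise described next is easy to reproduce; a minimal sketch for the CMB-only case, with the noise variance normalized to unity and likelihood-ratio thresholds of 100 and 10 assumed here as stand-ins for 99\% and 90\% detections:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def likelihood_ratio(x, lam_P):
    """L(A=1)/L(A=0) for the five l=2 moments x (unit noise variance)."""
    S = np.sum(x**2)
    v1 = 1.0 + lam_P                      # variance if GW are present
    return v1**(-2.5) * np.exp(-0.5 * S * (1.0 / v1 - 1.0))

for lam_P in (1.0, 10.0):
    mocks = rng.normal(0.0, np.sqrt(1.0 + lam_P), size=(1000, 5))  # true A=1
    r = np.array([likelihood_ratio(x, lam_P) for x in mocks])
    print(lam_P, np.mean(r > 100), np.mean(r > 10))
\end{verbatim}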
Fig.~\rf{nofg} shows the distribution of the likelihood ratio $\mathcal{L}(A=1)/\mathcal{L}(A=0)$ for 1000 mock ``skies'' (here sky simply means the five $l=2$ CMB moments) generated from a true model with $A=1$. Even when the signal to noise is unity ($\lambda_P=1$) so that the 1-sigma Fisher error is $\Delta A/A=1.26$, the likelihood ratio in a given experiment could be very large, signaling a detection. Ten percent of the mocks produced likelihood ratios greater than 100; that is, one would have concluded with 99\% certainty that there are B-modes. A 90\% CL detection would have occurred 41\% of the time. If the signal to noise were larger, $\lambda_P=10$, then detection is virtually assured, with a 99\% detection emerging from 95\% of the runs. \Sfig{nofg}{Distribution of likelihood ratios for gravitational waves (GW) and no GW for 1000 fake skies generated when the true model contains GW's (using $l=2$ CMB data only). When the underlying model has $\lambda_P=1$, the Fisher estimated 1-sigma fractional error on the amplitude of GW's would be $\sqrt{2/5}\times (1+\lambda_P)/\lambda_P = 1.26$, while the $\lambda_P=10$ model produces a Fisher error of $0.7$. Nonetheless, a statistically significant detection is possible in the first case and virtually assured in the second.} This suggests that, while lensing will not be of much use in constraining the GW amplitude, it might help in firming up the evidence for a detection. For example, suppose a CMB experiment measured a non-zero B-mode, but one wanted to compute the likelihood that this was due to GW or to foregrounds. With a CMB experiment only, the likelihood ratio of these two ``models'' (signal due to GW or signal due to foregrounds) would be unity: there would be no way to tell them apart. How much would a lensing experiment improve on this? To answer this, I generated 1000 realizations of both lensing and CMB $l=2$ moments with $\lambda_P=10$ and $\lambda_L=1$ and a given value of $\alpha$. For each realization, I computed the likelihood ratio of two models:
\begin{itemize} \item{\bf Model 1: True Model} Signal to Noise in CMB $\lambda_P=10$; in lensing $\lambda_L=1$ and the true value of $\alpha$ \item{\bf Model 2: Null Model} Foregrounds produce $\lambda_P=10$, while there is no lensing signal, $\lambda_L=0$ \end{itemize}
The distribution of these likelihood ratios is shown in Fig.~\rf{histo} for several values of $\alpha$. When $\alpha=0$, (7,39)\% of the realizations led to a (99,90)\% detection; for $\alpha=0.5$ those percentages go up to (12,45), and for $\alpha=0.9$ up to (21,57). So when the question posed is one of detection, as opposed to parameter determination, non-zero cross-correlation is beneficial. \Sfig{histo}{Likelihood ratios of GW vs. (foregrounds in CMB and no signal in lensing) for 1000 mock skies generated from a true model with GW.} A harder question is whether one could detect the cross-correlation. To approach this question, I set the true values of $\lambda_P=100$ and $\lambda_L=1$ and computed the likelihood ratio of two models with:
\begin{itemize} \item{\bf Model 1:} $\alpha$ equal to its true value \item{\bf Model 2:} $\alpha=0$ \end{itemize}
The likelihood ratio was rarely significant. A 90\% detection is never obtained if $\alpha=0.5$ and only 11\% of the time even if $\alpha=0.95$. So we are unlikely to detect the cross-correlation of the lensing and CMB B-modes. \section{Conclusions} Primordial gravitational waves produce indirect effects on both the polarization of the CMB and the lensing of distant galaxies.
These effects are correlated, with a size and sign which depend on the angular scale and on the redshifts of the background galaxies. The correlation may help sort out systematics if a B-mode detection is made in the CMB. It seems likely that all-sky lensing surveys will be carried out for other purposes, so the search for the B-mode in that arena will cost nothing. It might even be used to motivate such surveys should the CMB B-modes be detected. I thank Anthony Challinor, Rob Crittenden, Wayne Hu, Lam Hui, Matthew Kleban, Eiichiro Komatsu, Hiranya Peiris, and Albert Stebbins for helpful conversations. This work was supported by the DOE at Fermilab and by NSF Grant AST-0908072.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\bf Introduction} Electric-magnetic duality in supersymmetric gauge theories and the profound role of duality in the second superstring revolution in the 1990s generated a lot of interest amongst theoretical high energy physicists in this subject; see a nice review with extensive citation to the literature by Duff \cite{1}. Witten articulated the conceptual development of the duality paradigm with emphasis on superstring theory in a lucid expository article \cite{2}. Interestingly Zee in his book \cite{3} remarks that 'In contrast, according to one of my distinguished condensed matter colleagues, the important notion of duality is still underappreciated in the condensed matter physics community'. However, SL(2, Z) symmetry has been discussed in connection with topological aspects of Hall effect phase transitions by Fradkin and Kivelson \cite{4}. Fradkin (in private communication) incidentally points out that Zee's remark is not correct, and that condensed matter physicists had long ago realized the significance of duality. Our view is that duality has wider significance in theoretical physics. Note that duality symmetry prior to these developments was primarily associated with electromagnetism and optical phenomena \cite{5, 6}. Recently Bunster and Henneaux \cite{7} raised the question of whether electric-magnetic duality could be gauged and concluded that it could not be. However, nearly two decades ago a local duality invariant formulation was presented \cite{8}, generalizing the four-vector action integral proposed by Sudbery \cite{9}. Not only this, a local duality gauge theory in the Schroedinger-like form of the Maxwell equations following \cite{10} was discussed in \cite{11} to shed light on the Pancharatnam topological phase in optics \cite{12}. Deser has further demonstrated \cite{13} the no-go result of \cite{7} in the canonical formulation \cite{14} of the Maxwell theory of electromagnetism. We do not dispute this conclusion of \cite{7, 13} but emphasize that a local duality gauge theory is not unphysical and could be implemented both at the level of the equations of motion and manifestly in the action \cite{8}. The aim of the present paper is to resolve this seemingly paradoxical situation and offer new insights on the question of local duality invariance. The key issue in this context would be to delineate the fundamental principles involved and the independent field variables assumed. Though we are only concerned with duality in electromagnetism, in the light of its wider ramifications we first give a brief description of this idea in the next section as it is understood in diverse fields. Global duality invariance, in which the rotation angle is a constant parameter, is reviewed in Sec. III. In Sec. IV the variational principle of Anderson and Arthurs \cite{15} and Rosen \cite{16} that motivated the work in \cite{10} is briefly discussed. Salient features of Sudbery's theory \cite{9} and the generalized local gauge invariant action are presented next. New field equations in covariant form are derived and their consistency with the local duality invariant Maxwell equations given recently \cite{17} is shown. In Sec. V the physical significance of duality gauge theory is discussed and possible applications are outlined with the concluding remarks. \section{\bf Preliminaries on duality symmetry} If symmetry symbolizes beauty, the Maxwell equations are most beautiful. These are invariant under a 15-parameter group of conformal transformations with a 10-parameter Lorentz subgroup \cite{18}.
Remarkably both general covariance and metric-free topological invariance co-exist in the Maxwell equations. Regarding electric-magnetic duality Deser \cite{13} rightly remarks that it is 'ancient lore'. Lipkin discovered ten new conserved quantities for the vacuum Maxwell equations \cite{19} and named them the zilch of the electromagnetic field. Note that conservation laws are intimately related to symmetry. Of course, the most fruitful symmetry that paved the path for modern unified theories is that of gauge invariance once the electromagnetic potential is introduced \cite{20}. The complete set of vacuum Maxwell equations with sources has two manifest asymmetries: the absence of magnetic source terms and a sign asymmetry in the time derivative. In 1931 Dirac postulated the magnetic monopole with the main aim of explaining electric charge quantization. Later, in a comprehensive theory in 1948 \cite{21}, the role of symmetry was underlined by him. Insightful remarks in the last section of that paper deserve attention. A question is raised whether an elementary particle could possess both electric and magnetic charges, and the answer is left undecided in the absence of a theory that accounts for self interactions. He further notes that the theory is 'essentially symmetrical between electric charges and magnetic poles'; however there is a difference due to the different values of the coupling strengths $e^2/\hbar c = 1/137$ and $g^2/\hbar c = 137/4$. We reproduce two statements relevant to duality from this article: 'However, one could work equally well with the roles of charges and poles interchanged'. And, 'The final result would be an equivalent quantum electrodynamics referred to a different representation'. As explained below, Dirac could be credited with anticipating the most modern version of the duality principle. In passing we just mention a discussion on time asymmetry in \cite{22} and focus our attention on the notion of duality in the following. No experiment to date has found any evidence for the existence of a magnetic monopole, yet the monopole continues to be a linking thread in the evolution of the duality principle. First, a monopole solution was discovered in non-abelian classical Yang-Mills theory, and then in unified gauge theories a monopole with a huge mass of the order of $10^{19}~{\rm GeV}~c^{-2}$ was predicted. Recall Schwinger's speculation on particles having both electric and magnetic charges, named dyons by him; by implication, a duality rotation of such particles could be utilized to understand the absence of magnetic charges and currents \cite{23}. A class of gauge theories admit stable particles with electric charges $Q_e = p e$ and magnetic charges $Q_m= qg$, where $p$, $q$ are integers, with a mass formula of the form $M=\sqrt {Q_e^2+Q_m^2}$. In 1977 Montonen and Olive interpreted this in terms of a symmetry under the exchange of electric and magnetic charges. In such a symmetry the coupling constant transforms as $\alpha \leftrightarrow \alpha ^{-1}$ and elementary quanta are exchanged with collective excitations. For weak coupling $\alpha$ the electric charge is elementary and the monopole is a soliton-like excitation, whereas for strong coupling the monopole is elementary and the electric charge emerges as a collective excitation. In four dimensions this duality can be realized only in supersymmetric gauge theories. In a stronger version one has the notion of self-duality: the dual theory is the same as the original one. We refer to Duff \cite{1} for a discussion of the $N=4$ supersymmetric self-dual gauge theory.
In a supersymmetric gauge theory there is an additional parameter, namely the vacuum angle $\theta$, which can be combined with the charge $e$ to define a complex parameter $S$
\begin{equation} S= \frac{\theta}{2\pi} + i \frac{4\pi}{e^2} \end{equation}
where the magnetic charge is $Q_m =\frac{n}{e}$ and the electric charge is $Q_e =e(m+\frac{n\theta}{2\pi})$. Here $n$, $m$ are integers. Electric-magnetic duality forms a group of order 2 and $Z$ in SL(2, Z) signifies that the matrix elements in the $2 \times 2$ matrices $\left(\begin{array}{cc} a & b \\ c & d\\ \end{array}\right)$ that form the modular group are integers. Note that the determinant of the matrices is one, that is $ad - bc =1$. Townsend \cite{24} in a pleasant popular-level but insightful article on duality remarks regarding SL(2, Z) symmetry: {\it By an abuse of language it has become customary to refer also to this generalization of electromagnetic duality (and others) as a 'duality'.} The action of SL(2, Z) on the parameter $S$ is
\begin{equation} S \rightarrow \frac{aS+b}{cS + d} \end{equation}
In superstring theory the analogue of the complex coupling constant is a complex scalar field, in which the angle $\theta$ corresponds to the vacuum expectation value of the axion field and the charge to that of the dilaton field. S-duality in superstrings becomes a transformation law for the axion-dilaton field. The picture that seems to emerge is that there is an underlying unity amongst various superstrings in what is called M theory. The last sentences in \cite{1, 24} aptly capture the essence when Duff sees the role of duality in 'unification via diversification' and Townsend finds it ironic that 'newly emerging unity is the result of a renewed interest in the old idea of duality'. Unfortunately the envisaged dream of unification remains elusive so far. \section{\bf Duality invariance of Maxwell field equations and action} Originally electric-magnetic duality meant ${\bf E} \rightarrow {\bf B}$ and ${\bf B} \rightarrow -{\bf E}$. It can be extended to a more general rotation with arbitrary constant rotation angle $\zeta$ such that
\begin{equation} {\bf E} \rightarrow {\bf E}~ cos \zeta + {\bf B} ~sin\zeta \end{equation}
\begin{equation} {\bf B} \rightarrow - {\bf E} ~sin \zeta + {\bf B} ~cos \zeta \end{equation}
It can be easily verified that the source-free vacuum Maxwell equations
\begin{equation} {\bf \nabla.E} = 0 \end{equation}
\begin{equation} {\bf \nabla} \times {\bf B} - \frac {\partial {\bf E}}{\partial t} =0 \end{equation}
\begin{equation} {\bf \nabla} \times {\bf E} + \frac {\partial {\bf B}}{\partial t} =0 \end{equation}
\begin{equation} {\bf \nabla.B} = 0 \end{equation}
are invariant under this transformation (a symbolic check is sketched below). However, the action
\begin{equation} I = \frac{1}{2} \int ({\bf E}^2 - {\bf B}^2)~ d^4 x \end{equation}
is not manifestly invariant under duality rotation. This would seem puzzling; however, the authors in \cite{7} emphasize that it is erroneous to say that duality is not a symmetry of the action but only of the equations of motion. There is a technical subtlety involved in this assertion; therefore this section is devoted to revisiting this issue with a fresh outlook and also to bringing to notice significant past contributions that gave deep physical insights.
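The claimed invariance of Eqs. (5)-(8) under the rotation (3)-(4) can be confirmed symbolically; a minimal sympy sketch for constant $\zeta$ (the rotated Maxwell operators are linear combinations of the original ones, so they vanish whenever the originals do):
\begin{verbatim}
import sympy as sp

t, x, y, z, zeta = sp.symbols('t x y z zeta', real=True)
X = (x, y, z)
E = sp.Matrix([sp.Function('E%d' % i)(t, *X) for i in range(3)])
B = sp.Matrix([sp.Function('B%d' % i)(t, *X) for i in range(3)])

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

c, s = sp.cos(zeta), sp.sin(zeta)
Ep, Bp = c*E + s*B, -s*E + c*B               # Eqs. (3) and (4)

print(sp.simplify(div(Ep) - (c*div(E) + s*div(B))))                  # 0
print(sp.simplify(curl(Ep) + sp.diff(Bp, t)
                  - c*(curl(E) + sp.diff(B, t))
                  - s*(curl(B) - sp.diff(E, t))))                    # zero vector
\end{verbatim}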
Let us consider an infinitesimal rotation $\delta \zeta$ that reduces (3) and (4) to
\begin{equation} {\bf E} \rightarrow {\bf E} + {\bf B} ~\delta\zeta \end{equation}
\begin{equation} {\bf B} \rightarrow - {\bf E} ~\delta\zeta + {\bf B} \end{equation}
then the action (9) changes by
\begin{equation} \delta I = 2 \delta \zeta \int {\bf E.B}~ d^4x \end{equation}
neglecting higher order terms in $\delta \zeta$. As it stands, expression (12) shows that the action is not invariant even under infinitesimal duality rotation. Deser and Teitelboim \cite{25} point out that the ${\bf E.B}$ term could be re-written as $\partial_\mu ~C^\mu$ where
\begin{equation} C^\mu = \epsilon^{\mu\nu\rho\sigma} A_\nu \partial_\rho A_\sigma \end{equation}
since according to them '$F_{\mu\nu}$ is only a shorthand for $\partial_\mu A_\nu -\partial_\nu A_\mu$'. Here $\epsilon ^{\mu\nu\rho\sigma}$ is the Levi-Civita tensor in four dimensions and $F^{\mu\nu}$ is the electromagnetic field tensor. It then follows that $\delta I$ vanishes and the action is invariant. In the case of finite rotation, though the ${\bf E.B}$ term would again arise and could be made to vanish for the identical reason, the action transforms to
\begin{equation} I \rightarrow (cos^2 \zeta -sin^2 \zeta) ~I \end{equation}
and is clearly not invariant. Invariance of the action (9) in both second-order Lagrangian and first-order Hamiltonian forms is proved in \cite{25}, and off-shell duality invariance is further clarified in \cite{14}. In the two-vector potential formalism the action (9) assumes a manifestly duality invariant form, see Sec. IIA of \cite{7}
\begin{equation} I_V =\frac{1}{2} \int (\epsilon _{ab} {\bf B}^a . \dot{{\bf A}}^b - \delta_{ab} {\bf B}^a. {\bf B}^b) ~ d^4x \end{equation}
Note that the Kronecker symbol $\delta_{ab}$ and the Levi-Civita tensor $\epsilon_{ab}$ are invariant under rotation in two dimensions and $a, b =1, 2$. Here the over-dot denotes the time derivative and
\begin{equation} {\bf B}^a = {\bf\nabla} \times {\bf A}^a \end{equation}
with ${\bf B}^1$ and $-{\bf B}^2$ identified as magnetic and electric fields respectively, and the duality rotation corresponds to the rotation in ${\bf A}^1, {\bf A}^2$. The main argument to implement the duality transformation in the action is that one has to consider the basic dynamical field variables, and satisfy the time locality requirement \cite{25}. In the action integral $A_\mu$ is a fundamental dynamical variable for the Lagrangian form, and $A_\mu$ and its canonically conjugate variable for the Hamiltonian formulation. At this point it would be of interest to briefly mention other significant contributions. Calkin, using Noether's theorem, arrived at an important result in \cite{5}. He asked the question: What conservation law corresponds to the duality symmetry? An infinitesimal duality rotation is performed on the electromagnetic potentials and the invariance of the Lagrangian (or action) is shown to lead to a conserved quantity: this conserved quantity is proportional to the difference in the number of right circularly polarized (RCP) and left circularly polarized (LCP) photons. A possible connection with Lipkin's zilch \cite{19} is also suggested. Note that the constant of motion for duality invariance given in \cite{25} also has a correspondence with Lipkin's conserved zilch. In an important work Zwanziger \cite{6} considers the transformation (3)-(4) for the Maxwell equations in vacuum in the presence of both electric and magnetic sources.
He makes a distinction in terminology that for $\zeta = \pi /2$ the transformation is known as duality and for arbitrary $\zeta$ it is called a chiral transformation. In the light of Calkin's conserved quantity related to helicity, the term chiral appears appealing, and was used in \cite{8}. However here we adopt duality for arbitrary rotation angle $\zeta$. Zwanziger argues that for unitarily equivalent Hamiltonians under the duality (chiral) transformation the photon state of momentum ${\bf k}$ and helicity $\lambda$ undergoes the transformation
\begin{equation} |{\bf k}, \lambda> ~ \rightarrow ~ U(\zeta)~|{\bf k}, \lambda> =e^{i\lambda\zeta}~|{\bf k}, \lambda> \end{equation}
A nice physical interpretation is given by him: the relative phase of left and right CP light or the absolute plane of polarization of linearly polarized light cannot be determined. The mathematical analysis of duality invariance is developed in a two-dimensional real vector space for the transverse radiation field variables ${\bf E}^r,~ {\bf H}^r$ and then by introducing two vector potentials akin to those given in Eq. (16). The generator $G$ in the unitary duality transformation
\begin{equation} U(\zeta) = e^{i\zeta G} \end{equation}
is Hermitian, and determines the difference between the number of RCP and LCP photons. Regarding Dirac's charge quantization condition it is shown that the duality invariant theory gives a new rule, namely the quantization of the chiral combination of electric and magnetic charges $(e_m g_n -g_m e_n)$. A formal complex vector representation ${\bf \Psi} = {\bf E} + i {\bf B}$ renders the curl Maxwell equations (6) and (7) in a suggestive Schroedinger-like form \cite{10}
\begin{equation} {\bf S}.{\bf \nabla \Psi}=\frac{\partial {\bf \Psi}}{\partial t} \end{equation}
where the $3\times3$ matrices ${\bf S}$ are defined as
\begin{equation} {(S_i)}_{jk} = i \epsilon _{ijk} \end{equation}
with $\epsilon_{ijk}$ the Levi-Civita tensor in three dimensions. Multiplying both sides of Eq. (19) by $i\hbar$ and making the identification ${\bf p} = -i\hbar {\bf \nabla}$, it follows that the Hamiltonian is $H =-{\bf S}.{\bf p}$. The duality rotation (3) and (4) assumes the form
\begin{equation} {\bf \Psi} ~ \rightarrow ~ e^{-i\zeta} {\bf \Psi} \end{equation}
Note that the divergence equations (5) and (8) combine to give
\begin{equation} {\bf \nabla}.{\bf \Psi} =0 \end{equation}
Assuming that initially both electric and magnetic fields have zero divergence, one could treat Eq. (22) as a subsidiary condition. Drawing an analogy of the photon equation (19) with the Weyl equation for the massless neutrino, and postulating two elementary fields in a two-dimensional real vector space, chiral or duality invariance was used in \cite{26} to speculate on the nature of the monopole and a composite photon. An interesting review by Kobe \cite{27} shows that the relativistic Schroedinger-like, or rather more appropriately Dirac spinor-like, equation for the photon is a fascinating subject, and has a long history. The preceding discussion indicates the profound physical significance of duality symmetry in electromagnetism. Regarding the difference in implementing duality in the Maxwell equations and in the action integral, two major issues elaborated below seem to be crucial.
${\bf A.}~$ Fundamental dynamical field variables The action (9) in manifestly covariant form can be written as
\begin{equation} I_c = -\frac{1}{4} \int F^{\mu\nu} F_{\mu\nu}~ d^4x \end{equation}
and recall that it is invariant under the gauge transformation
\begin{equation} A_\mu \rightarrow A_\mu + \partial_\mu \chi \end{equation}
In the standard theory the Lagrangian density in the Lorentz scalar action integral (23) has a functional dependence on the independent field variable $A_\mu$ and its derivatives $\partial_\mu A_\nu$. The variational principle in the usual way for infinitesimal variations $\delta A_\mu$ and $\delta(\partial_\mu A_\nu)$ gives rise to the Euler-Lagrange equation of motion
\begin{equation} \partial_\mu F^{\mu\nu} = 0 \end{equation}
Eq. (25) is a Lorentz covariant representation of half of the full set of Maxwell equations, i.e. only Eqs. (5) and (6). The obvious and well known fact is that $A_\mu$ does not appear in the Maxwell equations: Eqs. (5)-(8) or Eqs. (19) and (22) or Eq. (25). However the definition of $F_{\mu\nu}$ could be viewed as a re-statement of one of the pairs of Maxwell equations, (7) and (8). To see it in a more explicit way, using elementary vector calculus Eq. (8) implies that
\begin{equation} {\bf B} = {\bf \nabla} \times {\bf A} \end{equation}
and substituting (26) in (7) we have
\begin{equation} {\bf E} = - {\bf \nabla} \phi - \frac{\partial {\bf A}}{\partial t} \end{equation}
Expressions (26) and (27) define the electromagnetic field tensor $F_{\mu\nu}$. It is evident that variation of the action (23) does not give the full set of Maxwell equations as the equations of motion. For the sake of completeness we write the remaining pair also in compact covariant form
\begin{equation} \partial_\mu ~^* F^{\mu\nu} =0 \end{equation}
where the dual tensor is $^*F^{\mu\nu} =\frac{1}{2} \epsilon^{\mu\nu\rho\sigma}~F_{\rho\sigma}$. All of this is a textbook matter but given here for added emphasis on some aspects: when duality invariance or the lack of it is discussed, the distinction between the experimental laws abstracted in the form of the complete Maxwell equations, the equations of motion derived from the variational principle, and the functional form of the action has to be kept in mind. It is true that the currently accepted view is, as Witten for example underlines \cite{2}, that the four-vector potential has fundamental importance in 20th-century physics. As we have noted above $A_\mu$ is a basic dynamical variable in the action for electromagnetism. However from the experimental point of view in classical electrodynamics $A_\mu$ is superfluous or at best an auxiliary convenient mathematical tool. It may be mentioned that though the observed Aharonov-Bohm effect is a manifestation of the vector potential, treating it as a typical quantum effect, the debate on the physical reality of electromagnetic potentials continues unabated. It seems the origin of the failure of the local duality invariant theory lies at the level of ambiguity in implementing the global duality rotation itself in the action, and thus the review by Saa \cite{28} in response to \cite{7, 13}, though interesting, seems to be of limited scope. Could one dispense with $A_\mu$ completely and construct an action purely with the electromagnetic field as the independent dynamical variable? If one could do it the aforementioned problems would not arise. Remarkably Sudbery \cite{9} achieves this goal, but at the price that the action is not a conventional Lorentz scalar but a pseudo-four-vector.
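Incidentally, the statement that the definitions (26) and (27) embody the pair (7) and (8) identically is itself easy to confirm symbolically; a minimal sympy sketch:
\begin{verbatim}
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
X = (x, y, z)
phi = sp.Function('phi')(t, *X)
A = sp.Matrix([sp.Function('A%d' % i)(t, *X) for i in range(3)])

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def grad(f):
    return sp.Matrix([sp.diff(f, xi) for xi in X])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

B = curl(A)                                  # Eq. (26)
E = -grad(phi) - sp.diff(A, t)               # Eq. (27)

print(sp.simplify(div(B)))                   # 0: Eq. (8) holds identically
print(sp.simplify(curl(E) + sp.diff(B, t)))  # zero vector: Eq. (7) holds identically
\end{verbatim}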
We present new results based on this action in the next section. The problem persists also in the Hamiltonian formulation. See Dirac's elegant analysis of the role of constraints in Hamiltonian field theory \cite{29}, a good introductory treatment in Sec. 24, Ch. III of \cite{20}, and, with emphasis on gauge theories, Ch. 7 of \cite{30}. In the present context the language of constraints used by the authors in \cite{14, 25}, in both Hamiltonian and Lagrangian forms, helps in a formal reconciliation of duality invariance in the action: Eq. (5) above is re-named as a Gauss constraint, Eq. (26) as an algebraic constraint, and Eq. (27) as an identity, not a field equation. Another point made by the authors is that no gauge condition is imposed; only the transversality of the fields is used to describe the system by a reduced set of field variables. Note that Zwanziger \cite{6} precisely deals with such a reduced set of radiation fields. We argue that possibly this approach succeeds as a result of an ambiguity in the construction of the action for the source-free case: it is a matter of convention that the field tensor is defined by (26) and (27), which essentially embody Maxwell equations (7) and (8); one could alternatively define the electric and magnetic field vectors (with a sign change) by (26) and (27) respectively, which would represent the first pair of Maxwell equations (5) and (6), and the variation of the action would then give rise to the equations of motion (7) and (8). If we insist on the physical origin of the Maxwell equations none of the choices could be treated as mere constraints. It is significant that the standard definition of the field tensor retains the distinction between the pair of equations that emanates in the presence of sources. ${\bf B.}~$ Time and relativistic invariance Corson in Sec. 19 Ch. III \cite{20} presents a conceptual analysis of the action principle in field theories. Of special interest here is the role of time while performing the variations, since reference to particular Lorentz frames spoils relativistic invariance. He notes that instead of the volume between surfaces of constant time an invariant concept of space-like surfaces could be used. In a very lucid discussion Dirac \cite{29} points out the special role that time plays, singling out a specific observer, in developing the theory of Hamiltonian dynamics. Now even after assuming a preferred time coordinate, the canonically conjugate momentum variable for the Maxwell action (23)
\begin{equation} \pi _\mu = \frac{\partial L}{\partial \dot{A} _\mu} \end{equation}
poses a serious problem as its time component vanishes. As a consequence there arises an inconsistency with the fundamental Poisson bracket relation. The procedure to obtain the final form of a physically acceptable Hamiltonian \cite{30} involves intermediate gauge transformations as one takes the system from one point of time to another. Deser and Teitelboim implement duality in terms of time-local variations: the change in the action under duality rotation is obtained in the form of total time derivatives; see Eqs. (2.4) and (2.13) in their paper \cite{25}. The implicit role of the superfluous gauge variable could be seen there in arriving at a set of reduced dynamical variables: gauge invariant transverse fields/vector potential. Ramond \cite{30} gives the examples of the Coulomb and Arnowitt-Fickler gauges and makes a remark to the effect that one has to be careful about the difference between a non-dynamical variable and a genuine gauge condition.
Is there some non-obvious role of the gauge condition in implementing the duality transformation on the action? We have not been able to arrive at a definite answer and suggest that this question and the role of time need further examination to avoid a likely source of confusion in the formalism. \section{\bf Local duality in Sudbery's formalism} Sudbery's unconventional formalism \cite{9} is motivated by a new variational principle in which the electric and magnetic field vectors are the independent dynamical variables, not the standard electromagnetic four-vector potential; it was proposed by Anderson and Arthurs \cite{15} and independently by Rosen \cite{16}. Novel and intriguing features of Sudbery's action are that it is a pseudo-four-vector, not a conventional Lorentz scalar, and that Noether's theorem leads to the conserved energy-momentum tensor as a consequence of duality invariance, whereas the invariance under spacetime translations leads to a conserved third-rank tensor that is related to Lipkin's tensor \cite{19}. My interest in this formalism \cite{8} arose due to the interesting role of duality in it; however it must be made clear from the outset that duality invariance was not the motivation or the focus of attention in \cite{9, 15, 16} as it is in the present paper. Let us begin with the action proposed in \cite{15, 16}
\begin{equation} I_{AAR} = \int ( {\bf B}.\frac{\partial {\bf E}}{\partial t} - {\bf E}.\frac{\partial {\bf B}}{\partial t} -{\bf E}.({\bf \nabla} \times {\bf E})-{\bf B}.({\bf \nabla} \times {\bf B})+ 2 {\bf J}. {\bf B}) ~ d^4 x \end{equation}
which is a pseudo-scalar, and the variations in ${\bf E}$ and ${\bf B}$ in the usual way lead to the following Euler-Lagrange equations
\begin{equation} {\bf \nabla} \times {\bf E} = - \frac{\partial {\bf B}}{\partial t} \end{equation}
\begin{equation} {\bf \nabla} \times {\bf B} = {\bf J} + \frac{\partial {\bf E}}{\partial t} \end{equation}
In the preceding section the ambiguity in the Maxwell action was noted, namely that either of the pairs (5) and (6) or (7) and (8) could correspond to the Euler-Lagrange equations depending on the definition of the field tensor; here interestingly the pair (31) and (32) represents the curl equations (6) and (7) (setting the current density ${\bf J}=0$). Sudbery notes two deficiencies of (30): first, it is not in a Lorentz covariant form, and second, only a partial set of Maxwell equations is obtained from it. Regarding the second it may be recalled that only half of the Maxwell equations follow as Euler-Lagrange equations from the standard action (23); however the definition of the field tensor implicitly contains the remaining pair. Since the electric and magnetic field vectors in (30) are treated as fundamental field variables one cannot get the divergence equations in the AAR formulation. The Lagrangian density for ${\bf J} =0$ in the action (30) can be written in an elegant form using the complex vector representation for the fields ${\bf \Psi}$ similar to Good's formalism \cite{10} discussed in Sec. III. The new form is given by
\begin{equation} L_{AAR} = \frac{1}{2i} ({\bf \Psi} \frac{\partial {\bf \Psi}^*}{\partial t} -{\bf \Psi^*} \frac{\partial {\bf \Psi}}{\partial t}-{\bf\Psi ~S.\nabla\Psi}^* - {\bf \Psi}^* ~ {\bf S.\nabla \Psi}) \end{equation}
It is easy to verify that (33) is invariant under the duality rotation (21). Though we do not pursue it here, this form of the Lagrangian seems an interesting candidate for generalization in analogy with that of Dirac's relativistic spinor field.
Sudbery makes a radical proposal in which the Lagrangian is a pseudo-four-vector and the action is defined to be
\begin{equation} I^s_\sigma = \int L^s_\sigma ~ d^4 x \end{equation}
\begin{equation} L^s_\sigma =^*F^{\mu\nu}\partial_\nu F_{\mu\sigma} - F^{\mu\nu} \partial_\nu ~ ^*F_{\mu\sigma} - 2~ ^* F_{\sigma \mu} J^\mu \end{equation}
Using the variational principle the complete set of Maxwell equations with sources is obtained as the Euler-Lagrange equations. In the source-free case the action is invariant under the duality transformation
\begin{equation} F_{\mu\nu} \rightarrow F_{\mu\nu} cos \zeta + ^*F_{\mu\nu} sin \zeta \end{equation}
\begin{equation} ^*F_{\mu\nu} \rightarrow - F_{\mu\nu} sin \zeta + ^*F_{\mu\nu} cos \zeta \end{equation}
and using Noether's theorem one gets the conservation law for the symmetric energy-momentum tensor $T_{\mu\nu}$. Another intriguing result is that the invariance under spacetime translations leads to a conserved third-rank tensor related to Lipkin's tensor \cite{19}. It is noteworthy that duality invariance leads to the conservation of the symmetric energy-momentum tensor, in contrast to the nonsymmetric and gauge noninvariant canonical tensor $E_{\mu\nu}$ obtained in the standard formulation as a consequence of the spacetime translation symmetry of the action (23). Recalling that the term added to $E_{\mu\nu}$ to obtain $T_{\mu\nu}$ has an interpretation corresponding to spin energy \cite{20}, it seems duality rotation has a significant role in the polarization and angular momentum of light. Sudbery's Lagrangian can be further generalized to incorporate local duality invariance \cite{8} in an unambiguous manner. The new Lagrangian density in the absence of sources is given by
\begin{equation} L_\sigma = L^s_\sigma - g(F^{\mu\nu} F_{\mu\sigma} + ^*F^{\mu\nu}~ ^*F_{\mu\sigma}) W_\nu \end{equation}
It is straightforward to verify that (38) is invariant under the transformations (36) and (37) with $\zeta$ being a function of spacetime, provided the pseudo-four-vector $W_\nu$ transforms as $W_\nu \rightarrow W_\nu + g^{-1} \partial_\nu \zeta$, where $g$ is a coupling constant. The variational procedure for the action integral (34) with the integrand (38) gives rise to the field equations
\begin{equation} \partial_\mu F^{\mu\nu} = g~ W_\mu ~^*F^{\mu\nu} \end{equation}
\begin{equation} \partial_\mu ~ ^*F^{\mu\nu} = - g~ W_\mu~ F^{\mu\nu} \end{equation}
Equations (39) and (40) written in terms of electric and magnetic field vectors make the physical content more transparent
\begin{equation} {\bf \nabla.E} = -g ~{\bf W.B} \end{equation}
\begin{equation} {\bf \nabla.B} = g~ {\bf W.E} \end{equation}
\begin{equation} {\bf \nabla} \times {\bf E} + \frac {\partial {\bf B}}{\partial t} + g {\bf W} \times {\bf B} +g W_0 {\bf E} =0 \end{equation}
\begin{equation} {\bf \nabla} \times {\bf B} - \frac {\partial {\bf E}}{\partial t} - g {\bf W} \times {\bf E} +g W_0 {\bf B} =0 \end{equation}
A nice property of Sudbery's Lagrangian is that it can be further generalized to include electric charge and current densities $J^\mu _e$ as well as magnetic charge and current densities $J^\mu _m$.
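Before turning to sources, we note that the covariance of Eqs. (41)-(44) under a global duality rotation can be confirmed symbolically: the rotated left-hand sides are linear combinations of the original ones. A minimal sympy sketch ($\zeta$ constant and $W_\mu$ held fixed; the spacetime-dependent case additionally requires the gauge shift of $W_\nu$):
\begin{verbatim}
import sympy as sp

t, x, y, z, g, zeta = sp.symbols('t x y z g zeta', real=True)
X = (x, y, z)
E = sp.Matrix([sp.Function('E%d' % i)(t, *X) for i in range(3)])
B = sp.Matrix([sp.Function('B%d' % i)(t, *X) for i in range(3)])
W = sp.Matrix([sp.Function('W%d' % i)(t, *X) for i in range(3)])
W0 = sp.Function('W0')(t, *X)

def div(V):
    return sum(sp.diff(V[i], X[i]) for i in range(3))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

def ops(E, B):
    """Left-hand sides of Eqs. (41)-(44), moved to one side."""
    d1 = div(E) + g * W.dot(B)
    d2 = div(B) - g * W.dot(E)
    c1 = curl(E) + sp.diff(B, t) + g * W.cross(B) + g * W0 * E
    c2 = curl(B) - sp.diff(E, t) - g * W.cross(E) + g * W0 * B
    return d1, d2, c1, c2

c, s = sp.cos(zeta), sp.sin(zeta)
d1, d2, c1, c2 = ops(E, B)
D1, D2, C1, C2 = ops(c*E + s*B, -s*E + c*B)
print(sp.simplify(D1 - (c*d1 + s*d2)))       # 0
print(sp.simplify(C1 - (c*c1 + s*c2)))       # zero vector
\end{verbatim}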
Notice that the Lagrangian density $L_{AAR}$ in (30) could be modified by adding a term $-2 {\bf J}_m .{\bf E}$ that changes (31) to
\begin{equation} -{\bf \nabla} \times {\bf E} = \frac{\partial {\bf B}}{\partial t} +{\bf J}_m \end{equation}
The proposed new generalization of the Lagrangian density (38) is
\begin{equation} L^N_\sigma = L_\sigma -2 ~ ^*F_{\sigma\mu} J^\mu _e + 2 F_{\sigma\mu} J^\mu_m \end{equation}
It can be verified that under the local duality rotation (36) and (37) and the simultaneous duality transformations of the current densities
\begin{equation} J^\mu_e ~ \rightarrow ~ J^\mu_e cos \zeta + J^\mu_m sin \zeta \end{equation}
\begin{equation} J^\mu_m ~ \rightarrow ~ -J^\mu_e sin \zeta + J^\mu_m cos \zeta \end{equation}
the new action
\begin{equation} I^N_\sigma = \int L^N_\sigma ~ d^4 x \end{equation}
is invariant. The Euler-Lagrange equations of motion derived from the action (49) using the variational principle \cite{9} are obtained as the generalization of Eqs. (39) and (40)
\begin{equation} \partial_\mu F^{\mu\nu} = g~ W_\mu ~^*F^{\mu\nu}+ J^\nu_e \end{equation}
\begin{equation} \partial_\mu ~^*F^{\mu\nu} = - g~ W_\mu~ F^{\mu\nu}+ J^\nu_m \end{equation}
The vector transcription of Eqs. (50) and (51) in terms of electric and magnetic field vectors is given by
\begin{equation} {\bf \nabla.E} = -g ~{\bf W.B} + \rho_e \end{equation}
\begin{equation} {\bf \nabla.B} = g~ {\bf W.E} +\rho_m \end{equation}
\begin{equation} {\bf \nabla} \times {\bf E} + \frac {\partial {\bf B}}{\partial t} + g {\bf W} \times {\bf B} +g W_0 {\bf E} =- {\bf J}_m \end{equation}
\begin{equation} {\bf \nabla} \times {\bf B} - \frac {\partial {\bf E}}{\partial t} - g {\bf W} \times {\bf E} +g W_0 {\bf B} = {\bf J}_e \end{equation}
The pseudo-four-vector field $W_\mu$ plays the role of a duality gauge field; could it be promoted to a genuine dynamical field? Any added kinetic term for this field must be invariant under the gauge transformation, and therefore it has to be similar in form to the electromagnetic field tensor. The suggested additional Lagrangian density is
\begin{equation} L^g_\sigma = -\frac {1}{4} W^{\mu\nu} W_{\mu\nu} C_\sigma \end{equation}
where
\begin{equation} W_{\mu\nu} = \partial_\mu W_\nu - \partial_\nu W_\mu \end{equation}
Here $C_\sigma$ is an arbitrary Lorentz four-vector field; it is not a constant vector. The field equations (50) and (51) remain unchanged while an infinitesimal variation $\delta W_\mu$ in the action gives rise to the following equation
\begin{equation} \partial_\nu (C_\sigma W^{\rho\nu}) = - g ( F^{\mu\rho} F_{\mu\sigma} + ^*F^{\mu\rho} ~ ^*F_{\mu\sigma}) \end{equation}
It is instructive to write the time components of (58) for the assumed gauge condition
\begin{equation} \partial_\mu W^\mu = 0 \end{equation}
\begin{equation} -C_0 \partial^\mu \partial_\mu W_0 + {\bf \nabla} C_0.({\bf \nabla} W_0 +\frac {\partial {\bf W}}{\partial t}) = g (E^2 + B^2) \end{equation}
\begin{equation} -C_0\partial^\mu \partial_\mu {\bf W} - \frac {\partial C_0}{\partial t}({\bf \nabla} W_0 +\frac {\partial {\bf W}}{\partial t})= 2g({\bf E} \times {\bf B}) \end{equation}
In a special case, assuming $C_0$ is constant, Eqs. (60) and (61) formally resemble Wilson's equations for the gravitational vector potential \cite{31}; since the source terms are energy-momentum densities, Wilson suggested a vector potential theory for gravitation. However such a theory disagrees with experiment; for example, it cannot account for the observed perihelion advance of the planet Mercury.
Further, $W_\mu$ is a pseudo-vector and, space reflection being a good symmetry in gravitation, we cannot identify it with the gravitational potential. Could it be a weak gauge boson? Parity violation in weak interactions and the existence of the neutral weak gauge boson $Z$ would tempt one to consider this possibility; however $Z$ is a massive particle. Could $C_0$ play some role in giving mass to $W_\mu$? We do not have answers to these questions at present. However there are interesting possibilities to apply the present theory to physical phenomena, discussed in the next section. \section{\bf Physical implications} Electric-magnetic local duality could be implemented in a neat form and in an unambiguous manner: it had been proved for the source-free case in \cite{8} using Sudbery's pseudo-four-vector Lagrangian density \cite{9} and was demonstrated in the Maxwell field equations in a Lorentz covariant form recently \cite{17}. In the present paper we have generalized Sudbery's action to a local duality invariant form in the presence of electric and magnetic sources and showed that the Euler-Lagrange equations obtained from the new action are consistent with the generalized Maxwell equations proposed in \cite{17}. The most uncomfortable part of the present theory seems to be the use of a pseudo-four-vector action rather than the traditional scalar action. It may be emphasized that relativistic invariance is maintained in this approach. On a positive note, the equations of motion derived from Sudbery's action using the variational principle represent the complete set of the experimentally established Maxwell field equations, thus providing an a posteriori justification for it. Moreover this theory offers an alternative approach in which the electromagnetic fields are the fundamental dynamical variables in the action. Could there be a more appealing conceptual justification to treat the action as a vector? We do not know for sure; a plausible argument could be made remembering, first, that the physical dimensions of action and angular momentum are identical, and, second, that energy, traditionally considered a scalar, becomes in the relativistic framework the time component of the energy-momentum four-vector. Perhaps we have a hint here that deserves serious attention and a search for concrete examples. As we have seen, Sudbery's theory is not a U(1) gauge theory and its quantization has not been attempted; in fact it is not known if quantization would succeed in this case. We have delineated subtle technical questions in Sec. III regarding the global duality symmetry in the standard action (23). It could be stated straightforwardly that in the U(1) gauge framework local duality cannot be implemented in the action formulation, in agreement with the conclusion arrived at in \cite{7, 13}. The literature on duality symmetry in superstrings and supergravity \cite{1, 2} and the elegant elucidation of SL(2, Z) symmetry in condensed matter and high energy physics by Fradkin and Kivelson \cite{4} show that the application of the new local duality invariant theory in them is not clear. Bunster and Henneaux \cite{7} in Sec. IV of their paper point out that gauging electric-magnetic duality in supergravity is not possible; since vector potentials are dynamical fields in this context, it is not immediately obvious if the present ideas could be useful.
The aforementioned drawbacks do not imply that the present theory is just a vacuous intellectual curiosity; rather, it holds promise for new developments in mathematical physics and has important physical implications. First, consider the Lagrangian density (33) for the complex vector field representation of electromagnetism: it could be generalized to implement local duality and possibly could be quantized following the approach given by Kobe \cite{27}. From the historical point of view we must mention that Wilson \cite{31}, independently of Anderson and Arthurs \cite{15} and Rosen \cite{16}, discovered the Lagrangian density (33) and also derived the local duality invariant Lagrangian and the Euler-Lagrange equations from it. We have pointed out that Noether's theorem applied to Sudbery's action \cite{9} leads to new conserved quantities. There is no general theory in which the unconventional proposition of a four-vector action, its symmetries and conservation laws are developed in the spirit of Noether's theorem; such work would open new avenues in mathematical physics. The most significant utility of the new theory would be in macroscopic classical phenomena; below we discuss two areas of current interest. {\bf The nature of polarized light} The enigma of the photon and its precise relationship with the properties of light, in particular polarization, continues to inspire new experiments in (quantum!) optics and foundational discourse on the physical reality of the photon. The present discussion is limited to certain aspects where the new generalized theory could be applicable to advance our understanding of them. First we mention the vacuum birefringence predicted by the Heisenberg-Euler effective Lagrangian for quantum electrodynamics. Though it has been known for a long time, it has recently acquired renewed interest, motivated by experiments and searches for elusive axions. Phenomenologically one can incorporate this effect in terms of polarization tensors and modified constitutive relations in the Maxwell equations; see \cite{32} for a concise discussion and references. Birefringence in material media is another area where intriguing features of polarized light propagation have emerged. Chiral media represent an important class of materials: a chiral medium has mirror asymmetry, and RCP and LCP light interact differently in such a medium. Optical rotation in a natural optically active medium can be understood in terms of circular birefringence. To probe the light-medium interaction one could study reflection of light from a plane boundary of a chiral-achiral interface, reflection from an achiral-chiral interface, and lastly reflection from an achiral-achiral interface. A nice comprehensive discussion of these problems can be found in \cite{33}; the authors Silverman and Badoz bring out the importance of great experimental ingenuity in such investigations; moreover, there are controversial issues pertaining to theoretical interpretation. What are the correct constitutive relations? What are the proper boundary conditions? Also related with them is the question of conserved quantities. In an excellent monograph \cite{34} Post presents a profound analysis of the general covariance of the Maxwell equations and, moving a step beyond general covariance, underlines the significance of their (i.e. of the Maxwell equations) natural invariance (metric independence). The role of symmetry in constructing constitutive relations becomes quite transparent.
In Chapter 6 of the monograph on the electron \cite{35}, different mathematical representations of the Maxwell equations and their modifications are reviewed; Imbert's shift and the Jones-Richards experiment are discussed in \cite{8}. Regarding the initial boundary value problem we refer to a notable contribution in \cite{36}. Duality has fundamental significance for chiral phenomena and polarized light, as discussed in Sec. III; see also \cite{8}. In the source-free vacuum case the local duality invariant Maxwell equations (41) - (44) can be re-written in terms of the macroscopic fields ${\bf D}$ and ${\bf H}$, defining new constitutive relations; these could be useful for a class of magneto-electric materials, see for example the references in \cite{28}. In the presence of a material medium the most general representation of the constitutive relations constrained by duality invariance can be obtained from the generalized Lagrangian density (46), introducing the polarization currents in addition to the source currents following the approach of \cite{17} \begin{equation} J^\mu_P = \partial_\nu P^{\nu\mu} \end{equation} \begin{equation} ^*J^\mu_P = \partial_\nu \, {}^*P^{\nu\mu} \end{equation} Here $^*P^{\mu\nu}$ is dual to the antisymmetric polarization tensor $P^{\mu\nu}$, and $P^{0i}$ and $P^{ij}$ correspond to the electric ${\bf P}$ and magnetic ${\bf M}$ polarization vectors respectively. The Euler-Lagrange equations (50) and (51) get modified to \begin{equation} \partial_\mu F^{\mu\nu} = g\, W_\mu \, {}^*F^{\mu\nu}+ J^\nu_e +J^\nu_P \end{equation} \begin{equation} \partial_\mu \, {}^*F^{\mu\nu} = - g\, W_\mu\, F^{\mu\nu}+ J^\nu_m + {}^*J^\nu_P \end{equation} In the absence of the sources $J^\mu_e$ and $J^\mu_m$, and for vanishing duality gauge potential $W_\mu$, it becomes straightforward to write the generalized Maxwell equations in terms of the tensor $G^{\mu\nu} = F^{\mu\nu} - P^{\mu\nu}$, which embodies the duality symmetric constitutive relations (a short explicit check is given below). Thus the generalized Maxwell field equations (50) - (51) and (64) - (65) give a unified picture of the propagation of electromagnetic waves in different kinds of media, including the birefringent vacuum. Topological properties of light have been a subject of intense research activity for more than two decades, especially those associated with geometric phases and vortices. The discovery of the Berry phase in nonrelativistic quantum mechanics in 1984, and the subsequent recognition \cite{37} that Pancharatnam in 1956 \cite{12} had observed a nontrivial geometric phase in polarization optics, stimulated enormous work in this field. A polarized light wave propagating along a fixed direction in space, subjected to a cycle in the polarization state space, acquires a geometric phase named after Pancharatnam. The Poincar\'e sphere is a nice geometric representation of the polarization state of light, and the geometric phase equals half of the solid angle subtended by the closed cycle. Mathematically the Pancharatnam phase can be understood as a consequence of the change in the direction of a vector parallel transported on the surface of the sphere. Is there any physical origin of this effect? Singular optics, or optical vortices, has roots in the topological defects of continuous fields: wavefront dislocations or phase singularities for scalar waves \cite{38}, and disclinations or polarization singularities for vector waves \cite{39}. In a recent article \cite{40} the experimental evidence relating angular momentum (AM), topological defects, and geometric phases in optics has been critically reviewed.
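To make explicit the rewriting in terms of $G^{\mu\nu}$ announced above, here is the short computation (our sketch; it uses nothing beyond the definitions just given). Setting $J^\nu_e = J^\nu_m = 0$ and $W_\mu = 0$ in the modified equations (64) - (65), and noting that the polarization current reads $J^\nu_P = \partial_\mu P^{\mu\nu}$, one gets \[ \partial_\mu F^{\mu\nu} = \partial_\mu P^{\mu\nu} \quad \Longrightarrow \quad \partial_\mu G^{\mu\nu} = \partial_\mu \left( F^{\mu\nu} - P^{\mu\nu} \right) = 0 , \] together with the dual equation $\partial_\mu \, {}^*G^{\mu\nu} = 0$, where ${}^*G^{\mu\nu} = {}^*F^{\mu\nu} - {}^*P^{\mu\nu}$; this is the familiar macroscopic form of the field equations in a medium, now constrained by duality symmetry.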
Another geometric phase, namely the spin redirection phase, results when a cyclic change is made in the momentum state space of the light wave; we have termed it the Rytov-Vladimirskii-Chiao-Wu (RVCW) phase \cite{40}. Interestingly, in the RVCW phase there is a sort of geometric circular birefringence. In 1992 we put forward the AM holonomy conjecture as a common physical origin for geometric phases in optics. Spin angular momentum (SAM) is related to the polarization of light, and since there is a sequence of polarization changes in the Pancharatnam phase it is natural to expect SAM exchange in this process. Cycles in momentum space would entail changes in the orbital angular momentum (OAM) of light, indicating the role of OAM exchange in the RVCW phase. The AM holonomy conjecture is founded on a re-interpretation of a well known result in standard electrodynamics that emanates from constructing gauge invariant and Lorentz covariant conserved quantities from the Noether conserved quantities. A very good discussion can be found in Corson's monograph \cite{20}, and for a shorter one see \cite{23}; curiously, Noether's theorem is not mentioned in either of the books. Consider the Maxwell action (23); Noether's theorem then shows that infinitesimal spacetime translational invariance leads to the conserved canonical energy-momentum tensor $E^{\mu\nu}$, and infinitesimal Lorentz transformation invariance gives a conserved third rank AM tensor $M^{\alpha\mu\nu}$. One can split the AM tensor into SAM and OAM parts \begin{equation} M^{\alpha\mu\nu} = - (E^{\alpha\mu} x^\nu -E^{\alpha\nu} x^\mu) - (F^{\alpha\mu} A^\nu - F^{\alpha\nu}A^\mu) = L^{\alpha\mu\nu} +S^{\alpha\mu\nu} \end{equation} Note that $E^{\mu\nu}$ and $M^{\alpha\mu\nu}$ are the natural Noether conserved quantities for the action (23); unfortunately these are not gauge invariant, the energy-momentum tensor is not symmetric, and the separation (66) is not Lorentz invariant. There is a well known prescription to construct a symmetric and gauge invariant energy-momentum tensor $T^{\mu\nu}$ and to use it to define a third rank AM tensor $J^{\alpha\mu\nu}$. These new quantities are believed to be the physically observable ones. The two AM tensors are related as \begin{equation} M^{\alpha\mu\nu} = J^{\alpha\mu\nu} - \partial_\lambda (F^{\lambda\alpha} (A^\mu x^\nu - A^\nu x^\mu)) \end{equation} In the usual way one argues that the integral of the total divergence term in (67) can be made to vanish, having no physical significance. In contrast to this, the AM holonomy conjecture is founded on an analogy with the Aharonov-Bohm effect: this term could make a topological contribution in nontrivial situations, and this manifests itself as a geometric phase. Could one give a firm foundation for this conjecture? The advancement in duality invariant theory presented in Sec. IV offers a possible approach to achieve this goal. Qualitative arguments \cite{8} based on the role of duality symmetry in polarized light phenomena \cite{5, 6} having a bearing on geometric phases, and a simple duality gauge theory \cite{11} generalizing Good's formulation \cite{10} to explain the Pancharatnam phase, establish a connection between duality symmetry and geometric phases in optics. Duality symmetry in Sudbery's theory \cite{9} surprisingly leads to the Noether conserved quantity $T^{\mu\nu}$. What is the physics behind it? This question led us to develop the local duality generalization of Sudbery's action in \cite{8}. It is plausible to argue that the duality gauge potential mediates AM transfer, as was first proposed in \cite{11}.
This is further supported by considering the Lagrangian density (38) and adding a topological term for $W^\mu$, similar to (13), that gives a topological effect; such a form was suggested by Sudbery in a private communication. To prove the AM holonomy conjecture, the total derivative term in (67) may have to be related to the duality gauge potential, and it may become necessary to analyze the technical aspects of dealing with two kinds of gauge potentials, $A^\mu$ and $W^\mu$. We do not know how to do this at present; however, in the light of an existing gauge theory for the Pancharatnam phase \cite{11} and a recent work \cite{41} that gives a gauge theoretic explanation for geometric phases in astigmatic optical modes, this tentative idea deserves serious attention. There is another intriguing result in Sudbery's theory, namely the appearance of a third rank tensor as a Noether conserved quantity for spacetime translational invariance that is related to Lipkin's tensor $Z^{\alpha\mu\nu}$. Lipkin, in section 8 of his paper \cite{19}, calculates the zilch density tensor $Z^{\alpha\mu\nu}$ for a plane monochromatic electromagnetic wave and finds that it depends on the polarization state of the wave. For a linearly polarized wave there is no transport of zilch, but for circularly polarized waves there is a flow of zilch at a rate proportional to the frequency of the wave, oppositely directed for LCP and RCP waves. It would be interesting to calculate $Z^{\alpha\mu\nu}$ for OAM bearing light beams \cite{42} and for those with radial and azimuthal polarization singularities \cite{39, 40}. If one discovers a classification scheme based on this, such a classification, along with the SAM and OAM characterization, may throw light on the recent, apparently counter-intuitive phenomena reported in the literature; see \cite{40} for references. {\bf Topological insulators} The subject of topological insulators is a fast developing and exciting field in condensed matter. Qi and Zhang \cite{43} begin their expository article on this subject by recalling the quantum Hall effect as the first distinctly topological system; interestingly, SL(2, Z) symmetry seems to have been applied first to this system \cite{4}. Recently Karch \cite{44} presented a duality perspective on topological insulators, and though no new result is given, the idea of duality in topological insulators is an attractive one. Noninteracting topological band theory and topological field theory are believed to describe topological insulators satisfactorily \cite{43}. For a three dimensional topological insulator an effective action that resembles (12) has been of great value in understanding its electromagnetism. Could one envisage it as a duality rotation induced effect? Focusing on the physics of topological insulators, in \cite{17} we have derived generalized local duality invariant Maxwell equations and shown that the topological magneto-electric effect and axion-like electrodynamics could be obtained from them. In the present paper the main theme is that of duality symmetry; however, the consistency of the Euler-Lagrange equations of motion derived here with the generalized Maxwell equations given in \cite{17} suggests a role of the duality invariant theory for topological insulators at a fundamental level, in which the duality gauge potential $W^\mu$, rather than the elusive axion, has physical significance. On the experimental side, optical studies of topological insulators are very interesting \cite{45}.
Recalling our discussion of chiral media interfaces investigated using polarized light \cite{33}, and the interesting features of optical vortices noted above, it would be desirable to probe the surfaces of topological insulators using such beams. In conclusion, we have presented a fresh outlook on global duality symmetry, developed a local duality invariant theory of electromagnetism in its most generalized form, and discussed the mathematical and physical implications of this theory. {\bf APPENDIX} Following responses from known and unknown physicists, we briefly address a question regarding the vector field $C^\mu$. The role of the field $C_\mu$ in the additional Lagrangian density (56) has to be clearly explained, keeping in mind the subtle aspects of relativistic invariance and superfluous gauge variables briefly presented in point B of Sec. II, following the treatments given in \cite{20, 29, 30}. The gauge potential $W_\mu$ is natural in a local duality gauge theory; unlike it, however, the field $C_\mu$ is a nondynamical auxiliary field. It is similar to the spinless fields $F$ and $G$ introduced in the Wess-Zumino supersymmetric action; see Section 1.8 of \cite{30}. Note that we are not discussing supersymmetry here; the point is just to draw attention to the existence of a nondynamical field variable. Auxiliary fields having no dynamics or kinetic energy terms are also well known in implementing BRST symmetry in gauge field theories. Thus we do not need an extension of expression (56) to make $C_\mu$ a dynamical field variable. It is important to realize that Eqs. (60) and (61) are also consistent with the usual gauge theoretic prescription. Global gauge symmetry leads to a conserved Noether current, and for local gauge invariance one obtains the conserved current as the source of the gauge field. In the present case, the global duality invariance of Sudbery's action gives rise to the energy-momentum as the conserved Noether current, and this current appears as the source term in Eqs. (60) and (61). More delicate is the issue of relativistic invariance at the level of the action principle; however, the equations of motion derived from it, namely (50), (51) and (58), are Lorentz covariant. Thus, within this limitation, the generalized local duality invariant theory developed here is complete. One could, of course, raise an interesting question: could we dispense with the necessity of an auxiliary field altogether? We answer in the affirmative and present an alternative. Instead of $L^g_\sigma$ given by expression (56), let us assume a term $L^{ga}_\sigma$ proportional to $\epsilon_{\mu\nu\rho\sigma} W^{\mu\nu} W^\rho$ for the additional Lagrangian density corresponding to the duality gauge field. Its similarity to the topological current (13) may be noted. This form was suggested by A. Sudbery in a private communication. It is important to note that the generalized Maxwell field equations (50) and (51) would remain unaltered (there would be no auxiliary field $C^\mu$). The equation of motion for the duality gauge field $W_\mu$ would differ from Eq. (58); however, none of the physical implications based on the generalized Maxwell equations discussed in the last section would change.
Though the auxiliary field $C^\mu$ gives somewhat more freedom in the equation of motion for the duality gauge field $W^\mu$, the Euler-Lagrange equation derived from the action with the alternative term $L^{ga}_\sigma$ is given below for the sake of completeness \begin{equation} 2\epsilon_{\alpha\beta\mu\sigma}\, \partial^\alpha W^\beta + \epsilon_{\alpha\beta\mu\sigma}\, W^{\alpha\beta} = g \left( F_\mu^{~\rho} F_{\rho\sigma} + {}^*F_\mu^{~\rho}\, {}^*F_{\rho\sigma} \right) \end{equation} The time component of the above equation gives an interesting result: the curl of the duality gauge field ${\bf W}$ is proportional to the Poynting vector. A new result presented in the paper on topological insulators \cite{17} needs to be re-emphasized in the context of duality: one could treat the constitutive relations as dual to the source currents; Eqs. (13) and (14) would be dual to Eqs. (17) and (18) in \cite{17}. The vector transcription of the generalized Maxwell equations, i.e., Eqs. (19) to (22) in \cite{17}, may also be corrected using the ones given in the present paper; of course, the results of \cite{17} would not change. {\bf Acknowledgements} I dedicate this paper to R. V. Jones, a great and insightful experimentalist in optics. I thank Professor A. Sudbery for his interest in this work, Dr. A. Karch for pointing out reference \cite{4}, and Professor E. Fradkin for encouraging communications. The library facility at Banaras Hindu University, Varanasi is acknowledged.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \subsection{Galton--Watson forests and their scaling limits.} A planar discrete rooted tree is a rooted tree where edges have unit length and which is endowed with an ordering on siblings, in such a way that it can be naturally embedded in the plane. Since the seminal work of Aldous, Neveu, Pitman and others~\cite{Aldous91:0, Aldous91:1, Aldous93:0, Le-Gall93:0, Neveu89:0, Neveu89:1}, it is well known that such a tree is conveniently encoded by its height and contour processes. To generate these processes, one can envision a particle starting from the root and traveling along the edges of the tree at unit speed, from left to right. The contour process is simply constructed by recording the distance of the particle from the root of the tree. To generate the height process, we start by labeling the vertices of the tree according to their order of visit by the exploration particle (i.e., from left to right): the height process evaluated at $k$ is then given by the distance from the root to the $k$th vertex. From a probabilistic standpoint, a particularly interesting case is the Galton--Watson case where each individual $u$ in the tree begets a random number of offspring $\xi_u$, these random variables being i.i.d.\ with common distribution $\xi$. In the critical and subcritical cases -- i.e., when ${\mathbb{E}}(\xi)\leq 1$ -- the tree is almost surely finite. Considering an infinite sequence of such i.i.d.\ random rooted planar trees, we can generate a random (planar) forest with its corresponding contour and height processes -- respectively denoted by $\mathcal C$ and ${\mathcal H}$ -- obtained by pasting sequentially the height and contour processes of the trees composing the forest. When ${\mathbb{E}}(\xi^2)<\infty$, Aldous~\cite{Aldous93:0} proved that the large time behavior of those processes (properly normalized in time and space) can be described in terms of a reflected Brownian motion. More precisely, in the critical case ${\mathbb{E}}(\xi)=1$ and if $0<\sigma^2 = \mbox{Var}(\xi)<\infty$, we have \[ \left( \frac{1}{\sqrt{p}} {\mathcal H}([pt]), \frac{1}{\sqrt{p}} {\mathcal C}(pt) \right) \ \Longrightarrow \ \frac{2}{\sigma} \ \left( |w(t)|, |w(t/2)| \right) \] where $w$ is a standard Brownian motion and the convergence holds weakly (in the functional sense). When the second moment of the offspring distribution is infinite and the offspring distribution is in the domain of attraction of an $\alpha$-stable law with $\alpha\in(1,2)$, Le Gall and Le Jan~\cite{Le-Gall98:0} and then Duquesne and Le Gall~\cite{Duquesne02:0} proved the existence of a scaling sequence $(\varepsilon_p, p\in{\mathbb{N}})$ and a limiting continuous path ${\mathcal H}_\infty$ such that \[ \left( \varepsilon_p {\mathcal H}([pt]), \varepsilon_p {\mathcal C}(pt) \right) \ \Longrightarrow \ \left( {\mathcal H}_\infty(t), {\mathcal H}_\infty(t/2) \right) \] where ${\mathcal H}_\infty$ can be expressed as a functional of a spectrally positive L\'evy process. As in the finite second moment case alluded to above, we note that the height and contour processes are asymptotically related by a simple deterministic and constant time change. \subsection{Crump-Mode-Jagers forests} The subject of the present paper is the study of the height and contour processes of planar Crump--Mode--Jagers (CMJ) forests, which are random instances of \emph{chronological forests}.
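For concreteness, the discrete encodings described above are easily implemented. The following minimal Python sketch -- our own illustration, not part of the constructions of the references -- simulates the Lukasiewicz path $S$ of a critical Galton--Watson forest from the offspring numbers read in depth-first order, and computes the genealogical height process by counting ancestors; the counting rule ${\mathcal H}(n) = \#\{k < n: S(k) \leq \min_{k < j \leq n} S(j)\}$ is the classical reformulation, via the dual walk, of the ladder-time identity~\eqref{eq:identity-H} recalled in Section~\ref{sec:dec}. \begin{verbatim}
import random

def lukasiewicz(offspring):
    # S(0) = 0 and S(n+1) - S(n) = xi_n - 1
    S = [0]
    for xi in offspring:
        S.append(S[-1] + xi - 1)
    return S

def height_process(offspring):
    # H(n) = #{k < n : S(k) <= min S over (k, n]}, the number of
    # ancestors of the n-th vertex in depth-first order
    S = lukasiewicz(offspring)
    H = []
    for n in range(len(offspring)):
        running_min, h = S[n], 0
        for k in range(n - 1, -1, -1):
            if S[k] <= running_min:
                h += 1
            running_min = min(running_min, S[k])
        H.append(h)
    return H

def critical_geometric():
    # P(xi = k) = 2^{-(k+1)}: mean one, finite variance
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

H = height_process([critical_geometric() for _ in range(1000)])
\end{verbatim} Plotting $p^{-1/2} H([pt])$ for large $p$ exhibits the reflected Brownian limit of the display above; the quadratic scan is kept for readability only.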
Chronological trees generalize discrete trees in the following way: each individual $u$ is endowed with a pair $(V_u, {\mathcal P}_u)$ such that: \begin{enumerate} \item[(1)] $V_u \in (0,\infty)$ represents the life-length of $u$; \item[(2)] ${\mathcal P}_u$ is a point measure which represents the age of $u$ at childbearing. In particular, we enforce $\mbox{Supp}({\mathcal P}_u)\subset (0,V_u]$, so that individuals produce their offspring during their lifetime. \end{enumerate} Note that $\lvert {\mathcal P}_u \rvert = {\mathcal P}_u(0,V_u]$ is the number of children of $u$. As noted by Lambert in \cite{Lambert10:0}, a chronological tree can be regarded as a tree satisfying the rule ``edges always grow to the right''. This is illustrated in Figure~\ref{fig:sequential-construction} where we present a sequential construction of a planar chronological forest from a sequence of ``sticks'' $\omega=(\omega_n, n\geq0)$, where $\omega_n=(V_n, {\mathcal P}_n)$. \begin{figure}[p!] \centering \begin{tikzpicture} [color=black] \begin{scope}[shift={(-.5,0)}] \node[anchor=north] at (0,0) {$n = 0$}; \end{scope} \begin{scope}[shift={(0,0)}] \draw (0,0) -- (0,2); \stub{1.5} \stub{.5} \node[anchor=north] at (1,0) {$n = 1$}; \end{scope} \begin{scope}[shift={(3,0)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \stub{.5} \end{scope} \stub{.5} \node[anchor=north] at (.5,0) {$n = 2$}; \end{scope} \begin{scope}[shift={(6,0)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \stub{.9} \end{scope} \stub{.5} \end{scope} \stub{.5} \node[anchor=north] at (1,0) {$n = 3$}; \end{scope} \begin{scope}[shift={(9,0)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \stub{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \stub{.5} \end{scope} \stub{.5} \node[anchor=north] at (1.5,0) {$n = 4$}; \end{scope} \begin{scope}[shift={(0,-6)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \stub{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \stub{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \stub{.5} \node[anchor=north] at (2,0) {$n = 5$}; \end{scope} \begin{scope}[shift={(5,-6)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \stub{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \stub{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \stub{.5} \draw[dashed] (0,.5) -- (5,.5); \begin{scope}[shift={(5,.5)}] \draw (0,0) -- (0,4); \stub{3.5} \stub{2.5} \stub{1} \end{scope} \node[anchor=north] at (2.5,0) {$n = 6$}; \end{scope} 
\begin{scope}[shift={(1,-13.5)}] \draw (0,0) -- (0,2); \stub{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \stub{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \stub{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \stub{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \stub{.5} \draw[dashed] (0,.5) -- (5,.5); \begin{scope}[shift={(5,.5)}] \draw (0,0) -- (0,4); \stub{3.5} \draw[dashed] (0,3.5) -- (1,3.5); \begin{scope}[shift={(1,3.5)}] \draw (0,0) -- (0,2); \end{scope} \stub{2.5} \draw[dashed] (0,2.5) -- (2,2.5); \begin{scope}[shift={(2,2.5)}] \draw (0,0) -- (0,1); \end{scope} \stub{1} \draw[dashed] (0,1) -- (3,1); \begin{scope}[shift={(3,1)}] \draw (0,0) -- (0,1); \stub{1} \draw[dashed] (0,1) -- (1,1); \begin{scope}[shift={(1,1)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \end{scope} \node[anchor=north] at (4,0) {$n = 10$}; \end{scope} \end{tikzpicture} \caption{We start at $n = 0$ with nothing, then add $\omega_0$ at time $n=1$. At this time, there are two stubs and so the next stick $\omega_1$ is grafted to the highest stub, and we repeat until time $n = 10$ at which time no more stub is available and the tree is built. Then, the next step proceeds with the construction of the next tree, thus constructing the second tree of the forest, etc.} \label{fig:sequential-construction} \end{figure} At time $n = 0$ we start with the empty forest and we add the stick $\omega_0$ at time $n = 1$. In the case considered in Figure~\ref{fig:sequential-construction}, ${\mathcal P}_0$ has two atoms which correspond to birth times of individuals, but these two atoms are not yet matched with the sticks corresponding to these individuals. These unmatched atoms are called \emph{stubs}, and when there is at least one stub we apply the following rule: \begin{description} \item[Rule \#$1$] if there is at least one stub, we graft the next stick to the highest stub. \end{description} Thus, we iteratively apply this rule until there is no more stub, at which point we have built a complete chronological tree with a natural planar embedding. Figure~\ref{fig:sequential-construction} illustrates a particular case where at time $10$ there is no more stub, in which case we apply the following rule: \begin{description} \item[Rule \#$2$] if there is no stub, we start a new tree with the next stick. \end{description} Thus, starting at time $n = 0$ from the empty forest and iterating these two rules, we build in this way a forest ${\mathbb{F}}^\infty$, possibly consisting of infinitely many chronological trees. By definition, a CMJ forest is obtained when the initial sticks are i.i.d., and throughout the paper we will denote their common distribution by $(V^*,{\mathcal P}^*)$. \subsection{ Chronological height and contour processes of CMJ forests} As for discrete trees, the contour process of a CMJ forest is obtained by recording the position of an exploration particle traveling at unit speed along the edges of the forest from left to right, moving, when a chronological tree is represented as in Figure~\ref{fig:sequential-construction}, at infinite speed along dashed lines. This process will be referred to as the {\it chronological} contour process associated to the CMJ forest, and the chronological height of the $n$th individual is defined as its date of birth. 
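Rules~\#$1$ and~\#$2$ translate directly into an algorithm. The following minimal Python sketch -- again our own illustration; the stick distribution used at the end is an arbitrary example, not imposed by the construction -- computes the successive birth dates, i.e., the chronological heights just defined. It relies on the observation that every stub created when grafting a stick lies strictly above all stubs still available, so that the stubs can be maintained as a stack whose top is always the highest one. \begin{verbatim}
import random

def chronological_heights(sticks):
    # sticks[n] = (V_n, P_n), with P_n the list of ages in (0, V_n]
    # at which individual n gives birth; returns the birth dates
    stubs, births = [], []
    for V, P in sticks:
        if stubs:                 # Rule #1: graft to the highest stub
            b = stubs.pop()
        else:                     # Rule #2: start a new tree
            b = 0.0
        births.append(b)
        for age in sorted(P):     # push in increasing order: the new
            stubs.append(b + age) # highest stub ends on top of the stack
    return births

def random_stick():
    # an example stick law: exponential life-length and Poissonian
    # birth events along the stick
    V = random.expovariate(1.0)
    P, t = [], random.expovariate(1.0)
    while t <= V:
        P.append(t)
        t += random.expovariate(1.0)
    return (V, P)

sticks = [random_stick() for _ in range(1000)]
births = chronological_heights(sticks)
\end{verbatim} With this stick law ${\mathbb{E}} \lvert {\mathcal P}^* \rvert = {\mathbb{E}}(V^*) = 1$, a critical case; it is an instance of the binary, homogeneous setting recalled in the next paragraph.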
We define the genealogical contour and height processes as the contour and height processes associated to the discrete forest encoding the genealogy of ${\mathbb{F}}^\infty$. Contour processes of CMJ forests have been considered by Lambert in~\cite{Lambert10:0} in the particular setting where birth events are distributed in a Poissonian way along the sticks, independently of the life-length -- the so-called binary, homogeneous case. Under this assumption, the author showed that the (jumping) contour process is a spectrally positive L\'evy process. See also~\cite{Delaporte:15, Felipe:15, Lambert15:0, Lambert13:0, Mathieu:13, Richard:14} for related works. To our knowledge, little is known in the general case, and in the present study we determine in full generality: \begin{enumerate} \item[(1)] the distribution of the contour/height process of a CMJ forest; \item[(2)] the correlation between the height/contour process of a CMJ forest and the height/contour process of its underlying genealogy. \end{enumerate} One of our first results is a description of the one-dimensional marginals of the height process of a CMJ forest in terms of a bivariate renewal process. This two-dimensional process is constructed as a random functional of the weak ascending ladder height process associated to the dual Lukasiewicz path starting from $n$. This is the subject of Section~\ref{sec:dec}. \subsection{Scaling limits} In the near-critical case it is well-known that, properly scaled in time and space, the genealogical height and contour processes associated to Galton--Watson trees converge toward a continuous process. To the best of our knowledge, little is known outside the binary, homogeneous case: we claim that our results highlighting the distribution of the chronological height process can be used to deal with a broad class of CMJ forests. To support this claim, we treat in detail in the present paper the case of short edges, where the genealogical and chronological structures become deterministically proportional to one another. Moreover, current work in progress~\cite{Schertzer:2} suggests that our techniques can be extended to a broader class of CMJ forests, including cases where the genealogical and chronological structures are not deterministically obtained from one another; see Section~\ref{sub:perspectives} below for more details. \\ To explain our results in the short edge case, let ${\mathbb{Y}}^*$ be the random number obtained by first size-biasing the random variable $\lvert {\mathcal P}^* \rvert$ (i.e., the number of atoms in the point measure ${\mathcal P}^*$) and then by recording the age of the individual when giving birth to a randomly chosen child. The mean of ${\mathbb{Y}}^*$ has a simple expression, namely \[ {\mathbb{E}}({\mathbb{Y}}^*) = {\mathbb{E}} \left( \int u {\mathcal P}^*(\mathrm{d} u) \right). \] As noticed by Nerman~\cite{Nerman84:0}, this random variable describes the age of an ancestor of a typical individual $u$ when giving birth to the next ancestor of $u$. For this reason, ${\mathbb{Y}}^*$ -- and in particular the condition ${\mathbb{E}}({\mathbb{Y}}^*) < \infty$, which is one way to formalize the ``short edge'' condition -- plays a major role in previous works on CMJ processes, see for instance~\cite{Sagitov86:0, Sagitov90:0, Sagitov94:0, Sagitov94:1, Sagitov95:0, Sagitov97:0}.
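As a concrete check of this description, one can sample ${\mathbb{Y}}^*$ by size-biased selection from a pool of i.i.d.\ sticks. The short Python sketch below (our illustration, reusing \texttt{random\_stick} from the sketch above) assumes the critical normalization ${\mathbb{E}} \lvert {\mathcal P}^* \rvert = 1$, under which the displayed expression for the mean applies with no further normalization: \begin{verbatim}
import itertools, random

pool = [random_stick() for _ in range(100000)]
cum = list(itertools.accumulate(len(P) for _, P in pool))

def sample_Ystar():
    # size-bias a stick by its number of atoms |P*|, then record
    # the age at a uniformly chosen atom of the selected measure
    V, P = random.choices(pool, cum_weights=cum)[0]
    return random.choice(P)

empirical = sum(sample_Ystar() for _ in range(100000)) / 100000
first_moment = sum(sum(P) for _, P in pool) / len(pool)
print(empirical, first_moment)  # both estimate E(Y*) when E|P*| = 1
\end{verbatim} The two printed numbers should agree up to Monte Carlo error, in line with the interpretation of ${\mathbb{Y}}^*$ given above.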
In the present paper we prove that if ${\mathbb{E}}({\mathbb{Y}}^*) < \infty$, then in the near-critical regime the asymptotic behavior of the chronological height process is obtained by stretching the genealogical height process by the deterministic factor ${\mathbb{E}}({\mathbb{Y}}^*)$. This result is stated and proved in Section~\ref{sect:cv-height}. The analysis of the contour process is more delicate (see Section~\ref{tech:challenges} below for more details). Our main result shows that when ${\mathbb{E}}(V^*)<\infty$ -- another way to formalize the ``short edge'' condition -- the chronological contour process is obtained from the chronological height process by rescaling time by the deterministic factor $1/(2{\mathbb{E}}(V^*))$. Hence, again provided that edges are short enough, this result provides a relation between the height and contour processes which is analogous to the discrete case. This result is stated in Section~\ref{sec:cv:contour}, where the general structure of the proof is given, and details are provided in Section~\ref{sec:proofs}. Finally, we prove that when both ${\mathbb{Y}}^*$ and $V^*$ have finite means, the minimum of the chronological contour process is obtained by scaling the minimum of the genealogical height process, in space by ${\mathbb{E}}({\mathbb{Y}}^*)$ and in time by $1/{\mathbb{E}}(V^*)$. This shows that the genealogical and chronological trees, and not only the height/contour processes, are asymptotically close to one another. In particular, under these assumptions the CMJ trees themselves converge in the sense of finite-dimensional distributions. \subsection{Technical challenges}\label{tech:challenges} As already discussed, Duquesne and Le Gall~\cite{Duquesne02:0} showed under rather mild conditions that the contour and height processes of Galton--Watson trees converge weakly to a continuous function. In the CMJ framework, we establish convergence in the sense of finite-dimensional distributions to a limiting object provided that edges are short enough. In Section~\ref{sec:example} we present simple examples where the finite-dimensional distributions of the scaled contour and height processes converge, but the processes themselves fail to converge in a functional sense. To be more precise, in these examples the contour process becomes unbounded on any finite time-interval. This gap between convergence of finite-dimensional distributions and weak convergence also exists in the Galton--Watson case; however, we argue in Section~\ref{sec:example} that it is more significant in the CMJ case. The main steps of the proof of our result on the relation between the contour and height processes in the case of short edges (i.e., when ${\mathbb{E}}(V^*)<\infty$) are highlighted in Section~\ref{sec:overview}. Due to the potential existence of pathological times at which the contour/height process becomes degenerate (as illustrated by the example in Section~\ref{sec:example}), the convergence of the contour process raises new technical challenges that are absent in the discrete setting. In order to overcome those difficulties, we develop new tools presented in Sections~\ref{sec:preliminary-results} and~\ref{sec:proofs}. \subsection{Perspectives} \label{sub:perspectives} The present paper aims at initiating the systematic study of scaling limits of CMJ forests. Most of the present paper is devoted to developing fundamental tools which, we believe, have the potential to tackle a broad class of CMJ forests and which will be the basis of subsequent papers.
The cornerstone of our approach is Proposition~\ref{prop:formula-H} below, which indicates how to recover a CMJ forest from its underlying genealogy by a random stretching. At the discrete level, this stretching operation is correlated with the genealogical structure in intricate ways, but it suggests three possible universality classes: \begin{description} \item[First class] the random stretching becomes asymptotically deterministic (this is the class to which Galton--Watson forests belong); \item[Second class] the stretching remains random in the limit, but uncorrelated with the genealogy; \item[Third class] the stretching remains random and correlated with the underlying genealogical structure. \end{description} To show the potential of our techniques, we deal in the present paper with the first class, which corresponds to the ``short edge'' condition discussed earlier. In current work in progress~\cite{Schertzer:2} we are dealing with the second class. Starting from the limiting genealogical structure, encoded by a continuous path, the chronological height process is obtained by marking the branches of the forest with a Poisson point process: each mark carries a random number encoding the chronological contribution of the vertex under consideration. We conjecture that the limiting object should be related to the Poisson snake (see e.g., \cite{Abraham02:0} and \cite{Bertoin97:2}). Finally, studying the third class will presumably require new ideas given that the correlation structure may be quite involved: this will be the subject of further study. \section{Spine, height and contour processes} \label{sec:presentation} In this section, we introduce the spine process, which can be thought of as a generalization of the exploration process first defined by Le Gall and Le Jan in \cite{Le-Gall98:0}. The idea underlying the definition relies on the decomposition of the ``spine'' -- or ``ancestral line'' -- lying below the point of the tree corresponding to the birth of the $n$th individual. In the $n$th step of the sequential construction presented in Figure \ref{fig:sequential-construction}, this corresponds to the path in the forest starting from the root and reaching up to $n$ (which also corresponds to the right-most path in the planar forest constructed at step $n$). As can be seen from the figure, this path is naturally decomposed into finitely many segments that correspond to each ancestor's contribution to the spine. The spine process at $n$ is then defined as a sequence of measures that encodes this decomposition. More precisely, we start by labeling ancestors from highest to lowest. Then, the $k$th element of the spine process (evaluated at $n$) is simply the measure that records the location of the stubs on the $k$th segment -- crosses in Figure \ref{fig:sequential-construction-with-spine} -- and the age of the $k$th ancestor upon giving birth to the $(k-1)$st ancestor -- circles in Figure \ref{fig:sequential-construction-with-spine}. \begin{figure}[p!]
\captionsetup{singlelinecheck=off} \centering \begin{tikzpicture} [color=black] \begin{scope}[shift={(-.5,0)}] \node[anchor=north] at (0,0) {$n = 0$}; \end{scope} \begin{scope}[shift={(1,0)}] \draw (0,0) -- (0,2); \laststub{1.5} \stub{.5} \node[anchor=north] at (0,0) {$n = 1$}; \end{scope} \begin{scope}[shift={(3,0)}] \draw (0,0) -- (0,2); \ancestor{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \laststub{1.2} \stub{.5} \end{scope} \stub{.5} \node[anchor=north] at (.5,0) {$n = 2$}; \end{scope} \begin{scope}[shift={(6,0)}] \draw (0,0) -- (0,2); \ancestor{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \ancestor{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \laststub{.9} \end{scope} \stub{.5} \end{scope} \stub{.5} \node[anchor=north] at (1,0) {$n = 3$}; \end{scope} \begin{scope}[shift={(9,0)}] \draw (0,0) -- (0,2); \ancestor{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \done{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \done{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \laststub{.5} \end{scope} \stub{.5} \node[anchor=north] at (1.5,0) {$n = 4$}; \end{scope} \begin{scope}[shift={(0,-6)}] \draw (0,0) -- (0,2); \done{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \done{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \done{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \done{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \laststub{.5} \node[anchor=north] at (2,0) {$n = 5$}; \end{scope} \begin{scope}[shift={(5,-6)}] \draw (0,0) -- (0,2); \done{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \done{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \done{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \done{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \ancestor{.5} \draw[dashed] (0,.5) -- (5,.5); \begin{scope}[shift={(5,.5)}] \draw (0,0) -- (0,4); \laststub{3.5} \stub{2.5} \stub{1} \end{scope} \node[anchor=north] at (2.5,0) {$n = 6$}; \end{scope} \begin{scope}[shift={(1,-13.5)}] \draw (0,0) -- (0,2); \done{1.5} \draw[dashed] (0,1.5) -- (1,1.5); \begin{scope}[shift={(1,1.5)}] \draw (0,0) -- (0,1.5); \done{1.2} \draw[dashed] (0,1.2) -- (1,1.2); \begin{scope}[shift={(1,1.2)}] \draw (0,0) -- (0,1.5); \done{.9} \draw[dashed] (0,.9) -- (1,.9); \begin{scope}[shift={(1,.9)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \done{.5} \draw[dashed] (0,.5) -- (3,.5); \begin{scope}[shift={(3,.5)}] \draw (0,0) -- (0,2); \end{scope} \end{scope} \done{.5} \draw[dashed] (0,.5) -- (5,.5); \begin{scope}[shift={(5,.5)}] \draw (0,0) -- (0,4); \done{3.5} \draw[dashed] (0,3.5) -- (1,3.5); \begin{scope}[shift={(1,3.5)}] \draw (0,0) -- (0,2); \end{scope} \done{2.5} \draw[dashed] (0,2.5) -- (2,2.5); \begin{scope}[shift={(2,2.5)}] \draw (0,0) -- (0,1); \end{scope} \done{1} \draw[dashed] (0,1) -- (3,1); \begin{scope}[shift={(3,1)}] \draw (0,0) -- (0,1); \done{1} \draw[dashed] (0,1) -- (1,1); 
\begin{scope}[shift={(1,1)}] \draw (0,0) -- (0,1); \end{scope} \end{scope} \end{scope} \node[anchor=north] at (4,0) {$n = 10$}; \end{scope} \end{tikzpicture} \caption{Same construction as in Figure~\ref{fig:sequential-construction}, but now with the spine highlighted in thick line. This allows us to differentiate three kinds of atoms: \begin{description}[leftmargin=20pt] \item[Cross] represents a stub and corresponds to an atom on the spine whose subtree has not been explored yet; \item[Circle] represents an atom on the spine whose subtree is being explored; \item[Square] represents an atom whose subtree has been explored and that is no longer on the spine. \end{description}} \label{fig:sequential-construction-with-spine} \end{figure} \subsection{Notation} Let ${\mathbb{Z}}$ denote the set of integers and ${\mathbb{N}}$ the set of non-negative integers. For $x \in {\mathbb{R}}$ let $[x] = \max\{n \in {\mathbb{Z}}: n \leq x\}$ and $x^+ = \max(x, 0)$ be its integer and positive parts, respectively. If $A \subset {\mathbb{R}}$ is a finite set we denote by $\lvert A \rvert$ its cardinality. Throughout we adopt the convention $\max \emptyset = \sup \emptyset = -\infty$, $\min \emptyset = \inf \emptyset = +\infty$ and $\sum_{k=a}^b u_k = 0$ if $b < a$, with $(u_k)$ any real-valued sequence. \subsubsection{Measures} Let ${\mathcal M}$ be the set of finite point measures on $(0,\infty)$ endowed with the weak topology, $\epsilon_x \in {\mathcal M}$ for $x > 0$ be the Dirac measure at $x$ and ${\bf{z}}$ be the zero measure, the only measure with mass $0$. For a measure $\nu \in {\mathcal M}$ we denote its mass by $\lvert \nu \rvert = \nu(0,\infty)$ and the supremum of its support by $\pi(\nu) = \inf \{ x > 0: \nu(x, \infty) = 0 \}$, with the convention $\pi({\bf{z}}) = 0$. For $k \in {\mathbb{N}}$ we define $\Upsilon_k(\nu) \in {\mathcal M}$ as the measure obtained by removing the $k$ largest atoms of $\nu$, i.e., $\Upsilon_k(\nu) = {\bf{z}}$ for $k \geq \lvert \nu \rvert$ and, writing $\nu = \sum_{i=1}^{\lvert \nu \rvert} \epsilon_{a(i)}$ with $0 < a(\lvert \nu \rvert) \leq \cdots \leq a(1)$, $\Upsilon_k(\nu) = \sum_{i=k+1}^{\lvert \nu \rvert} \epsilon_{a(i)}$ for $k = 0, \ldots, \lvert \nu \rvert-1$. \subsubsection{Finite sequences of measures} We let ${\mathcal M}^* = \cup_{n \in {\mathbb{N}}} ({\mathcal M} \setminus \{{\bf{z}}\})^n$ be the set of finite sequences of non-zero measures in ${\mathcal M}$. For $Y \in {\mathcal M}^*$ we denote by $\length{Y}$ the only integer $n \in {\mathbb{N}}$ such that $Y \in ({\mathcal M} \setminus \{{\bf{z}}\})^n$, which we call the length of $Y$, and identify ${\bf{z}}$ with the only sequence of length $0$. For two sequences $Y_1 = (Y_1(1), \ldots, Y_1(H_1))$ and $Y_2 = (Y_2(1), \ldots, Y_2(H_2))$ in ${\mathcal M}^*$ with lengths $H_1, H_2 \geq 1$, we define $[Y_1, Y_2] \in {\mathcal M}^*$ as their concatenation: \[ [Y_1, Y_2] = \big( Y_1(1), \ldots, Y_1(H_1), Y_2(1), \ldots, Y_2(H_2) \big). \] Further, by convention we set $[{\bf{z}}, Y] = [Y, {\bf{z}}] = Y$ for any $Y \in {\mathcal M}^*$ and we then define inductively \[ [Y_1, \ldots, Y_N] = \big[ [Y_1, \ldots, Y_{N-1}], Y_N \big], \ N \geq 2. \] Note that, with these definitions, we have $\length{[Y_1, \ldots, Y_N]} = \length{Y_1} + \cdots + \length{Y_N}$ for any $N \geq 1$ and $Y_1, \ldots, Y_N \in {\mathcal M}^*$.
Identifying a measure $\nu \in {\mathcal M} \setminus \{{\bf{z}}\}$ with the sequence of length one $(\nu) \in {\mathcal M}^*$, the above definitions give sense to, say, $[Y, \nu]$ with $Y \in {\mathcal M}^*$ and $\nu \in {\mathcal M} \setminus \{{\bf{z}}\}$. The operator $\pi$ defined on ${\mathcal M}$ is extended to ${\mathcal M}^*$ through the relation \[ \pi(Y) = \sum_{k=1}^{\length{Y}} \pi(Y(k)), \ Y = (Y(1), \ldots, Y(\length{Y})) \in {\mathcal M}^*. \] Recalling the convention $\sum_{k=1}^0 = 0$, we see that $\pi({\bf{z}}) = 0$ and further, it follows directly from the above relation that $\pi([Y_1, \ldots, Y_N]) = \pi(Y_1) + \cdots + \pi(Y_N)$. \subsubsection{Measurable space} \label{subsub:space} We define ${\mathbb{L}} = \{ (v, \nu) \in (0,\infty) \times {\mathcal M}: v \geq \pi(\nu) \}$ and call an element $s \in {\mathbb{L}}$ either a \emph{stick} or a \emph{life descriptor}. We work on the measurable space $(\Omega, \mathcal{F})$ with $\Omega = {\mathbb{L}}^{\mathbb{Z}}$ the space of doubly infinite sequences of sticks and $\mathcal{F}$ the $\sigma$-algebra generated by the coordinate mappings. An elementary event $\omega \in \Omega$ is written as $\omega = (\omega_n, n \in {\mathbb{Z}})$ and $\omega_n = (V_n, {\mathcal P}_n)$. For $n \in {\mathbb{Z}}$ we consider the three operators $\theta_n, \vartheta^n, {\mathcal G}: \Omega \to \Omega$ defined as follows: \begin{itemize} \item $\theta_n$ is the shift operator, defined by $\theta_n(\omega) = (\omega_{n + k}, k \in {\mathbb{Z}})$; \item $\vartheta^n$ is the dual (or time-reversal) operator, defined by $\vartheta^n(\omega) = (\omega_{n - k - 1}, k \in {\mathbb{Z}})$; \item ${\mathcal G}$ is the genealogical operator, mapping the sequence $((V_n, {\mathcal P}_n), n \in {\mathbb{Z}})$ to the sequence $((1, \lvert {\mathcal P}_n \rvert \epsilon_1), n \in {\mathbb{Z}})$. \end{itemize} We say that a mapping $\Gamma: \Omega \to {\mathfrak X}$ (with ${\mathfrak X}$ an arbitrary measurable space) is a genealogical mapping if it is invariant by the genealogical operator, i.e., if $\Gamma \circ {\mathcal G} = \Gamma$. The shift and dual operators are related by the following relations: \begin{equation} \label{eq:sigma-dual} \vartheta^m \circ \vartheta^n = \theta_{n - m} \ \text{ and } \ \vartheta^n \circ \theta_m = \vartheta^{n+m}, \ m,n \in {\mathbb{Z}}, \end{equation} and for any random time $\Gamma: \Omega \to {\mathbb{Z}}$ we have \begin{equation} \label{eq:dual-relations} {\mathcal P}_\Gamma \circ \vartheta^n = {\mathcal P}_{n - 1 - \Gamma \circ \vartheta^n}. \end{equation} \subsection{Spine, height and contour processes} \label{sub:exploration-process} We now proceed to a formal definition of the various processes which will be studied. \subsubsection{Spine process} Consider the operator $\Phi: {\mathcal M}^* \times {\mathcal M} \to {\mathcal M}^*$ defined for $\nu \in {\mathcal M}$ and $Y = (Y(1), \ldots, Y(\length{Y})) \in {\mathcal M}^*$ by \begin{equation} \label{eq:Phi} \Phi(Y, \nu) = \begin{cases} [Y, \nu] & \text{ if } \nu \neq {\bf{z}},\\ \big( Y(1), \ldots, Y(H-1), \Upsilon_1(Y(H)) \big) & \text{ if } \nu = {\bf{z}} \ \text{ and } \ H \geq 1,\\ {\bf{z}} & \text{ else}, \end{cases} \end{equation} where $H = \max \{k \geq 1: \lvert Y(k) \rvert \geq 2\}$. Note that by definition, we have $\Phi(Y, \nu) \in {\mathcal M}^*$ for $Y \in {\mathcal M}^*$ and $\nu \in {\mathcal M}$ and that further, if $\nu \neq {\bf{z}}$ then $\Phi(Y, \nu) \neq {\bf{z}}$.
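To fix ideas, here is a minimal Python transcription of the operator $\Phi$ -- our own illustration; a measure is represented by the sorted list of its atoms, the zero measure ${\bf{z}}$ by the empty list, and the last element of a sequence corresponds to the highest segment. The last function iterates the dynamic ${\mathbb{S}}^{n+1}_0 = \Phi({\mathbb{S}}^n_0, {\mathcal P}_n)$ introduced in \eqref{eq:dynamic-spine} just below and evaluates $\pi$ along the way: \begin{verbatim}
def phi(Y, nu):
    if nu:                 # first case: graft nu as a new top segment
        return Y + [sorted(nu)]
    # H = highest segment still carrying a stub (at least two atoms)
    stubby = [k for k, seg in enumerate(Y) if len(seg) >= 2]
    if not stubby:
        return []          # third case: the spine is exhausted
    H = stubby[-1]
    # second case: drop the segments above H and apply Upsilon_1,
    # i.e. remove the largest atom of segment H
    return Y[:H] + [Y[H][:-1]]

def pi(Y):
    # sum over the segments of the suprema of their supports
    return sum(seg[-1] for seg in Y)

def spine_heights(measures):
    # iterate the spine dynamic; returns pi(S^n_0) for n = 0, 1, ...
    Y, heights = [], []
    for P in measures:
        heights.append(pi(Y))
        Y = phi(Y, list(P))
    return heights
\end{verbatim} On the measures $({\mathcal P}_n, n \geq 0)$ of the sticks fed to the construction sketch of the introduction, \texttt{spine\_heights} should return exactly the birth dates computed there, anticipating the identity ${\mathbb{H}}(n) = \pi({\mathbb{S}}^n_0)$ of the next subsection.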
Next, we consider the ${\mathcal M}^*$-valued sequence ${\mathbb{S}}_0 = ({\mathbb{S}}^n_0, n \geq 0)$ (the subscript $0$ will be justified below, see~\eqref{eq:S^n_m}) defined recursively by \begin{equation} \label{eq:dynamic-spine} {\mathbb{S}}^0_0 = {\bf{z}} \ \text{ and } \ {\mathbb{S}}^{n+1}_0 = \Phi({\mathbb{S}}^n_0, {\mathcal P}_n), \ n \geq 0. \end{equation} This dynamic is illustrated in Figure~\ref{fig:sequential-construction-with-spine}. As already discussed in the introduction, the $k$th element of ${\mathbb{S}}_0^n$ (ordered from top to bottom) records (1) the location of the stubs on the $k$th segment in the spine decomposition illustrated in Figure \ref{fig:sequential-construction-with-spine}, and (2) the age of the $k$th ancestor (of $n$) when begetting the $(k-1)$st ancestor (identifying, for $k = 1$, the individual with its $0$th ancestor). In words, the recursive relation~\eqref{eq:dynamic-spine} encodes the fact that the birth event corresponding to the $(n+1)$st individual coincides with the next available stub after grafting the $n$th stick on top of ${\mathbb{S}}_0^n$. In particular, if no stub is available, a new spine is started from scratch (third relation). We note that when ${\mathbb{S}}_0^n\neq {\bf z}$, any element of the sequence ${\mathbb{S}}_0^n$ contains at least one atom: the one corresponding to the birth of an ancestor, which is not counted as a {\it stub}. In particular, the condition $H = \max \{k \geq 1: \lvert Y(k) \rvert \geq 2\}$ in~\eqref{eq:Phi} reads ``look for the highest segment carrying a stub''. \begin{remark}\label{rem:exploration} The definition of the spine process is similar, but not completely analogous, to the exploration process of Le Gall and Le Jan in \cite{Le-Gall98:0}. Therein, the authors only consider the stubs attached to the spine. However, in the chronological case, not only do we need to keep track of the number of available stubs, but one also needs to record the length of the segments carrying those stubs (in the discrete case, this is always equal to $1$). This is done by adding the additional atom corresponding to the birth of the ``previous'' ancestor (when ancestors are labelled from top to bottom), whose location coincides with the length of the corresponding segment. \end{remark} \subsubsection{Chronological height and contour processes} \label{subsub:chronological-processes} We define the \emph{chronological height process} ${\mathbb{H}} = ({\mathbb{H}}(n), n \geq 0)$ by the relation \[ {\mathbb{H}}(n) = \pi({\mathbb{S}}^n_0), \ n \geq 0. \] Informally, ${\mathbb{H}}(n)$ is the birth time of the $n$th individual. We consider the associated \emph{chronological contour process}, which is the continuous-time process ${\mathbb{C}} = ({\mathbb{C}}(t), t \geq 0)$ with continuous sample paths defined inductively as follows. In the sequel, we define \[ {\mathcal V}(-1) = 0, \ {\mathcal V}(n) = V_0 + \cdots + V_n \ \text{ and } \ K_n = 2 {\mathcal V}(n-1) - {\mathbb{H}}(n), \ n \geq 0. \] Note that the sequence $(K_n, n \geq 0)$ is non-decreasing, and we will assume that its terminal value is infinite (this assumption holds a.s.\ for (sub)critical CMJ forests). The construction is initialized by setting ${\mathbb{C}}(K_0) = 0$. Assuming that ${\mathbb{C}}$ has been built on $[0, K_n]$, we extend the construction to $[0, K_{n+1}]$ in the following way: ${\mathbb{C}}$ first increases at rate $+1$ up to ${\mathbb{H}}(n) + V_n$ and then decreases at rate $-1$ to ${\mathbb{H}}(n+1)$.
Since ${\mathbb{H}}(n+1) \leq {\mathbb{H}}(n) + V_n$, this is well defined and extends the construction up to the time $K_n + 2V_n + {\mathbb{H}}(n) - {\mathbb{H}}(n+1) = K_{n+1}$ as desired. It is not hard to prove that ${\mathbb{C}}$ is the usual contour process associated with the forest ${\mathbb{F}}^\infty$ seen as a forest of continuous trees. Indeed, our definition coincides with the usual definition of ${\mathbb{C}}(t)$ as the distance to the origin of a particle going up along the left side of an edge and going down along the right side; see for instance Le Gall~\cite{Le-Gall05:0} for a formal and general definition in the realm of real trees. \subsubsection{Genealogical height and contour processes and exploration process} \label{subsub:genealogical-processes} We define ${\mathcal H} = {\mathbb{H}} \circ {\mathcal G}$ and ${\mathcal C} = {\mathbb{C}} \circ {\mathcal G}$, which we call \emph{genealogical height and contour processes}, respectively, and $\rho^n_0 = {\mathbb{S}}^n_0 \circ {\mathcal G}$ the \emph{exploration process}. As explained in Remark~\ref{rem:exploration}, it is closely related to the classical exploration process introduced by Le Gall and Le Jan~\cite{Le-Gall98:0}. \section{The spine process and the Lukasiewicz path} \label{sec:dec} In this section, we relate the spine process to the well-known Lukasiewicz path. More precisely, the spine process is expressed in terms of a random functional of the weak ascending ladder height process associated to the dual Lukasiewicz path. This is the content of Proposition~\ref{prop:formula-spine} below. In a forthcoming section, this result will allow us to express the one-dimensional marginal of the spine process in terms of a bivariate renewal process, and will be instrumental in proving our main scaling limit results for height and contour processes in the short edges case. \subsection{Lukasiewicz path} \label{subsub:Lukasiewicz} We define the Lukasiewicz path $S = (S(n), n \in {\mathbb{Z}})$ by $S(0) = 0$ and, for $n \geq 1$, \[ S(n) = \sum_{k=0}^{n-1} \left( \lvert {\mathcal P}_k \rvert - 1 \right) \ \text{ and } \ S(-n) = -\sum_{k=-n}^{-1} \left( \lvert {\mathcal P}_k \rvert - 1 \right). \] Note that if $\Gamma$ is a random time, the dual operator acts as follows: \begin{equation} \label{eq:S(Gamma)-dual} S(\Gamma) \circ \vartheta^n = S(n) - S(n - \Gamma \circ \vartheta^n), \ n \in {\mathbb{Z}}. \end{equation} It is well known that in the discrete case, the height process is directly related to the sequence of weak ascending ladder times. As we shall see, in the chronological case, more structure of the ladder height process is needed. In particular, the height (and spine) process will be expressed not only in terms of the ladder height times, but also in terms of the undershoot upon reaching the successive records of $S$ (through the quantity ${\mathcal Q}$ defined below).
In order to make this more precise, we consider the following functionals associated to $S$, which will be used repeatedly in the rest of the paper: \begin{itemize} \item the sequence of weak ascending ladder height times: $T(0)=0$ and for $k \geq 0$, \[ T(k+1) = \inf \big\{ \ell > T(k): S(\ell) \geq S(T(k)) \big\} = T(1) \circ \theta_{T(k)} + T(k); \] \item the hitting times upward and downward: \[ \tau_\ell = \inf \left\{ k > 0: S(k) \geq \ell \right\} \ \text{ and } \ \tau^-_\ell = \inf \left\{ k \geq 0: S(k) = -\ell \right\}, \ \ell \geq 0, \] so that in particular $\tau_0 = T(1)$; \item for $\ell \in {\mathbb{N}}$ with $\tau_\ell < \infty$, \[ \zeta_\ell = \ell - S(\tau_\ell-1) \ \text{ and } \ \mu_{\ell} = \Upsilon_{\zeta_\ell}({\mathcal P}_{\tau_\ell-1}),\] so that $\zeta_\ell$ is the undershoot upon reaching level $\ell$; \item and the backward maximum \[ L(m) = \max_{k = 0, \ldots, m} S(-k), \ m \geq 0. \] \end{itemize} Note that, since $S(\tau_0) \geq 0$, $\zeta_0 = - S(\tau_0 - 1) \leq S(\tau_0) - S(\tau_0 - 1) = \lvert {\mathcal P}_{\tau_0 - 1} \rvert - 1$, so that $\mu_0 \neq {\bf{z}}$. We will pay special attention to the following functionals of the ladder height process: \begin{itemize} \item for $k \geq 1$ with $T(k) < \infty$, \[ {\mathcal Q}(k) = \mu_0\circ \theta_{T(k-1)} \ \text{ and } \ {\mathbb{Y}}(k) = \pi \circ {\mathcal Q}(k); \] \item the following two inverses associated to the sequence $(T(k), k \geq 0)$: \[ T^{-1}(n) = \min \left\{ k \geq 0: T(k) \geq n \right\} \ \text{ and } \ {\widetilde T}^{-1}(n) = \max \left\{ k \geq 0: T(k) \leq n \right\}, \ n \geq 0. \] \end{itemize} The fact that $\mu_0 \neq {\bf{z}}$ implies that ${\mathcal Q}(k) \neq {\bf{z}}$ whenever it is well-defined, a simple fact that will be used later on. If $n$ is a weak ascending ladder height time, then ${\widetilde T}^{-1}(n) = T^{-1}(n)$ with $T({\widetilde T}^{-1}(n)) = n = T(T^{-1}(n))$, while if $n$ is not a weak ascending ladder height time, then ${\widetilde T}^{-1}(n) +1 = T^{-1}(n)$ with $T({\widetilde T}^{-1}(n)) < n < T(T^{-1}(n))$. Define \[ {\mathcal A}(n) = \left\{ n - T(k): k \geq 0 \right\} \circ \vartheta^n, \ n \geq 0. \] It is well-known that ${\mathcal A}(n) \cap {\mathbb{R}}_+$ is the set of $n$'s ancestors; see for instance Duquesne and Le Gall~\cite{Duquesne02:0}. This property relates the height process and the weak ascending ladder height times $T$ through the following identity: \begin{equation} \label{eq:identity-H} {\mathcal H}(n) = {\widetilde T}^{-1}(n) \circ \vartheta^n, \ n \geq 0. \end{equation} The genealogical height is also given by the length of ${\mathbb{S}}^n_0$, as we now show. \begin{lemma} For any $n \geq 0$ we have $\length{{\mathbb{S}}^n_0} = {\mathcal H}(n)$. \end{lemma} \begin{proof} As highlighted in Remark \ref{rem:exploration}, the exploration process $\rho^n_0 = {\mathbb{S}}^n_0 \circ {\mathcal G}$ slightly differs from the classical definition of the exploration process in Le Gall and Le Jan~\cite{Le-Gall98:0}; however, this slight difference does not alter the length of the sequence, which remains unchanged between the two definitions. Since the length of the sequence in the classical exploration process coincides with the height process, this implies that $\length{\rho^n_0} = {\mathcal H}(n)$. Moreover, the length of the spine only depends on the masses $\lvert {\mathcal P}_n \rvert$, so that $\length{{\mathbb{S}}^n_0}$ is a genealogical mapping. Thus, $\length{{\mathbb{S}}^n_0} = \length{{\mathbb{S}}^n_0} \circ {\mathcal G} = \length{{\mathbb{S}}^n_0 \circ {\mathcal G}} = \length{\rho^n_0} = {\mathcal H}(n)$, which proves the desired result.
\end{proof} Define \[ \mrca{m}{n} = \max \big( {\mathcal A}(m) \cap {\mathcal A}(n) \big), \ m, n \geq 0. \] Then $\mrca{m}{n} \in {\mathbb{Z}}$, and $m$ and $n$ have an ancestor in common (i.e., belong to the same tree) if and only if $\mrca{m}{n} \geq 0$, in which case $\mrca{m}{n}$ is the lexicographic index of their most recent common ancestor -- see for instance~\cite{Duquesne02:0}. We end this section by listing the following identities, which are proved in Appendix~\ref{appendix:proof-useful-identities}. The second identity involves the condition $L(n-m) \circ \vartheta^m > 0$: it is readily checked that \begin{equation} \label{eq:formula-L} L(n-m) \circ \vartheta^m = S(m) - \min_{\{m, \ldots, n\}} S, \ 0 \leq m \leq n. \end{equation} \begin{lemma} \label{lemma:condition-mrca} For any $0 \leq m \leq n$, $m$ is an ancestor of $n$, i.e., $\mrca{m}{n}=m$, if and only if $L(n-m)\circ\vartheta^m=0$. \end{lemma} \begin{lemma}\label{lemma:useful-identities} For any $n \geq m \geq 0$ with $L(n-m) \circ \vartheta^m > 0$, we have \begin{equation} \label{eq:identity-mrca} \mrca{m}{n} = n - T(T^{-1}(n-m)) \circ \vartheta^n = m - \tau_{L(n-m)} \circ \vartheta^m \end{equation} and \begin{equation}\label{eq:R-mrca} {\mathcal Q}(T^{-1}(n-m)) \circ \vartheta^n = \mu_{L(n-m)} \circ \vartheta^m. \end{equation} \end{lemma} \subsection{Fundamental formula for ${\mathbb{H}}(n)$} As mentioned earlier, ${\mathcal A}(n) \cap {\mathbb{R}}_+$ is the set of ancestors of $n$. More precisely, $n-T(k)\circ\vartheta^n$ is the index of the $k$th ancestor of $n$, assuming that ancestors are ordered from highest to lowest date of birth (or height). Further, interpreting ${\mathbb{Y}}(k) \circ \vartheta^n$ as the age of the $k$th ancestor when giving birth to the $(k{-}1)$st ancestor motivates the following result. \begin{prop} \label{prop:formula-H} For every $n \geq 0$, we have \begin{equation} \label{eq:formula-H} \big( {\mathcal H}(n), \ {\mathbb{H}}(n) \big) = \left( {\widetilde T}^{-1}(n) \ , \ \sum_{k = 1}^{{\widetilde T}^{-1}(n)} {\mathbb{Y}}(k) \right) \circ \vartheta^n. \end{equation} \end{prop} Since ${\mathbb{H}}(n) = \pi({\mathbb{S}}^n_0) = \sum_{k=1}^{\length{{\mathbb{S}}^n_0}} \pi({\mathbb{S}}^n_0(k))$, Proposition \ref{prop:formula-H} is an immediate corollary of the following result, which proves a more general relation between the spine process and the Lukasiewicz path. \begin{prop} \label{prop:formula-spine} We have \begin{equation} \label{eq:formula-spine} {\mathbb{S}}^n_0 = \left( {\mathcal Q}({\widetilde T}^{-1}(n)), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^n, \ n \geq 0. \end{equation} \end{prop} The rest of this section is devoted to proving Proposition~\ref{prop:formula-spine}. We prove it through a series of lemmas, several of which will be used again in the sequel. To prove these results, for $m \geq 0$ and $k \in \{0, \ldots, \lvert {\mathcal P}_m \rvert\}$ we introduce \[ \chi(m,k) = \tau^-_k \circ \theta_{m+1} + m+1 = \inf \left\{ n \geq m+1: S(n) = S(m+1) - k \right\} \] and define $\chi(m) = \chi(m,\lvert {\mathcal P}_m \rvert)$ so that \[ \chi(m) = \inf \left\{ n \geq m+1: S(n) = S(m+1) - \lvert {\mathcal P}_m \rvert \right\} = \inf \left\{ n \geq m+1: S(n) = S(m) - 1 \right\} \] which is also equal to $\tau^-_1 \circ \theta_m + m$.
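Before turning to the proof, we note that Proposition~\ref{prop:formula-H} lends itself to a direct numerical check. The Python sketch below -- our own illustration, reusing the helper functions of the earlier sketches -- builds the dual Lukasiewicz path at a given $n$, extracts its weak ascending ladder epochs together with the corresponding undershoots, and reassembles the right-hand side of \eqref{eq:formula-H}: \begin{verbatim}
def ladder_decomposition(measures, n):
    # dual path at n: S_hat(j) = S(n) - S(n-j), increments |P_{n-1-j}| - 1
    S_hat = [0]
    for j in range(n):
        S_hat.append(S_hat[-1] + len(measures[n - 1 - j]) - 1)
    Ys, t = [], 0
    while True:
        # next weak ascending ladder epoch of the dual path, if any
        nxt = next((l for l in range(t + 1, n + 1)
                    if S_hat[l] >= S_hat[t]), None)
        if nxt is None:
            return Ys
        zeta = S_hat[t] - S_hat[nxt - 1]      # undershoot zeta_0
        atoms = sorted(measures[n - nxt])     # P_{tau_0 - 1} seen from n
        Ys.append(atoms[-(zeta + 1)])         # Y(k) = pi(Q(k))
        t = nxt

measures = [P for _, P in sticks]   # sticks, spine_heights and
HH = spine_heights(measures)        # height_process: see the sketches above
H = height_process([len(P) for P in measures])
for n in range(len(measures)):
    Ys = ladder_decomposition(measures, n)
    assert len(Ys) == H[n]                   # H(n): number of ladder epochs
    assert abs(sum(Ys) - HH[n]) < 1e-9       # HH(n): sum of the Y(k)
\end{verbatim} Both assertions should hold for every $n$, in accordance with \eqref{eq:identity-H} and \eqref{eq:formula-H}; the step-by-step enumeration of ladder epochs mirrors the definitions rather than aiming at efficiency.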
Intuitively, for $k\in\{0,\cdots,|{\mathcal P}_m|-1\}$, $\chi(m,k)$ corresponds to the index of the $(k+1)$st child of the $m$th individual (with the convention that children are ranked from youngest to oldest); whereas $\chi(m)$ is the index of the highest stub on ${\mathbb{S}}_0^m$ (i.e., right before attaching the $m$th individual). In particular, any individual $n\in\{m+1,\dots,\chi(m)-1\}$ belongs to a subtree attached to $m$. In view of this interpretation, the following two lemmas seem quite natural. (On Figure~\ref{fig:sequential-construction-with-spine}, for instance, $\chi(1)=4$.) For the proof of Lemma~\ref{lemma:snake-3} we will need the following identity, whose proof is deferred to Appendix~\ref{appendix:proof-identity-chi}. \begin{lemma}\label{lemma:chi} Let $n \geq 0$, $m = n - \tau_0 \circ \vartheta^n$ and $i = \zeta_0 \circ \vartheta^n$. If $m \geq 0$, then it holds that $i \in \{0, \ldots, \lvert {\mathcal P}_m \rvert - 1 \}$ and $\chi(m,i) = n$. \end{lemma} \begin{lemma} \label{lemma:snake-1} For any $m \geq 0$ such that $|{\mathcal P}_m|>0$, $n \in \{m+1, \ldots, \chi(m)-1\}$ and $\ell \in \{1, \ldots, {\mathcal H}(m)\}$ we have \[ {\mathcal H}(n) > {\mathcal H}(m) \ \text{ and } \ {\mathbb{S}}^n_0(\ell) = {\mathbb{S}}^m_0(\ell). \] \end{lemma} \begin{proof} Let $\ell, m$ and $n$ be as in the statement: we first prove that ${\mathcal H}(n) > {\mathcal H}(m)$. Since the negative jumps of $S$ are all of size $-1$, we have by definition of $\chi(m)$ \[ \min_{\{m+1, \ldots, \chi(m)-1\}} S \geq S(m). \] This inequality implies that, since $n \in \{m+1, \ldots, \chi(m)-1\}$, there is at least one more ladder height time for the dual Lukasiewicz process seen from $n$ as compared to the dual Lukasiewicz process seen from $m$. In view of the relation~\eqref{eq:identity-H} which expresses ${\mathcal H}(n) = {\widetilde T}^{-1}(n) \circ \vartheta^n$ as the number of weak ascending ladder height times of the dual Lukasiewicz process, this means precisely that ${\mathcal H}(n) > {\mathcal H}(m)$. We now prove that ${\mathbb{S}}^n_0(\ell) = {\mathbb{S}}^m_0(\ell)$. Since $n \in \{m+1, \ldots, \chi(m)-1\}$, in order to prove this it is enough to prove that $\chi' \geq \chi(m)$ where we define \[ \chi' = \inf \left\{ k \geq m+1: {\mathbb{S}}^k_0(\ell) \neq {\mathbb{S}}^m_0(\ell) \right\}. \] In view of the definition~\eqref{eq:Phi} of $\Phi$ and the dynamic~\eqref{eq:dynamic-spine}, we see that the $\ell$th element of the spine between $m$ and $n$ is modified only if the length of the spine goes below $\ell$ between $m$ and $n$. Since the length of the spine coincides with ${\mathcal H}$, this implies ${\mathcal H}(\chi') = \ell \leq {\mathcal H}(m)$. Finally, since ${\mathcal H}(m) < \min_{\{m+1, \ldots, \chi(m)-1\}} {\mathcal H}$, this implies that $\chi' \geq \chi(m)$ and concludes the proof. \end{proof} \begin{lemma} \label{lemma:snake-2} For $m \geq 0$ such that $|{\mathcal P}_m|>0$ and $k \in \{0, \ldots, \lvert {\mathcal P}_m \rvert - 1 \}$ we have \[ {\mathcal H}(\chi(m, k)) = {\mathcal H}(m)+1 \ \text{ and } \ {\mathbb{S}}^{\chi(m,k)}_0({\mathcal H}(m)+1) = \Upsilon_k({\mathcal P}_m). \] \end{lemma} \begin{proof} By definition of $\chi(m,k)$ and the fact that the negative jumps of $S$ are all of size $-1$, we have \[ S(\chi(m,k)) = \min_{\{m+1, \ldots, \chi(m,k)\}} S \geq S(m).
\] The same argument as in the proof of the previous lemma then leads to the conclusion ${\mathcal H}(\chi(m,k)) = {\mathcal H}(m)+1$ (i.e., by showing that there is exactly one extra ladder height time for the dual walk seen from $\chi(m,k)$). We now prove that ${\mathbb{S}}^{\chi(m,k)}_0({\mathcal H}(m)+1) = \Upsilon_k({\mathcal P}_m)$. For $k = 0$ this is seen to be true by looking at the dynamic~\eqref{eq:dynamic-spine}. We now proceed by induction: assume that the claim holds for some $k \in \{0, \ldots, \lvert {\mathcal P}_m \rvert - 2\}$ and let us prove that it continues to hold for $k+1$. In order to do so, it is sufficient to combine the induction hypothesis with the following claim: \[ {\mathbb{S}}^{\chi(m,k+1)}_0({\mathcal H}(m)+1) = \Upsilon_1 \left( {\mathbb{S}}^{\chi(m,k)}_0({\mathcal H}(m)+1) \right). \] In order to prove this identity, we first note that (again, this is seen by comparing the number of ladder height times of the dual processes seen from the two times) \[ {\mathcal H}(n) > {\mathcal H}(\chi(m,k))={\mathcal H}(m)+1 \ \mbox{for} \ n = \chi(m,k)+1, \ldots, \chi(m,k+1)-1. \] Finally, we already know that ${\mathcal H}(\chi(m,k+1)) = {\mathcal H}(m)+1$. From the dynamic~\eqref{eq:dynamic-spine}, this implies that the $({\mathcal H}(m)+1)$st element of ${\mathbb{S}}_0^n$ remains unchanged for $n = \chi(m,k)+1, \ldots, \chi(m,k+1)-1$, but that one stub is removed at time $\chi(m,k+1)$, i.e., \[ {\mathbb{S}}^{\chi(m,k+1)}_0({\mathcal H}(m)+1) = \Upsilon_1 \left( {\mathbb{S}}^{\chi(m,k)}_0({\mathcal H}(m)+1) \right). \] This proves the claim made earlier and ends the proof of Lemma~\ref{lemma:snake-2}. \end{proof} \begin{lemma} \label{lemma:snake-3} For any $n, k \geq 0$ with $T(k) \circ \vartheta^n \leq n$ we have \[ {\mathbb{S}}^n_0 = \left[ {\mathbb{S}}^{n - T(k) \circ \vartheta^n}_0, {\mathcal Q}(k) \circ \vartheta^n, \ldots, {\mathcal Q}(1) \circ \vartheta^n \right]. \] \end{lemma} Recall the convention $[{\bf{z}}, Y] = Y$ for any $Y \in {\mathcal M}^*$: in particular, \[ \left[ {\mathbb{S}}^{n - T(k) \circ \vartheta^n}_0, {\mathcal Q}(k) \circ \vartheta^n, \ldots, {\mathcal Q}(1) \circ \vartheta^n \right] = \left[ {\mathcal Q}(k) \circ \vartheta^n, \ldots, {\mathcal Q}(1) \circ \vartheta^n \right]\] when ${\mathbb{S}}^{n - T(k) \circ \vartheta^n}_0={\bf z}$. \begin{proof} [Proof of Lemma~\ref{lemma:snake-3}] Let us first prove the result for $k = 1$, so we consider $n \geq 0$ with $\tau_0 \circ \vartheta^n \leq n$ and we prove that ${\mathbb{S}}^n_0 = [ {\mathbb{S}}^{n - T(1) \circ \vartheta^n}_0, {\mathcal Q}(1) \circ \vartheta^n ]$. Combining the two previous lemmas, we see that \[ {\mathbb{S}}^{\chi(m,i)}_0 = \left[ {\mathbb{S}}^m_0, \Upsilon_i({\mathcal P}_m) \right] \] for any $m \geq 0$ and any $i \in \{0, \ldots, \lvert {\mathcal P}_m \rvert - 1\}$. In particular, Lemma~\ref{lemma:chi} shows that we can apply this to $m = n - \tau_0 \circ \vartheta^n$ and $i = \zeta_0 \circ \vartheta^n$, which gives \[ {\mathbb{S}}^{\chi(m,i)}_0 = \left[ {\mathbb{S}}^{n - \tau_0 \circ \vartheta^n}_0, \Upsilon_{\zeta_0 \circ \vartheta^n}({\mathcal P}_{n - \tau_0 \circ \vartheta^n}) \right]. \] On the one hand, we have $\chi(m,i) = n$ (again by Lemma~\ref{lemma:chi}) and so in particular ${\mathbb{S}}^{\chi(m,i)}_0 = {\mathbb{S}}^n_0$, while on the other hand, we have \[ \Upsilon_{\zeta_0 \circ \vartheta^n}({\mathcal P}_{n - \tau_0 \circ \vartheta^n}) = \Upsilon_{\zeta_0}({\mathcal P}_{\tau_0 - 1}) \circ \vartheta^n = {\mathcal Q}(1) \circ \vartheta^n.
\] Combining the above arguments concludes the proof for $k = 1$. The general case follows by induction and is left to the reader. \end{proof} We can now prove Proposition~\ref{prop:formula-spine}. \begin{proof} [Proof of Proposition~\ref{prop:formula-spine}] By definition, $T({\widetilde T}^{-1}(n)) \leq n$ and so Lemma~\ref{lemma:snake-3} with $k = {\widetilde T}^{-1}(n)$ yields \[ {\mathbb{S}}^{n}_0 = \left[ {\mathbb{S}}_0^{n-T({\widetilde T}^{-1}(n))\circ\vartheta^n}, {\mathcal Q}({\widetilde T}^{-1}(n))\circ\vartheta^n, \cdots, {\mathcal Q}(1)\circ\vartheta^n \right]. \] Since ${\mathcal Q}(k) \neq {\bf{z}}$ whenever it is well-defined, in particular for $k \in \{1, \ldots, {\widetilde T}^{-1}(n)\}$, it follows that \[ \length{{\mathbb{S}}^n_0} = {\widetilde T}^{-1}(n) \circ \vartheta^n + \length{{\mathbb{S}}_0^{n-T({\widetilde T}^{-1}(n))\circ\vartheta^n}}. \] However, $\length{{\mathbb{S}}^n_0} = {\widetilde T}^{-1}(n) \circ \vartheta^n$ by~\eqref{eq:identity-H}, and thus $\length{{\mathbb{S}}_0^{n-T({\widetilde T}^{-1}(n))\circ\vartheta^n}} = 0$, which means ${\mathbb{S}}_0^{n - T({\widetilde T}^{-1}(n))\circ\vartheta^n} = {\bf z}$. This completes the proof of Proposition~\ref{prop:formula-spine}. \end{proof} \subsection{Right decomposition of the spine} \label{sub:right-decomposition} In the case of i.i.d.\ life descriptors, the spine process is easily seen to be a Markov process. In the forthcoming Section~\ref{sec:proba:spine}, Proposition~\ref{prop:formula-spine} will allow us to express the one-dimensional marginal of this process in terms of a bivariate renewal process. The present section can be seen as a description of the transition probabilities of the spine process: we show that for $m\leq n$, the spine at $n$ is deduced from the spine at $m$ by truncating ${\mathbb{S}}_0^m$ and then concatenating a spine that is independent of the past up to $m$, a construction reminiscent of the snake property -- see Duquesne and Le Gall~\cite{Duquesne02:0}. As we shall now see, the independent ``increment'' will be given by \begin{equation} \label{eq:S^n_m} {\mathbb{S}}_m^n := {\mathbb{S}}_0^{n-m}\circ \theta_m, \ 0 \leq m \leq n, \end{equation} which, when life descriptors are i.i.d., is distributed as the original spine at time $n-m$. In particular, since $\pi({\mathbb{S}}^n_m) = \pi({\mathbb{S}}_0^{n-m}\circ \theta_m) = \pi({\mathbb{S}}_0^{n-m})\circ \theta_m = {\mathbb{H}}(n-m) \circ \theta_m$, we note that an immediate consequence of~\eqref{eq:sigma-dual} and~\eqref{eq:formula-H} is that \begin{equation} \label{eq:formula-pi(rhonm)} \pi({\mathbb{S}}^n_m) = \left( \sum_{k = 1}^{{\widetilde T}^{-1}(n-m)} {\mathbb{Y}}(k) \right) \circ \vartheta^n, \ 0 \leq m \leq n. \end{equation} \begin{prop} \label{prop:snake-det} Let $n \geq m \geq 0$. If $\mrca{m}{n} \geq 0$, then ${\mathbb{S}}^n_0 = [{\mathbb{S}}^{\mrca{m}{n}}_0, {\mathbb{S}}^n_{\mrca{m}{n}}]$ and \begin{equation} \label{eq:snake-det} {\mathbb{S}}^n_{\mrca{m}{n}} = \begin{cases} \left[ \mu_{L(n-m)}\circ\vartheta^m , {\mathbb{S}}^n_{m} \right] & \text{ if } {L(n-m)}\circ\vartheta^m > 0,\\ {\mathbb{S}}^n_{m} & \text{ else.} \end{cases} \end{equation} \end{prop} In order to prove Proposition~\ref{prop:snake-det}, we will need the following lemma. \begin{lemma} \label{cor:two-S} For any $n \geq m \geq 0$ we have \begin{equation} \label{sp-1} {\mathbb{S}}^n_m = \left( {\mathcal Q}({\widetilde T}^{-1}(n-m)), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^n.
\end{equation} If in addition $\mrca{m}{n} \geq 0$, then \begin{equation} \label{sp-2} {\mathbb{S}}_{\mrca{m}{n}}^n = \left( {\mathcal Q}\circ T^{-1}(n-m),\ldots,{\mathcal Q}(1) \right) \circ \vartheta^n. \end{equation} \end{lemma} \begin{proof} By definition we have ${\mathbb{S}}^n_m = {\mathbb{S}}^{n-m}_0 \circ \theta_m$ and so Proposition~\ref{prop:formula-spine} implies that \begin{equation} \label{eq:manipulation-1} {\mathbb{S}}^n_m = \left( {\mathcal Q}({\widetilde T}^{-1}(n-m)), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^{n-m} \circ \theta_m. \end{equation} The first relation~\eqref{sp-1} thus follows from the identity $\vartheta^{n-m} \circ \theta_m = \vartheta^n$ of~\eqref{eq:sigma-dual}. To prove the other relation~\eqref{sp-2}, we use~\eqref{sp-1} with $m$ random, which in this case reads as follows: for any random time $\Gamma$, the relation \begin{equation} \label{eq:manipulation} {\mathbb{S}}^n_{\Gamma \circ \vartheta^n} = \left( {\mathcal Q}({\widetilde T}^{-1}(n-\Gamma)), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^n \end{equation} holds on the event $0 \leq \Gamma \circ \vartheta^n \leq n$ (see remark below). Now apply this relation to $\Gamma = n - T(T^{-1}(n-m))$, so that $\mrca{m}{n} = \Gamma \circ \vartheta^n$ by~\eqref{eq:identity-mrca}. Then we always have $\Gamma \leq n$ and so under the assumption $\mrca{m}{n} \geq 0$, we obtain \[ {\mathbb{S}}^n_{\mrca{m}{n}} = \left( {\mathcal Q}({\widetilde T}^{-1}(T(\Gamma'))), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^n \] with $\Gamma' = T^{-1}(n-m)$. Since ${\widetilde T}^{-1}(T(k)) = k$ for any $k \geq 0$, we obtain the result. \end{proof} \begin{remark} \label{rk:manipulation} Let us comment on~\eqref{eq:manipulation}, as similar identities will be used in the sequel. To see how it follows from~\eqref{eq:manipulation-1}, write~\eqref{eq:manipulation-1} in the form ${\mathbb{S}}^n_m = (U \circ \vartheta^n) (m)$ for some mapping $U$ with domain $\Omega$ and values in the space of ${\mathcal M}^*$-valued sequences, so that $(U \circ \vartheta^n) (m)$ is the $m$th element of the dual sequence. With this notation, we can directly plug in a random time, i.e., if $m = \Gamma$ is random then we have ${\mathbb{S}}^n_\Gamma = (U \circ \vartheta^n) (\Gamma)$ and in particular, ${\mathbb{S}}^n_{\Gamma \circ \vartheta^n} = (U \circ \vartheta^n) (\Gamma \circ \vartheta^n) = U(\Gamma) \circ \vartheta^n$. \end{remark} \begin{proof} [Proof of Proposition~\ref{prop:snake-det}] By~\eqref{eq:identity-mrca}, $\mrca{m}{n}\geq0$ implies that $T(T^{-1}(n-m))\circ\vartheta^n \leq n$ and so Lemma~\ref{lemma:snake-3} with $k=T^{-1}(n-m)$ gives $$ {\mathbb{S}}_0^n = \left[{\mathbb{S}}_0^{n-T(T^{-1}(n-m))\circ\vartheta^n}, {\mathcal Q}(T^{-1}(n-m))\circ\vartheta^n, \ldots, {\mathcal Q}(1)\circ\vartheta^n \right]. $$ Combining~\eqref{eq:identity-mrca}, which shows that ${\mathbb{S}}_0^{n-T(T^{-1}(n-m))\circ\vartheta^n} = {\mathbb{S}}^{\mrca{m}{n}}_0$, and the expression for ${\mathbb{S}}^n_{\mrca{m}{n}}$ given in~\eqref{sp-2} under the assumption $\mrca{m}{n} \geq 0$ gives the first part of the result, namely that ${\mathbb{S}}^n_0 = \left[ {\mathbb{S}}^{\mrca{m}{n}}_0,\ {\mathbb{S}}^n_{\mrca{m}{n}} \right]$. In order to show~\eqref{eq:snake-det} and thus complete the proof, we distinguish between the two cases $L(n-m) \circ \vartheta^m = 0$ and $L(n-m) \circ \vartheta^m > 0$. If $L(n-m) \circ \vartheta^m = 0$, then $\mrca{m}{n} = m$ according to Lemma~\ref{lemma:condition-mrca}, which proves~\eqref{eq:snake-det}.
Assume now that $L(n-m) \circ \vartheta^m > 0$: in view of~\eqref{eq:formula-L}, this means that $n-m$ is not a weak ascending ladder height time of $S \circ \vartheta^n$ and so $T^{-1}(n-m)\circ\vartheta^n = {\widetilde T}^{-1}(n-m)\circ\vartheta^n + 1$. We then obtain by Lemma~\ref{cor:two-S} the relation ${\mathbb{S}}^n_{\mrca{m}{n}} = [{\mathcal Q}(T^{-1}(n-m)) \circ \vartheta^n, {\mathbb{S}}^n_m]$ and since ${\mathcal Q}(T^{-1}(n-m)) \circ \vartheta^n = \mu_{L(n-m)} \circ \vartheta^m$ in this case by~\eqref{eq:R-mrca}, the result follows. \end{proof} \subsection{Probabilistic description of the spine}\label{sec:proba:spine} In this paper we are interested in the chronological height and contour processes associated to CMJ forests, which correspond to the case where the planar forest is constructed from an i.i.d.\ sequence of sticks. Formally, let $(V^*, {\mathcal P}^*)$ be a random variable with values in ${\mathbb{L}}$, and let ${\mathbb{P}}$ be the probability distribution on $\Omega$ under which $\omega$ is an i.i.d.\ sequence with common distribution $(V^*, {\mathcal P}^*)$. We consider the subcritical and critical cases, i.e., we assume that \[ {\mathbb{E}}(\lvert {\mathcal P}^* \rvert) \leq 1. \] Under this (sub)critical assumption, $S$ under ${\mathbb{P}}$ is a random walk with step distribution $\lvert {\mathcal P}^* \rvert - 1$, which therefore does not drift to $+\infty$. In particular, all the trees considered in the informal sequential construction of the Introduction are finite and the sequence $K_n$ almost surely grows to $\infty$. In this case, for any $n \in {\mathbb{Z}}$ the dual operator $\vartheta^n$ leaves ${\mathbb{P}}$ invariant, i.e., ${\mathbb{P}} = {\mathbb{P}} \circ (\vartheta^n)^{-1}$. In the rest of the paper, this property will be called \emph{duality}; it implies for instance that $S$ and $S \circ \vartheta^n$ under ${\mathbb{P}}$ are equal in distribution, and the same goes for ${\mathcal H}(m)$ and ${\widetilde T}^{-1}(m) \circ \vartheta^n$ for any $m, n \geq 0$. \\ The fundamental result which makes it possible to study the asymptotic behavior of the height process is the following lemma. It entails in particular that \[ \left( \big(T(k), \sum_{i=1}^k {\mathbb{Y}}(i) \big), k \geq 1 \right) \] under ${\mathbb{P}}$ is a bivariate renewal process stopped at some independent geometric random variable, which thus describes the law of $({\mathcal H}(n), {\mathbb{H}}(n))$ in view of~\eqref{eq:formula-H}. Recall that $\tau^-_\ell = \inf \{k \geq 0: S(k) = -\ell\}$ for $\ell \geq 0$. \begin{lemma} \label{lemma:y} Let $G = \inf\{k \geq 0: T(k) = \infty\}$.
Then under ${\mathbb{P}}$, the sequence \[ \left( \big(T(k) - T(k-1), {\mathcal Q}(k) \big), k = 1, \ldots, G-1 \right) \] is equal in distribution to $(( T^*(k), {\mathcal Q}^*(k)), k = 1, \ldots, G^*-1)$, where the random variables $(( T^*(k), {\mathcal Q}^*(k)), k \geq 1)$ are i.i.d.\ with common distribution $( T^*, {\mathcal Q}^*)$ satisfying \begin{equation} \label{eq:def-y^*} {\mathbb{E}} \left[ f({\mathcal Q}^*) g( T^* ) \right] = \frac{1}{{\mathbb{E}}(\lvert {\mathcal P}^* \rvert)} \sum_{t\geq 1} \sum_{x\geq0} {\mathbb{E}}\left[ f \circ \Upsilon_x({\mathcal P}^*) ; \lvert {\mathcal P}^* \rvert \geq x + 1 \right] g(t) \ {\mathbb{P}}\left( \tau^-_x = t - 1 \right) \end{equation} for all bounded measurable functions $f: {\mathcal M} \to {\mathbb{R}}_+$ and $g: {\mathbb{Z}}_+ \to {\mathbb{R}}_+$, and where $G^*$ is an independent geometric random variable with parameter $1 - {\mathbb{E}}(\lvert {\mathcal P}^* \rvert)$. \end{lemma} By duality, this result describes the law of $((T(k) - T(k-1), {\mathcal Q}(k)), k < G) \circ \vartheta^n$ under ${\mathbb{P}}$ and justifies the claim made before the statement of the lemma. By combining this result with the spine decomposition of Proposition~\ref{prop:formula-H}, we thus get that the genealogical and chronological height processes at a fixed time can be expressed as functionals of an explicit bivariate renewal process. \\ Moreover, we note that the random variable ${\mathbb{Y}}^* = \pi({\mathcal Q}^*)$ admits a natural interpretation. Indeed, the previous result implies that \begin{equation} \label{eq:law-y^*} {\mathbb{E}} \left[ f({\mathbb{Y}}^*) \right] = \sum_{k=1}^\infty \ \sum_{r=0}^{k-1} \frac{1}{k} {\mathbb{E}} \left[ f \circ \pi \circ \Upsilon_r({\mathcal P}^*) \mid \lvert {\mathcal P}^* \rvert = k \right] \times \frac{k{\mathbb{P}}(\lvert {\mathcal P}^* \rvert=k)}{{\mathbb{E}}(\lvert {\mathcal P}^* \rvert)}. \end{equation} Identifying $( k {\mathbb{P}}(\lvert {\mathcal P}^* \rvert =k)/{\mathbb{E}}(\lvert{\mathcal P}^*\rvert), k \geq 0)$ as the size-biased distribution of $\lvert {\mathcal P}^* \rvert$, we see that if we bias the life descriptor ${\mathcal P}^*$ by its number of children, then ${\mathbb{Y}}^*$ is the age of the individual when it begets a randomly chosen child. As mentioned in the introduction, in the critical case ${\mathbb{E}}(\lvert {\mathcal P}^* \rvert) = 1$, the random variable ${\mathbb{Y}}^*$ and its genealogical interpretation can already be found in Nerman~\cite{Nerman84:0}. \begin{proof}[Proof of Lemma~\ref{lemma:y}] The strong Markov property implies that $G$ is a geometric random variable with parameter ${\mathbb{P}}(\tau_0 = \infty)$ (recall that $\tau_0 = T(1)$) and that conditionally on $G$, the random variables $((T(k)-T(k-1),{\mathcal Q}(k)), k =1, \ldots, G-1)$ are i.i.d.\ with common distribution $(\tau_0, {\mathcal Q}(1))$ conditioned on $\{\tau_0 < \infty\}$. Thus in order to prove Lemma~\ref{lemma:y}, we only have to show that $(\tau_0, {\mathcal Q}(1))$ under ${\mathbb{P}}( \, \cdot \mid \tau_0<\infty)$ is equal in distribution to $( T^*, {\mathcal Q}^*)$. Recalling that ${\mathcal Q}(1) = \Upsilon_{\zeta_0}({\mathcal P}_{\tau_0-1})$, we will actually show a more complete result and characterize the joint distribution of $({\mathcal P}_{\tau_0-1}, \tau_0, \zeta_0)$ under ${\mathbb{P}}( \, \cdot \mid \tau_0<\infty)$.
For the rest of the proof, fix $x, t \in {\mathbb{N}}$ with $t \geq 1$ and $h: {\mathcal M} \to [0,\infty)$ measurable: we will prove that \begin{equation} \label{erb0} {\mathbb{E}} \left[ h\left({\mathcal P}_{\tau_0-1} \right) \indicator{\zeta_0 = x} \indicator{\tau_0 = t} \right] = {\mathbb{E}}\left[ h({\mathcal P}^*) ; \lvert {\mathcal P}^* \rvert \geq x + 1 \right] {\mathbb{P}}\left( \tau^-_x = t - 1 \right). \end{equation} By standard arguments, this characterizes the law of $({\mathcal P}_{\tau_0-1}, \tau_0, \zeta_0)$ and implies for instance that for any bounded measurable function $F: {\mathcal M} \times {\mathbb{N}} \times {\mathbb{N}} \to [0,\infty)$, we have \begin{multline*} {\mathbb{E}} \left[ F\left({\mathcal P}_{\tau_0-1}, \zeta_0, \ \tau_0 \right) \mid \tau_0 <\infty \right]\\ = \frac{1}{{\mathbb{P}}(\tau_0<\infty)} \sum_{t\geq 1} \sum_{x \geq 0} {\mathbb{E}}\left[ F({\mathcal P}^*,x,t) ; \lvert {\mathcal P}^* \rvert \geq x + 1 \right] {\mathbb{P}}\left( \tau^-_x = t - 1 \right). \end{multline*} Since $\tau^-_x$ is ${\mathbb{P}}$-almost surely finite, the above relation for $F(\nu, x, t) = 1$ entails the relation ${\mathbb{P}}(\tau_0 < \infty) = {\mathbb{E}}(\lvert {\mathcal P}^* \rvert)$, which in turn implies the desired result by taking $F(\nu, x, t) = f(\Upsilon_x(\nu)) g(t)$. Thus we only have to prove~\eqref{erb0}, which we do now. First of all, note that if \[ B = \big\{ S(t-1) = -x \ \text{ and } \ S(k) < 0 \ \text{ for } \ k = 1, \ldots, t-1 \big\}, \] then the two events $\{\zeta_0 = x, \tau_0 = t\}$ and $B \cap \{ \lvert {\mathcal P}_{t-1} \rvert \geq x+1\}$ are equal. It follows from this observation that \[ {\mathbb{E}}\left[ h({\mathcal P}_{\tau_0-1}) \indicator{\zeta_0 = x} \indicator{\tau_0 = t} \right] = {\mathbb{E}}\left[ h({\mathcal P}_{t-1}) \indicator{\lvert {\mathcal P}_{t-1} \rvert \geq x+1} ; B \right] \] and since ${\mathcal P}_{t-1}$ and the indicator function of the event $B$ are independent and ${\mathcal P}_{t-1}$ under ${\mathbb{P}}$ is equal in distribution to ${\mathcal P}^*$, we obtain \[ {\mathbb{E}}\left[ h({\mathcal P}_{\tau_0-1}) \indicator{\zeta_0 = x} \indicator{\tau_0= t} \right] = {\mathbb{E}}\left[ h({\mathcal P}^*) ; \lvert {\mathcal P}^* \rvert \geq x+1 \right] {\mathbb{P}}(B). \] Since ${\mathbb{P}}(B) = {\mathbb{P}}(\tau^-_x = t-1)$ by duality, this proves Lemma~\ref{lemma:y}. \end{proof} \section{Convergence of the height process}\label{sect:cv-height} \subsection{Probabilistic set-up.} \label{sub:probabilistic-set-up} For each $p \geq 1$, let $(V^*_p, {\mathcal P}^*_p)$ be an ${\mathbb{L}}$-valued random variable corresponding to a (sub)critical CMJ branching process, i.e., which satisfies \begin{equation} \label{eq:(sub)critical} 0 \leq {\mathbb{E}}(\lvert {\mathcal P}^*_p \rvert) \leq 1. \end{equation} We further assume that the sequence $({\mathcal P}^*_p)$ is near-critical in the sense that \begin{equation} \label{eq:near-critical} \lim_{p \to \infty} {\mathbb{E}}(\lvert {\mathcal P}^*_p \rvert) = 1. \end{equation} Let ${\mathbb{Y}}^*_p$ be the random variable with distribution prescribed by~\eqref{eq:law-y^*} with ${\mathcal P}^* = {\mathcal P}^*_p$, and ${\mathbb{P}}_p$ be the probability distribution on $\Omega$ under which $\omega$ is an i.i.d.\ sequence with common distribution $(V^*_p, {\mathcal P}^*_p)$. We let $\Rightarrow$ denote weak convergence under ${\mathbb{P}}_p$ and $\stackrel{\textnormal{fdd}}{\Rightarrow}$ denote convergence in the sense of finite-dimensional distributions under ${\mathbb{P}}_p$.
For instance, $B_p \stackrel{\textnormal{fdd}}{\Rightarrow} B_\infty$ if and only if $(B_p(t), t \in I)$ under ${\mathbb{P}}_p$ converges weakly to $(B_\infty(t), t \in I)$ for any finite set $I \subset [0,\infty)$. \subsection{Convergence of the height process.} We now state our main results concerning the convergence of the chronological height process: we fix a sequence $\varepsilon_p \to 0$ and consider the rescaled processes \begin{equation} \label{eq:scaling-1} {\mathcal H}_p(t) = \varepsilon_p {\mathcal H}([pt]), \ {\mathbb{H}}_p(t) = \varepsilon_p {\mathbb{H}}([pt]) \ \text{ and } \ S_p(t) = \frac{1}{p\varepsilon_p} S([pt]), \ t \geq 0. \end{equation} Our results will involve the following condition. Except for the first integrability condition, it is automatically satisfied in the non-triangular case where the law of ${\mathbb{Y}}^*_p$ does not depend on $p$. \begin{condition}{T-H} \label{cond-H} For every $p \geq 1$, ${\mathbb{E}}({\mathbb{Y}}^*_p) < \infty$. Moreover, there exists an integrable random variable $\bar {\mathbb{Y}}$ with ${\mathbb{E}}(\bar {\mathbb{Y}}) = 0$ such that ${\mathbb{Y}}^*_p - {\mathbb{E}}({\mathbb{Y}}^*_p) \Rightarrow \bar {\mathbb{Y}}$ and ${\mathbb{E}}[({\mathbb{Y}}^*_p - {\mathbb{E}}({\mathbb{Y}}^*_p))^+] \to {\mathbb{E}}(\bar {\mathbb{Y}}^+)$. \end{condition} \begin{theorem} \label{thm:H-fdd} Fix some $t > 0$. If Condition~\textnormal{\ref{cond-H}} holds and the sequence $({\mathcal H}_p(t), p \geq 1)$ is tight, then ${\mathbb{H}}_p(t) - {\mathbb{E}}({\mathbb{Y}}^*_p) {\mathcal H}_p(t) \Rightarrow 0$. \end{theorem} \begin{proof} First of all, note that ${\mathcal H}([pt]) \Rightarrow \infty$ since ${\mathcal H}(n)$ and ${\widetilde T}^{-1}(n)$ are equal in distribution by duality. Further, the fundamental formula~\eqref{eq:formula-H} gives \[ {\mathbb{H}}_p(t) - {\mathbb{E}}({\mathbb{Y}}^*_p) {\mathcal H}_p(t) = {\mathcal H}_p(t) \times \left( \frac{1}{{\widetilde T}^{-1}([pt])} \sum_{k = 1}^{{\widetilde T}^{-1}([pt])} \big( {\mathbb{Y}}(k) - {\mathbb{E}}({\mathbb{Y}}^*_p) \big) \right) \circ \vartheta^{[pt]}. \] In the sequel, let $W_p(n) = \bar {\mathbb{Y}}_p(1) + \cdots + \bar {\mathbb{Y}}_p(n)$ and $W(n) = \bar {\mathbb{Y}}(1) + \cdots + \bar {\mathbb{Y}}(n)$, where the two sequences $(\bar {\mathbb{Y}}_p(k), k \geq 1)$ and $(\bar {\mathbb{Y}}(k), k \geq 1)$ are i.i.d.\ with common distribution ${\mathbb{Y}}^*_p - {\mathbb{E}}({\mathbb{Y}}^*_p)$ and $\bar {\mathbb{Y}}$ introduced in Condition~\textnormal{\ref{cond-H}}, respectively. Fix $\eta > 0$ and $M, N \geq 1$: by duality, it follows from Lemma~\ref{lemma:y} and standard manipulations that \begin{multline*} {\mathbb{P}}_p \left( \left \lvert {\mathbb{H}}_p(t) - {\mathbb{E}}({\mathbb{Y}}^*_p) {\mathcal H}_p(t) \right \rvert \geq \eta \right) \leq {\mathbb{P}}_p \left( {\mathcal H}_p(t) \geq M \right) + {\mathbb{P}}_p\left({\mathcal H}([pt]) \leq N \right)\\ + {\mathbb{P}} \left( \sup_{n \geq N} \frac{1}{n} \left \lvert W_p(n) \right \rvert \geq \eta/M \right).
\end{multline*} Letting first $p \to \infty$, then $N \to \infty$ and finally $M \to \infty$ makes the first two terms of the above upper bound vanish: the first one because the sequence $({\mathcal H}_p(t), p \geq 1)$ is tight and the second one because ${\mathcal H}([pt]) \Rightarrow \infty$, and so we end up with \begin{equation} \label{eq:bound-H} \limsup_{p \to \infty} {\mathbb{P}}_p \left( \left \lvert {\mathbb{H}}_p(t) - {\mathbb{E}}({\mathbb{Y}}^*_p) {\mathcal H}_p(t) \right \rvert \geq \eta \right) \leq \limsup_{N \to \infty} \limsup_{p \to \infty} {\mathbb{P}} \left( \sup_{n \geq N} \frac{1}{n} \left \lvert W_p(n) \right \rvert \geq 2\eta' \right) \end{equation} with $\eta' = \eta / (2M)$. We omit the $\limsup_{M \to \infty}$ because, as we now show, the previous limit is equal to $0$ for each fixed $M > 0$. In the non-triangular case where the law of ${\mathbb{Y}}^*_p$ (and thus $W_p$) does not depend on $p$, this follows from the strong law of large numbers, and we now extend this to the triangular setting under Condition~\textnormal{\ref{cond-H}}. Writing \[ \sup_{n \geq N} \frac{1}{n} \left \lvert W_p(n) \right \rvert \leq \frac{1}{N} \left \lvert W_p(N) \right \rvert + \sup_{n \geq N} \frac{1}{n} \left \lvert W_p(n) - W_p(N) \right \rvert \] and using that $(W_p(n) - W_p(N), n \geq N)$ is equal in distribution to $W_p$, we get \[ {\mathbb{P}} \left( \sup_{n \geq N} \frac{1}{n} \left\lvert W_p(n) \right \rvert \geq 2\eta' \right) \leq {\mathbb{P}} \left( \frac{1}{N} \lvert W_p(N) \rvert \geq \eta' \right) + {\mathbb{P}} \left( \sup_{n \geq 0} \frac{1}{n + N} \left\lvert W_p(n) \right \rvert \geq \eta' \right). \] By the Portmanteau Theorem, we have \[ \limsup_{p\rightarrow\infty} {\mathbb{P}}\left(\frac{1}{N} \lvert W_p(N) \rvert \geq \eta' \right) \leq {\mathbb{P}} \left( \frac{1}{N} \lvert W(N) \rvert \geq \eta' \right), \] which entails \[ \limsup_{p \to \infty} {\mathbb{P}} \left( \frac{1}{N} \lvert W_p(N) \rvert \geq \eta' \right) \mathop{\longrightarrow}_{N \to \infty} 0. \] As for the second term, if we define $W^\pm_p(n) = W_p(n) \pm \eta' n$ and $W^\pm(n) = W(n) \pm \eta' n$, then simple manipulations lead to \[ {\mathbb{P}} \left( \sup_{n \geq 0} \frac{1}{n + N} \left\lvert W_p(n) \right \rvert \geq \eta' \right) \leq {\mathbb{P}} \left( \sup_{n\geq0} W^-_p \geq \eta' N \right) + {\mathbb{P}} \left( \inf_{n\geq0} W^+_p \leq - \eta' N \right). \] Under Condition~\textnormal{\ref{cond-H}}, we have $\sup W^-_p \Rightarrow \sup W^-$ and $\inf W^+_p \Rightarrow \inf W^+$, see for instance Theorem~$22$ in Borovkov~\cite{Borovkov76:0}. The result thus follows from the fact that, since $W^+$ (resp.\ $W^-$) is a random walk drifting to $+\infty$ (resp.\ $-\infty$), its infimum (resp.\ supremum) is finite. \end{proof} \begin{remark}\label{rem:conv-seq} By the exact same argument, we leave it to the reader to check that if $t_p$ is a deterministic sequence such that $t_p/p\rightarrow 0$, then $\varepsilon_p {\mathbb{H}}(t_p) \Rightarrow 0$. This fact will be used later in proving the convergence of the contour process. \end{remark} We now state an immediate corollary of this result: under mild conditions on the ${\mathbb{Y}}^*_p$'s, the paths ${\mathcal H}_p$ and ${\mathbb{H}}_p$ converge jointly in the sense of finite-dimensional distributions.
\begin{corollary} \label{cor:H-fdd} Assume that Condition~\textnormal{\ref{cond-H}} holds and that: \begin{enumerate}[label={\textnormal{(H\arabic*)}}] \item \label{HH} $p \varepsilon_p \to \infty$; \item ${\mathbb{E}}({\mathbb{Y}}^*_p) \to \alpha^*$ for some $\alpha^* \in (0,\infty)$; \item \label{H} ${\mathcal H}_p \Rightarrow {\mathcal H}_\infty$ for some ${\mathcal H}_\infty$ satisfying ${\mathbb{P}}({\mathcal H}_\infty(t) > 0) = 1$ for every $t > 0$. \end{enumerate} Then \begin{equation} \label{eq:cvfdd} \left( {\mathcal H}_p, {\mathbb{H}}_p \right) \stackrel{\textnormal{fdd}}{\Longrightarrow} \left( {\mathcal H}_\infty, \, \alpha^* {\mathcal H}_\infty \right). \end{equation} \end{corollary} Condition~\ref{HH} is essentially a non-degeneracy condition: when $\lvert {\mathcal P}^*_p \rvert = 1$ a.s.\ it is not satisfied. Theorem~$2.3.1$ in Duquesne and Le Gall~\cite{Duquesne02:0} provides explicit conditions for Condition~\ref{H} to hold. Namely, the following three conditions together imply~\ref{H}: \begin{enumerate} \item[(H3a)] $S_p \Rightarrow S_\infty$ for some L\'evy process $S_\infty$ with infinite variation; \item[(H3b)] the Laplace exponent $\psi$ of $S_\infty$ satisfies $\int_1^\infty \mathrm{d} u / \psi(u) < \infty$; \item[(H3c)] if $(Z^p_k, k \geq 0)$ is a Galton-Watson process with offspring distribution $\lvert {\mathcal P}^*_p \rvert$ and started with $[p \varepsilon_p]$ individuals, then for every $\delta > 0$, \[ \liminf_{p \to \infty} \ {\mathbb{P}} \left( Z^p_{[\delta / \varepsilon_p]} = 0 \right) > 0. \] \end{enumerate} \section{Convergence of the contour process}\label{sec:cv:contour} \subsection{Main results} The probabilistic set-up is the same as in Section~\ref{sub:probabilistic-set-up}, in particular relations~\eqref{eq:(sub)critical} and~\eqref{eq:near-critical} hold, and we now turn to the asymptotic behavior of the chronological contour process ${\mathbb{C}}$. Under the assumption ${\mathbb{E}}({\mathbb{Y}}^*_p) \rightarrow \alpha^*<\infty$ and other mild conditions, we showed in Corollary~\ref{cor:H-fdd} that the genealogical and chronological height processes are essentially proportional to one another. In this section, we study the contour process when this assumption is not enforced, which allows the chronological and genealogical processes to scale in different ways. We thus consider two sequences $\varepsilon_p$ and $\bar \varepsilon_p$, both converging to $0$, rescale the genealogical processes using $\bar \varepsilon_p$ as \[ {\mathcal H}_p(t) = \bar \varepsilon_p {\mathcal H}([pt]), \ {\mathcal C}_p(t) = \bar \varepsilon_p {\mathcal C}(pt) \ \text{ and } \ S_p(t) = \frac{1}{p \bar \varepsilon_p} S([pt]), \] and the chronological processes using $\varepsilon_p$ as \[ {\mathbb{H}}_p(t) = \varepsilon_p {\mathbb{H}}([pt]) \ \text{ and } \ {\mathbb{C}}_p(t) = \varepsilon_p {\mathbb{C}}(pt). \] \begin{remark} When ${\mathbb{E}}(V^*)<\infty$, Theorem~\ref{thm:H-fdd} ensures that a difference in scaling between the genealogical and the chronological height processes can only occur when ${\mathbb{E}}({\mathbb{Y}}^*)=+\infty$. For instance, this will occur in the (non-triangular) case of Poissonian birth events along the edges (as in \cite{Lambert10:0}) and when ${\mathbb{E}}((V^*)^2)=\infty$. \end{remark} In the Galton-Watson case, it is well-known that ${\mathcal C}_p$ is essentially obtained from ${\mathcal H}_p$ by a deterministic time-change under rather mild assumptions (essentially conditions~\ref{A.epsilon}--\ref{A.X} below).
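To fix ideas, the following minimal Python sketch illustrates where this deterministic time-change comes from in the discrete Galton-Watson (i.e., genealogical) setting; the names are ours and the construction is a toy version, not the one used in this paper. When all sticks have unit length, the contour crosses roughly two edges per vertex visited by the height process, whence the time-change $t \mapsto t/2$, of which $\varphi_\infty(t) = t/(2\beta^*)$ is the chronological analogue when sticks have mean length $\beta^*$.
\begin{verbatim}
# Toy sketch (hypothetical names): height and contour sequences of the
# forest encoded by depth-first offspring counts; all edges have length 1.

def height_and_contour(offspring):
    heights, contour = [], []
    i = 0
    def visit(h):
        nonlocal i
        k = offspring[i]
        i += 1
        heights.append(h)         # vertex visited in lexicographic order
        contour.append(h)         # the contour reaches the vertex...
        for _ in range(k):
            visit(h + 1)
            contour.append(h)     # ...and comes back down after each subtree
    while i < len(offspring):
        visit(0)                  # one tree per remaining root
    return heights, contour

H, C = height_and_contour([2, 1, 0, 0, 3, 0, 0, 0])
# len(C) == 2 * len(H) - (number of trees): up to boundary terms, the
# contour is a factor-2 slowdown of the height process.
\end{verbatim}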
We now show that a similar statement holds at the chronological level. \begin{condition}{T-C1} \label{cond-C} We have $(V^*_p, {\mathcal P}^*_p) \Rightarrow (V^*_\infty, {\mathcal P}^*_\infty)$ for some ${\mathbb{L}}$-valued random variable $(V^*_\infty, {\mathcal P}^*_\infty)$ with ${\mathbb{E}}(V^*_\infty) < \infty$ and ${\mathbb{E}}(\lvert {\mathcal P}^*_\infty \rvert) = 1$. \end{condition} Let $V > 0$ be some random variable and $G$ be the additive subgroup generated by the support of its distribution. In the sequel we say that $V$ is \emph{non-arithmetic} if $G$ is dense in ${\mathbb{R}}$; otherwise, we say that $V$ is \emph{arithmetic} and in this case, there exists a unique $h > 0$, called the \emph{span} of $V$, such that $G = h {\mathbb{Z}}$. For a random variable $V > 0$ with finite mean, we define $\hat V$ as follows: \begin{itemize} \item if $V$ is non-arithmetic, we define \[ {\mathbb{P}}(\hat V \geq x) = \frac{1}{{\mathbb{E}}(V)} \int_x^\infty {\mathbb{P}}(V \geq y) \mathrm{d} y, \ x \geq 0; \] \item if $V$ is arithmetic and $h$ is its span, we define \[ {\mathbb{P}}(\hat V = k h) = \frac{1}{{\mathbb{E}}(V)} {\mathbb{P}}(V > k h), \ k \in {\mathbb{N}}. \] \end{itemize} \begin{condition}{T-C2} \label{cond-C2} We have $\hat V^*_p \Rightarrow \hat V^*_\infty$ with $V^*_\infty$ as in Condition~\textnormal{\ref{cond-C}}, and moreover: \begin{itemize} \item if $V^*_\infty$ is non-arithmetic, then $V^*_p$ for each $p$ is non-arithmetic; \item if $V^*_\infty$ is arithmetic, then $V^*_p$ for each $p$ is arithmetic. \end{itemize} \end{condition} In the sequel, we will refer to the first case as the \emph{non-arithmetic case} and to the second case as the \emph{arithmetic case}. Note that, except for the integrability condition ${\mathbb{E}}(V^*_\infty) < \infty$, Conditions~\ref{cond-C} and \ref{cond-C2} as well as condition~\ref{A.V} below are automatically satisfied in the non-triangular case where the law of $(V^*_p, {\mathcal P}^*_p)$ does not depend on~$p$. \begin{theorem} \label{thm:C-fdd} Assume that Conditions~\textnormal{\ref{cond-C}} and~\textnormal{\ref{cond-C2}} hold and that: \begin{enumerate}[label={\textnormal{(C\arabic*)}}] \item \label{A.V} ${\mathbb{E}}(V^*_p) \to \beta^*$ with $\beta^* = {\mathbb{E}}(V^*_\infty)< \infty$; \item \label{A.epsilon} $\lim_{p \to \infty} p \varepsilon_p = \lim_{p \to \infty} p \bar \varepsilon_p = \infty$; \item\label{A.X} $S_p \Rightarrow S_\infty$ for some L\'evy process $S_\infty$ with infinite variation; \item\label{A.HC} $({\mathcal H}_p, {\mathcal C}_p) \Rightarrow ({\mathcal H}_\infty, {\mathcal C}_\infty)$ for some (almost surely) continuous processes ${\mathcal H}_\infty, {\mathcal C}_\infty$ satisfying the condition ${\mathbb{P}}\big({\mathcal H}_\infty(t) > 0, \ {\mathcal C}_\infty(t) > 0\big) = 1$ for every $t > 0$; \item \label{A.Hc} ${\mathbb{H}}_p \stackrel{\textnormal{fdd}}{\Longrightarrow} {\mathbb{H}}_\infty$ for some process ${\mathbb{H}}_\infty$ which is (almost surely) continuous at $0$ and satisfies the condition ${\mathbb{P}}({\mathbb{H}}_\infty(t) > 0) = 1$ for every $t > 0$; \end{enumerate} and let $\varphi_\infty(t) = t / (2\beta^*)$. Then \begin{equation} \label{eq:conv-C-fdd} \left( {\mathbb{H}}_p, {\mathbb{C}}_p \right) \stackrel{\textnormal{fdd}}{\Longrightarrow} \left( {\mathbb{H}}_\infty, {\mathbb{H}}_\infty \circ \varphi_\infty \right).
\end{equation} \end{theorem} Note that the three assumptions~(H3a)--(H3c) stated after Corollary~\ref{cor:H-fdd} actually imply~\ref{A.HC} with ${\mathcal C}_\infty(t) = {\mathcal H}_\infty(t/2)$. Moreover, instead of assuming~\ref{A.X}+\ref{A.HC}, we could merely assume~\ref{A.X} and that ${\mathcal H}_p \Rightarrow {\mathcal H}_\infty$ with ${\mathcal H}_\infty$ continuous and with ${\mathbb{P}}({\mathcal H}_\infty(t) > 0) = 1$: indeed, results in~\cite{Duquesne02:0} show that this implies~\ref{A.HC} with ${\mathcal C}_\infty$ as above. Combining Theorems~\ref{thm:H-fdd} and~\ref{thm:C-fdd}, we obtain the following joint convergence. \begin{corollary} \label{cor:C} Assume that except for~\ref{A.Hc}, the conditions of Theorems~\ref{thm:H-fdd} and~\ref{thm:C-fdd} hold with $\bar \varepsilon_p=\varepsilon_p$: then \[ \left( {\mathcal H}_p, {\mathcal C}_p, {\mathbb{H}}_p, {\mathbb{C}}_p \right) \stackrel{\textnormal{fdd}}{\Longrightarrow} \left( {\mathcal H}_\infty, {\mathcal H}_\infty(\, \cdot \,/2), \alpha^* {\mathcal H}_\infty, \alpha^* {\mathcal H}_\infty \circ \varphi_\infty \right). \] \end{corollary} We finally complement these results by showing that the trees themselves converge in the sense of finite-dimensional distributions. To do so, we only need to consider the minimum of the contour process; see for instance Le Gall~\cite{Le-Gall05:0} for more details. \begin{theorem}\label{thm:fdd-trees} Assume that except for~\ref{A.Hc}, the conditions of Theorems~\ref{thm:H-fdd} and~\ref{thm:C-fdd} hold with $\bar \varepsilon_p=\varepsilon_p$. Assume moreover that the sequence of random variables $({\mathbb{Y}}^*_p)$ is uniformly integrable: then for every $0 \leq u \leq v$ we have \[ \inf_{u \leq t \leq v} {\mathbb{C}}_p(t) - \alpha^* \inf_{u \leq t \leq v} {\mathcal C}_p(2\varphi_\infty(t)) \Rightarrow 0. \] \end{theorem} \begin{remark} In~\cite{Sagitov95:0}, Sagitov investigated (in the non-triangular setting) the size of a CMJ process conditioned to survive at a large time under the short edge assumption, i.e., when ${\mathbb{E}}(V^*_1)<\infty$ and ${\mathbb{E}}({\mathbb{Y}}^*_1) < \infty$ (see also Section~\ref{sec:example} and Green~\cite{Green77:0}). The population size is described in the limit in terms of a continuous-state branching process where space and time are scaled as in Corollary~\ref{cor:C}. As a consequence, the previous corollary can be seen as a genealogical version of~\cite{Sagitov95:0}. We also note that in~\cite{Sagitov95:0}, the results are obtained through an entirely different approach, namely analytic computations involving some non-trivial extension of the renewal theorem. \end{remark} In the rest of this section we discuss the proof of Theorem~\ref{thm:C-fdd}: the proof of Theorem~\ref{thm:fdd-trees}, provided in Section~\ref{sub:proof-trees}, uses essentially the same arguments, together with the additional result of Corollary~\ref{cor:formula-min}. In order to prove~\eqref{eq:conv-C-fdd} and in view of the assumption~\ref{A.Hc}, we only need to show that \begin{equation} \label{eq:goal-proof-thm-C} \forall t \geq 0, \ {\mathbb{C}}_p(t) - {\mathbb{H}}_p\circ\varphi_\infty(t) \Rightarrow 0. \end{equation} To show this result, it is tempting to draw inspiration from the proof of Theorem~2.4.1 in Duquesne and Le Gall~\cite{Duquesne02:0}, where it is proved that $\sup_{0 \leq s \leq t} \lvert {\mathcal C}_p(s) - {\mathcal H}_p(s/2) \rvert \Rightarrow 0$ for each fixed $t \geq 0$.
The proof of this result relies heavily on the assumption that the discrete height process converges weakly (i.e., in a functional sense) to its continuum counterpart. At the genealogical level, assuming weak convergence is not much stronger than assuming convergence of the finite-dimensional distributions, see~\cite[Theorem 2.3.1]{Duquesne02:0}. At the chronological level however, the simple example presented in Section~\ref{sec:example} illustrates that the gap between these two modes of convergence is more significant. In Section~\ref{sec:overview} we give an overview of the main steps for proving~\eqref{eq:goal-proof-thm-C}, thereby highlighting key differences with the Galton-Watson case. \subsection{Overview of the proof of Theorem~\ref{thm:C-fdd}} \label{sec:overview} Except in Section~\ref{sec:example}, we assume in the rest of the paper that Conditions~\textnormal{\ref{cond-C}} and~\textnormal{\ref{cond-C2}} and Conditions~\ref{A.V}--\ref{A.Hc} of Theorem~\ref{thm:C-fdd} hold. The two conditions $V^*_p \Rightarrow V^*_\infty$ with $V^*_\infty$ integrable and ${\mathbb{E}}(V^*_p) \to {\mathbb{E}}(V^*_\infty)$ imply that the sequence $(V^*_p)$ is uniformly integrable (see for instance~\cite[Theorem $3.6$]{Billingsley99:0}), which implies the following triangular weak law of large numbers. It can also be checked directly by computing Laplace transforms or by invoking \textsection22 in Gnedenko and Kolmogorov~\cite{Gnedenko68:0}. \begin{lemma}\label{lemma:triangular-LLN} For any sequence $u_p \to \infty$, we have ${\mathcal V}([u_p]) / u_p \Rightarrow \beta^*$. In particular, for any $s \geq 0$ we have ${\mathcal V}([ps]) / p \Rightarrow \beta^* s$. \end{lemma} In view of the construction of the chronological contour process ${\mathbb{C}}$ in Section~\ref{subsub:chronological-processes}, we have \begin{equation} \label{eq:C-H} \sup_{t \in [K_n, K_{n+1}]} \left \lvert {\mathbb{C}}(t) - {\mathbb{H}}(n) \right \rvert \leq \left \lvert {\mathbb{H}}(n+1) - {\mathbb{H}}(n) \right \rvert + V_n. \end{equation} Let $\varphi$ be the left-continuous inverse of $(K_{[t]}, t \geq 0)$, defined by \begin{equation} \label{eq:def-varphi} \varphi(t) := \min \left\{ j \geq 0: K_{j} \geq t \right\}, \ t \geq 0. \end{equation} Then defining \begin{equation}\label{vpn} \varphi_p(t) := \frac{1}{p} \varphi(pt), \end{equation} the inequality~\eqref{eq:C-H} translates after scaling to \[ \left \lvert {\mathbb{C}}_p(t) - {\mathbb{H}}_p(\varphi_p(t)) \right \rvert \leq \varepsilon_p V_{\varphi(pt)} + \left \lvert {\mathbb{H}}_p(\varphi_p(t)+1/p) - {\mathbb{H}}_p(\varphi_p(t)) \right \rvert, \ t \geq 0, \] and so going back to~\eqref{eq:goal-proof-thm-C}, we obtain for any $t \geq 0$ \begin{multline} \label{eq:decomposition} \left \lvert {\mathbb{C}}_p(t) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \varepsilon_p V_{\varphi(pt)} + \left \lvert {\mathbb{H}}_p(\varphi_p(t) + 1/p) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert\\ + 2 \left \lvert {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert. \end{multline} The proofs of ${\mathbb{H}}_p(\varphi_p(t) + 1/p) - {\mathbb{H}}_p(\varphi_\infty(t)) \Rightarrow 0$ and of ${\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \Rightarrow 0$ proceed along similar lines, and so in the sequel we only focus on the latter convergence. The above relation shows that, asymptotically, the correct time-change should be the limit of $\varphi_p$, and we now explain why this is indeed $\varphi_\infty$.
Plugging the definition $K_n = 2 {\mathcal V}(n-1) - {\mathbb{H}}(n)$ into the definition of $\varphi$, we obtain \[ \varphi_p(t) = \frac{1}{p} \inf\left\{ j \geq 0: 2 {\mathcal V}(j-1) - {\mathbb{H}}(j) \geq p t \right\}. \] For large $p$, the triangular law of large numbers of Lemma~\ref{lemma:triangular-LLN} suggests the approximation ${\mathcal V}(p) \approx \beta^* p$, while under assumptions~\ref{A.epsilon} and~\ref{A.Hc}, ${\mathbb{H}}(p)$ for large $p$ is of the order of $1/\varepsilon_p \ll p$. These two observations thus give a rationale for the following result. \begin{lemma} \label{lemma:phi} For every $t \geq 0$ we have $\varphi_p(t) \Rightarrow \varphi_\infty(t)$. \end{lemma} \begin{proof} Consider any $t' < \varphi_\infty(t)$: using the definition of $\varphi_p$, the fact that ${\mathbb{H}}(j) \geq 0$ and that ${\mathcal V}$ is increasing, one obtains that \[ {\mathbb{P}}_p \left( \varphi_p(t) < t' \right) \leq {\mathbb{P}}_p \left( 2 {\mathcal V}(pt') \geq p t \right). \] Since ${\mathcal V}(ps) / p \Rightarrow \beta^* s$ for any $s \geq 0$ by Lemma~\ref{lemma:triangular-LLN}, we obtain ${\mathbb{P}}_p \left( \varphi_p(t) < t' \right) \to 0$ for $t' < \varphi_\infty(t)$. Let now $t' > \varphi_\infty(t)$, and write \[ {\mathbb{P}}_p\left( \varphi_p(t) > t' \right) \leq {\mathbb{P}}_p \left( 2 {\mathcal V}(pt') - {\mathbb{H}}(pt') \leq pt \right). \] Since the sequence $(\varepsilon_p {\mathbb{H}}([pt']), p \geq 1)$ is tight and $p \varepsilon_p \to \infty$, we obtain ${\mathbb{H}}(pt') / p \Rightarrow 0$ and so $(2 {\mathcal V}(pt') - {\mathbb{H}}(pt')) / p \Rightarrow 2 \beta^* t'$ by Lemma~\ref{lemma:triangular-LLN}. Consequently, we obtain the convergence ${\mathbb{P}}_p \left( 2 {\mathcal V}(pt') - {\mathbb{H}}(pt') \leq pt \right) \to 0$, which concludes the proof. \end{proof} In view of this result, a natural idea to prove ${\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \Rightarrow 0$ is to use a uniform control of the kind \[ \left \lvert {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \sup \left\{ \left \lvert {\mathbb{H}}_p(s) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert : \left \lvert s - \varphi_\infty(t) \right \rvert \leq \eta_p \right\} \] for some $\eta_p \to 0$ such that ${\mathbb{P}}_p(\lvert \varphi_p(t) - \varphi_\infty(t) \rvert \leq \eta_p) \to 1$. However, the example considered in Section~\ref{sec:example} strongly suggests that even for $\eta_p$ precisely of the order of $\lvert \varphi_p(t) - \varphi_\infty(t) \rvert$, the supremum of the previous upper bound may blow up. Such a control is therefore too rough and more care is needed. One of the main obstacles to a finer control is the convoluted relation between ${\mathbb{H}}_p$ and $\varphi_p(t)$, whereby ${\mathbb{H}}_p$ appears in the definition of $\varphi_p(t)$; this is also the reason why it is not straightforward to prove the apparently innocuous convergence $\varepsilon_p V_{\varphi(pt)} \Rightarrow 0$ which is required in order to deal with the first term in the upper bound of~\eqref{eq:decomposition}. \\ In order to circumvent this difficulty, we introduce a random time $\bar \varphi_p(t)$ which is close to $\varphi_p(t)$ and easier to control. More precisely, we consider \[ \bar \varphi_p(t) = \frac{1}{p} \bar \varphi(pt) \ \text{ with } \ \bar \varphi(t) = \inf \left\{ j \geq 0: 2 {\mathcal V}(j) \geq t \right\} \] the first passage time of the renewal process $2{\mathcal V}$ above level $t$.
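The concentration of $\bar \varphi_p(t)$ around $\varphi_\infty(t)$ is easy to observe numerically. The following Monte Carlo sketch is again only illustrative (the names are ours; we take ${\mathcal V}(j) = V_0 + \cdots + V_j$ and exponential sticks purely as an example):
\begin{verbatim}
# Monte Carlo sketch (hypothetical set-up): first passage of the renewal
# process 2*V above level p*t, compared with the limit t / (2 * beta*).
import random

def bar_phi(sticks, level):
    """inf{j >= 0: 2 * (V_0 + ... + V_j) >= level}."""
    total = 0.0
    for j, v in enumerate(sticks):
        total += 2.0 * v
        if total >= level:
            return j
    raise ValueError("level not reached; supply more sticks")

p, t, beta_star = 10_000, 3.0, 0.5
sticks = [random.expovariate(1.0 / beta_star) for _ in range(10 * p)]
print(bar_phi(sticks, p * t) / p)   # close to t / (2 * beta_star) = 3.0
\end{verbatim}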
Note that, since $V_n$ and ${\mathbb{H}}(n)$ are non-negative, we have $\bar \varphi(t) \leq \varphi(t)$ for every $t \geq 0$. For fixed $p$, the renewal theorem provides an asymptotic description as $t \to \infty$ of the process $2 {\mathcal V}$ shifted at time $\bar \varphi(t)$. In Section~\ref{sub:renewal} we will prove a triangular version of this result, and Condition~\textnormal{\ref{cond-C2}} is here to ensure that this extension of the renewal theorem to a triangular setting holds. We will for instance prove the following result. \begin{lemma}\label{lemma:renewal-V} For any $t \geq 0$, we have $\varepsilon_p V_{\bar \varphi(pt)} \Rightarrow 0$. \end{lemma} \begin{proof} See forthcoming Corollary~\ref{lemma:spine-seen}. \end{proof} This result illustrates the fact that $\bar \varphi_p(t)$ is more convenient to work with than $\varphi_p(t)$. Besides, $\varphi_p(t)$ and $\bar \varphi_p(t)$ are close: the triangular law of large numbers of Lemma~\ref{lemma:triangular-LLN} implies, as in the proof of Lemma~\ref{lemma:phi}, that $\bar \varphi_p(t) \Rightarrow \varphi_\infty(t)$ and, to be more precise, the next result implies that their difference is at most of the order of $1/\varepsilon_p$. This result is a consequence of Proposition~\ref{prop:Delta} which will be proved in Section~\ref{sub:proof-tightness-Delta}. \begin{lemma} \label{lemma:tightness-Delta} For any $t \geq 0$, the sequence of random variables $(\varepsilon_p (\varphi_p(t) - \bar \varphi_p(t)), p \geq 1)$ is tight. \end{lemma} \begin{proof} See forthcoming Proposition~\ref{prop:Delta}. \end{proof} Lemmas~\ref{lemma:renewal-V} and~\ref{lemma:tightness-Delta} allow us to get rid of the first term in the upper bound~\eqref{eq:decomposition}, as we show now. \begin{corollary} \label{cor:V} For any $t \geq 0$, we have $\varepsilon_p V_{\varphi(pt)} \Rightarrow 0$. \end{corollary} \begin{proof} Since $\bar \varphi(pt) \leq \varphi(pt)$, for any $M, \eta > 0$ we have \begin{multline*} {\mathbb{P}}_p \left( \varepsilon_p V_{\varphi(pt)} \geq \eta \right) \leq {\mathbb{P}}_p \left( \varphi(pt) - \bar \varphi(pt) > M / \varepsilon_p \right)\\ + {\mathbb{P}}_p \left( \varepsilon_p \max \left\{ V_k: k = \bar \varphi(pt), \ldots, \bar \varphi(pt) + [M / \varepsilon_p ]\right\} \geq \eta \right) \end{multline*} which gives \begin{multline*} {\mathbb{P}}_p \left( \varepsilon_p V_{\varphi(pt)} \geq \eta \right) \leq {\mathbb{P}}_p \left( \varepsilon_p (\varphi_p(t) - \bar \varphi_p(t)) > M \right) + {\mathbb{P}}_p \left( \varepsilon_p V_{\bar \varphi(pt)} \geq \eta \right)\\ + {\mathbb{P}}_p \left( \varepsilon_p \max \left\{ V_{\bar \varphi(pt) + k}: k = 1, \ldots, [M / \varepsilon_p] \right\} \geq \eta \right). \end{multline*} Lemmas~\ref{lemma:renewal-V} and~\ref{lemma:tightness-Delta} imply that the first two terms vanish, while for the third term, we write \[ {\mathbb{P}}_p \left( \varepsilon_p \max \left\{ V_{\bar \varphi(pt) + k}: k = 1, \ldots, [M / \varepsilon_p] \right\} \geq \eta \right) \leq \frac{M}{\varepsilon_p} {\mathbb{P}}\left( \varepsilon_p V^*_p \geq \eta \right) \leq \frac{M}{\eta} {\mathbb{E}} \left( V^*_p; V^*_p \geq \frac{\eta}{\varepsilon_p} \right) \] where the first inequality follows from the fact that the $(V_{\bar \varphi(pt)+k}, k \geq 1)$ under ${\mathbb{P}}_p$ are i.i.d.\ with common distribution $V^*_p$. Since the $(V^*_p)$ are uniformly integrable, this last bound vanishes as $p \to \infty$, which completes the proof.
\end{proof} In order to show that ${\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \Rightarrow 0$, we introduce ${\mathbb{H}}_p(\bar \varphi_p(t))$ and write \begin{equation} \label{eq:two-terms} \left \lvert {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \left \lvert {\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert + \left \lvert {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\bar \varphi_p(t)) \right \rvert. \end{equation} We will then study each term of this upper bound. We will control the first term ${\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t))$ by showing that the spine originating from the random time $\bar \varphi(pt)$ asymptotically looks like the spine originating from a deterministic time. To do so we prove an extension of the renewal theorem to a triangular setting and a macroscopic horizon in Section~\ref{sub:renewal}, thereby extending results of Miller~\cite{Miller74:0}. To control the second term ${\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_p(t))$, we introduce the shifted process ${\mathbb{H}}' = ({\mathbb{H}}(\bar \varphi(pt) + k) - {\mathbb{H}}(\bar \varphi(pt)), k \geq 0)$ and write ${\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\bar \varphi_p(t)) = \varepsilon_p {\mathbb{H}}'(\Delta)$ with $\Delta = \varphi(pt) - \bar \varphi(pt)$. The key idea is that ${\mathbb{H}}'$ turns out to be close in distribution to ${\mathbb{H}}$, and so elaborating on Lemma~\ref{lemma:tightness-Delta}, which states that $\Delta$ is small macroscopically (since $p \gg 1/\varepsilon_p$ by condition~\ref{A.epsilon}), will give the desired result. \subsection{Organization of the rest of the paper} The rest of the paper is organized as follows. In Section~\ref{sec:preliminary-results} we prove some preliminary results, namely some formulas on the height process which extend the right decomposition of the spine introduced in Section~\ref{sub:right-decomposition}, as well as some renewal-type results: in particular, these results make it possible to prove Lemma~\ref{lemma:renewal-V}. Section~\ref{sec:proofs} contains the remaining proofs, namely the proof of Lemma~\ref{lemma:tightness-Delta}, the proof that each term in the upper bound of~\eqref{eq:two-terms} vanishes and finally the proof of Theorem~\ref{thm:fdd-trees}. \section{Preliminary results} \label{sec:preliminary-results} \subsection{Right decomposition of the spine continued.} \begin{lemma}\label{lemma:decompo-m} For any $n\geq m \geq 0$ with $0 \leq \mrca{m}{n} < m$, we have \[ {\mathbb{S}}_0^m = \left[ {\mathbb{S}}_0^{\mrca{m}{n}}, {\mathcal Q} \circ {\widetilde T}^{-1} (\tau_{L(n-m)}) \circ \vartheta^m, \ldots, {\mathcal Q}(1) \circ \vartheta^m \right]. \] \end{lemma} \begin{proof} By Lemma~\ref{lemma:snake-3}, for every $k$ such that $T(k)\circ\vartheta^m\leq m$ we have \[ {\mathbb{S}}_0^{m} = \left[ {\mathbb{S}}_0^{m-T(k)\circ\vartheta^m}, {\mathcal Q}(k)\circ\vartheta^m, \ldots, {\mathcal Q}(1)\circ\vartheta^m \right]. \] Let $k = {\widetilde T}^{-1}(\tau_{L(n-m)})$: then $T(k) = \tau_{L(n-m)}$ (as $T({\widetilde T}^{-1}(i)) = i$ whenever $i$ is a weak ascending ladder height time, which is the case for $\tau_{L(n-m)}$ since $L(n-m) \circ \vartheta^m > 0$ here) and so $T(k) \circ \vartheta^m = m - \mrca{m}{n}$ by~\eqref{eq:identity-mrca}. Since by assumption $\mrca{m}{n} \geq 0$, we have $T(k) \circ \vartheta^m \leq m$ and so applying Lemma~\ref{lemma:snake-3} gives the result, since $m - T(k) \circ \vartheta^m = \mrca{m}{n}$.
\end{proof} In the sequel, we consider the measurable functions $D_\ell: {\mathcal M}^* \to {\mathbb{R}}_+$ defined by $D_0\equiv0$ and, for $\ell \in {\mathbb{N}}\setminus\{0\}$, \begin{equation} \label{eq:def-D} D_\ell({\mathbb{S}}^n_0) = \left( \sum_{i: 0 < T(i) \leq \min(\tau_{\ell},n)} {\mathbb{Y}}(i) - \Indicator{\tau_\ell \leq n} \pi(\mu_\ell) \right) \circ \vartheta^n, \ n \in {\mathbb{N}}. \end{equation} The fact that the right-hand side is measurable with respect to ${\mathbb{S}}_0^n$ (and thus can be written as a function of ${\mathbb{S}}_0^n$) is a consequence of Proposition~\ref{prop:formula-spine} and the fact that the random variables appearing in the formula are related to the dual Lukasiewicz path $S\circ\vartheta^n$. Moreover, we leave it to the reader to check that for any $Y \in {\mathcal M}^*$ the sequence $(D_\ell(Y), \ell \in {\mathbb{N}})$ is increasing. Actually, this comes from a more general fact, namely that $D_\ell(Y)$ for $Y \in {\mathcal M}^*$ gives the distance between $\pi(Y)$ and the $\ell$th stub of $Y$. The following result relates the two shifts which play a key role in this paper: on the one hand, the canonical shift $\theta$, which acts on the initial sequence of sticks $((V_n, {\mathcal P}_n), n \in {\mathbb{Z}})$ through the term $\pi({\mathbb{S}}^n_m) = \pi({\mathbb{S}}^{n-m}_0) \circ \theta_m$, and on the other hand, the shift in time through the term ${\mathbb{H}}(n) - {\mathbb{H}}(m)$. \begin{prop} \label{prop:shifts} For every $0 \leq m \leq n$ we have \begin{equation} \label{id-diff} {\mathbb{H}}(n) - {\mathbb{H}}(m) = \pi({\mathbb{S}}^n_m) - D_{L(n-m) \circ \vartheta^m}({\mathbb{S}}^m_0). \end{equation} \end{prop} \begin{proof} Applying~\eqref{eq:def-D} to the random $\ell = L(n-m) \circ \vartheta^m$, we obtain (see Remark~\ref{rk:manipulation}) \begin{equation} \label{eq:D-L} D_{L(n-m) \circ \vartheta^m} ({\mathbb{S}}^m_0) = \left( \sum_{i = 1}^{{\widetilde T}^{-1}(\min(\tau_{L(n-m)}, m))} {\mathbb{Y}}(i) - \Indicator{\tau_{L(n-m)} \leq m} \pi(\mu_{L(n-m)}) \right) \circ \vartheta^m. \end{equation} To prove~\eqref{id-diff} we distinguish the two cases $\mrca{m}{n}<0$ and $\mrca{m}{n} \geq 0$. \noindent {\it Case 1: $\mrca{m}{n}<0$.} By~\eqref{eq:identity-mrca} this condition is equivalent to $\tau_{L(n-m)}\circ\vartheta^m > m$: in view of~\eqref{eq:D-L}, we thus need to show that \[ {\mathbb{H}}(n)-{\mathbb{H}}(m) = \pi({\mathbb{S}}_{m}^n) - \left(\sum_{i = 1}^{{\widetilde T}^{-1}(m)} {\mathbb{Y}}(i)\right)\circ \vartheta^m. \] Using the expression for ${\mathbb{H}}(n)$, ${\mathbb{H}}(m)$ and $\pi({\mathbb{S}}^n_m)$ provided by Proposition~\ref{prop:formula-H} and~\eqref{eq:formula-pi(rhonm)}, we see that in order to show the above relation we only have to show that ${\widetilde T}^{-1}(n-m) \circ \vartheta^n = {\widetilde T}^{-1}(n) \circ \vartheta^n$. This in turn follows from the fact that the condition $\mrca{m}{n}<0$ implies that $T(T^{-1}(n-m))\circ\vartheta^n>n$ (again by~\eqref{eq:identity-mrca}), which is equivalent to saying that the sets $\{T(i):i\in{\mathbb{N}}\}\circ\vartheta^n$ and $\{n-m, \ldots, n\}$ do not intersect, and this gives ${\widetilde T}^{-1}(n-m) \circ \vartheta^n = {\widetilde T}^{-1}(n) \circ \vartheta^n$. The proof in this case is thus complete.
\\ \noindent {\it Case 2: $\mrca{m}{n} \geq 0$.} The result is obvious in the case $\mrca{m}{n} = m$, while in the other case we can invoke Proposition~\ref{prop:snake-det} and Lemma~\ref{lemma:decompo-m}, which give \[ {\mathbb{H}}(n) = \pi({\mathbb{S}}^{\mrca{m}{n}}_0) + \pi(\mu_{L(n-m)}) \circ \vartheta^m + \pi({\mathbb{S}}^n_m) \ \text{ and } \ {\mathbb{H}}(m) = \pi({\mathbb{S}}^{\mrca{m}{n}}_0) + \left( \sum_{i=1}^{{\widetilde T}^{-1}(\tau_{L(n-m)})} {\mathbb{Y}}(i) \right) \circ \vartheta^m. \] Taking the difference between these two expressions yields the result in view of~\eqref{eq:D-L} (recall that $\mrca{m}{n} \geq 0$ is equivalent to $\tau_{L(n-m)}\circ\vartheta^m \leq m$). \end{proof} The following lemma relates the shifted spine to the Skorohod reflection. \begin{lemma}\label{lem:sk} For any $0 \leq m \leq n$, we have $\pi({\mathbb{S}}_m^n) = {\mathbb{H}}(n) - \min_{k=m,\ldots,n} {\mathbb{H}}(k)$. \end{lemma} \begin{proof} It follows from~\eqref{eq:formula-pi(rhonm)} that \[ \pi({\mathbb{S}}_m^n) = \left(\sum_{i=1}^{{\widetilde T}^{-1}(n)} {\mathbb{Y}}(i) \right) \circ \vartheta^n - \left(\sum_{i={\widetilde T}^{-1}(n-m)+1}^{{\widetilde T}^{-1}(n)} {\mathbb{Y}}(i) \right) \circ \vartheta^n = {\mathbb{H}}(n) - \left(\sum_{i={\widetilde T}^{-1}(n-m)+1}^{{\widetilde T}^{-1}(n)} {\mathbb{Y}}(i) \right) \circ \vartheta^n. \] Next, we have from Proposition~\ref{prop:formula-spine} that \[ {\mathbb{S}}^n_0 = \left( {\mathcal Q}({\widetilde T}^{-1}(n)), {\mathcal Q}({\widetilde T}^{-1}(n)-1), \ldots, {\mathcal Q}(1) \right) \circ \vartheta^n \] while Lemma~\ref{lemma:snake-3} with $k = {\widetilde T}^{-1}(n-m)$ gives \[ {\mathbb{S}}^n_0 = \left[ {\mathbb{S}}^{n - T({\widetilde T}^{-1}(n-m)) \circ \vartheta^n}_0, {\mathcal Q}({\widetilde T}^{-1}(n-m)) \circ \vartheta^n, \ldots, {\mathcal Q}(1) \circ \vartheta^n \right]. \] Comparing the two expressions for ${\mathbb{S}}_0^n$, we see that \[ {\mathbb{S}}^{n - T({\widetilde T}^{-1}(n-m)) \circ \vartheta^n}_0 = \left( {\mathcal Q}({\widetilde T}^{-1}(n)), \ldots, {\mathcal Q}({\widetilde T}^{-1}(n-m)+1) \right) \circ \vartheta^n \] and in particular, \[ \left(\sum_{i={\widetilde T}^{-1}(n-m)+1}^{{\widetilde T}^{-1}(n)} {\mathbb{Y}}(i) \right) \circ \vartheta^n = {\mathbb{H}} \big( n-T({\widetilde T}^{-1}(n-m)) \circ \vartheta^n \big). \] We let the reader convince herself that ${\mathbb{H}} \big( n-T({\widetilde T}^{-1}(n-m)) \circ \vartheta^n \big) = \min_{\{m, \ldots, n\}} {\mathbb{H}}$ (again by comparing the number of ladder height times at $n-T({\widetilde T}^{-1}(n-m)) \circ \vartheta^n$ and $k\in\{m, \ldots, n\}$), so that gathering the previous relations we finally obtain the desired result. \end{proof} \begin{corollary}\label{cor:formula-min} For any $0 \leq m \leq n$, \begin{equation}\label{eq:min} \min_{K_m \leq t \leq K_n} {\mathbb{C}}(t) = {\mathbb{H}}(m) - D_{L(n-m)\circ\vartheta^m}({\mathbb{S}}^m_0). \end{equation} \end{corollary} \begin{proof} Let $I^n_m = \min_{[K_m, K_n]} {\mathbb{C}}$. Since ${\mathbb{H}}(n) - {\mathbb{H}}(m) = \pi({\mathbb{S}}^n_m) - D_{L(n-m)\circ\vartheta^m}({\mathbb{S}}^m_0)$ by Proposition~\ref{prop:shifts}, in order to prove~\eqref{eq:min} it is enough to prove that \[ \pi \left( {\mathbb{S}}^n_m \right) = {\mathbb{H}}(n) - I^n_m. \] Local minima of ${\mathbb{C}}$ are by construction attained on the set $\{K_n \ : \ n\in{\mathbb{N}}\}$ and since ${\mathbb{H}}(k)={\mathbb{C}}(K_k)$ for any $k \in {\mathbb{N}}$, this implies $I^n_m = \min_{k = m, \ldots, n} {\mathbb{H}}(k)$.
The result then follows from Lemma~\ref{lem:sk}. \end{proof} \subsection{Triangular renewal theorem on a macroscopic horizon} \label{sub:renewal} By construction, $\pi({\mathbb{S}}^n_m)$ only depends on the finite vector ${\mathcal P}^n_m = ({\mathcal P}_k, k = m, \ldots, n-1)$, and we can thus for instance write $\pi({\mathbb{S}}^n_m) = \Xi_{n-m}({\mathcal P}^n_m)$ for some measurable mapping $\Xi_{n-m} : {\mathcal M}^{n-m} \to [0,\infty)$. With this notation, Condition~\ref{A.Hc} on the convergence of the chronological height process precisely means that if we take a vector $\nu^p \in {\mathcal M}^{[p \delta]}$ of $[p \delta]$ i.i.d.\ random measures with common distribution ${\mathcal P}^*_p$, then $\Xi_{[p\delta]}(\nu^p)$ converges weakly to ${\mathbb{H}}_\infty(\delta)$. For instance, for any $0 < \delta < \varphi_\infty(t)$ we have $\Xi_{[p \delta]} \big( {\mathcal P}^{[\varphi_\infty(pt)]}_{[\varphi_\infty(pt)] - [p \delta]} \big) \Rightarrow {\mathbb{H}}_\infty(\delta)$ and we want to extend this result by replacing the deterministic time $[\varphi_\infty(pt)]$ by the random one $\bar \varphi(pt)$. Of course, the random variables $({\mathcal P}_k, k = \bar \varphi(pt) - [p \delta], \ldots, \bar \varphi(pt) - 1)$ are not i.i.d.\ and so we cannot directly invoke the same argument. However, the renewal theorem suggests that these random variables become asymptotically i.i.d.\ as $p \to \infty$, which gives a rationale for, e.g., the convergence $\Xi_{[p\delta]}\big( {\mathcal P}^{\bar \varphi(pt)}_{\bar \varphi(pt) - [p \delta]} \big) = \pi \big( {\mathbb{S}}^{\bar \varphi(pt)}_{\bar \varphi(pt) - [p \delta]} \big) \Rightarrow {\mathbb{H}}_\infty(\delta)$. Results with a similar flavor, i.e., renewal theorems on a macroscopic horizon, can be found in Miller~\cite{Miller74:0}. Two technical difficulties prevent us from using Miller's or other standard results: $(1)$ we are in a triangular setting and $(2)$ we need to consider a growing number of terms (of the order of $p$). In addition, Miller~\cite{Miller74:0} typically assumes the almost sure convergence of $\Xi_{[p \delta]} \big( {\mathcal P}^{[p\delta]}_0 \big)$, whereas we only have weak convergence. In order to overcome these difficulties, we exploit the coupling between two random walks with the same step distribution but possibly different initial distributions constructed in the proof of Lemma~9.21 in Kallenberg~\cite{Kallenberg02:0}. This coupling leads to the following results, proved in Appendix~\ref{sec:coupling}. \begin{prop}\label{Prop:renewal-1} Let $(\hat V_\infty^*, \hat {\mathcal P}_\infty^*)$ have the following size-biased distribution: for every measurable function $f: {\mathbb{R}}_+ \times {\mathcal M} \to {\mathbb{R}}_+$, \begin{itemize} \item if $V_\infty^*$ is non-arithmetic, \[ {\mathbb{E}}\left[ f(\hat V_\infty^*, \hat {\mathcal P}_\infty^*) \right ] = \frac{1}{{\mathbb{E}}(V_\infty^*)} \int_0^\infty {\mathbb{E}} \left[ f(v,{\mathcal P}_\infty^*) \mid V_\infty^* = v \right] {\mathbb{P}} ( V_\infty^* > v ) \mathrm{d} v; \] \item if $V_\infty^*$ is arithmetic with span $h$, \[ {\mathbb{E}}\left[ f(\hat V_\infty^*, \hat {\mathcal P}_\infty^*) \right ] = \frac{1}{{\mathbb{E}}(V_\infty^*)} \sum_{i \geq 1} {\mathbb{E}} \left[ f(ih,{\mathcal P}_\infty^*) \mid V_\infty^* = ih \right] {\mathbb{P}} ( V_\infty^* > ih ). \] \end{itemize} Then $\left(V_{{\bar \varphi}(pt)},{\mathcal P}_{{\bar \varphi}(pt)} \right) \Rightarrow \left(\hat V^*_\infty, \hat {\mathcal P}^*_\infty\right)$ for every $t > 0$.
\end{prop} \begin{prop}\label{Prop:renewal-2} For each $p \geq 1$ let $\Xi_p: {\mathcal M}^p \to {\mathbb{R}}$ be a measurable mapping such that $\Xi_p({\mathcal P}_0^p) \Rightarrow \Xi_\infty$ for some random variable $\Xi_\infty$. Then $\Xi_{[p\delta]}({\mathcal P}^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)-[p\delta]}) \Rightarrow \Xi_\infty$ for any $0 < \delta < t / (2 \beta^*)$. \end{prop} Recall the exploration process $\rho^n_0 = {\mathbb{S}}^n_0 \circ {\mathcal G}$, which, similarly to~\eqref{eq:S^n_m}, is extended by setting $\rho^n_m = {\mathbb{S}}^n_m \circ {\mathcal G} = \rho^{n-m}_0 \circ \theta_m$. The following corollary to Propositions~\ref{Prop:renewal-1} and~\ref{Prop:renewal-2} gathers the results needed in the sequel. \begin{corollary} \label{lemma:spine-seen} For $t \geq 0$, the three sequences $\varepsilon_p V_{\bar \varphi(pt)}$, $\varepsilon_p \pi({\mathcal P}_{\bar \varphi(pt)})$ and $\varepsilon_p \lvert {\mathcal P}_{\bar \varphi(pt)} \rvert$ converge weakly to $0$ as $p \to \infty$. If in addition $0 < \delta < t / (2 \beta^*)$, then \[ \varepsilon_p\pi\left( {\mathbb{S}}^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)-[p\delta]}\right) \Rightarrow {\mathbb{H}}_\infty(\delta), \ \bar \varepsilon_p \pi\left( \rho^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)-[p\delta]}\right) \Rightarrow {\mathcal H}_\infty(\delta) \] and \[ \sup_{0 \leq u \leq \delta} S_p(u) \circ \vartheta^{{\bar \varphi}(pt)} \Rightarrow \sup_{0 \leq u \leq \delta} S_\infty(u). \] \end{corollary} \begin{proof} The convergence of the three sequences $\varepsilon_p V_{\bar \varphi(pt)}$, $\varepsilon_p \pi({\mathcal P}_{\bar \varphi(pt)})$ and $\varepsilon_p \lvert {\mathcal P}_{\bar \varphi(pt)} \rvert$ is a direct consequence of Proposition~\ref{Prop:renewal-1} (note that, for point processes, the functionals $\pi$ and $\lvert \cdot \rvert$ are continuous for the weak topology). Let us now discuss the remaining convergences of $\varepsilon_p\pi\left( {\mathbb{S}}^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)-[p\delta]}\right)$, $\bar \varepsilon_p \pi\left( \rho^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)-[p\delta]}\right)$ and $\sup_{[0, \delta]} S_p \circ \vartheta^{{\bar \varphi}(pt)}$. From their definition, each of these random variables can be expressed in the form $\Xi_{[p\delta]}\big( {\mathcal P}^{\bar \varphi(pt)}_{\bar \varphi(pt) - [p \delta]} \big)$ for some measurable mappings $\Xi_p: {\mathcal M}^p \to [0,\infty)$. Proposition~\ref{Prop:renewal-2} implies that $\Xi_{[p\delta]}\big( {\mathcal P}^{\bar \varphi(pt)}_{\bar \varphi(pt) - [p \delta]} \big)$ converges if $\Xi_{[p\delta]}\big( {\mathcal P}^{[p \delta]}_0 \big)$ does, in which case they have the same limit. This means that we are brought back to the convergence of ${\mathbb{H}}_p(\delta)$, ${\mathcal H}_p(\delta)$ and $\sup_{[0, \delta]} S_p$ and since each of these three terms converges by assumptions~\ref{A.X},~\ref{A.HC} and~\ref{A.Hc}, the result follows. \end{proof} \section{Proof of Theorems~\ref{thm:C-fdd} and~\ref{thm:fdd-trees}} \label{sec:proofs} We now complete the proof of Theorems~\ref{thm:C-fdd} and~\ref{thm:fdd-trees}: Theorem~\ref{thm:C-fdd} is proved in Sections~\ref{sub:proof-remaining-1}--\ref{sub:proof-remaining-2} and Theorem~\ref{thm:fdd-trees} in Section~\ref{sub:proof-trees}.
For Theorem~\ref{thm:C-fdd}, recall from the discussion in Section~\ref{sec:overview} that it remains to prove Lemma~\ref{lemma:tightness-Delta} as well as the fact that both terms in the upper bound of~\eqref{eq:two-terms} vanish, i.e., that \begin{equation} \label{eq:remaining-1} {\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \Rightarrow 0 \end{equation} and \begin{equation} \label{eq:remaining-2} {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\bar \varphi_p(t)) \Rightarrow 0. \end{equation} Using the results of the previous section, we will first prove~\eqref{eq:remaining-1} in Section~\ref{sub:proof-remaining-1}. Then, we will use~\eqref{eq:remaining-1} to prove the following result in Section~\ref{sub:proof-tightness-Delta}. \begin{prop}\label{prop:Delta} For any $t > 0$ and any $\eta > 1/(2 \beta^*)$, \[ \lim_{p \to \infty} {\mathbb{P}}_p \left( \varphi(pt) - \bar \varphi(pt) > \eta \, {\mathbb{H}}(\bar \varphi(pt)) \right) = 0. \] \end{prop} Combining~\eqref{eq:remaining-1} and Condition~\ref{A.Hc} implies that ${\mathbb{H}}(\bar \varphi(pt))$ is of the order of $1/\varepsilon_p$, and so Proposition~\ref{prop:Delta} directly implies Lemma~\ref{lemma:tightness-Delta}. Finally, we will use Proposition~\ref{prop:Delta} to prove~\eqref{eq:remaining-2} in Section~\ref{sub:proof-remaining-2}, which will complete the proof of Theorem~\ref{thm:C-fdd}. \subsection{Proof of~\eqref{eq:remaining-1}} \label{sub:proof-remaining-1} We start with the following simple lemma. \begin{lemma} \label{lemma:bound-m} For any $1 \leq m \leq n$, \begin{equation} \label{eq:bound-m} 0 \leq \pi({\mathbb{S}}_{m-1}^n) - \pi({\mathbb{S}}_m^n) \leq \pi({\mathcal P}_{m-1}). \end{equation} \end{lemma} \begin{proof} Relation~\eqref{eq:formula-pi(rhonm)} gives \[ \pi({\mathbb{S}}^n_{m-1}) = \left( \sum_{k = 1}^{{\widetilde T}^{-1}(n-m+1)} {\mathbb{Y}}(k) \right) \circ \vartheta^n. \] If ${\widetilde T}^{-1}(n-m+1) \circ \vartheta^n = {\widetilde T}^{-1}(n-m) \circ \vartheta^n$, then we obtain $\pi({\mathbb{S}}^n_{m-1}) = \pi({\mathbb{S}}^n_m)$ and so the result holds in this case. Otherwise, we have ${\widetilde T}^{-1}(n-m+1) \circ \vartheta^n = {\widetilde T}^{-1}(n-m) \circ \vartheta^n+1$ and so, isolating the last term, we obtain \[ \pi({\mathbb{S}}^n_{m-1}) = \pi({\mathbb{S}}^n_m) + {\mathbb{Y}} \big( {\widetilde T}^{-1}(n-m+1) \big) \circ \vartheta^n. \] Further, for any $k \in {\mathbb{N}}$ we have \[ {\mathbb{Y}}\left({\widetilde T}^{-1}(k)\right) = \pi \circ {\mathcal Q}\left({\widetilde T}^{-1}(k)\right) = \pi \circ \Upsilon_{\zeta_0}({\mathcal P}_{\tau_0-1}) \circ \theta_{T({\widetilde T}^{-1}(k))-1} \leq \pi({\mathcal P}_{\tau_0-1}) \circ \theta_{T({\widetilde T}^{-1}(k))-1}. \] As $\tau_0 \circ \theta_{T({\widetilde T}^{-1}(k))-1} = 1$, this gives ${\mathbb{Y}}({\widetilde T}^{-1}(k)) \leq \pi\left({\mathcal P}_{T({\widetilde T}^{-1}(k)) - 1}\right)$ and consequently, \[ {\mathbb{Y}}\left({\widetilde T}^{-1}(n-m+1)\right) \circ \vartheta^n \leq \pi\left({\mathcal P}_{T({\widetilde T}^{-1}(n-m+1))-1}\right) \circ \vartheta^n = \pi\left({\mathcal P}_{n - T({\widetilde T}^{-1}(n-m+1)) \circ \vartheta^n}\right). \] The condition ${\widetilde T}^{-1}(n-m+1) \circ \vartheta^n = {\widetilde T}^{-1}(n-m) \circ \vartheta^n+1$ means that $n-m+1$ is a weak ascending ladder height time (for the dual process $S \circ \vartheta^n$) and thus implies the relation $T({\widetilde T}^{-1}(n-m+1)) \circ \vartheta^n = n-m+1$. Plugging this relation into the previous display completes the proof.
\end{proof} For simplicity, let $m_p = \mrca{\bar \varphi(pt)}{[\varphi_\infty(pt)]}$. Since we have ${\mathbb{H}}(m_p) \leq {\mathbb{H}}(\bar \varphi(pt))$ as well as ${\mathbb{H}}(m_p) \leq {\mathbb{H}}([\varphi_\infty(pt)])$, the triangle inequality reads \[ \left \lvert {\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \varepsilon_p \left( {\mathbb{H}}(\bar \varphi(pt)) - {\mathbb{H}}(m_p) \right) + \varepsilon_p \left( {\mathbb{H}}([\varphi_\infty(pt)]) - {\mathbb{H}}(m_p) \right) \] and since $m_p \leq \min (\bar \varphi(pt), [\varphi_\infty(pt)])$, neglecting the non-negative terms $D \geq 0$ in~\eqref{id-diff} gives \[ \left \lvert {\mathbb{H}}_p(\bar \varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \varepsilon_p \pi \left( {\mathbb{S}}^{\bar \varphi(pt)}_{m_p} \right) + \varepsilon_p \pi \left( {\mathbb{S}}^{[\varphi_\infty(pt)]}_{m_p} \right). \] In particular, we only need to show that $\varepsilon_p \pi({\mathbb{S}}^{\phi_p}_{m_p}) \Rightarrow 0$ for $\phi_p = \bar \varphi(pt)$ or $[\varphi_\infty(pt)]$. Using the monotonicity of $\pi({\mathbb{S}}^n_m)$ in~$m$ given by Lemma~\ref{lemma:bound-m}, we obtain for any $0 < \delta < \varphi_\infty(t)$ \[ {\mathbb{P}}_p \left( \varepsilon_p \pi \left( {\mathbb{S}}^{\phi_p}_{m_p} \right) \geq \eta \right) \leq {\mathbb{P}}_p \left( m_p \leq \phi_p - [p \delta] \right) + {\mathbb{P}}_p \left( \varepsilon_p \pi \left( {\mathbb{S}}^{\phi_p}_{\phi_p - [p \delta]} \right) \geq \eta \right). \] The second term converges to ${\mathbb{P}} \left( {\mathbb{H}}_\infty(\delta) \geq \eta \right)$: for $\phi_p = [\varphi_\infty(pt)]$ this is a consequence of~\ref{A.Hc}, and for $\phi_p = \bar \varphi(pt)$ this was proved in Corollary~\ref{lemma:spine-seen} for $\delta$ small enough. Since this inequality holds for every $\delta$ small enough and since ${\mathbb{H}}_\infty$ is almost surely continuous at $0$ by Condition~\ref{A.Hc}, in order to conclude the proof it remains to show that ${\mathbb{P}}_p(m_p \leq \phi_p - [p \delta]) \to 0$ as $p \to \infty$ for each fixed $0 < \delta < \varphi_\infty(t)$, which we do now. By Assumption~\ref{A.HC}, the genealogical contour process ${\mathcal C}_p$ converges weakly to a continuous process ${\mathcal C}_\infty$. Since $\phi_p/p \Rightarrow \varphi_\infty(t)$, this implies that ${\mathcal C}_p(t_p) - \inf_{I_p} {\mathcal C}_p \Rightarrow 0$ with $t_p = \phi_p/p$ or $t_p = \varphi_\infty(t)$ and $I_p = [\min(\phi_p/p, \varphi_\infty(t)), \max(\phi_p/p, \varphi_\infty(t))]$. By classical arguments on discrete trees, this implies that the genealogical distance between $\phi_p$ and $m_p$, rescaled by $\bar \varepsilon_p$, converges to $0$, i.e., $\bar \varepsilon_p ({\mathcal H}(\phi_p) - {\mathcal H}(m_p)) \Rightarrow 0$. Therefore, for any $\eta > 0$ we obtain \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( m_p \leq \phi_p - [p \delta] \right) \leq \limsup_{p \to \infty} {\mathbb{P}}_p \left( m_p \leq \phi_p - [p \delta], \bar \varepsilon_p ({\mathcal H}(\phi_p) - {\mathcal H}(m_p)) \leq \eta \right). \] Since $L(n-m) \circ \vartheta^m = 0$ if and only if $m = \mrca{m}{n}$, Proposition~\ref{prop:shifts} implies that ${\mathcal H}(n) - {\mathcal H}(m) = \pi({\mathbb{S}}^n_m) \circ {\mathcal G} = \pi(\rho^n_m)$ for any $0 \leq m \leq n$ with $m \in {\mathcal A}(n) \cap {\mathbb{R}}_+$. In particular, it follows by definition of $m_p$ that ${\mathcal H}(\phi_p) - {\mathcal H}(m_p) = \pi(\rho^{\phi_p}_{m_p})$.
Since $\pi(\rho^n_m)$ is non-increasing in $m$ by Lemma~\ref{lemma:bound-m}, this gives \[ {\mathbb{P}}_p \left( m_p \leq \phi_p - [p \delta], \bar \varepsilon_p ({\mathcal H}(\phi_p) - {\mathcal H}(m_p)) \leq \eta \right) \leq {\mathbb{P}}_p \left( \bar \varepsilon_p \pi\left(\rho^{\phi_p}_{\phi_p - [p \delta]} \right) \leq \eta \right). \] Since this term converges to ${\mathbb{P}}({\mathcal H}_\infty(\delta) \leq \eta)$ (for $\phi_p = \bar \varphi(pt)$ this comes from Corollary~\ref{lemma:spine-seen} and for $\phi_p = [\varphi_\infty(pt)]$ this is the convergence of the genealogical height process assumed in~\ref{A.HC}), we finally obtain \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( m_p \leq \phi_p - [p \delta] \right) \leq {\mathbb{P}} \left( {\mathcal H}_\infty(\delta) \leq \eta \right). \] Letting $\eta \to 0$ in the last display therefore concludes the proof thanks to Condition~\ref{A.HC}. \subsection{Proof of Proposition~\ref{prop:Delta}} \label{sub:proof-tightness-Delta} In order to prove this result, we introduce two intermediate height processes. We enrich the probability space with a random variable $\widetilde {\mathcal P}$ which under ${\mathbb{P}}_p$ is equal in distribution to ${\mathcal P}_1$ and independent of the sequence $({\mathcal P}_{\bar \varphi(pt) + k}, k \geq 1)$, and we consider the spine process $\widetilde {\mathbb{S}}_{(p)} = (\widetilde {\mathbb{S}}^n_{(p)}, n \geq 0)$ defined from the sequence $(\widetilde {\mathcal P}, {\mathcal P}_{\bar \varphi(pt) + 1}, \ldots)$. For $k \geq 0$ we then let \[ \widehat {\mathbb{H}}^p(k) = \pi\left({\mathbb{S}}_{\bar \varphi(pt)}^{\bar \varphi(pt)+k}\right) \ \text{ and } \ \widetilde {\mathbb{H}}^p(k) = \pi\left(\widetilde {\mathbb{S}}_{(p)}^k \right). \] \begin{lemma}\label{lemma:H-H'} $\widetilde {\mathbb{H}}^p$ under ${\mathbb{P}}_p$ is equal in distribution to ${\mathbb{H}}$ under ${\mathbb{P}}_p$. Moreover, we have $\varepsilon_p \sup_{k \geq 0} \lvert \widetilde {\mathbb{H}}^p(k) - \widehat {\mathbb{H}}^p(k) \rvert \Rightarrow 0$. \end{lemma} \begin{proof} The first part of the lemma directly follows from the strong Markov property. As for the second part, Lemma~\ref{lemma:bound-m} gives \[ 0 \leq \pi\left({\mathbb{S}}_{\bar \varphi(pt)}^{\bar \varphi(pt)+k}\right) - \pi\left({\mathbb{S}}_{\bar \varphi(pt)+1}^{\bar \varphi(pt)+k}\right) \leq \pi({\mathcal P}_{\bar \varphi(pt)}) \ \text{ and } \ 0 \leq \pi\left(\widetilde {\mathbb{S}}_{(p)}^k\right) - \pi\left({\mathbb{S}}_{\bar \varphi(pt)+1}^{\bar \varphi(pt)+k}\right) \leq \pi(\widetilde {\mathcal P}), \] which gives $\lvert \widetilde {\mathbb{H}}^p(k) - \widehat {\mathbb{H}}^p(k) \rvert \leq \pi(\widetilde {\mathcal P}) + \pi({\mathcal P}_{\bar \varphi(pt)})$. Since this bound is uniform in $k$ and both $\widetilde {\mathcal P}$ and ${\mathcal P}_{\bar \varphi(pt)}$ converge weakly (by Condition~\textnormal{\ref{cond-C}} and Corollary~\ref{lemma:spine-seen}), multiplying by $\varepsilon_p$ and letting $p \to \infty$ gives the result. \end{proof} We now turn to the proof of Proposition~\ref{prop:Delta}. In the rest of the proof, let $\Delta_p = \varphi(pt) - \bar \varphi(pt)$. Since by definition \[ \varphi(pt) = \inf \left\{ k \geq 1: 2 {\mathcal V}(k-1) - {\mathbb{H}}(k) \geq pt \right\} \ \text{ and } \ \bar \varphi(pt) = \inf \left\{ k \geq 1: 2 {\mathcal V}(k) \geq pt \right\}, \] it follows that \[ \Delta_p = \inf \left\{ k \geq 0 : 2 {\mathcal V}(\bar \varphi(pt)+k-1) - {\mathbb{H}}(\bar \varphi(pt)+k) \geq pt \right\}.
\] Defining $\bar {\mathcal V}_p(k) = {\mathcal V}(\bar \varphi(pt) + k) - {\mathcal V}(\bar \varphi(pt))$ for $k \geq -1$, we obtain \[ \Delta_p = \inf \left\{ k \geq 0 : 2 \bar {\mathcal V}_p(k-1) - {\mathbb{H}}(\bar \varphi(pt)) \geq {\mathbb{H}}(\bar \varphi(pt)+k) - {\mathbb{H}}(\bar \varphi(pt)) - (2 {\mathcal V}(\bar \varphi(pt)) - pt) \right\} \] and so according to Proposition~\ref{prop:shifts}, \begin{multline} \label{eq:formula-Delta} \Delta_p = \inf \left\{ k \geq 0 : \right.\\ \left. 2 \bar {\mathcal V}_p(k-1) - {\mathbb{H}}(\bar \varphi(pt)) \geq \pi \left( {\mathbb{S}}^{\bar \varphi(pt) + k}_{\bar \varphi(pt)} \right) - D_{L(k) \circ \vartheta^{\bar \varphi(pt)}} \left( {\mathbb{S}}^{\bar \varphi(pt)}_0 \right) - (2 {\mathcal V}(\bar \varphi(pt)) - pt) \right\}. \end{multline} Since $D_k(\nu) \geq 0$ and $2 {\mathcal V}(\bar \varphi(pt)) \geq pt$, we obtain by definition of $\widehat {\mathbb{H}}^p$ that \[ \Delta_p \leq \inf \left\{ k \geq 0 : 2 \bar {\mathcal V}_p(k-1) - {\mathbb{H}}(\bar \varphi(pt)) \geq \widehat {\mathbb{H}}^p(k) \right \}. \] In particular, if $\sigma_p = [\eta {\mathbb{H}}(\bar \varphi(pt))]$, then in order to prove the result it is enough to show that ${\mathbb{P}}_p \left(2 \bar {\mathcal V}_p(\sigma_p - 1) - {\mathbb{H}}(\bar \varphi(pt)) \geq \widehat {\mathbb{H}}^p(\sigma_p) \right) \to 1$, which we rewrite as \[ {\mathbb{P}}_p \left(2 \bar {\mathcal V}_p(\sigma_p - 1) - \sigma_p / \eta \geq \widehat {\mathbb{H}}^p(\sigma_p) \right) \mathop{\longrightarrow}_{p \to \infty} 1. \] Since for any $\gamma > 0$ we have \[ {\mathbb{P}}_p \left(2 \bar {\mathcal V}_p(\sigma_p - 1) - \sigma_p / \eta \geq \widehat {\mathbb{H}}^p(\sigma_p) \right) \geq {\mathbb{P}}_p \left(2 \bar {\mathcal V}_p(\sigma_p - 1) - \sigma_p / \eta \geq \gamma / \varepsilon_p \geq \widehat {\mathbb{H}}^p(\sigma_p)\right), \] the desired convergence is implied by the following two relations: \begin{equation} \label{eq:goal-Delta} \varepsilon_p \widehat {\mathbb{H}}^p(\sigma_p) \Rightarrow 0 \ \text{ and } \ \liminf_{p \to \infty} \ {\mathbb{P}}_p \left( 2 \bar {\mathcal V}_p(\sigma_p - 1) - \sigma_p / \eta \geq \gamma / \varepsilon_p \right) \mathop{\longrightarrow}_{\gamma \to 0} 1. \end{equation} Let us begin by proving the first relation $\varepsilon_p \widehat {\mathbb{H}}^p(\sigma_p) \Rightarrow 0$. Corollary~\ref{cor:H-fdd} combined with~\eqref{eq:remaining-1} shows that $\varepsilon_p \sigma_p \Rightarrow \eta {\mathbb{H}}_\infty(\varphi_\infty(t))$, and since $p \varepsilon_p \to \infty$ by~\ref{A.epsilon}, it follows that $\sigma_p / p \Rightarrow 0$. Since $\widetilde {\mathbb{H}}^p$ is equal in distribution to ${\mathbb{H}}$ by Lemma~\ref{lemma:H-H'} and $\sigma_p$ is independent of $\widetilde {\mathbb{H}}^p$, we obtain in view of Remark~\ref{rem:conv-seq} that $\varepsilon_p \widetilde {\mathbb{H}}^p(\sigma_p) \Rightarrow 0$. The second part of Lemma~\ref{lemma:H-H'} finally entails the desired result $\varepsilon_p \widehat {\mathbb{H}}^p(\sigma_p) \Rightarrow 0$. We now prove the second convergence in~\eqref{eq:goal-Delta}.
By construction, $\bar {\mathcal V}_p$ is a renewal process independent of ${\mathbb{H}}(\bar \varphi(pt))$, and thus independent of $\sigma_p$: Lemma~\ref{lemma:triangular-LLN} then implies that $\bar {\mathcal V}_p(\sigma_p-1)/\sigma_p \Rightarrow \beta^*$ and since, as already mentioned, $\varepsilon_p \sigma_p \Rightarrow \eta {\mathbb{H}}_\infty(\varphi_\infty(t))$, we get \[ \liminf_{p \to \infty} {\mathbb{P}}_p \left( 2 \bar {\mathcal V}_p(\sigma_p - 1) - \sigma_p / \eta \geq \gamma / \varepsilon_p \right) \geq {\mathbb{P}}\left( (2 \beta^* \eta - 1) {\mathbb{H}}_\infty(\varphi_\infty(t)) \geq \gamma \right). \] Since $(2 \beta^* \eta - 1)>0$ and ${\mathbb{H}}_\infty(\varphi_\infty(t))>0$ a.s.\ by~\ref{A.Hc}, the result follows by letting $\gamma \to 0$. \subsection{Proof of~\eqref{eq:remaining-2}} \label{sub:proof-remaining-2} As in the previous subsection, let $\Delta_p = \varphi(pt) - \bar \varphi(pt)$. Proposition~\ref{prop:shifts} gives \begin{equation} \label{eq:remaining-2-decomposition} {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\bar \varphi_p(t)) = \varepsilon_p \pi\left({\mathbb{S}}_{\bar \varphi(pt)}^{\varphi(pt)}\right) - \varepsilon_p D_{L'_p(\Delta_p)} \left({\mathbb{S}}^{\bar \varphi(pt)}_0 \right), \end{equation} where we have defined $L'_p(k) = L(k) \circ \vartheta^{\bar \varphi(pt)}$. We now show that each term of the right-hand side of~\eqref{eq:remaining-2-decomposition} vanishes, and we start with the second one, i.e., we show that \begin{equation} \label{eq:D-vanish} \varepsilon_p D_{L'_p(\Delta_p)} \left({\mathbb{S}}^{\bar \varphi(pt)}_0\right) \Rightarrow 0. \end{equation} Since $D_{L'_p(k)} ({\mathbb{S}}^{\bar \varphi(pt)}_0)$ is non-decreasing in $k$ (which is not hard to prove) and since the sequence $(\varepsilon_p \Delta_p, p \geq 1)$ is tight, it is enough to show that \begin{equation} \label{eq:D-vanish-2} \varepsilon_p D_{L(t_p) \circ \vartheta^{\bar \varphi(pt)}}\left({\mathbb{S}}^{\bar \varphi(pt)}_0 \right) \Rightarrow 0 \end{equation} for some deterministic integer-valued sequence $(t_p)$ with $\varepsilon_p t_p \to \infty$: we will consider $t_p = [(p / \varepsilon_p)^{1/2}]$, which satisfies in addition $t_p / p \to 0$. In order to prove~\eqref{eq:D-vanish-2}, we fix until further notice $\gamma, \gamma' > 0$ and two integer-valued sequences $(\gamma_p)$, $(\gamma'_p)$ such that $\gamma_p / p \to \gamma$ (in particular $t_p / \gamma_p \to 0$) and $\gamma'_p / (p\bar \varepsilon_p) \to \gamma'$. Since both $D_k({\mathbb{S}}^{{\bar \varphi}(pt)}_0)$ and $L(k) \circ \vartheta^{\bar \varphi(pt)}$ are non-decreasing in $k$, it follows that for $p$ large enough such that $t_p \leq \gamma_p$, we have \[ {\mathbb{P}}_p \left( \varepsilon_p D_{L(t_p) \circ \vartheta^{\bar \varphi(pt)}}({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right) \leq {\mathbb{P}}_p \left( L(\gamma_p) \circ \vartheta^{\bar \varphi(pt)} \geq \gamma'_p \right) + {\mathbb{P}}_p \left( \varepsilon_p D_{\gamma_p'}({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right). \] By definition of $L$ and $S$, the first term is equal to \[ {\mathbb{P}}_p \left( L(\gamma_p) \circ \vartheta^{\bar \varphi(pt)} \geq \gamma'_p \right) = {\mathbb{P}}_p \left( \min_{i = 0, \ldots, \gamma_p} \sum_{k=\bar \varphi(pt)}^{\bar \varphi(pt) + i} (\lvert {\mathcal P}_k \rvert - 1) \leq -\gamma'_p \right).
\] Isolating the term $\lvert {\mathcal P}_{\bar \varphi(pt)} \rvert - 1$ and using that the ${\mathcal P}_k$'s for $k \geq \bar \varphi(pt) + 1$ are i.i.d., we further get \[ {\mathbb{P}}_p \left( L(\gamma_p) \circ \vartheta^{\bar \varphi(pt)} \geq \gamma'_p \right) \leq {\mathbb{P}}_p \left( \lvert {\mathcal P}_{\bar \varphi(pt)} \rvert \leq -\frac{\gamma'_p}{2} + 1 \right) + {\mathbb{P}}_p \left( \min_{i = 1, \ldots, \gamma_p} \sum_{k=1}^i (\lvert {\mathcal P}_k \rvert - 1) \leq - \frac{\gamma'_p}{2} \right). \] The first term vanishes by~\ref{A.epsilon} and Corollary~\ref{lemma:spine-seen}, and so rescaling the second term by $p \bar \varepsilon_p$ and using~\ref{A.X}, we obtain \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( L(\gamma_p) \circ \vartheta^{\bar \varphi(pt)} \geq \gamma'_p \right) \leq {\mathbb{P}} \left( \inf_{0 \leq t \leq \gamma} S_\infty(t) \leq -\frac{\gamma'}{2} \right). \] Letting first $p \to \infty$ and then $\gamma \downarrow 0$, we thus obtain \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( \varepsilon_p D_{L(t_p) \circ \vartheta^{\bar \varphi(pt)}}({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right) \leq \limsup_{p \to \infty} \ {\mathbb{P}}_p \left( \varepsilon_p D_{\gamma'_p} ({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right). \] Fix now some $0 < \delta < t / (2 \beta^*)$: by definition~\eqref{eq:def-D} of $D$, \[ D_{\gamma'_p}({\mathbb{S}}^{\bar \varphi(pt)}_0) \leq \left( \sum_{i: 0 < T(i) \leq \tau_{\gamma'_p}} {\mathbb{Y}}(i) \right) \circ \vartheta^{\bar \varphi(pt)} \] and so on the event $\{ \tau_{\gamma'_p} \circ \vartheta^{\bar \varphi(pt)} \leq [p \delta] \}$, we get \[ D_{\gamma'_p}({\mathbb{S}}^{\bar \varphi(pt)}_0) \leq \left( \sum_{i=1}^{{\widetilde T}^{-1}([p \delta])} {\mathbb{Y}}(i) \right) \circ \vartheta^{\bar \varphi(pt)} = \pi \left( {\mathbb{S}}^{\bar \varphi(pt)}_{\bar \varphi(pt) - [p \delta]} \right), \] where we have used~\eqref{eq:formula-pi(rhonm)} to derive the last equality. In particular, \[ {\mathbb{P}}_p \left( \varepsilon_p D_{\gamma_p'}({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right) \leq {\mathbb{P}}_p \left(\tau_{\gamma'_p} \circ \vartheta^{\bar \varphi(pt)} > [p \delta] \right) + {\mathbb{P}}_p \left( \varepsilon_p \pi\left( {\mathbb{S}}^{{\bar \varphi}(pt)}_{{\bar \varphi}(pt)- [p \delta]} \right) > \eta\right) \] and since by definition we have \[ {\mathbb{P}}_p \left(\tau_{\gamma'_p} \circ \vartheta^{\bar \varphi(pt)} > [p \delta] \right) = {\mathbb{P}}_p \left( \sup_{k = 0, \ldots, [p\delta]} S(k) \circ \vartheta^{\bar \varphi(pt)} \leq \gamma_p' \right), \] Corollary~\ref{lemma:spine-seen} implies that \[ \limsup_{p \to \infty} \ {\mathbb{P}}_p \left( \varepsilon_p D_{\gamma'_p} ({\mathbb{S}}^{{\bar \varphi}(pt)}_0) \geq \eta \right) \leq {\mathbb{P}}\left(\sup_{0 \leq t \leq \delta} S_\infty(t) \leq \gamma' \right) + {\mathbb{P}} \left( {\mathbb{H}}_\infty(\delta) \geq \eta \right). \] Letting first $\gamma' \to 0$ and then $\delta \to 0$ concludes the proof of~\eqref{eq:D-vanish-2}, and so also of~\eqref{eq:D-vanish}. \\ We now show that the first term in the right-hand side of~\eqref{eq:remaining-2-decomposition} also vanishes.
In view of~\eqref{eq:formula-Delta} and using $2 {\mathcal V}(\bar \varphi(pt)) - pt \leq 2V_{\bar \varphi(pt)}$, we obtain \[ \varepsilon_p \pi \left( {\mathbb{S}}^{\varphi(pt)}_{\bar \varphi(pt)} \right) \leq \varepsilon_p \left( 2 \bar {\mathcal V}_p(\Delta_p-1) - {\mathbb{H}}(\bar \varphi(pt)) \right) + \varepsilon_p D_{L(\Delta_p) \circ \vartheta^{\bar \varphi(pt)}} \left( {\mathbb{S}}^{\bar \varphi(pt)}_0 \right) + 2 \varepsilon_p V_{\bar \varphi(pt)}. \] We have just proved that the second term vanishes (in law), and since the third term also vanishes by Lemma~\ref{lemma:renewal-V}, it only remains to control the first term. Since $\bar {\mathcal V}_p$ is an increasing sequence, for any $\gamma, \eta > 0$ we have \begin{multline*} {\mathbb{P}}_p \left( \varepsilon_p \left( 2 \bar {\mathcal V}_p(\Delta_p-1) - {\mathbb{H}}(\bar \varphi(pt)) \right) \geq \gamma \right) \leq {\mathbb{P}}_p \left( \Delta_p > \eta {\mathbb{H}}(\bar \varphi(pt)) \right)\\ + {\mathbb{P}}_p \left( \varepsilon_p \left( 2 \bar {\mathcal V}_p([\eta {\mathbb{H}}(\bar \varphi(pt))]) - {\mathbb{H}}(\bar \varphi(pt)) \right) \geq \gamma \right). \end{multline*} Choose now $\eta > 1/(2 \beta^*)$, so that the first term vanishes by Proposition~\ref{prop:Delta}. For the second term, we note that $\bar {\mathcal V}_p$ is independent of ${\mathbb{H}}(\bar \varphi(pt))$ and obtain, by arguments similar to those in the proof of Proposition~\ref{prop:Delta}, \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( \varepsilon_p \left( 2 \bar {\mathcal V}_p([\eta {\mathbb{H}}(\bar \varphi(pt))]) - {\mathbb{H}}(\bar \varphi(pt)) \right) \geq \gamma \right) \leq {\mathbb{P}} \left( (2 \beta^* \eta - 1) {\mathbb{H}}_\infty(\varphi_\infty(t)) \geq \gamma \right). \] Since ${\mathbb{H}}_\infty(\varphi_\infty(t))$ is almost surely finite, letting $\eta \to 1/(2 \beta^*)$ concludes the proof. \subsection{Proof of Theorem~\ref{thm:fdd-trees}} \label{sub:proof-trees} In this section, we assume in addition to everything else that $({\mathbb{Y}}^*_p)$ is uniformly integrable and we prove Theorem~\ref{thm:fdd-trees}. \begin{lemma}\label{lem:D-X} Let $(\ell(p), p \geq 0)$ be a deterministic sequence in ${\mathbb{R}}_+$ going to $\infty$. Then for every $t>0$ we have \[ \varepsilon_p \left( D_{\ell(p)}({\mathbb{S}}^{[pt]}_0) - \alpha^* D_{\ell(p)}({\mathbb{S}}^{[pt]}_0 \circ {\mathcal G}) \right) \Rightarrow 0. \] \end{lemma} \begin{proof} Let ${\widetilde T}^{-1}_p = {\widetilde T}^{-1}(\min(\tau_{\ell(p)}, [pt]))$ and $R_p = \Indicator{0 < \tau_{\ell(p)} \leq [pt]} \pi(\mu_{\ell(p)})$, so that by definition~\eqref{eq:def-D} of $D$ we have \[ D_{\ell(p)}({\mathbb{S}}^{[pt]}_0) = \left( \sum_{i=1}^{{\widetilde T}^{-1}_p} {\mathbb{Y}}(i) \right) \circ \vartheta^{[pt]} - R_p \circ \vartheta^{[pt]}. \] Using the various facts that $D_{\ell(p)}({\mathbb{S}}^{[pt]}_0 \circ {\mathcal G}) = D_{\ell(p)}({\mathbb{S}}^{[pt]}_0) \circ {\mathcal G}$, that ${\mathbb{Y}}(i) \circ {\mathcal G} = \pi(\mu_{\ell(p)}) \circ {\mathcal G} = 1$, that ${\widetilde T}^{-1}_p$ and $\tau_{\ell(p)}$ are genealogical quantities and finally that $\vartheta^{[pt]}$ and ${\mathcal G}$ commute, composing on the right with ${\mathcal G}$ in the previous display gives \[ D_{\ell(p)}({\mathbb{S}}^{[pt]}_0 \circ {\mathcal G}) = \left( {\widetilde T}^{-1}_p - \Indicator{\tau_{\ell(p)} \leq [pt]} \right) \circ \vartheta^{[pt]}.
\] By duality, we therefore only have to show that the three quantities \[ \varepsilon_p R_p, \ \varepsilon_p \Indicator{\tau_{\ell(p)} \leq [pt]} \ \text{ and } \ \varepsilon_p \sum_{k = 1}^{{\widetilde T}^{-1}_p} \left( {\mathbb{Y}}(k) - {\mathbb{E}}({\mathbb{Y}}^*_p) \right) \] converge weakly to $0$. The second one obviously does since $\varepsilon_p \to 0$. For the third one we proceed as in the proof of Theorem~\ref{thm:H-fdd}: indeed, $\varepsilon_p {\widetilde T}^{-1}_p$ is tight (because it is smaller than $\varepsilon_p {\widetilde T}^{-1}([pt])$ by monotonicity of ${\widetilde T}^{-1}$, and the latter quantity is equal in distribution to ${\mathcal H}_p(t)$), which is the only assumption necessary for the proof of Theorem~\ref{thm:H-fdd} to go through. We now prove that $\varepsilon_p R_p \Rightarrow 0$, which will conclude the proof. First of all, let $\Gamma$ be such that $\tau_{\ell(p)} = T(\Gamma)$: then by definition, $\mu_{\ell(p)} = \mu_0 \circ \theta_{T(\Gamma-1)}$ and so $\pi(\mu_{\ell(p)}) = \pi \circ \mu_0 \circ \theta_{T(\Gamma-1)} = {\mathbb{Y}}(\Gamma-1)$. In addition, if $\tau_{\ell(p)} \leq [pt]$ then $\Gamma \leq {\widetilde T}^{-1}([pt])$ and so $R_p \leq \max_{k = 1, \ldots, {\widetilde T}^{-1}([pt])} {\mathbb{Y}}(k)$. Next, we fix some $N \geq 0$, consider $N_p = [N / \varepsilon_p]$ and use the previous inequality to write \begin{multline} \label{eq:end} {\mathbb{P}}_p \left( \varepsilon_p \max_{k = 1, \ldots, {\widetilde T}^{-1}([pt])} {\mathbb{Y}}(k) \geq \eta \right) \leq {\mathbb{P}}_p \left( {\widetilde T}^{-1}([pt]) \geq N_p \right)\\ + {\mathbb{P}}_p \left( \max_{k=1, \ldots, \min( N_p, G-1)} {\mathbb{Y}}(k) \geq \eta_p \right) \end{multline} where $G = \inf\{k \geq 0: T(k) = \infty\}$ and $\eta_p = \eta / \varepsilon_p$. For the first term of the right-hand side, we note that ${\widetilde T}^{-1}([pt])$ is by duality equal in distribution to ${\mathcal H}(pt)$ to get \[ \limsup_{p \to \infty} {\mathbb{P}}_p \left( {\widetilde T}^{-1}([pt]) \geq N_p \right) = \limsup_{p \to \infty} {\mathbb{P}}_p \left( {\mathcal H}_p(t) \geq \varepsilon_p N_p \right) \mathop{\longrightarrow}_{N \to \infty} 0. \] It remains to control the second term in the right-hand side of~\eqref{eq:end}: since the $({\mathbb{Y}}(k), k = 1, \ldots, G-1)$ are i.i.d.\ by Lemma~\ref{lemma:y}, we have \[ {\mathbb{P}}_p \left( \max_{k=1, \ldots, \min( N_p, G-1)} {\mathbb{Y}}(k) \geq \eta_p \right) \leq 1 - \left[ 1 - {\mathbb{P}} \left( {\mathbb{Y}}^*_p \geq \eta_p \right) \right]^{N_p}. \] This last bound vanishes because $N_p {\mathbb{P}}({\mathbb{Y}}^*_p \geq \eta_p) \to 0$, a direct consequence of the uniform integrability of the ${\mathbb{Y}}^*_p$ together with the following bound: \[ N_p {\mathbb{P}} \left( {\mathbb{Y}}^*_p \geq \eta_p \right) \leq \frac{N}{\eta} {\mathbb{E}}\left({\mathbb{Y}}^*_p ; {\mathbb{Y}}^*_p \geq \frac{\eta}{\varepsilon_p} \right). \] The proof is complete. \end{proof} In the sequel, for $0 \leq u \leq v$, we define \[ {\mathcal M}(u,v) = \inf_{u \leq t \leq v} {\mathcal C}(t) \ \text{ and } \ {\mathbb{M}}(u,v) = \inf_{u \leq t \leq v} {\mathbb{C}}(t). \] \begin{corollary} \label{cor:end} For any $0<a<b$ we have \[ \varepsilon_p\left( {\mathbb{M}}(K_{[pa]},K_{[pb]}) - \alpha^* {\mathcal M}(2pa,2pb) \right) \Rightarrow 0. \] \end{corollary} \begin{proof} First of all, we note that \[ \varepsilon_p \left( {\mathcal M}(2pa,2pb) - {\mathbb{M}}(K_{[pa]}, K_{[pb]} ) \circ {\mathcal G} \right) \Rightarrow 0.
\] Indeed, this follows from rewriting ${\mathcal M}(2pa, 2pb) = \inf \left\{ {\mathcal C}_p(t): 2 a \leq t \leq 2 b \right\}$ and \[ {\mathbb{M}}(K_{[pa]}, K_{[pb]}) \circ {\mathcal G} = \inf \left\{ {\mathcal C}_p(t): \frac{1}{p} K_{[pa]} \circ {\mathcal G} \leq t \leq \frac{1}{p} K_{[pb]} \circ {\mathcal G} \right\}, \] together with the following two facts: $(1)$ ${\mathcal C}_p \Rightarrow {\mathcal C}_\infty$ with ${\mathcal C}_\infty$ continuous and $(2)$ $p^{-1} K_{[pa]} \circ {\mathcal G} \Rightarrow 2 a$. Therefore, in order to prove the result we only have to prove that \[ \varepsilon_p\left( {\mathbb{M}}(K_{[pa]},K_{[pb]}) - \alpha^* {\mathbb{M}}(K_{[pa]},K_{[pb]}) \circ {\mathcal G} \right) \Rightarrow 0. \] To prove this, we define $L_p = L( [pb] - [pa] )\circ\vartheta^{[pa]}$ and apply Corollary~\ref{cor:formula-min} to write \begin{multline*} \varepsilon_p \left( {\mathbb{M}}(K_{[pa]},K_{[pb]}) - \alpha^* {\mathbb{M}}(K_{[pa]},K_{[pb]}) \circ {\mathcal G} \right) = \varepsilon_p \left( {\mathbb{H}}([pa]) - \alpha^* {\mathcal H}([pa]) \right)\\ - \varepsilon_p\left( D_{L_p}({\mathbb{S}}^{[pa]}_0) - \alpha^* D_{L_p}({\mathbb{S}}^{[pa]}_0)\circ{\mathcal G} \right). \end{multline*} The first term on the right-hand side vanishes by Theorem~\ref{thm:H-fdd}, so we are left with the second term. Since $L_p$ is a genealogical quantity, this term is equal to \[ \varepsilon_p\left( D_{L_p}({\mathbb{S}}^{[pa]}_0) - \alpha^* D_{L_p}({\mathbb{S}}^{[pa]}_0)\circ{\mathcal G} \right) = \varepsilon_p\left( D_{L_p}({\mathbb{S}}^{[pa]}_0) - \alpha^* D_{L_p}({\mathbb{S}}^{[pa]}_0\circ{\mathcal G}) \right) \] and we can now invoke Lemma~\ref{lem:D-X} to conclude that this term vanishes, as $L_p$ is independent of ${\mathbb{S}}^{[pa]}_0$ and converges weakly to $\infty$. This proves the result. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:fdd-trees}] In order to prove Theorem~\ref{thm:fdd-trees} we have to show that \[ \varepsilon_p \left( {\mathbb{M}}(ps, pt) - \alpha^* {\mathcal M}(2\varphi_\infty(ps), 2\varphi_\infty(pt)) \right) \Rightarrow 0. \] Since for any $t \in {\mathbb{R}}_+$ we have $p^{-1} K_{[pt]} \Rightarrow 2 \beta^* t$, for any $0 < \gamma < t$ we have ${\mathbb{P}}_p(E_p(t,\gamma)) \to 1$ as $p \to \infty$, where $E_p(t, \gamma)$ is the event \[ E_p(t, \gamma) = \left\{ K_{[\varphi_\infty(pt - p\gamma)]} \leq pt \leq K_{[\varphi_\infty(pt + p\gamma)]} \right\}. \] Thus in the sequel, for any $0 < \gamma < s < t$ we can assume that the event $E_p(s, \gamma) \cap E_p(t, \gamma)$ holds. By monotonicity, on this event we have \[ {\mathbb{M}} \left( K_{[\varphi_\infty(ps - p\gamma)]}, K_{[\varphi_\infty(pt + p\gamma)]} \right) \leq {\mathbb{M}} \left( ps, pt \right) \leq {\mathbb{M}} \left( K_{[\varphi_\infty(ps + p\gamma)]}, K_{[\varphi_\infty(pt - p\gamma)]} \right).
\] Thus defining $a = \varphi_\infty(ps)$, $b = \varphi_\infty(pt)$, $a^\pm = [\varphi_\infty(ps \pm p \gamma)]$ and $b^\pm = [\varphi_\infty(pt \pm p \gamma)]$, we have \begin{multline*} \left \lvert {\mathbb{M}}(ps, pt) - \alpha^* {\mathcal M} \left( 2\varphi_\infty(ps), 2\varphi_\infty(pt) \right) \right \rvert\\ \leq \big \lvert {\mathbb{M}} \left( K_{a^+}, K_{b^-} \right) - \alpha^* {\mathcal M} \left( 2a, 2b \right) \big \rvert + \left \lvert{\mathbb{M}} \left( K_{a^-}, K_{b^+} \right) - \alpha^* {\mathcal M} \left( 2a, 2b \right) \right \rvert \end{multline*} and continuing with the triangle inequality, we obtain \begin{multline*} \left \lvert {\mathbb{M}}(ps, pt) - \alpha^* {\mathcal M} \left( 2\varphi_\infty(ps), 2\varphi_\infty(pt) \right) \right \rvert\\ \leq \big \lvert {\mathbb{M}} \left( K_{a^+}, K_{b^-} \right) - \alpha^* {\mathcal M} \left( 2a^+, 2b^- \right) \big \rvert + \left \lvert{\mathbb{M}} \left( K_{a^-}, K_{b^+} \right) - \alpha^* {\mathcal M} \left( 2a^-, 2b^+ \right) \right \rvert\\ + \alpha^* \left \lvert {\mathcal M} \left( 2a^+, 2b^- \right) - {\mathcal M} \left(2a , 2b \right) \right \rvert + \alpha^* \left \lvert {\mathcal M} \left( 2a^-, 2b^+ \right) - {\mathcal M} \left(2a , 2b \right) \right \rvert. \end{multline*} After multiplying by $\varepsilon_p$, the two terms of the second line vanish as $p \to \infty$ by Corollary~\ref{cor:end}; letting then $\gamma \to 0$ makes the terms of the third line vanish as well, by virtue of the convergence ${\mathcal C}_p \Rightarrow {\mathcal C}_\infty$ with ${\mathcal C}_\infty$ continuous. The proof of Theorem~\ref{thm:fdd-trees} is complete. \end{proof} \section{Some examples where tightness fails} \label{sec:example} In the Galton--Watson case, if the height process converges in the sense of finite-dimensional distributions toward a c\`adl\`ag process, then one actually only needs mild additional assumptions in order to get weak convergence in a functional sense of both the height and contour processes, essentially assumption~(H3c) discussed after Corollary~\ref{cor:H-fdd}. For instance, we automatically get weak convergence in the non-triangular case where the offspring distribution does not depend on $p$. In this section we consider simple examples where the genealogical height and contour processes of the corresponding CMJ trees converge in the sense of finite-dimensional distributions but not necessarily in a functional sense. In contrast to the Galton--Watson case, we show that this can happen even in the non-triangular case. For these examples, all the assumptions of the main results of the present paper (namely Corollary~\ref{cor:H-fdd} and Theorem~\ref{thm:C-fdd}) hold, which shows that further conditions are called for in order to strengthen these results to functional convergence. Throughout this section, we assume that $(V^*_p, {\mathcal P}^*_p)$ is equal in distribution to $(V^*, {\mathcal P}^*)$, independent of $p$. We let $\xi = \lvert {\mathcal P}^* \rvert$ and assume that its distribution is a critical offspring distribution in the domain of attraction of an $\alpha$-stable law with $\alpha \in (1,2)$. Then, it is known that for the choice $\varepsilon_p = \bar \varepsilon_p = p^{-(1-1/\alpha)}$, assumptions~(H3a)--(H3c) and~\ref{A.epsilon}--\ref{A.HC} hold. In particular, $S_p$ has jumps of the order of one, which means that, typically, some nodes have of the order of $p \varepsilon_p = p^{1/\alpha}$ children: these nodes are called macroscopic.
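For concreteness (this numerical instance is only meant to help follow the scaling arguments of this section), if $\alpha = 3/2$ then $\varepsilon_p = \bar \varepsilon_p = p^{-1/3}$, macroscopic nodes have of the order of $p \varepsilon_p = p^{2/3}$ children, and the exponent $2/\alpha - 1$ appearing in the next subsection equals $1/3$.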
\subsection{First family of examples} To start with, consider the case \[ \left( V^*, {\mathcal P}^* \right) = \left( 1+\xi, \xi \delta_1 \right), \] so that ${\mathbb{E}}(V^*) = 2$ and \[ {\mathbb{E}}({\mathbb{Y}}^*) = {\mathbb{E}} \left( \int_0^\infty u {\mathcal P}^*(\mathrm{d} u) \right) = {\mathbb{E}} \left( \xi \right) = 1. \] In particular, assumptions~(H2) and~\ref{A.V} hold. The corresponding CMJ tree is then almost a Galton--Watson tree with offspring distribution that of $\xi$, except that each edge is extended by a length equal to the number of children of the corresponding individual. Since ${\mathbb{H}}$ only depends on the ${\mathcal P}_n$'s but not on the $V_n$'s, and since here all children are born at time $1$, we have ${\mathbb{H}} = {\mathcal H}$ and so ${\mathbb{H}}_p$ converges weakly. On the other hand, macroscopic nodes have, by construction, edges with length of the order of $p^{1/\alpha}$. When the particle traveling along the edges meets such an edge, this makes ${\mathbb{C}}$ go up and then down at rate $\pm 1$ for a duration of order $p^{1/\alpha}$, so that during this time interval ${\mathbb{C}}$ has variation of the order of $p^{1/\alpha}$. Because of the scaling ${\mathbb{C}}_p(t) = p^{-(1-1/\alpha)} {\mathbb{C}}(pt)$, such a time interval corresponds for ${\mathbb{C}}_p$ to a time interval of size $p^{1/\alpha} \times (1/p) = p^{-(1-1/\alpha)}$, during which ${\mathbb{C}}_p$ has variation of the order of $p^{1/\alpha} \times p^{-(1-1/\alpha)} = p^{2/\alpha-1}$. Since $\alpha \in (1,2)$, in the limit we see that each macroscopic node should induce an infinite jump of ${\mathbb{C}}_p$. Since macroscopic nodes are dense, this precludes the tightness of ${\mathbb{C}}_p$. \subsection{Second family of examples} Let us now consider a variation of the above example, where both ${\mathbb{H}}_p$ and ${\mathbb{C}}_p$ fail to converge weakly: here we consider \[ \left( V^*, {\mathcal P}^* \right) = \left( 1+\xi, (\xi-1) \delta_1 + \delta_\xi \right), \] so that ${\mathbb{E}}(V^*) = 2$ and ${\mathbb{E}}({\mathbb{Y}}^*) = {\mathbb{E}}(2\xi-1) = 1$. Again, the corresponding CMJ tree is almost a Galton--Watson tree, with the difference that all but one child are born at time $1$, and the remaining child is born at a time equal to the number of children. The crucial difference with the first family of examples is that now, a macroscopic node also induces an infinite jump of ${\mathbb{H}}_p$, for the exact same reason as before. \\ Let us now push this example a little further, and discuss the claim made in Section~\ref{sec:overview} that a uniform control of the kind \begin{equation} \label{eq:rough-bound} \left \lvert {\mathbb{H}}_p(\varphi_p(t)) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert \leq \sup \left\{ \left \lvert {\mathbb{H}}_p(s) - {\mathbb{H}}_p(\varphi_\infty(t)) \right \rvert : \left \lvert s - \varphi_\infty(t) \right \rvert \leq \eta_p \right\} \end{equation} for some $\eta_p \to 0$ such that ${\mathbb{P}}_p(\lvert \varphi_p(t) - \varphi_\infty(t) \rvert \leq \eta_p) \to 1$ is too rough. Actually, we will discuss this with $\bar \varphi_p(t)$ instead of $\varphi_p(t)$ but since these two quantities are close (recall Lemma~\ref{lemma:tightness-Delta}), this discussion is equally insightful. In this case, classical results show that $\bar \varphi(pt) - \varphi_\infty(pt)$ is of the order of $p^{1/\alpha}$.
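Indeed, since $V^* = 1 + \xi$ also belongs to the domain of attraction of an $\alpha$-stable law, the renewal process ${\mathcal V}$ has fluctuations of order $p^{1/\alpha}$ on the time scale $p$, and these fluctuations transfer to the inverse process $\bar \varphi$; we record this only as a heuristic, the precise statement not being needed in what follows.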
Undoing the scaling, we see that we want to understand the order of magnitude of the variations of ${\mathbb{H}}$ on time scales of the order of $p^{1/\alpha}$, and in particular to see how these variations compare to the space scale $p^{1-1/\alpha}$. Since $S_p$ converges to a stable process, it follows from the previous discussion that on the time scale $p^{1/\alpha}$, $S$ makes jumps of size $(p^{1/\alpha})^{1/\alpha} = p^{1/\alpha^2}$. As before, these jumps correspond to ``mesoscopic'' individuals with of the order of $p^{1/\alpha^2}$ children, which also have edge lengths of the same order. In particular, if the space scale $p^{1-1/\alpha}$ is negligible compared to $p^{1/\alpha^2}$, i.e., if \[ 1 - \frac{1}{\alpha} < \frac{1}{\alpha^2} \Longleftrightarrow \alpha^2 - \alpha - 1 < 0 \Longleftrightarrow \alpha < \frac{1 + \sqrt 5}{2}, \] then it is reasonable to expect the right-hand side of~\eqref{eq:rough-bound} to blow up, although we have proved that the left-hand side vanishes. To conclude, we mention that such examples could be generalized by considering \[ \left( V^*, {\mathcal P}^* \right) = \left( 1+\xi, (\xi-1) \delta_1 + \delta_{f(\xi)} \right) \] for some function $f: [0,\infty) \to [0, \infty)$ such that ${\mathbb{E}}(f(\xi)) < \infty$. This extended family of examples then makes it possible to decrease the above threshold involving the golden ratio, and also to show that even if $\alpha = 2$, i.e., even if the offspring distribution has finite variance, ${\mathbb{H}}_p$ may fail to be tight even though its finite-dimensional distributions converge.
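To make this last point concrete, here is a heuristic sketch for the particular choice $f(x) = x^\theta$ with $\theta > 0$, made only for illustration and assuming ${\mathbb{E}}(\xi^\theta) < \infty$. The long edge attached to a mesoscopic individual with of the order of $p^{1/\alpha^2}$ children then has length of order $p^{\theta/\alpha^2}$, and the same comparison as above suggests blow-up as soon as \[ 1 - \frac{1}{\alpha} < \frac{\theta}{\alpha^2} \Longleftrightarrow \alpha^2 - \alpha - \theta < 0 \Longleftrightarrow \alpha < \frac{1 + \sqrt{1 + 4\theta}}{2}. \] The case $\theta = 1$ recovers the golden ratio threshold, $\theta < 1$ lowers it, and $\theta > 2$ pushes it beyond $\alpha = 2$, in line with the claims made above.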
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Acknowledgments} We are grateful to Ted O'Donoghue, Sendhil Mullainathan, the participants of the AI, Policy, and Practice (AIPP) group at Cornell, and the FEAT reading group at Carnegie Mellon for valuable discussions. This material is based upon work supported in part by NSF IIS2040929, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a MURI grant, AFOSR grant FA9550-19-1-0183, and grants from the ARO and the John D. and Catherine T. MacArthur Foundation. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation and other funding agencies. \section{Remaining Proofs from the Main Text}\label{app:technical} \subsection{Proof of Lemma~\ref{lem:min_concave}} \begin{proof} Suppose the statement does not hold, and there exist $i,j \in \{1, \cdots, m\}$ with $i \neq j$ and $x^*_i, x^*_j \in (0, \ell)$. Without loss of generality, we assume $0 < x^*_1 \leq x^*_2 < \ell$. Next, we construct $\mathbf{x}'$, another feasible solution to (\ref{eq:min_concave}), from $\mathbf{x}^*$ and show that $\mathbf{x}'$ leads to a lower objective value compared to $\mathbf{x}^*$, that is, $\sum_{i=1}^m f(x'_i) < \sum_{i=1}^m f(x^*_i)$. This would contradict the optimality of $\mathbf{x}^*$. We construct $\mathbf{x}'$ as follows: if $x^*_1 + x^*_2 \leq \ell$, replace $x^*_1$ and $x^*_2$ in $\mathbf{x}'$ with $0$ and $(x^*_1 + x^*_2)$, respectively. Otherwise, if $x^*_1 + x^*_2 > \ell$, replace them with $(x^*_1 + x^*_2) - \ell$ and $\ell$. It is trivial to verify that in both cases, $\mathbf{x}'$ remains a feasible solution to (\ref{eq:min_concave}). Next we prove that $\mathbf{x}'$ improves the objective function. (We provide the proof for the case in which $x^*_1 + x^*_2 \leq \ell$; the proof is identical for the case of $x^*_1 + x^*_2 > \ell$.) First, note that $\sum_{i=1}^m f(x'_i) = \sum_{i=1}^m f(x^*_i) - f(x^*_1) - f(x^*_2) + f(x^*_1 + x^*_2) + f(0) $, so it suffices to show that $- f(x^*_1) - f(x^*_2) + f(x^*_1 + x^*_2) + f(0) < 0$ or equivalently, $f(x^*_1 + x^*_2) + f(0) < f(x^*_1) + f(x^*_2)$. To prove this, note that we can write $f(x^*_1) + f(x^*_2)$ as follows: {\scriptsize \begin{eqnarray*} &= & f\left( (x^*_1 + x^*_2)\frac{x^*_1}{x^*_1 + x^*_2} \right) + f\left( (x^*_1 + x^*_2)\frac{x^*_2}{x^*_1 + x^*_2} \right) \\ &= & f\left( (x^*_1 + x^*_2)\frac{x^*_1}{x^*_1 + x^*_2} + 0\frac{x^*_2}{x^*_1 + x^*_2} \right) + f\left( 0\frac{x^*_1}{x^*_1 + x^*_2} + (x^*_1 + x^*_2)\frac{x^*_2}{x^*_1 + x^*_2} \right) \end{eqnarray*} } Applying the definition of concavity to the two terms in the equation above, we obtain {\scriptsize \begin{eqnarray*} &>& \frac{x^*_1}{x^*_1 + x^*_2} f( x^*_1 + x^*_2) + \frac{x^*_2}{x^*_1 + x^*_2} f(0) + \frac{x^*_1}{x^*_1 + x^*_2}f(0) + \frac{x^*_2}{x^*_1 + x^*_2} f( x^*_1 + x^*_2) \\ & = & f( x^*_1 + x^*_2) + f(0) \end{eqnarray*} } or equivalently, $f(x^*_1) + f(x^*_2) > f (x^*_1 + x^*_2) + f(0).$ Therefore, we have that $\sum_{i=1}^m f(x^*_i) > \sum_{i=1}^m f(x'_i)$. \end{proof} \subsection{Proof of Lemma~\ref{lem:min_convex}} \begin{proof} Let $\mathbf{x}^* = (x^*_1,\cdots,x^*_m)$ specify the optimal solution to (\ref{eq:min_convex}). Suppose the statement does not hold, and $\mathbf{x}^*$ contains two unequal numbers. Without loss of generality, suppose $x^*_1 \neq x^*_2$ with $\ell \leq x^*_1 < x^*_2 \leq 1$.
Let $\mathbf{x}'$ be another potential solution obtained from $\mathbf{x}^*$ by replacing both $x^*_1$ and $x^*_2$ with $\frac{x^*_1 + x^*_2}{2}$. Next, we show that $\mathbf{x}'$ is a feasible solution to (\ref{eq:min_convex}) and $\sum_{i=1}^m f(x'_i) < \sum_{i=1}^m f(x^*_i)$. This would contradict the optimality of $\mathbf{x}^*$. To show that $\mathbf{x}'$ is feasible, note that since $x^*_1, x^*_2 \in [\ell,1]$, their average is also in $[\ell,1]$. Also, note that $x'_1 + x'_2 = \frac{x^*_1 + x^*_2}{2} + \frac{x^*_1 + x^*_2}{2} = x^*_1 + x^*_2$, so $\sum_{i=1}^m x'_i = \sum_{i=1}^m x^*_i = c$. To prove that $\mathbf{x}'$ improves the objective function in (\ref{eq:min_convex}), note that $f$ is strictly convex, so using the definition of convexity, we know that: $$\frac{1}{2} f(x^*_1) + \frac{1}{2}f(x^*_2) > f \left(\frac{1}{2} x^*_1 + \frac{1}{2}x^*_2 \right),$$ or equivalently, $f(x^*_1) + f(x^*_2) > 2 f \left(\frac{x^*_1 + x^*_2}{2} \right) = f(x'_1) + f(x'_2).$ Therefore, we have that $\sum_{i=1}^m f(x^*_i) > \sum_{i=1}^m f(x'_i)$. \end{proof} \iffalse \subsection{Derivation of the slope of k as function of r} Through our simulations, we observe that for the settings depicted in Figure~\ref{fig:k-r}, there exists no individual with $p^*_i < \ell$, so $r$ is divided equally among $k(r)$ individuals. Based on Figure~\ref{fig:k-r}, we conjecture that $k(r) = \floor{c\times r}$ for a constant $c$. In what follows, we derive the slope $c$ for $w(p) = e^{-\beta (-\ln{p})^\alpha}$. For a given $r$, the objective value (\ref{opt:welfare}) associated with dividing $r$ equally between $x$ people is equal to $x w(r/x)$. While semantically, $x$ has to be an integer, we can define the function $g(r,x) = x w(r/x)$ over real numbers to facilitate optimization (over a continuous domain of $x$). Now note that $g(r,x)$ is the perspective function of $w: [\ell,1] \longrightarrow [0,1]$. Since $w$ is convex in the feasible region for $r/x$, we know that its perspective $g$ must also be convex~\citep{boyd2004convex}. So for a fixed value of $r$, there exists a unique minimizer $x^*$ for $g(r,x) = x w(r/x)$. Approximating $k(r)$ with $c \times r$, we can write $g(r, k) = g(r, cr) = cr e^{-\beta (\log{c})^\alpha}$. Define $h(r,c) := cr e^{-\beta (\log{c})^\alpha}$. We can derive $c$ by setting $ \frac{\partial g}{\partial k}$ to 0. Note that $$\frac{\partial g}{\partial k} = \frac{\partial g}{\partial h} \frac{\partial h}{\partial c} \frac{\partial c}{\partial k}.$$ Also, it is easy to observe that $\frac{\partial g}{\partial h} =1$ and $\frac{\partial c}{\partial k} = 1/r > 0$. So it must be the case that $\frac{\partial h}{\partial c} =0.$ {\footnotesize \begin{eqnarray*} \frac{\partial h}{\partial c} &=& r e^{-\beta (\log{c})^\alpha} + c r e^{-\beta (\log{c})^\alpha} \times (-\beta) \times \alpha \times \frac{1}{c} \times (\log{c})^{\alpha-1} \\ &=& r e^{-\beta (\log{c})^\alpha} \left( 1 - \beta \alpha (\log{c})^{\alpha-1} \right) = 0 \end{eqnarray*} } From the last equation, we obtain $\log{c} = (\alpha \beta)^{\frac{1}{1-\alpha}}$, i.e., $c = e^{(\alpha \beta)^{1/(1-\alpha)}}$. \fi \iffalse \subsection{Proof of Theorem~\ref{thm:min-1}} \begin{proof} The proof is by induction. The base holds for $n \leq 5$ (To verify the base, first note that we can have at most two $i$'s with $p_i$'s in the convex region because $3\times 1/e > 1 = r$. A local improvement argument establishes that we can have at most one individual with an allocated probability of $x \leq 1-2/e$ in the concave region.
It is easy to see that for $x \in (0, 1-2/e]$, $w(x) + 2 w\left( 0.5(1-x) \right)$ is increasing in $x$, which establishes that the minimum happens at $x=0$). For the induction step, suppose the statement holds for $n-1$, where $n \geq 6$; we show that it then holds for $n$ as follows: Let vector $\mathbf{p}$ be the optimal solution to (\ref{opt:welfare}). If there exists a component $1 \leq i \leq n$ with $p_i=0$, then we can apply the induction hypothesis to the remaining $(n-1)$ individuals and we are done. Otherwise, pick the component $i$ for which $p_i$ is minimum. We must have $p_i \leq 1/n$ because all $p$'s should add up to 1. We must also have at least one $j \neq i$ for which $p_j \leq (1-p_i)/(n-1)$. (Otherwise, the sum of all $p$'s would be greater than $(n-1) \times (1-p_i)/(n-1) + p_i = 1$, which is a contradiction.) Now if we set the $i$'th component to 0 and the $j$'th component to $p_j + p_i$, we reduce the objective value $w(p_1)+ \cdots + w(p_n)$. This is because $p_i + p_j \leq p_i + (1-p_i)/(n-1) \leq 2/n$ (it is easy to verify that $p_i + (1-p_i)/(n-1)$ is increasing in $p_i$ for any $n \geq 3$, so $p_i + (1-p_i)/(n-1)$ is maximized at $p_i = 1/n$ (the upper bound on $p_i$) and it amounts to $2/n$ there). Now observe that for $n \geq 6$, $2/n < 1/e$.\footnote{Note that for $\beta=1$, $\ell = 1/e$ regardless of the value of $\alpha$.} So when increasing $p_j$ to $p_j + p_i$ and reducing $p_i$ to 0, we remain entirely in the concave region of $w$. So this action reduces the sum of weighted probabilities at the $i$'th and $j$'th components. More precisely, because $w$ is concave in this region, we have: \begin{eqnarray*} && w(p_i) - w(0) > w(p_i + p_j) - w(p_j)\\ &\Leftrightarrow & w(p_i) + w(p_j) > w(p_i + p_j) + w(0) \end{eqnarray*} \end{proof} \fi \subsection{Proof of Proposition~\ref{prop:min_heter}} \begin{proof} We first show that in the optimal solution to (\ref{opt:welfare_heter}), at most one individual receives $0<p^*_i <\ell$. Suppose not, and there exist two individuals $1,2$ with $0<p^*_1 \leq p^*_2<\ell$. It must be the case that $t_1 > t_2$, otherwise we could reduce the objective value by swapping $p^*_1$ and $p^*_2$. Next we show that we can improve the objective value by redistributing the probability of harm $(p^*_1 + p^*_2)$ among individuals 1,2 as follows: we reduce $p_1$ and increase $p_2$--keeping $p_1 + p_2$ constant at $(p^*_1 + p^*_2)$--until either $p_1$ reaches $0$ or $p_2$ reaches $\ell$. Note that this operation strictly improves the objective value because: (1) according to Lemma~\ref{lem:min_concave}, the reduction in $w(p_1)$ is larger than the increase in $w(p_2)$; (2) the reduction in $w(p_1)$ is multiplied by $t_1$, while the increase in $w(p_2)$ is multiplied by $t_2$. This local improvement contradicts our initial assumption that in an optimal allocation, there can exist two individuals with $0<p^*_1 \leq p^*_2 < \ell$. Second, we show that for all $j \in S$ with $p^*_j \geq \ell$, $t_j \times w'(p^*_j) = c$ for a constant $c$. Suppose not, and there are two individuals (say 1,2) with assigned probabilities in the $[\ell,1]$ interval such that $t_1 w'(p^*_1) \neq t_2 w'(p^*_2)$. Without loss of generality, we assume $t_1 w'(p^*_1) < t_2 w'(p^*_2)$. Now note that we can improve the objective value by redistributing the probability of harm $(p^*_1 + p^*_2)$ among individuals 1,2 by solving the following \emph{convex} optimization problem: $$\min_{\ell < p_1,p_2 < 1} t_1 w(p_1) + t_2 w(p_2) \text{ s.t.
} p_1 + p_2 = p^*_1 + p^*_2.$$ We can equivalently write the above as $$\min_{p_1,p_2 \in \mathbb{R}} t_1 \tilde{w}(p_1) + t_2 \tilde{w}(p_2) \text{ s.t. } p_1 + p_2 = p^*_1 + p^*_2.$$ where $\tilde{w}(.)$ denotes the extended-value extension of $w: [\ell,1] \longrightarrow [0,1]$. Writing the stationarity conditions for the above equivalent problem, we obtain: \begin{eqnarray*} t_1 \tilde{w}'(p_1) = t_2 \tilde{w}'(p_2) = c \\ &\Rightarrow& t_1 w'(p_1) = t_2 w'(p_2) = c \end{eqnarray*} which is a contradiction with our initial assumption. \end{proof} \subsection{Proof of Lemma~\ref{lem:max_concave}} \begin{proof} The proof is identical to Lemma~\ref{lem:min_convex}. The only difference is that the optimization operator is maximization (instead of minimization), and concavity changes the direction of the inequalities in the proof. \end{proof} \subsection{Proof of Lemma~\ref{lem:max_convex}} \begin{proof} The proof is identical to Lemma~\ref{lem:min_concave}. The only difference is that the optimization operator is maximization (instead of minimization), and concavity changes the direction of the inequalities in the proof. \end{proof} \iffalse \subsection{Proof of Theorem~\ref{thm:max_benefit}} \begin{proof} We first show that in the perceived benefit maximizing allocation, at most one individual receives $\ell < p^*_i < 1$. Suppose not and there are at least two individuals (say 1 and 2) such that $\ell<p^*_1 \leq p^*_2 < 1$. According to Lemma~\ref{lem:max_convex}, we can improve the objective value by redistributing the probability of benefit $(p^*_1 + p^*_2)$ among individuals 1,2 as follows: we reduce $p_1$ and increase $p_2$--keeping $p_1 + p_2$ constant at $(p^*_1 + p^*_2)$--until either $p_1$ reaches $\ell$ or $p_2$ reaches $1$. Note that this operation improves the objective value and in either case it contradicts our initial assumption that for at least two individuals $\ell <p^*_1 \leq p^*_2 < 1$. Second, we show that for all $j \in S$ with $0 \leq p^*_j \leq \ell$, $p^*_j = p$ where $0 \leq p \leq \ell$ is a constant. Suppose not, and there are two individuals with unequal positive probabilities in the $[0,\ell]$ interval. Without loss of generality, we assume $0\leq p^*_1 < p^*_2 \leq \ell$. According to Lemma~\ref{lem:max_concave}, we can improve the objective value by redistributing the probability of harm $(p^*_1 + p^*_2)$ among individuals 1,2 as follows: set both $p_1 = p_2 = (p^*_1 + p^*_2)/2$. Since this operation improves the objective value, it contradicts the assumption that there are two individuals with unequal probabilities in $[0,\ell].$ \end{proof} \fi \subsection{Proof of Theorem~\ref{thm:no_epsilon}} \begin{proof} Note that since $w$ is concave up to the inflection point $\frac{1}{e}$, its derivative is positive and decreasing in $p$. Also we know that the derivative amount to $+\infty$ at $p=0$, so there exists a value $0 < q < \frac{1}{e}$ at which $w'(q) = 1$. We can choose $n$ large enough such that $\frac{1}{n-1} < q$. We next show that for such values of $n$, no distribution of the form $\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right)$ with $\epsilon < 1-\frac{1}{e}$ can be optimal. The proof is by contradiction. Suppose not and $\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right)$ is a maximizer of (\ref{opt:welfare}) for some $\epsilon < 1-\frac{1}{e}$. Next, we show that we can strictly improve the objective by moving from this distribution to $(\frac{1}{n-1}, \cdots, \frac{1}{n-1}, 0)$. 
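The structural content of the four lemmas is also easy to confirm numerically. The following sketch (in Python; the strictly concave and strictly convex test functions $\sqrt{x}$ and $x^2$ are illustrative choices of ours, not quantities from the model) grid-searches the constraint line $x_1 + x_2 = c$ for $m=2$ and reports where the minimum and maximum of $f(x_1)+f(x_2)$ occur: {\footnotesize \begin{verbatim}
import numpy as np

ell = 1 / np.e          # inflection point (beta = 1)

# Sum f(x1) + f(x2) along the constraint line x1 + x2 = c, inside [lo, hi]^2.
def profile(f, c, lo, hi, n=100001):
    x1 = np.linspace(max(lo, c - hi), min(hi, c - lo), n)
    return x1, f(x1) + f(c - x1)

# Strictly concave f on [0, ell]: the maximum sits at the equal split
# (cf. Lemma max_concave); the minimum pushes a coordinate to a boundary,
# leaving at most one interior coordinate (cf. Lemma min_concave).
x1, s = profile(np.sqrt, 0.5, 0.0, ell)
print("concave: argmax x1 =", x1[s.argmax()], " argmin x1 =", x1[s.argmin()])

# Strictly convex f on [ell, 1]: the minimum sits at the equal split
# (cf. Lemma min_convex); the maximum pushes a coordinate to a boundary
# (cf. Lemma max_convex).
x1, s = profile(np.square, 1.2, ell, 1.0)
print("convex:  argmin x1 =", x1[s.argmin()], " argmax x1 =", x1[s.argmax()])
\end{verbatim} } The reported extrema should land at the equal split $c/2$ or at an end of the feasible segment, matching the four lemmas.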
\subsection{Proof of Proposition~\ref{prop:certain_benefit}} \begin{proof} We first show that if $r \geq (n-1)\ell + 1$, then at least one individual receives the benefit with certainty. Suppose not, and for all $i \in S$, $p^*_i < 1$. Combining this fact with Theorem~\ref{thm:max_benefit}, we can deduce that for every $i \in S$, one of the following must be the case: (1) $0 \leq p^*_i \leq \ell$, or (2) $\ell< p^*_i<1$. We also know, according to Theorem~\ref{thm:max_benefit}, that for at most one individual $i$, $\ell< p^*_i<1$. The maximum total benefit in a distribution with these properties is realized when $(n-1)$ individuals belong to category (1) with $p = \ell$ and one individual belongs to category (2) with $p$ arbitrarily close to, but smaller than, 1. So the total benefit in such a distribution is less than $(n-1)\ell + 1$, which is a contradiction with $r \geq (n-1)\ell + 1$. Second, we show that there exists a constant $q$ such that if $r \leq q \times n$, no one receives the benefit with certainty. Let $0 < q \leq \ell$ be the point at which $w'(q)=1$. We can show that for $r \leq q \times n$, no one will receive the benefit with certainty in the optimal solution $\mathbf{p}^*$. Suppose not, and there exists $i \in S$ such that $p^*_i = 1$. Let $m \leq (n-1)$ specify the number of individuals with $p^*_j \leq q$, and let $S_m \subset S$ denote the set of those individuals. With a reasoning similar to the one presented in the proof of Theorem~\ref{thm:no_epsilon}, it is easy to observe that the objective value can be strictly improved by taking the $1$ unit of benefit from $i$ and distributing it equally among the individuals in $S_m \cup \{i\}$. This is a contradiction with the initial assumption that there exists an individual $i$ with $p^*_i = 1$.
\end{proof} \subsection{Proof of Proposition~\ref{prop:max_heter}} \begin{proof} Suppose not, and there exists an $i$ with $p^*_i=0$. Pick another individual $j$ with $0< p^*_j \leq 1$. We have that $w'(p^*_i) = w'(0) = +\infty$ and $w'(p^*_j) < +\infty$, so we can reduce $p^*_j$ by an infinitesimal amount $\epsilon$ and increase $p^*_i$ by the same amount, keeping $p^*_i + p^*_j$ constant. If $\epsilon$ is chosen to be small enough, this operation improves the perceived benefit, which is a contradiction. \end{proof} \section{Perceived Benefit Maximizing Allocations}\label{sec:benefit_analysis} Next, we analyze the case of allocating $r$ units of \emph{benefit} among $n$ individuals, and characterize the perceived benefit maximizing allocations. We will show that the number of individuals who receive a non-zero probability of benefit is always $n$, regardless of the size of the benefit, $r$. In addition, and perhaps surprisingly, if the total benefit level $r$ is sufficiently large relative to $n$, the optimal solution allocates probabilities in a two-tier manner, where a subset of individuals receive the benefit with certainty and the remaining individuals have equal (but less than 1) chances of obtaining the remaining benefit. Our analysis utilizes the following two lemmas. All omitted proofs can be found in the technical appendix. \begin{lemma}\label{lem:max_concave} Let $f: [0,\ell] \longrightarrow [0,1]$ be any strictly concave function, $m \geq 2$ an integer, and $0< c \leq m \ell$ a constant. Then $(c/m, \cdots, c/m)$ is the unique optimal solution to the following maximization problem: \begin{equation}\label{eq:max_concave} \max_{x_1,\cdots,x_m \in [0,\ell]} \sum_{i=1}^m f(x_i) \text{ s.t. } \sum_{i=1}^m x_i = c. \end{equation} \end{lemma} \begin{lemma}\label{lem:max_convex} Let $f: [\ell,1] \longrightarrow [0,1]$ be any strictly convex function, $m \geq 2$ an integer, and $m \ell < c \leq m$ a constant. Let $\mathbf{x}^* = (x^*_1,\cdots,x^*_m)$ specify the unique optimal solution to the following maximization problem: \begin{equation}\label{eq:max_convex} \max_{x_1,\cdots,x_m \in [\ell,1]} \sum_{i=1}^m f(x_i) \text{ s.t. } \sum_{i=1}^m x_i = c. \end{equation} Then for at most one $i \in \{1, \cdots, m\}$, $x^*_i \in (\ell,1)$. \end{lemma} Armed with the above two lemmas, we can characterize the perceived benefit maximizing allocation of $r$ units of benefit among $n$ individuals. \begin{theorem}\label{thm:max_benefit} Consider the distribution of $r$ units of benefit among $n$ individuals. Let $\ell$ specify the inflection point of the probability weighting function, $w(.)$. Let $\mathbf{p}^*$ be a maximizer of (\ref{opt:welfare}). Then \begin{enumerate} \item For at most one individual $i$, $\ell < p^*_i < 1$; \item For all $j \in S$ with $0 \leq p^*_j \leq \ell$, $p^*_j = p$ where $0 \leq p \leq \ell$ is a constant. \end{enumerate} \end{theorem}
\begin{proof} We first show that in the perceived benefit maximizing allocation, at most one individual receives $\ell < p^*_i < 1$. Suppose not, and there are at least two individuals (say 1 and 2) such that $\ell<p^*_1 \leq p^*_2 < 1$. According to Lemma~\ref{lem:max_convex}, we can improve the objective value by redistributing the probability of benefit $(p^*_1 + p^*_2)$ among individuals 1 and 2 as follows: we reduce $p_1$ and increase $p_2$---keeping $p_1 + p_2$ constant at $(p^*_1 + p^*_2)$---until either $p_1$ reaches $\ell$ or $p_2$ reaches $1$. This operation improves the objective value, and in either case it contradicts our initial assumption that for at least two individuals $\ell <p^*_1 \leq p^*_2 < 1$. Second, we show that for all $j \in S$ with $0 \leq p^*_j \leq \ell$, $p^*_j = p$ where $0 \leq p \leq \ell$ is a constant. Suppose not, and there are two individuals with unequal probabilities in the $[0,\ell]$ interval. Without loss of generality, we assume $0\leq p^*_1 < p^*_2 \leq \ell$. According to Lemma~\ref{lem:max_concave}, we can improve the objective value by redistributing the probability of benefit $(p^*_1 + p^*_2)$ among individuals 1 and 2 as follows: set both $p_1 = p_2 = (p^*_1 + p^*_2)/2$. Since this operation improves the objective value, it contradicts the assumption that there are two individuals with unequal probabilities in $[0,\ell]$. \end{proof} Next, as a concrete example, we focus on the special case of the Prelec probability weighting function with $\beta=1$, and derive the perceived benefit maximizing policy for $r=1$. According to Theorem~\ref{thm:max_benefit}, it is easy to observe the following: \begin{corollary}\label{cor:max_1} Consider the distribution of $r=1$ unit of benefit among $n$ individuals. Suppose $w(.)$ is the Prelec function with $\beta=1$. For any $\alpha<1$ and $n>1$, the unique maximizer of (\ref{opt:welfare}) is either \begin{itemize} \item the uniform allocation $(1/n,\cdots,1/n)$; \item or $(\epsilon,\cdots,\epsilon, 1-(n-1)\epsilon)$ for some $\epsilon \leq \min\{1/e, \frac{1-\ell}{n-1}\}$. \end{itemize} \end{corollary} Moreover, we can show that for sufficiently large $n$, $(1/n,\cdots,1/n)$ is the unique maximizer. \begin{theorem}\label{thm:no_epsilon} Consider the distribution of $r=1$ unit of benefit among $n$ individuals. Suppose $w(.)$ is the Prelec function with $\beta=1$ and $\alpha<1$. Then there exists a value $q(\alpha,\beta) \in \mathbb{N}$ such that for any $n > q(\alpha,\beta)$, $(1/n,\cdots,1/n)$ is the unique global maximizer of (\ref{opt:welfare}).
\end{theorem} \begin{proof} Note that since $w$ is concave up to the inflection point $\frac{1}{e}$, its derivative is positive and decreasing on $(0,\frac{1}{e})$. We also know that the derivative tends to $+\infty$ at $p=0$, so there exists a value $0 < q < \frac{1}{e}$ at which $w'(q) = 1$ (and $w'(p) > 1$ for all $p < q$). We can choose $n$ large enough that $\frac{1}{n-1} < q$. We next show that for such values of $n$, no distribution of the form $\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right)$ with $\epsilon < 1-\frac{1}{e}$ can be optimal. The proof is by contradiction. Suppose not, and $\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right)$ is a maximizer of (\ref{opt:welfare}) for some $\epsilon < 1-\frac{1}{e}$. We show that we can strictly improve the objective by moving from this distribution to $(\frac{1}{n-1}, \cdots, \frac{1}{n-1}, 0)$, a contradiction with the assumption that $\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right)$ is a maximizer. To establish this claim, we can write: {\scriptsize \begin{eqnarray*} \sigma\left(\frac{\epsilon}{n-1}, \cdots, \frac{\epsilon}{n-1}, 1-\epsilon \right) &=& w(1-\epsilon) + (n-1) w\left(\frac{\epsilon}{n-1}\right) \\ &<& 1-\epsilon + (n-1) w\left(\frac{\epsilon}{n-1}\right) \\ &=& (n-1) \left[ \frac{1-\epsilon}{n-1} + w\left(\frac{\epsilon}{n-1}\right) \right] \\ &<& (n-1) \left[ w\left(\frac{1}{n-1}\right) - w\left(\frac{\epsilon}{n-1}\right) + w\left(\frac{\epsilon}{n-1}\right) \right] \\ &=& (n-1) w\left(\frac{1}{n-1}\right) \end{eqnarray*} } For the first inequality, we used the fact that $1-\epsilon > \frac{1}{e}$---the fixed point and point of inflection of $w(.)$---so $w(1-\epsilon) < 1-\epsilon$. For the second inequality, we used the fact that $w\left(\frac{1}{n-1}\right) - w\left(\frac{\epsilon}{n-1}\right) > \frac{1-\epsilon}{n-1}$. To see this, note that {\scriptsize \begin{equation*} w\left(\frac{1}{n-1}\right) - w\left(\frac{\epsilon}{n-1}\right) = \int_{\frac{\epsilon}{n-1}}^{\frac{1}{n-1}} w'(p)\, dp > \int_{\frac{\epsilon}{n-1}}^{\frac{1}{n-1}} 1\, dp = \frac{1-\epsilon}{n-1}, \end{equation*} } since $w'(p) > 1$ on the whole integration range (both endpoints lie below $q$). \end{proof} \xhdr{Conditions for two-tier optimal allocations} According to Theorem~\ref{thm:max_benefit}, when $r>1$, the perceived benefit maximizing solution can include individuals who receive the benefit with certainty (i.e., $p^*_i = 1$). Next we ask: how large should $r$ be relative to $n$ for at least one individual to receive the benefit with certainty in the perceived benefit maximizing allocation? The following proposition establishes that the necessary and sufficient condition is $r = \Theta(n)$. \begin{proposition}\label{prop:certain_benefit} If $r \geq (n-1)\ell + 1$, there exists an individual $i \in S$ with $p^*_i = 1$. Moreover, there exists a constant $q$ such that if $r \leq q \times n$, $p^*_i < 1$ for all $i$. \end{proposition}
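Proposition~\ref{prop:certain_benefit} is easy to explore numerically, since Theorem~\ref{thm:max_benefit} collapses the search space to allocations with $j$ individuals at probability 1, at most one intermediate probability $y \in [\ell,1]$, and the rest sharing the remaining benefit equally in $[0,\ell]$. The sketch below (in Python; the Prelec parameters $\alpha=0.9$, $\beta=1$ and the grid resolution are illustrative choices) enumerates this reduced family: {\footnotesize \begin{verbatim}
import math
import numpy as np

alpha, beta = 0.9, 1.0           # illustrative Prelec parameters
ell = math.exp(-1.0)             # inflection point (= 1/e when beta = 1)

def w(p):                        # Prelec weighting, with w(0) = 0
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

def best_allocation(n, r):
    # Search the structure of Theorem max_benefit: j people at probability 1,
    # at most one at y in [ell, 1], and the rest sharing equally in [0, ell].
    best, arg = -float("inf"), None
    for j in range(min(n, int(r)) + 1):
        for y in [None] + list(np.linspace(ell, 1.0, 2001)):
            m = n - j - (0 if y is None else 1)      # size of the low tier
            rem = r - j - (0.0 if y is None else y)  # benefit left for the low tier
            if m < 0 or rem < -1e-9 or (m == 0 and rem > 1e-9):
                continue
            p = rem / m if m > 0 else 0.0
            if p > ell + 1e-9:
                continue
            val = j + (0.0 if y is None else w(y)) + m * w(max(p, 0.0))
            if val > best:
                best, arg = val, (j, y, round(max(p, 0.0), 4))
    return round(best, 4), arg

for r in [1.0, 3.0, 8.0]:        # n = 10; threshold (n-1)*ell + 1 is about 4.3
    print(r, best_allocation(10, r))
\end{verbatim} } For $n=10$, the reported optimum is a uniform lottery ($j=0$) for small $r$, while the top tier is guaranteed to be nonempty once $r \geq (n-1)\ell + 1$, consistent with the proposition.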
Figure~\ref{fig:min_r} depicts the minimum $r$ required for at least one individual to receive the benefit with certainty, as a function of $n$, when $w(.)$ is the Prelec probability weighting function with $\alpha=0.9$ and $\beta=1$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figures/phase_shift_benefit.png} \caption{The minimum $r$ required for at least one individual to receive the benefit with certainty when the probability weighting function is the Prelec function with $\alpha=0.9$ and $\beta=1$.} \label{fig:min_r} \end{figure} \xhdr{Heterogeneous priorities} Similar to the case of allocating harms, a heterogeneous priority vector $\mathbf{t}$ does not significantly impact the shape and support of the perceived benefit maximizing allocations, as long as $t_i>0$ for all $i \in S$. \begin{proposition}\label{prop:max_heter} Consider the distribution of $r$ units of benefit among $n$ individuals with priorities specified by $\mathbf{t}$. For any $\mathbf{t} \ggcurly \textbf{0}$ and any $\mathbf{p}^* \in C^*(\mathbf{t})$, $\mathbf{p}^* \ggcurly \textbf{0}$. \end{proposition} \subsection{Examples and Discussion}\label{sec:benefit_examples} As with the case of distributions of harm, it is useful to interpret the theoretical results for distributions of benefit by asking how they reflect real-world scenarios. As in our previous discussion of examples, we emphasize again that in any real-world instance of benefit allocation, numerous factors impact the ultimate distribution, and we do not claim that probability weighting is the only factor shaping the benefit allocations in the examples that follow. Instead, we argue that the instances presented here collectively serve as inductive evidence that probability weighting plays \emph{a} possible role in how benefits are distributed in society, and that policymakers grapple with choices between distributions of uncertain benefits. \xhdr{Uniform lotteries} We begin by observing that when the total benefit $r$ is small relative to the size of the population under consideration ($n$), the allocation that maximizes the perceived benefit is a {\em uniform lottery} in which everybody has an equal chance of receiving the benefit. Examples of uniform lotteries are widespread; what is interesting is that our model produces this natural outcome without an overt preference for \emph{equalizing} probabilities among people. Rather, the equalization of probabilities arises for a different reason: because probability weighting inflates small probabilities, assigning everyone a small but equal probability of receiving the benefit serves to maximize the sum of weighted probabilities.
In this way, an objective that maximizes total perceived welfare produces an outcome that indirectly optimizes for a certain type of equity in the probabilities as well. Note also that this forms a strong contrast with the case of harm distributions. For probabilities of harm, this same principle from probability weighting---that small probabilities are inflated---implies that uniform lotteries will produce an unnecessarily {\em large} perceived total cost to the population. Hence, for harms, the optimal solution was to run a lottery over a much smaller subset of the population. This reflects an important contrast between uncertain harms and uncertain benefits as viewed through the lens of probability weighting: we may prefer to spread a small possibility of benefit very widely across the population, but would view the same distribution as producing unacceptable levels of total perceived risk when it is used for allocating harms. \xhdr{Systems with discrete tiers} Once the total benefit $r$ is sufficiently large (relative to the size of the population $n$), the perceived benefit maximizing distribution takes on a very different form: it provides benefit to a subset of the population with probability $1$, and runs a uniform lottery on the remainder. (Potentially there may also be one intermediate value as well.) This form of the solution, in which benefit is assigned in a small number of discrete ``tiers of service,'' has a more unusual structure; informally, the probability weighting model discovers that it can significantly improve the perceived benefit to a subset of the population by pushing their probabilities up to $1$, while reducing the perceived benefit to the rest of the population only very slightly when it shifts their probabilities correspondingly downward to balance the total. Despite this unusual structure, we can find a number of cases in practice where probabilistic allocations of benefit use structures based on a small number of discrete tiers. As above, we do not claim that this arises directly from explicit probability-weighting considerations, but a recurring preference for such solutions suggests that this type of approach---which is difficult to motivate via other simple probabilistic models---is consistent with the underlying principles of probability weighting. One paradigmatic example of this multi-tiered approach is the allocation of hunting permits. For safety and conservation reasons, states often impose limits on how many of a given animal species may be hunted in a season. There are often more people who want to hunt a particular type of animal than there are animals that the state deems appropriate to be hunted---requiring states to determine how to allocate this benefit among them. States approach this problem by running ``tag lotteries'' in which each tag grants the right to harvest one animal. States differ in how they structure these lotteries. Some conduct uniform lotteries in which each entrant has equal odds of obtaining a tag, while others structure them to reward longtime entrants who have not successfully drawn tags in previous rounds. They may do so by, for example, giving hunters an additional entry for each year they have been unsuccessful, increasing their chances---though not ensuring with certainty---that they receive the benefit in the current round.
Still others let hunters accumulate ``preference points'' for each unsuccessful year, and then \emph{guarantee} tags to all hunters who meet some point threshold, while allocating any remaining tags by random lottery \citep{huntsplus}---effectively creating a two-tiered structure of benefit allocation, like that predicted by our model (Theorem~\ref{thm:max_benefit}). When states design these allocation mechanisms, they find themselves wrestling explicitly with how to distribute uncertain benefits fairly and how to balance between certainty and randomness. For example, in the late 2000s, Montana---which had used a preference point system to reward more senior hunters, to the degree that the process had become essentially a seniority queue in which hunters could wait decades for some prized tags---reformed its system to reserve a small number of tags for random lottery, in order to incentivize new, young hunters to participate \citep{montana}. Through this two-tier system, Montana gave many longtime hunters the certainty of knowing they'd be able to hunt; but by injecting a modest amount of uncertainty into the system via a second random tier, the state aimed to encourage others to behave based on a belief that they, too, might partake of the benefit. Allocation systems with a small number of tiers, each with different probabilities, arise in other cases where permission to enter a restricted activity is being granted; for example, entries in the New York City marathon follow a similar high-level structure, with a portion of the entries allocated based on deterministic criteria and the rest allocated by lottery \citep{marathon}. Upgrade policies in brand loyalty programs can be viewed as setting different probabilities of receiving benefits for a small number of different tiers as well, though of course the specifics of each policy can become complex. An intriguing feature of all these examples is that we typically think of the use of tiers as a way of controlling the cognitive complexity of a policy: it is easier for people to understand a small number of categories than a continuum of different probability values. In contrast, the model based on probability weighting has no intrinsic reason to group the allocation probabilities into a small number of tiers, since any distribution is an allowable option for it; the fact that it nevertheless creates small numbers of discrete tiers for its optimal solutions suggests a fundamental connection between this type of tiered discretization and the structure of preferences based on probability weighting. \section{Conclusion and Future Directions}\label{sec:conclusion} We have considered policies that distribute probabilities of harm, or probabilities of benefit, across a population, and have asked how we might distinguish among policies that produce different distributions with the same total expected impact. The theory of probability weighting from behavioral economics provides one natural proposal: since people systematically perceive probabilities to be different from their actual values, two distributions with the same expectation will not in general have the same sum of {\em weighted} probabilities.
Accordingly, we have investigated which types of distributions optimize certain functions of these perceived (weighted) probabilities, and have found that distributions with the characteristics of these optima show up in a diverse range of policy contexts. Our point in this analysis is not to recommend such distributions as being normatively preferable to others, but instead to argue that a societal preference for them is consistent with (and hence captured well by) basic principles from the behavioral science of probability perception. Our results also reveal a number of opportunities for further research, both empirical and theoretical. As we've emphasized throughout, we do not expect that our model fully accounts for the diverse considerations that inform decisions regarding the allocation of probabilistic harms and benefits. By design, our model highlights one aspect of a complex process that might help account for the preferences and policies that we seem to observe in practice. But far more empirical work is necessary to establish what role probability weighting actually plays in any given policymaking process. Our model points to particular types of allocation policies that are worthy of further investigation. Such work could help uncover empirical details that support alternative plausible explanations, but it could also find that policymakers struggle to offer a coherent justification for their chosen allocation, perhaps lending support to the idea that their judgments might rest, implicitly, on distorted perceptions of probabilities. Empirical methods from behavioral economics would no doubt be useful in this work as well, testing how various stakeholders involved in the policymaking process actually perceive the relevant probabilities of harm and benefit. We also observe that the types of distributions favored by models based on probability weighting can be seen to align with principles of distributive justice in some cases (particularly when probabilities are distributed more uniformly), and to contrast with these principles in other cases (such as real-world instances where policies choose to concentrate risk on smaller sets, for example). It would be interesting to search for deeper foundations that might underpin these similarities and contrasts. Since our analysis is stylized, we should be able to bring similar thinking to bear in other domains and extend our reasoning beyond the types of cases we've considered here. For the sake of concreteness, we'll discuss these extensions in terms of harm rather than benefit, but the questions generally apply in both cases. To start, we are modeling cases in which there is a single kind of harm (engaging in a dangerous activity or not, being exposed to a hazardous environment or not) and the risk is experienced once. It would clearly be of interest to incorporate multiple levels of harm or harms that evolve over time. Moving to this non-binary setting would require that we incorporate behavioral models that account for ranked levels of utility into probability weighting (e.g., by distinguishing between relative losses and gains), as well as behavioral biases around costs incurred in the present versus the future. All of these might shed further light on settings where these types of policy questions arise. Probability weighting is also not the only systematic behavioral bias that people exhibit when dealing with probabilities. 
For example, the phenomenon of {\em base rate neglect} leads people to overweight the evidence of a single instance rather than correctly taking into account the background distribution that the instance comes from. People also systematically display {\em overconfidence} when estimating the probability of beneficial outcomes that are based on their own agency \citep{della-vigna-psych-econ}. It would be interesting to see how these might be integrated into a framework for analyzing distributional questions as we do here. Of course, none of these behavioral effects tell us how to choose among policies offering different distributions, but they can provide insight, in some cases, into why people express the policy preferences that they do. By integrating consideration of a robust behavioral bias like probability weighting into these contexts, we may better understand policy preferences for different mechanisms with uncertain allocations. \section{Perceived Harm Minimizing Allocations}\label{sec:harm_analysis} We begin our analysis with the case of allocating $r$ units of \emph{harm} among $n$ individuals, and characterize the perceived harm minimizing distributions.
We will show that for any given total harm level $r$, the optimal solution to (\ref{opt:welfare}) concentrates the risk on a subset of the population, such that each member of the at-risk subset has a probability of harm bounded away from 1, while most of the population has a probability of harm equal to 0. Our analysis utilizes the following two lemmas. All omitted proofs can be found in the technical appendix. \begin{lemma}\label{lem:min_concave} Let $f: [0,\ell] \longrightarrow [0,1]$ be a strictly concave function, $m \geq 2$ an integer, and $0< c \leq m \ell$ a constant. Let $\mathbf{x}^* = (x^*_1,\cdots,x^*_m)$ specify the unique optimal solution to the following minimization problem: \begin{equation}\label{eq:min_concave} \min_{x_1,\cdots,x_m \in [0,\ell]} \sum_{i=1}^m f(x_i) \text{ s.t. } \sum_{i=1}^m x_i = c. \end{equation} Then for at most one $i \in \{1, \cdots, m\}$, $x^*_i \in (0, \ell)$. \end{lemma} \begin{lemma}\label{lem:min_convex} Let $f: [\ell,1] \longrightarrow [0,1]$ be a strictly convex function, $m \geq 2$ an integer, and $c \in [m \ell, m]$ a constant. Then $(c/m, \cdots, c/m)$ is the unique optimal solution to the following minimization problem: \begin{equation}\label{eq:min_convex} \min_{x_1,\cdots,x_m \in [\ell,1]} \sum_{i=1}^m f(x_i) \text{ s.t. } \sum_{i=1}^m x_i = c. \end{equation} \end{lemma} Armed with the above two lemmas, we can characterize the perceived harm minimizing allocations of $r$ units of harm among $n$ individuals. \begin{theorem}\label{thm:min_harm} Consider the allocation of $r$ units of harm among $n$ individuals. Let $\ell$ specify the inflection point of the probability weighting function, $w(.)$. Let $\mathbf{p}^*$ be a minimizer of (\ref{opt:welfare}). Then \begin{enumerate} \item For at most one individual $i \in S$, $0<p^*_i<\ell$; \item For any $j \in S$ with $p^*_j \geq \ell$, $p^*_j = p$ where $p \geq \ell$ is a constant. \end{enumerate} \end{theorem} \begin{proof} We first show that in the perceived harm minimizing allocation, at most one individual receives $0<p^*_i <\ell$. Suppose not, and there are at least two individuals (say $1$ and $2$) such that $0<p^*_1 \leq p^*_2 < \ell$. We apply Lemma~\ref{lem:min_concave} to $w:[0, \ell] \longrightarrow [0,1]$, $m=2$, and $c = p^*_1 + p^*_2$ to argue that there exists an operation that improves the objective value in (\ref{opt:welfare}). Let $(p'_1,p'_2)$ specify the unique optimal solution to the following minimization program: \begin{equation} \min_{p_1,p_2 \in [0,\ell]} w(p_1) + w(p_2) \text{ s.t. } p_1 + p_2 = p^*_1 + p^*_2. \end{equation} According to Lemma~\ref{lem:min_concave}, for at most one $i \in \{1,2\}$, $p'_i \in (0, \ell)$; since both $p^*_1$ and $p^*_2$ lie in $(0,\ell)$, the pair $(p^*_1, p^*_2)$ differs from $(p'_1,p'_2)$, so replacing it with $(p'_1,p'_2)$ strictly improves $w(p_1) + w(p_2)$. The same operation improves (\ref{opt:welfare}) while keeping $p^*_3, \cdots, p^*_n$ (and subsequently, $w(p^*_3), \cdots, w(p^*_n)$ in (\ref{opt:welfare})) unchanged. This contradicts our initial assumption that for at least two individuals $0<p^*_1 \leq p^*_2 < \ell$. Second, we show that for all $j \in S$ with $p^*_j \geq \ell$, $p^*_j = p$ where $p \geq \ell$ is a constant. Suppose not, and there are two individuals with unequal probabilities in the $[\ell,1]$ interval. Without loss of generality, we assume $\ell \leq p^*_1 < p^*_2$. Applying Lemma~\ref{lem:min_convex} to $w:[\ell,1] \longrightarrow [0,1]$, $m=2$, and $c = p^*_1 + p^*_2$, we can improve the objective value of (\ref{opt:welfare}) by redistributing the probability of harm $(p^*_1 + p^*_2)$ among individuals 1 and 2 as follows: set both $p_1 = p_2 = (p^*_1 + p^*_2)/2$. Since this operation reduces the objective value, it contradicts the assumption that there are two individuals with unequal probabilities in $[\ell,1]$. \end{proof}
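As a numerical sanity check on this characterization, the search for a minimizer can be restricted to the structure that Theorem~\ref{thm:min_harm} allows: $k$ equal shares $p \geq \ell$, at most one residual probability in $(0,\ell)$, and zeros elsewhere. A small sketch (in Python; the Prelec parameters $\alpha=0.9$, $\beta=1$ and the grid size are illustrative choices of ours): {\footnotesize \begin{verbatim}
import math
import numpy as np

alpha, beta = 0.9, 1.0
ell = math.exp(-1.0)

def w(p):  # Prelec weighting, with w(0) = 0
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

def min_harm(n, r):
    # Theorem min_harm structure: k equal shares p >= ell, at most one
    # leftover eps in [0, ell), and probability 0 for everyone else.
    best, arg = float("inf"), None
    for k in range(1, n + 1):
        for p in np.linspace(ell, 1.0, 4001):
            eps = r - k * p                 # the (at most one) residual share
            if eps < -1e-9 or eps >= ell or (eps > 1e-9 and k + 1 > n):
                continue
            val = k * w(p) + w(max(eps, 0.0))
            if val < best:
                best, arg = val, (k, round(p, 3), round(max(eps, 0.0), 3))
    return round(best, 4), arg

for r in [1, 2, 5, 10]:
    print(r, min_harm(50, r))   # the optimal k grows roughly linearly in r
\end{verbatim} } For $r=1$, the search returns two shares of $0.5$ (cf.\ Theorem~\ref{thm:min-1} below), and the reported $k$ grows roughly linearly in $r$, anticipating the discussion that follows.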
Theorem~\ref{thm:min_harm} implies that for any given total harm level $r$, the number of individuals who receive a positive probability of harm is always bounded regardless of the number of individuals $n$. Let $k(n,r)$ specify the number of individuals whose probability of harm is greater than or equal to $\ell$ in the optimal solution, $\mathbf{p}^*$ (more precisely, $k(n,r) = \vert \{i \in S \text{ s.t. } p^*_i = p \geq \ell \}\vert$). \begin{corollary} For any given total harm level $r$, there exists $K \in \mathbb{N}$ such that $k(n,r) \leq K$ for any $n \in \mathbb{N}$. \end{corollary} It is also easy to see that for a fixed $n$, $k(n,r)$ must increase roughly linearly in $r$. This is simply because, according to the above theorem, there is at most one individual with $0<p^*_i<\ell$; when the remaining $(r-p^*_i)$ units of harm are allocated equally among $k(n,r)$ individuals, each of them receives a probability of harm $p \geq \ell$. Since $p = \frac{(r-p^*_i)}{k(n,r)}$ and $0 \leq p^*_i < \ell$, we have {\footnotesize \begin{eqnarray*} &&\frac{r - \ell}{k(n,r)} \leq p \leq \frac{r}{k(n,r)} \\ &\Rightarrow & \frac{r-\ell}{p} \leq k(n,r) \leq \frac{r}{p} \\ (\ell \leq p \leq 1) &\Rightarrow & r-\ell\leq k(n,r) \leq \frac{r}{\ell} \end{eqnarray*} } So $k(n,r)$ is sandwiched between two linear functions of $r$. Figure~\ref{fig:k-r} demonstrates $k$ as a function of $r$ for $n=50$ when $w$ is the Prelec probability weighting function. Through our simulations, we observe that for the settings depicted in Figure~\ref{fig:k-r}, there exists no individual with $p^*_i < \ell$, so $r$ is divided equally among $k(r)$ individuals. Based on Figure~\ref{fig:k-r}, we conjecture that $k(r) = \floor{c\times r}$ for a constant $c$. In what follows, we derive the slope $c$ for $w(p) = e^{-\beta (-\ln{p})^\alpha}$. For a given $r$, the objective value of (\ref{opt:welfare}) associated with dividing $r$ equally among $x$ people is equal to $x\, w(r/x)$. While, semantically, $x$ has to be an integer, we can define the function $g(r,x) = x\, w(r/x)$ over the real numbers to facilitate optimization (over a continuous domain of $x$). Now note that $g(r,x)$ is the perspective function of $w: [\ell,1] \longrightarrow [0,1]$. Since $w$ is convex in the feasible region for $r/x$, we know that its perspective $g$ must also be convex~\citep{boyd2004convex}. So for a fixed value of $r$, there exists a unique minimizer $x^*$ of $g(r,x) = x\, w(r/x)$. Approximating $k(r)$ with $c \times r$, we can write $g(r, k) = g(r, cr) = cr e^{-\beta (\log{c})^\alpha}$. Define $h(r,c) := cr e^{-\beta (\log{c})^\alpha}$. We can derive $c$ by setting $\frac{\partial g}{\partial k}$ to 0. Note that $$\frac{\partial g}{\partial k} = \frac{\partial g}{\partial h} \frac{\partial h}{\partial c} \frac{\partial c}{\partial k}.$$ Also, it is easy to observe that $\frac{\partial g}{\partial h} =1$ and $\frac{\partial c}{\partial k} = 1/r > 0$. So it must be the case that $\frac{\partial h}{\partial c} =0$: {\footnotesize \begin{eqnarray*} \frac{\partial h}{\partial c} &=& r e^{-\beta (\log{c})^\alpha} + c r e^{-\beta (\log{c})^\alpha} \times (-\beta) \times \alpha \times \frac{1}{c} \times (\log{c})^{\alpha-1} \\ &=& r e^{-\beta (\log{c})^\alpha} \left( 1 - \beta \alpha (\log{c})^{\alpha-1} \right) = 0 \end{eqnarray*} } From the last equation, we obtain $\log{c} = (\alpha \beta)^{\frac{1}{1-\alpha}}$, and hence $c = e^{(\alpha \beta)^{\frac{1}{1-\alpha}}}$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figures/k_r} \caption{The optimal $k$ for various values of $r$ ($n=50$). $k$ is roughly a linear function of $r$ with a slope of $e^{(\alpha \beta)^{\frac{1}{1-\alpha}}}$. } \label{fig:k-r} \end{figure}
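The closed form for $c$ is easy to check numerically: minimizing $g(r,x) = x\, w(r/x)$ over a continuous $x$ and dividing the minimizer by $r$ should recover $c = e^{(\alpha\beta)^{1/(1-\alpha)}} \approx 1.417$ for $\alpha=0.9$, $\beta=1$. A sketch (Python with SciPy; the parameter values are again illustrative): {\footnotesize \begin{verbatim}
import math
from scipy.optimize import minimize_scalar

alpha, beta = 0.9, 1.0
ell = math.exp(-1.0)

def g(x, r):   # x * w(r/x), the objective of dividing r equally among x people
    q = r / x
    return x * math.exp(-beta * (-math.log(q)) ** alpha) if q < 1 else x

c_closed = math.exp((alpha * beta) ** (1.0 / (1.0 - alpha)))  # ~ 1.4172
for r in [2.0, 5.0, 10.0]:
    res = minimize_scalar(g, args=(r,), bounds=(r, r / ell), method="bounded")
    print(r, round(res.x / r, 4), round(c_closed, 4))  # ratios should agree
\end{verbatim} }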
As another concrete example, we can derive the perceived harm minimizing policy for the special case of $r=1$ and the Prelec probability weighting function: \begin{theorem}\label{thm:min-1} Consider the distribution of $r=1$ unit of harm among $n$ individuals. Suppose $w(.)$ is the Prelec function with $\beta=1$. For any $\alpha<1$ and $n>1$, $(0.5, 0.5, 0, \cdots, 0)$ is the unique minimizer of (\ref{opt:welfare}). \end{theorem} \begin{proof} The proof is by induction. The base case holds for $n \leq 5$. (To verify it, first note that we can have at most two $i$'s with $p_i$ in the convex region, because $3\times 1/e > 1 = r$. A local improvement argument establishes that we can have at most one individual with an allocated probability of $x \leq 1-2/e$ in the concave region. It is easy to see that for $x \in (0, 1-2/e]$, $w(x) + 2 w\left( 0.5(1-x) \right)$ is increasing in $x$, which establishes that the minimum happens at $x=0$.) The induction hypothesis is that the statement holds for $n-1$, where $n-1 \geq 5$; we show that it then holds for $n$ as follows. Let the vector $\mathbf{p}$ be the optimal solution to (\ref{opt:welfare}). If there exists a component $1 \leq i \leq n$ with $p_i=0$, then we can apply the induction hypothesis to the remaining $(n-1)$ individuals and we are done. Otherwise, pick the component $i$ for which $p_i$ is minimum. We must have $p_i \leq 1/n$ because all $p$'s should add up to 1. We must also have at least one $j \neq i$ for which $p_j \leq (1-p_i)/(n-1)$. (Otherwise, the sum of all $p$'s would be greater than $(n-1) \times (1-p_i)/(n-1) + p_i = 1$, which is a contradiction.) Now if we set the $i$'th component to 0 and the $j$'th component to $p_j + p_i$, we reduce the objective value $w(p_1)+ \cdots + w(p_n)$, contradicting the optimality of $\mathbf{p}$. To see this, note that $p_i + p_j \leq p_i + (1-p_i)/(n-1) \leq 2/n$ (it is easy to verify that $p_i + (1-p_i)/(n-1)$ is increasing in $p_i$ for any $n \geq 3$, so it is maximized at $p_i = 1/n$---the upper bound on $p_i$---where it amounts to $2/n$). Now observe that for $n \geq 6$, $2/n < 1/e$.\footnote{Note that for $\beta=1$, $\ell = 1/e$ regardless of the value of $\alpha$.} So when increasing $p_j$ to $p_j + p_i$ and reducing $p_i$ to 0, we remain entirely in the concave region of $w$, and this action reduces the sum of weighted probabilities at the $i$'th and $j$'th components. More precisely, because $w$ is strictly concave in this region and $w(0)=0$, we have \begin{eqnarray*} && w(p_i) - w(0) > w(p_i + p_j) - w(p_j)\\ &\Leftrightarrow & w(p_i) + w(p_j) > w(p_i + p_j) + w(0), \end{eqnarray*} so the objective strictly decreases. \end{proof} \subsection{Heterogeneous Priorities} Next, we focus on the case of heterogeneous priorities (i.e., non-uniform $\mathbf{t}$), and characterize the optimal solution(s) when individuals have various priority levels. Recall that the optimization problem with heterogeneous priorities can be written as follows: \begin{equation}\label{opt:welfare_heter} \opt_{\mathbf{p} \in [0,1]^n} \sum_{i \in S} t_i \times w(p_i)\text{ s.t. } \sum_{i \in S} p_i = r, \end{equation} where $t_i = \sum_{j \in S} t_{ij}$ is the overall priority of individual $i$. We will use the notation $\sigma (\mathbf{t}, \mathbf{p})$ to refer to the objective function of (\ref{opt:welfare_heter}), and $C$ to refer to the set of all feasible solutions to it, i.e., $C = \{\mathbf{p} \in [0,1]^n \vert \sum_{i=1}^n p_i = r \}$. Note that $C$ does not vary with $\mathbf{t}$. Let $\sigma^*(\mathbf{t})$ be the value function of (\ref{opt:welfare_heter}), and $C^*(\mathbf{t})$ its set of optimal solutions for parameter $\mathbf{t}$. Since $C$ is constant and therefore continuous (i.e., both upper and lower hemicontinuous) in $\mathbf{t}$, we can apply the Maximum theorem~\citep{berge1997topological} to establish that the correspondence $C^*(\mathbf{t})$ is upper hemicontinuous in $\mathbf{t}$. \begin{corollary} Let $\sigma^*(\mathbf{t})$ be the value function of (\ref{opt:welfare_heter}), and $C^*(\mathbf{t})$ its set of optimal solutions. Then $\sigma^*$ is continuous and $C^*$ is upper hemicontinuous in $\mathbf{t}$ with nonempty and compact values. \end{corollary} In particular, the correspondence is upper hemicontinuous at $\mathbf{t} = \textbf{1}$ (the case of homogeneous priorities). Therefore, the optimal solutions for values of $\mathbf{t}$ sufficiently close to $\textbf{1}$ are close to the optimal solutions at $\textbf{1}$---which we have already characterized. But for values of $\mathbf{t}$ that are \emph{not} sufficiently close to $\textbf{1}$, do the support and shape of the optimal solution substantially change? The answer is no, as long as $\mathbf{t} \ggcurly \textbf{0}$ (that is, for all $i \in S$, $t_i>0$). \begin{proposition}\label{prop:min_heter} Consider the distribution of $r$ units of harm among $n$ individuals with priorities specified through vector $\mathbf{t}$. For any $\mathbf{t} \ggcurly \textbf{0}$ and any $\mathbf{p}^* \in C^*(\mathbf{t})$, \begin{enumerate} \item There exists at most one individual $i$ with $0<p^*_i<\ell$; \item For all $j \in S$ with $p^*_j \geq \ell$, $t_j \times w'(p^*_j) = c$ where $c$ is a constant. \end{enumerate} \end{proposition}
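The equal-marginal condition in part 2 of Proposition~\ref{prop:min_heter} also suggests a simple computational recipe for the top tier: fix the multiplier $c$, solve $t_j\, w'(p_j) = c$ on $[\ell,1]$ for each at-risk individual (clamping at $\ell$ when the condition would push below the convex region), and bisect on $c$ until the shares exhaust $r$. A sketch under these assumptions (in Python; the priority vector and Prelec parameters are illustrative, and the possible single residual share below $\ell$ is ignored): {\footnotesize \begin{verbatim}
import math

alpha, beta = 0.9, 1.0
ell = math.exp(-1.0)

def w(p):   return math.exp(-beta * (-math.log(p)) ** alpha)
def wp(p):  return alpha * beta * (-math.log(p)) ** (alpha - 1.0) * w(p) / p  # w'(p)

def invert(target):
    # Solve w'(p) = target on (ell, 1), where w' is increasing; clamp at ell.
    lo, hi = ell + 1e-9, 1.0 - 1e-9
    if target <= wp(lo):
        return lo
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if wp(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def top_tier(t, r):
    # Bisect on c so that the shares p_j with t_j * w'(p_j) = c sum to r.
    c_lo, c_hi = 1e-6, 1e6
    for _ in range(100):
        c = math.sqrt(c_lo * c_hi)
        total = sum(invert(c / tj) for tj in t)
        c_lo, c_hi = (c, c_hi) if total < r else (c_lo, c)
    return [round(invert(c / tj), 4) for tj in t]

print(top_tier([1.0, 1.0, 2.0], 2.0))  # the higher-priority person gets a smaller share
\end{verbatim} } In this illustrative run, the individual with twice the priority ends up at (or near) the boundary $\ell$, while the two lower-priority individuals split the remaining harm equally, in line with the proposition.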
Note that this operation strictly improves the objective value because: (1) according to Lemma~\ref{lem:min_concave}, the reduction in $w(p_1)$ is larger than the increase in $w(p_2)$; and (2) the reduction in $w(p_1)$ is multiplied by $t_1$, while the increase in $w(p_2)$ is multiplied by $t_2$. This local improvement contradicts our initial assumption that in an optimal allocation, there can exist two individuals with $0<p^*_1 \leq p^*_2 < \ell$. Second, we show that for all $j \in S$ with $p^*_j \geq \ell$, $t_j w'(p^*_j) = c$ for a constant $c$. Suppose not, and that there are two individuals (say 1 and 2) with assigned probabilities in the $[\ell,1]$ interval such that $t_1 w'(p^*_1) \neq t_2 w'(p^*_2)$. Without loss of generality, we assume $t_1 w'(p^*_1) < t_2 w'(p^*_2)$. Now note that we can improve the objective value by redistributing the probability of harm $(p^*_1 + p^*_2)$ among individuals 1 and 2 by solving the following \emph{convex} optimization problem: $$\min_{\ell < p_1,p_2 < 1} t_1 w(p_1) + t_2 w(p_2) \text{ s.t. } p_1 + p_2 = p^*_1 + p^*_2.$$ We can equivalently write the above as $$\min_{p_1,p_2 \in \mathbb{R}} t_1 \tilde{w}(p_1) + t_2 \tilde{w}(p_2) \text{ s.t. } p_1 + p_2 = p^*_1 + p^*_2,$$ where $\tilde{w}(.)$ denotes the extended-value extension of $w: [\ell,1] \longrightarrow [0,1]$. Writing the stationarity conditions for the above equivalent problem, we obtain $$t_1 w'(p_1) = t_2 w'(p_2) = c,$$ which contradicts our initial assumption. \end{proof} \fi Let $k(\mathbf{t},r)$ denote the number of individuals whose probability of harm is greater than or equal to $\ell$ under an optimal solution $\mathbf{p}^* \in C^*(\mathbf{t})$. Given the above proposition, and by reasoning similar to that provided previously for uniform priorities, it is easy to see that $r-\ell\leq k(\mathbf{t},r) \leq \frac{r}{\ell}$. \subsection{Examples and Discussion}\label{sec:harm_examples} \hl{When policymakers make risk allocation decisions in the real world, the resulting policies sometimes resemble the perceived harm-minimizing allocations that our theory predicts. In this section, in addition to the already-discussed example of the military draft, we present several examples of policies---allocating significant, but potentially diffuse, harms---that we would struggle to explain without recourse to probability weighting. We offer these real-world examples \emph{not} to claim that policymakers do---or should---explicitly take the effects of probability weighting on people's preferences into account when making policy decisions, nor that the policies we detail here are, empirically, the most common ways policymakers allocate risk. Rather, we offer them as suggestive evidence that probability weighting may play a role---even an implicit one---in leading policymakers or the public to perceive certain outcomes as more attractive in real-world settings, and is therefore important to consider.} \hl{The concrete examples that we review here are unmistakably unpleasant to think about, and the reader may find it morbid to dwell on them. Yet the distastefulness of these contexts is precisely the reason why we suggest that probability weighting may play a role in them. The fact that these decisions involve such extreme harms may explain why decision-makers might rely, implicitly, on psychological distortions that give the impression that the chosen allocation has helped to reduce the total amount of harm.}
\xhdr{Environmental pollutants} A recurring setting in which the public is confronted with uncertain risks of harm is in the effect of environmental pollutants on people's health. We draw on this setting as a stylized example for probability weighting by adapting a hypothetical scenario from Robert Sapolsky on the diffusion of risk \cite{sapolsky1994measures}. In his scenario, Sapolsky asks us to consider the psychological impact of dangers from pollution concentrated in a small area versus pollution that is spread more diffusely over a large area. We will see that probability weighting provides a potential way of thinking about some of the different reactions we have to these contrasting scenarios. In particular, consider the following three hypothetical scenarios involving harms from pollution. In Scenario 1, the public learns that a company has decided to unsafely dispose of a certain amount of toxic waste by burying it on a remote plot of land next to a house, far from the rest of the population, where three people live; the concentration of the chemicals is so high that each of these three people acquires a rare, deadly disease whose typical prevalence in the population is essentially zero. In Scenario 2, we learn that a company has decided to unsafely dispose of this same amount of toxic waste by spreading it diffusely over a larger region where a community of people live, in the process increasing the disease risk by some intermediate level such that three people will die of the same rare disease in expectation.
In Scenario 3, we learn that a company has, again, disposed of this same amount of toxic waste by venting it into the air over a metropolitan area where three million people live, giving each of them a 0.0001\% chance of acquiring this rare disease. (Again, we suppose this chance is significantly higher than the base rate of this disease in the population, which is effectively zero.) We begin by positing that the public might well have different subjective reactions to these stories if they were presented in isolation. At a minimum, the three stories have very different structures. The first evokes the image of a specific set of three people who have died as a direct result of the company's actions; they are the {\em identifiable victims} in the story \citep{jenni1997explaining}. The second evokes the image of the community that is involved, and the people who are now at elevated risk of the disease. And in the third, the company's actions have had an impact on millions of people, all of whom have been endangered. These different subjective framings of the scenarios should be contrasted with the fact that, in each case, the company's recklessness leads to three deaths from the same disease in expectation. This suggests that something other than the expected number of deaths is leading to the subjective differences. Moreover, the differences are also hard to fit in any simple way into an equity principle, since in all three scenarios, most people in society are completely unaffected by the harm from this particular pollution incident; the contrasts are more about the number of people affected, and how seriously, than about any sense in which all of society is sharing the harm equally. Probability weighting provides a framework for thinking about these contrasts. If $k$ people are impacted by the pollution, each receiving a probability of $r/k$ of contracting the resulting disease, then the total perceived harm from this allocation of risk, under the model in this section, is $k w(r/k)$. Even though the total harm $r$ is fixed in the three scenarios, the total perceived harm $k w(r/k)$ changes as we vary $k$, and so under probability weighting, the total perceived harm will appear different across the scenarios. Moreover, Theorem~\ref{thm:min_harm} shows that for some choices of these parameters, it is possible to view the intermediate scenario as producing the lowest perceived harm---when the harm is diffuse enough that there are no identifiable victims (as in Scenario 1), but not so diffuse that it puts too large a population at positive risk (as in Scenario 3). Beyond the specifics of the formal model, however, our purpose in this example is also to highlight a broader qualitative point rooted in the contrasts drawn by Sapolsky's example \citep{sapolsky1994measures}: that probability weighting can lead us to have very different reactions to a fixed amount of expected harm, and that our reactions can be non-monotone in the sense that harm to an intermediate-sized group can, in principle, generate less discomfort than harm to a group that is either too small or too large.
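To make the non-monotonicity concrete, the following minimal numerical sketch evaluates $k\,w(r/k)$ for $r = 3$ expected deaths spread over groups of different sizes $k$. (The Prelec weighting function with $\alpha = 0.65$ and $\beta = 1$ is an illustrative assumption here, not a parameter choice made elsewhere in the paper.)

\begin{verbatim}
import math

def w(p, alpha=0.65, beta=1.0):
    # Prelec's weighting function w(p) = exp(-beta * (-ln p)^alpha);
    # alpha and beta are illustrative values, not empirical estimates.
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

r = 3  # expected number of deaths, fixed across scenarios
for k in [3, 4, 8, 30, 3000, 3_000_000]:
    print(k, round(k * w(r / k), 2))
# Total perceived harm k*w(r/k) is non-monotone in k with these parameters:
# roughly 3.0 at k=3, a minimum of about 2.56 at k=4, about 5.4 at k=30,
# and over 12,000 at k=3,000,000.
\end{verbatim}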
\xhdr{The executioner's conscience.} Governments that enact capital punishment have sometimes done so with significant attention to the moral guilt felt by executioners. (There is, of course, no small degree of dark irony in a policy aimed at alleviating the pain of killing somebody.) Military firing squads traditionally involved at least one rifle loaded with a blank cartridge---called the ``conscience round''---distributed at random, so that all members of the squad could avoid knowing with certainty that their own actions directly led to the death of the victim \citep{sapolsky1994measures,brennan2019strange}. Some executions by lethal injection have carried forward a similar practice. Lethal injection machinery used for a time in some states involved dual sets of syringes and dual switches to activate them, which were to be thrown simultaneously by two people. A computer would randomize which of the two vials would be injected into the prisoner and which would be discarded, and would then erase the determination---thereby giving both executioners a means of avoiding the certainty of knowing whether they, individually, had delivered the injection to the prisoner \citep{sapolsky1994measures}. While acknowledging the perversity of a policy that focuses on the pain an execution causes to an executioner, we detail this strategy because it closely fits the situation we model, as follows. In a dual execution team, the ``harm''---here construed as the moral guilt of having killed a person---is distributed according to probabilities $(0.5,0.5,0,0,\ldots,0)$ (as predicted in Theorem~\ref{thm:min-1}). Why? What is the advantage of orchestrating the execution in such a way, which certainly is more complicated and costly than simply omitting the placebo? With a single executioner, the operator knows without doubt that they performed the execution, which presumably causes them psychic harm. But with dual teams---the solution we know to be optimal according to Theorem~\ref{thm:min-1}---we distribute this harm over \emph{two} people. Both executioners are spared the moral weight of certainty---after all, there is only a 50\% chance that each was the killer. And in fact, if we were to randomize the harm over \emph{more} people, we would end up with a sub-optimal allocation from a perceived-harm perspective (even putting aside the monetary costs of additional redundancy in the system).
Under probability weighting, three people each bearing a 33\% chance of harm are perceived as bearing more harm in total than two people each bearing a 50\% chance. This situation is also interesting because, in many real-world instances of harm allocation, probabilities are assigned---but then at some future point, the uncertainty is resolved and it becomes clear who actually experienced the harm. In contrast, in this case, the uncertainty is (deliberately) never resolved---by design, the computer discards the determination. The cost that agent $i$ experiences even after the event is the weighted probability $w(p_i)$ that they caused the death. Therefore, the total perceived harm remains at $w(p_1) + w(p_2) + \cdots + w(p_n)$ both before and after the deed. This perceived harm is precisely the objective function we study. \section{Introduction} Societies frequently wrestle with tough decisions regarding the allocation of benefits or burdens among their populations (see, e.g.,~\citep{calabresi1978tragic,viscusi2018pricing}). These decisions---particularly those that involve harm---are immensely difficult yet often unavoidable. As \citet{sunstein2003hazardous} points out, governments regularly pursue policies that lead to harms, including death, among the public: \begin{quote} {\em If government allows new highways to be built, it will know that people will die on those highways; if government allows new power plants to be built, it will know that some people will die from the resulting pollution. [...] Of course it would make sense, in most or all of these domains, to take extra steps to reduce risks. But that proposition does not support the implausible claim that we should disapprove, from the moral point of view, of any action taken when deaths are foreseeable.} \end{quote} These considerations remain true even when the prospective harms are reduced as much as possible; to the extent that they have not been eliminated altogether, we must reason about the impact of policies that produce foreseeable harms. To make matters more complicated, many of these allocations deal in \emph{probabilities} of some outcome occurring: when we raise the speed limit by a certain amount, for example, we can estimate to some approximate level the number of additional traffic fatalities that will result \citep{farmer2019effects}, but we can say much less about who in particular will die. Thus, for matters involving harm, the policy process necessarily involves a set of choices (even if these choices arise only implicitly) between different {\em distributions} of harm over the population. (Here we are effectively equating statistical predictions about a policy with probabilities of harm experienced by individuals; in the absence of further information pinpointing who exactly is at high risk, we assume people perceive these statistical facts as probabilities.) For example, policy $P$ might produce a probability $p_i$ that individual $i$ is harmed, while policy $Q$ might produce a probability $q_i$ that individual $i$ is harmed, for each individual in the population.
(To keep the discussion simple, we will think about a single kind of ``harm'' that can befall people as a result of the policy, rather than adding the complexity of different degrees of harm.) \xhdr{Comparing probability distributions resulting from different policies} How should we compare the two distributions of harm that arise from policies $P$ and $Q$, respectively? Much of the work that underpins mathematical models in these domains, including many of the loss functions that go into algorithmic decisions, tends to be based on expected cost---the idea that we should favor the policy that produces the lower expected harm. In our case, policy $P$ produces a sequence of probabilities $(p_1, p_2, \ldots, p_n)$ over the $n$ members of the population, and its expected harm is the sum $p_1 + p_2 + \cdots + p_n$; we can write a similar expression for the probabilities of harm $(q_1, q_2, \ldots, q_n)$ produced by policy $Q$. Of course, real-life policymaking is complex, and it is not clear that minimization of expected harm is typically the chief criterion in selecting among policy options. But there is also a more basic problem with using expected harm as the criterion: many policy questions about competing distributions of harm begin after we've already reduced the total amount of harm to a roughly fixed, low target level (often close to the minimum achievable given the constraints at hand), and so the debate is between distributions that all have the same expected level of harm. How, then, should we think about preferences between these competing policy proposals? \xhdr{An example: The Selective Service System} We can see the outlines of such debates in a number of settings where a risk of harm is being allocated across a population. In the policies for drafting people into the military in the United States, for example, the government has considered a number of different implementations for randomizing the selection of inductees. (Here, required service in the military is the cost, or harm, that is being allocated according to a probability distribution.) Under a given policy $P$, individual $i$ would learn that they had a probability $p_i$ of being drafted. Crucially, difficult questions about the implementations of draft systems persist regardless of the desired {\em size} of the military; that is, for a given size of the military, the sum of the draft probabilities $p_i$ over the population is pinned to this number, but some distributions of these probabilities have nonetheless been viewed as preferable to others. What accounts for these preferences? We note that discussions of revisions to the draft framed uncertainty itself as a cost being borne by members of the population. As the U.S. Selective Service System notes, prior to the introduction of a structured process for randomization, men\footnote{Under U.S.
law, only men are, or have ever been, required to register for the draft.} knew only that they were eligible to be drafted from the time they turned 18 until they reached age 26; {\em ``[this] lack of a system resulted in uncertainty for the potential draftees during the entire time they were within the draft-eligible age group. All throughout a young man's early 20's he did not know if he would be drafted.''} \citep{sss} The systems that were subsequently introduced specified priority groups according to age, which had the effect of deliberately producing non-uniform probabilities of being drafted; under these systems, some people learned that their probability of being drafted was higher than average, and others learned that their probability was lower than average.\footnote{Specifically, men were drafted according to ``priority year,'' with the youngest men being drafted first. During the year a man was 20 years old, he was in the top priority group, with reduced likelihood of being called up each subsequent year. Within each group, call-up order was randomized by lottery according to birthday \citep{sss}.} Viewed in terms of distributions, these policy changes had the effect of {\em concentrating} the probabilities more heavily on a subset of the eligible population, rather than {\em diffusing} the probabilities more evenly across everyone. The quote from the Selective Service System points out that a process that diffuses probabilities too widely seems to create unnecessary (and harmful) levels of uncertainty; but there are, of course, corresponding objections that could be raised to processes that concentrate probabilities too heavily on too small a group. An abstraction of these questions would therefore consider multiple probability distributions of harm---for example, policy $P$ producing $(p_1, p_2, \ldots, p_n)$, policy $Q$ producing $(q_1, q_2, \ldots, q_n)$, and perhaps others---and ask which of these should be preferred as a choice for society. In posing such questions, we are guided by the belief that studying reactions to distributions of harm should draw closely on those parts of the behavioral sciences that have considered how people subjectively evaluate probabilities. We therefore develop a framework based on the concept of {\em probability weighting} from behavioral economics. Our model will allow us to evaluate the Selective Service System's argument, and similar arguments in other domains, at a broad level---the contention that completely uniform randomization over the draft-eligible population is a sub-optimal policy because the cumulative level of uncertainty felt by the population is unnecessarily high. At first glance, this argument is counter-intuitive: since the size of the military is the same under all the draft policies being considered, isn't the cumulative level of uncertainty felt by the population also the same under all policies? On closer inspection, though, we find that this decision---to shift the probabilities in a non-uniform direction, and to interpret this as reducing cumulative uncertainty---is very much consistent with the predictions of probability weighting. \xhdr{A stylized example} To further motivate the models that follow, we can adapt our discussions about harm allocations---and complex scenarios such as the military draft---into a stylized example in which a fixed amount of harm must be allocated across a given population.
We will argue that different allocations of harm have very different subjective resonances, and it is these differences that behavioral theories of probability weighting aim to illuminate. Thus, as a thought experiment, consider the following hypothetical example. Suppose we need to allocate 1 unit of harm among 100 individuals. For simplicity, let's assume all 100 individuals are equally deserving and willing to bear the harm. We might allocate the harm to one specific person (say, Bob), while giving the other 99 people certainty that they are not at risk---hence the probability distribution $(1,0,\cdots, 0)$. Feeling sorry for Bob, we might instead divide the risk between him and another member of the population, Chloe---and ultimately flip a coin to decide which of them is to bear the harm, while the other 98 people are free and clear; i.e., the distribution $(1/2, 1/2, 0, \cdots, 0)$. Or we could have a third person, David, join Bob and Chloe in the risk pool, lowering the risk for each of them to one-third: $(1/3, 1/3, 1/3, 0, \cdots, 0)$. Finally, we might allocate the risk evenly among all 100 individuals, and select the recipient of the harm by random lottery: $(0.01, \cdots, 0.01)$. How might a policymaker select among these policies? Each of them, ultimately, results in the same amount of harm (1 unit) befalling the population, yet they strike us as intuitively quite different. We may consider it blatantly unfair to single Bob out as a certain victim by concentrating the risk completely on him; and indeed, a long line of work in psychology on the so-called {\em identifiable victim effect} suggests that we tend to find such outcomes particularly troubling \citep{jenni1997explaining}.\footnote{Philosophy has also grappled with the observation that we tend to recoil at the idea of, for example, harvesting one person's organs to save the lives of five other people. Such cases reveal an intuitive distaste for distributions that aim to reduce the overall amount of harm experienced by a population by focusing those harms on a small subset of people \citep{thomson1976killing}. Note that our framework does not apply to these cases because concentrating costs in these instances actually reduces the total cost (e.g., reducing the total number of deaths from five to one); in our settings, the way a policy allocates harms does not affect the amount of harm imposed on the overall population.} On the other hand, a random lottery distributes the risk equally among all 100 individuals---but in the interim, it forces \textit{everybody} to worry about their chances of being harmed. (This is the form of uncertainty, and corresponding psychological cost, that the Selective Service System was concerned with in our example of the draft lottery.) The second and third options provide intermediate alternatives. In the second alternative, no one person is harmed with \emph{certainty}, while at the same time, the smallest possible number of individuals must bear the risk.
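Probability weighting, which we introduce formally below, lets us attach numbers to these intuitions. As a rough computational sketch (assuming Prelec's weighting function with the illustrative parameters $\alpha = 0.65$ and $\beta = 1$; any inverse S-shaped weighting function yields the same qualitative comparison), the total perceived harm $\sum_i w(p_i)$ of the four candidate allocations can be compared directly:

\begin{verbatim}
import math

def w(p, alpha=0.65, beta=1.0):
    # Prelec weighting function; parameter values are illustrative.
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

n = 100
allocations = {
    "Bob alone":         [1.0] + [0.0] * (n - 1),
    "Bob and Chloe":     [0.5] * 2 + [0.0] * (n - 2),
    "Bob, Chloe, David": [1 / 3] * 3 + [0.0] * (n - 3),
    "uniform lottery":   [1 / n] * n,
}
for name, p in allocations.items():
    print(name, round(sum(w(pi) for pi in p), 3))
# With these parameters the perceived harms are about 1.0, 0.909,
# 1.036, and 6.73 respectively: the two-person split is perceived as
# least harmful among these four candidates, and the uniform lottery
# as by far the most harmful.
\end{verbatim}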
The fact that we may prefer some of the above alternatives to others immediately suggests that a cost-benefit analysis based on expected harm is not sufficient to capture our intuitions---since all the options involve the same expected amount of harm. Likewise, our intuitive reactions to these different proposals do not neatly map onto common concerns with distributive justice, where we tend to worry about the relative impact of allocations on different \hl{social groups or subgroups within} the population, given existing social inequalities. In this case, our reactions have nothing to do with any details about who Bob, Chloe, and David happen to be or the social groups to which they belong. What we perceive to be the more desirable allocation instead seems to rest on how we perceive the benefits or harms of being subject to uncertain outcomes.\footnote{\hl{To put it differently, the purpose of our work is \emph{not} to argue that probability weighting tends to result in distributions that disproportionately harm members of specific social groups. Rather, we study human perceptions toward distributions that allocate the same type of harm unevenly across \emph{otherwise-equal} individuals (without specifying their group memberships). As we show later in the paper, behavioral principles suggest why people have strong reactions to uneven distributions even when the specific people who are harmed by or benefit from the allocation do not belong to distinct social groups.}} \xhdr{An interpretive analysis} Our intention in exploring people's subjective perceptions of risk probabilities is, emphatically, \textit{not} to prescribe a ``best'' mode of allocating probabilities of risk, nor to endorse the underlying policy decisions that give rise to a need to allocate such risk in the first place, nor to treat superficially the variety of other procedural and moral concerns that attend the allocation of harms and benefits to people. Ours is a purely {\em interpretive} undertaking; we find that preferences for certain allocation policies involving probabilities are difficult to explain unless we take probability weighting into account. Policy experts disagree about the extent to which cognitive errors ought to be explicitly taken into account in public decision-making. While some consider it foolish to base policies on what are essentially misunderstandings, others suggest that we might reasonably consider the ``psychic benefits'' to the public of protecting against ``imaginary'' risks \citep{schneier2008psychology,viscusi2018pricing,portney1992trouble,pollak1998imagined}. We stake no claim in this debate; our goal is to explore descriptively how people's subjective perceptions of probabilities \textit{might} impact preferences regarding such allocations---and how these impacts potentially explain peculiar real-life allocation policies. In this way, our work follows a style of research that seeks to shed light on observed policy outcomes by linking them to our behavioral understanding of latent human preferences for certain types of outcomes over others (see, e.g., \citep{srivastava2019mathematical,zhu2018value,Lee2018WeBuildAI} for earlier work in this genre). All of this still leaves us with a basic question. We have seen examples so far (with others to come) of policy-making favoring some level of randomization, while also steering away from completely uniform randomization that would spread risk of harm diffusely across a population.
Is there a model that predicts this type of ``intermediate'' position, avoiding both a concentration of risk on identifiable victims and too diffuse a distribution over the whole population? And can such a model be derived from known psychological models of human behavior? In this work we will argue that a preference for these types of intermediate distributions of risk can be derived naturally from the concept of {\em probability weighting}, one of the most empirically well-grounded human biases studied in behavioral economics~\citep{kahneman2013prospect}, to which we now turn. \hl{\xhdr{Plan for the remainder of the paper} Motivated by the premise that understanding people's perceptions of harm/benefit allocations is crucial to designing acceptable policies, we posit that models that rely solely on expected-value comparisons may miss crucial aspects of human perceptions of uncertain allocations---which are in part shaped by probability weighting. As a result, expected-value-optimizing algorithms may produce allocations that are behaviorally repugnant to people. Our model can partially explain these reactions using one of the fundamental principles in the behavioral sciences. To our knowledge, our work is the first to assess the attractiveness of different uncertain allocation policies by characterizing \emph{optimal allocations} under probability weighting. We make several connections between the optimal allocation patterns suggested by our theory and real-world policy choices that would otherwise be difficult to explain.} \hl{The rest of the paper is organized as follows: in Section~\ref{sec:pw}, we provide an overview of theoretical models of probability weighting as they relate to our work. Section~\ref{sec:model} introduces our new optimization-based framework, which utilizes probability weighting to understand preferences toward policies that lead to uncertain allocations. In Sections~\ref{sec:harm_analysis} and \ref{sec:benefit_analysis}, respectively, we explore the implications of probability weighting for preferences toward distributions of harm and benefit, and discuss several real-world policy choices that align with the patterns suggested by our theory. We conclude in Section~\ref{sec:conclusion} with a brief summary and several directions for future work.} \section{Probability Weighting}\label{sec:pw} \xhdr{A model based on probability weighting} Probability weighting begins from the qualitative observation that people tend to overweight small probabilities---behaving as though they are larger than they actually are---and tend to underweight large probabilities---behaving as though they are smaller than they actually are. More generally, probability weighting is the premise that when faced with an uncertain event of probability $p$, people will tend to behave with respect to this event---for example, when determining risks or evaluating gambles involving the event---as though its probability were not $p$ but a value $w(p)$, the {\em weighted version} of the probability. This weighting function $w(p)$ has the two properties noted above: $w(p)$ is larger than $p$ when $p$ is small, and $w(p)$ is smaller than $p$ when $p$ is large. If we think in terms of the graph of $w(p)$ as a function of $p$, these properties give the curve its ``\hl{inverse} S-shaped'' form.
There are a number of different models that derive \hl{inverse} S-shaped probability weighting curves from simple observations; one influential functional form was provided by \citet{prelec1998probability}, who derived it from a set of underlying axioms about preferences for different types of gambles. (See Figure~\ref{fig:wp} for several examples of Prelec's inverse S-shaped probability weighting functions.) \hl{The concept of probability weighting has been invoked to explain a number of peculiar behavioral patterns; one of the canonical examples is people's participation in gambling and lotto games \citep{quiggin1991optimal,kahneman2011thinking}.\footnote{To elaborate further on this connection, note that the cost of buying a lotto ticket is always set to be higher than the expected benefit (i.e., the likelihood of winning times the prize), otherwise lottery owners would lose money. Nonetheless, people participate in these games in large numbers. Work in behavioral economics has advanced probability weighting as one explanation for this irrational behavior, via the tendency to overweight small probabilities---here the chance of winning the lottery \citep{quiggin1991optimal,kahneman2011thinking}.}} We use probability weighting here to ask the following basic question. Suppose there are $r$ units of harm to be allocated across a population of $n$ people, and we are evaluating policies that assign individual $i$ a probability $p_i$ of receiving harm, subject to the constraint that the sum of $p_i$ over all individuals $i$ is $r$. In the motivating settings discussed so far, it is natural to think of the cost borne by individual $i$ as the perceived probability $w(p_i)$---either because individual $i$ perceives it this way (via the psychological cost of their own uncertainty) or because the rest of society views it this way (via our discomfort at the idea that $i$ is an identifiable victim with a perceived probability $w(p_i)$ of being harmed). We can therefore ask: which probability distribution minimizes this total cost, the sum of $w(p_i)$ over all individuals $i$? Notice that this question allows for distinctions among probability distributions that all produce the same total expected harm for the population: in particular, all the distributions under consideration have a total expected harm of $r$, but they can nevertheless differ substantially in the sum of $w(p_i)$ over all individuals $i$. We find that the distributions minimizing the weighted sum of harm probabilities $w(p_i)$ in fact correspond to intermediate distributions of the type we have been discussing qualitatively: distributions that concentrate the risk on a subset of the population, such that each member of the at-risk subset has a probability of harm that is strictly less than $1$, while most of the population has a probability of harm equal to $0$. The analysis leading to this conclusion involves some subtlety: sums of inverse S-shaped functions do not exhibit the nice properties that simpler function classes do, and so minimizing them requires a more involved analysis. With this model in place, we can also explore the natural complement to this dynamic. Our discussion thus far has focused on probabilities of {\em harm}, but there is an analogous class of questions about distributing probabilities of {\em benefit} across a population---for example, in the availability of opportunities like higher education or financial assistance programs.
Suppose there are $r$ units of benefit available to the population as a whole, and we are considering policies that assign a probability $p_i$ that individual $i$ receives the benefit. Which distributions maximize the sum of $w(p_i)$ over all individuals $i$---that is, maximize the total perceived benefit? As with risks of harm, we do not argue that such a policy is necessarily desirable, only that it may have added or diminished attractiveness in its perceived impact; to the extent that such policies are favored in practice, the theory of probability weighting might therefore offer a suggestive description. We find that the distributions maximizing this sum of perceived probabilities of benefit are quite different from the distributions minimizing the sum of perceived probabilities of harm. In particular, when the total available benefit $r$ is small relative to the size of the population under consideration, the maximizing distribution is a uniform lottery which assigns all $n$ people a probability of $r/n$; but as $r$ increases, the maximizing distribution changes abruptly to one in which a subset of the population receives a portion of the benefit with certainty, and the rest of the population is given a uniform lottery for the remainder. \begin{table*} \begin{center} \begin{tabular}{| l | p{0.35\textwidth} | p{0.35\textwidth} |} \hline & Min. perceived harm & Max. perceived benefit \\ \hline $r = 1$ & $(\delta, 0.5(1-\delta), 0.5(1-\delta), 0, \cdots, 0)$ for some $\delta < \min\{\ell, 1-2\ell\}$ & $\left(\frac{1}{n}, \frac{1}{n}, \cdots, \frac{1}{n}\right)$ for all $n$ larger than a constant $q$ \\ \hline $r>1$ & $\left(\delta, \frac{r-\delta}{k}, \frac{r-\delta}{k}, \cdots, \frac{r-\delta}{k}, 0, \cdots, 0 \right)$ for $k<n$, $\delta < \min\{\ell, r-k\ell\}$ & $(\frac{r-j-\gamma}{n-j-1}, \cdots, \frac{r-j-\gamma}{n-j-1}, \gamma, \underbrace{1, \cdots, 1}_{j \text{ persons}})$ for $\gamma > \max\{\ell, r-j-\ell(n-j-1)\}$ \\ \hline \end{tabular} \caption{Summary of our findings regarding the allocation of probabilities across $n$ individuals, for a probability weighting function $w(.)$ that is monotone, concave up to an inflection point $\ell$, and convex beyond $\ell$.} \label{table:summary} \end{center} \end{table*} \xhdr{Implications of probability weighting} Given that a society developing policy seems to favor some probability distributions of harm or benefit over others, even when they have the same expected value, it is natural to ask whether a model based on probability weighting can shed light on the nature of these preferences. Our modeling activity thus works out what the favored policies would look like if society were seeking to maximize or minimize the total weighted probability. As we discuss in Sections~\ref{sec:harm_examples} and \ref{sec:benefit_examples}, properties of these minimizing and maximizing distributions \emph{can} be observed in a variety of real-world settings. We consider a number of allocation policies that have been adopted in practice that involve distributions of uncertain harms and benefits closely resembling what our model suggests is optimal under probability weighting. Because the attractiveness of these policies is difficult to explain otherwise, we present them as inductive evidence that probability weighting may be playing a meaningful role in guiding societal preferences for certain allocations and in determining the actual distributions of harms and benefits in society.
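The contrast between the two columns of Table~\ref{table:summary} can be illustrated numerically. The sketch below (again assuming Prelec weighting with the illustrative parameters $\alpha = 0.65$ and $\beta = 1$) compares the total perceived benefit $\sum_i w(p_i)$ of a uniform lottery, outright certainty for $r$ people, and a mixed allocation that grants certainty to $r-1$ people and runs a lottery for the last unit:

\begin{verbatim}
import math

def w(p, alpha=0.65, beta=1.0):
    # Prelec weighting function; parameter values are illustrative.
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

n = 100
candidates = {
    "uniform lottery":       lambda r: n * w(r / n),
    "r people certain":      lambda r: r * w(1.0),
    "r-1 certain + lottery": lambda r: (r - 1) * w(1.0)
                             + (n - r + 1) * w(1.0 / (n - r + 1)),
}
for r in [1, 50]:
    for name, f in candidates.items():
        print(r, name, round(f(r), 2))
# For r = 1, the uniform lottery (~6.73) dominates certainty (1.0);
# for r = 50, the mixed allocation (~53.5) dominates both the uniform
# lottery (~45.5) and pure certainty (50.0), matching the abrupt
# change described above.
\end{verbatim}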
\section{Behavioral Model}\label{sec:model} Consider a society $S$ consisting of $n$ individuals, denoted by $S=\{1,2, \cdots, n\} = [n]$. A policymaker needs to choose a policy $\pi$ that probabilistically allocates some notion of benefit/burden to individuals in $S$. Let $B$ denote the set of all possible levels of benefit/burden that an individual in $S$ can receive. Unless otherwise specified, for simplicity we assume $B= \{0,1\}$, with $1$ indicating benefit/burden and $0$ indicating the absence of it. A policy $\pi$ distributes a \emph{fixed} level of benefit/harm across the individuals in $S$. We use the notation $p^{\pi}_i \in [0,1]$ to refer to the probability of benefit/harm imposed on a particular individual $i \in S$ through policy $\pi$. (When $\pi$ is clear from the context, we drop the superscript $\pi$ and use the simplified notation $p_i$ to indicate the probability of individual $i$ receiving the benefit/harm.) For any feasible/admissible policy $\pi$, we assume \begin{equation} \sum_{i \in S} p_i = r, \end{equation} where $r$ captures the expected amount of benefit/harm that must be distributed. (Note that we consider settings in which $r$ is fixed, or changes negligibly with the number of individuals it is allocated across.) Following prospect theory, we assume that for every individual $i$ there exists an inverse S-shaped function $w_i: [0,1] \rightarrow [0,1]$, such that $w_i(p)$ determines individual $i$'s perception of probability $p \in [0,1]$. Throughout, for simplicity, we assume that $w(.)$ and its derivative $w'(.)$ are continuous. A concrete instance of a widely-studied probability weighting function is the following \citep{prelec1998probability}: \begin{equation} w(p) = e^{-\beta (-\ln{p})^\alpha}. \end{equation} See Figure~\ref{fig:wp} for $\beta=0.5$ and various levels of $\alpha$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figures/weighting_beta_50percent.png} \caption{Prelec's probability weighting function for $\beta=0.5$ and various values of $\alpha$.} \label{fig:wp} \end{figure} The parameter $\alpha$ determines the \emph{curvature} and $\beta$ determines the \emph{elevation} of the probability weighting function. \xhdr{Perceived social welfare} Given a policy $\pi$, let $\mathbf{p}^\pi = (p_1, p_2, \cdots, p_n)$ denote the distribution of benefit/harm across individuals in $S$ through the policy $\pi$. We assume that an individual $i$'s judgment of policy $\pi$ can be captured by a score function, $\sigma_i: [0,1]^{n} \rightarrow \mathbb{R}$, which maps $\mathbf{p}^\pi$ to a real number. The score indicates $i$'s perception of the overall benefit/burden that policy $\pi$ imposes on the society $S$. In particular, we assume the score function is additively separable across individuals and is defined as follows: $$\sigma_i(\mathbf{p}) = \sum_{j \in S} t_{ji} \times w_i(p_j).$$ In the above, $t_{ji}$ indicates the level of \emph{priority} that individual $i$ allocates to $j$ in the context of the allocation problem at hand.
For example, the priority $t_{ji}$ may represent $i$'s perception of $j$'s desert/need when allocating benefits, or $j$'s desert/ability to bear the harm when allocating harms. Without loss of generality, we assume $\sum_{j \in S} t_{ji} = 1$ for all $i \in S$ to normalize subjective priorities. The overall \emph{perceived welfare} of a policy $\pi$ is calculated by summing the score function across all individuals, and it is equal to $\sum_{i \in S} \sum_{j \in S} t_{ji} \times w_i(p_j)$. For simplicity, throughout we assume that the probability weighting function is the same for all individuals. Under this assumption, the perceived welfare simplifies to $\sigma (\mathbf{t}, \mathbf{p}) = \sum_{j \in S} t_j \times w(p_j)$, where $t_j = \sum_{i \in S} t_{ji}$ is the overall priority of individual $j$ and $\mathbf{t}=(t_1, \cdots, t_n)$. Unless otherwise specified, throughout our analysis we also assume that $t_{ji} = 1/n$ for all $i,j$, that is, priorities are all equal. With this assumption, the perceived welfare simplifies to $\sigma(\mathbf{p}) = \sum_{j \in S} w(p_j)$. In order to design a policy that probability-weighting individuals perceive positively, the policymaker aims to understand the distribution that optimizes the perceived welfare: \begin{equation}\label{opt:welfare} \opt_{\mathbf{p} \in [0,1]^n} \sum_{i \in S} w(p_i) \text{ s.t. } \sum_{i \in S} p_i = r. \end{equation} In allocating harms, the policymaker is naturally interested in allocations that \emph{minimize} $\sum_{i \in S} w(p_i)$. Conversely, when allocating benefits, the policymaker is interested in allocations that \emph{maximize} $\sum_{i \in S} w(p_i)$. In the next two sections, we characterize the solution(s) to (\ref{opt:welfare}). Our findings show that perceptions toward harm vs.\ benefit distributions are fundamentally different. When allocating \emph{benefit} among $n$ individuals, the perceived-benefit-maximizing solution consists of providing every individual with a non-zero chance of obtaining the benefit. In contrast, when allocating \emph{harms}, the equal allocation of risk is sub-optimal, and the perceived-harm-minimizing allocation instead concentrates the risk on $k<n$ individuals, leaving the rest with zero risk of harm. \subsection{Models of Decision-Making Under Risk} Several models in economics aim to capture people's preferences regarding choices that have uncertain outcomes. Expected utility theory posits that people are expected-utility maximizers and that they weight probabilities linearly. However, empirical evidence suggests people often do not treat probabilities linearly (see, e.g.,~\citep{quattrone1988contrasting,etchart2004probability,humphrey2004probability,berns2008nonlinear}). Instead, they overweight small probabilities and underweight large probabilities. Major alternatives to expected utility theory, including rank-dependent utility~\citep{quiggin1982theory,quiggin2012generalized}, prospect theory~\citep{kahneman2013prospect}, and cumulative prospect theory~\citep{tversky1992advances}, propose the concept of a probability weighting function to capture this behavioral effect. (Note that all these alternatives share with expected utility theory the assumption of \emph{additive separability} across outcomes.)
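As a compact illustration of the perceived-welfare computation defined in Section~\ref{sec:model}, the sketch below evaluates $\sigma$ for a small society with a hypothetical priority matrix (the matrix entries and the Prelec parameters are assumptions chosen only for illustration), and checks the reduction from the double sum to $\sum_j t_j \, w(p_j)$:

\begin{verbatim}
import math

def w(p, alpha=0.65, beta=1.0):
    # Shared Prelec weighting function (illustrative parameters).
    return math.exp(-beta * (-math.log(p)) ** alpha) if p > 0 else 0.0

# Hypothetical priorities: t[j][i] is the priority that individual i
# allocates to individual j; each column sums to 1, matching the
# normalization sum_j t_{ji} = 1 assumed in the text.
t = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.3],
     [0.2, 0.2, 0.4]]
p = [0.5, 0.3, 0.2]  # a feasible allocation with r = 1
n = len(p)

# Sum of the individual scores sigma_i(p) = sum_j t_{ji} * w(p_j) ...
welfare_from_scores = sum(t[j][i] * w(p[j])
                          for i in range(n) for j in range(n))
# ... equals sum_j t_j * w(p_j), where t_j = sum_i t_{ji}.
t_overall = [sum(row) for row in t]
welfare_reduced = sum(t_overall[j] * w(p[j]) for j in range(n))
assert abs(welfare_from_scores - welfare_reduced) < 1e-12
print(round(welfare_reduced, 4))
\end{verbatim}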
A probability weighting function is a widely-studied model of probability distortions in decision making under risk. A large number of probability weighting functions have been proposed (see, e.g., \citep{gonzalez1999shape} and \citep{tversky1992advances}). \citet{prelec1998probability} observes that, unlike utility functions, which are characterized by concavity, in empirical studies probability weighting functions are: \begin{itemize} \item \textbf{regressive}, intersecting the diagonal from above; \item \textbf{asymmetric}, with a fixed point at about $1/3$; \item \textbf{s-shaped}, concave on an initial interval and convex beyond it; \item \textbf{reflective}, assigning equal weight to a given loss-probability as to the corresponding gain-probability. \end{itemize} \citeauthor{prelec1998probability} uses these observations to axiomatize a subproportional function, $w(p) = \exp\left(-(-\ln p)^\alpha\right)$, $0 < \alpha < 1$, that satisfies all four of the above properties, and that has an invariant fixed point and inflection point at $p = 1/e \approx 0.37$. Probability weighting has been featured in various domains, including the stock market and the pricing of financial securities (see, e.g.,~\citep{barberis2008stocks}). In what follows, we build on this prior work, showing that we can derive theoretically the optimal allocation of harms and benefits under probability weighting---allocations that occasionally defy intuition---and we point to a number of concrete cases where we seem to observe such policies in practice.
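These properties are easy to check numerically. The short sketch below (our own illustration, not part of \citeauthor{prelec1998probability}'s analysis) verifies the fixed point, the regressive crossing, and the concave-to-convex transition for the $\beta = 1$ Prelec function; note that $w(1/e) = 1/e$ holds exactly for every $\alpha$ in this case:

\begin{verbatim}
import math

def w(p, alpha=0.65):
    # Prelec's function with beta = 1: w(p) = exp(-(-ln p)^alpha).
    return math.exp(-(-math.log(p)) ** alpha)

e = math.e
print(abs(w(1 / e) - 1 / e) < 1e-12)   # fixed point at p = 1/e: True
print(w(0.05) > 0.05, w(0.9) < 0.9)    # regressive crossing: True True

# Locate the inflection point via a sign change of the second difference.
h = 1e-4
def w2(p):
    return (w(p + h) - 2 * w(p) + w(p - h)) / h ** 2

grid = [i / 1000 for i in range(10, 990)]
flip = next(p for p in grid if w2(p) < 0 <= w2(p + 0.001))
print(round(flip, 3))   # ~0.367 (close to 1/e): concave below, convex above
\end{verbatim}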
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \vspace{-.3cm} Microtubules are essential constituents of the cytoskeleton of eukaryotic cells. They provide mechanical support, and are involved in a wide range of cellular functional modules. For instance, cells use them to build cilia and flagella, which are slender extensions of the cell used for migration and sensory tasks. In addition, microtubules~are important during cell division, where they build the mitotic spindle and separate chromosomes. To facilitate this variety of tasks, there have to be mechanisms that control the dynamics of microtubules~\cite{Desai1997}. Such capabilities are crucial for the microtubule~cytoskeleton in order to accomplish such diverse tasks as cell division~\cite{Goshima2010} and migration~\cite{deForges2012}, and further, to determine cell size and shape~\cite{Tischer2009}, and to position the nucleus in the center of the cell~\cite{Laan2012,Pavin2012}. Molecular motors and microtubule~associated proteins seem to play a crucial role in this regulation process~\cite{Howard2007,Wordeman2005}: Motors move along the microtubule and interact specifically with the filament at its end. Other associated proteins that influence the dynamics of microtubules~also bind directly to the microtubule~tip~\cite{Akhmanova2008}. Biochemical reconstitution experiments with microtubules~and microtubule~associated molecules have led to considerable insight in recent years~\cite{Subramanian2012}. In the following we highlight two specific proteins that are important for the length dynamics of~microtubules. Kip3p is a microtubule~depolymerising molecular motor~\cite{Varga2006,Gupta2006} of the kinesin-8 protein family~\cite{Walczak2013}. It binds strongly to the microtubule~lattice and therefore exhibits a long run-length~\cite{Leduc2012}. At the microtubule~tip, the strong binding of the motor to the terminal tubulin heterodimer induces depolymerisation~\cite{Cooper2010}. Interestingly, kinesin-8 shows a length-dependent depolymerisation activity mediated by the accumulation of motors along the microtubule, as shown in experiments~\cite{Varga2006,Varga2009} and recent theoretical work~\cite{Reese2011}. In vitro experiments have unveiled many molecular details of kinesin-8: The tail of the motor has been shown to be responsible for long residence times on the microtubule~lattice, and it influences microtubule~dynamics~\cite{Stumpff2011,Su2011,Mayr2011} and spindle size~\cite{Weaver2011}; for a brief review of these findings see Ref.~\cite{Su2012}. In the mathematical analysis we will concentrate on the depolymerising activity of the motor and treat those molecular details effectively, in terms of rate constants for movement, depolymerisation activity, and attachment/detachment kinetics of the motors. XMAP215 is a microtubule~associated protein~\cite{Gard1987} that has been shown to significantly amplify the growth rate of microtubules~\cite{Vasquez1994}, and to influence the dynamic properties of~microtubules~in the cytosol~\cite{Tournebize2000,Kinoshita2002}. Recent in vitro experiments investigated the interaction of XMAP215 with single microtubules~\cite{Brouhard2008} and the interplay with other end-binding proteins, which act as cofactors~\cite{Li2012,Zanic2013}. Specifically, it was found that XMAP215 acts as a polymerising enzyme at the microtubule~plus-end; a single XMAP215 molecule is able to polymerise several rounds of tubulin heterodimers onto the microtubule~\cite{Brouhard2008}.
Similar properties have also been observed for other end-binding proteins, see \emph{e.g.}\ Ref.~\cite{Komarova2009}. Combining the observations described above suggests that kinesin-8 and XMAP215 may constitute a minimal functional unit able to regulate microtubule~dynamics~\cite{Tournebize2000,Kinoshita2001} and antagonistically influence microtubule~length. This view is supported by recent experiments on cilia~\cite{Niwa2012} showing that the molecular motor Kif19a, which belongs to the kinesin-8 protein family, regulates the cilia length in a concentration dependent manner: High motor concentrations lead to short cilia, whereas low motor concentrations lead to long cilia. On a molecular scale, the ability to regulate length is traced back to the observed length-dependent depolymerisation speed~\cite{Howard2007}. In more detail, longer microtubules~are observed to depolymerise faster than shorter ones. This has been explained as follows~\cite{Varga2009,Reese2011}: Molecular motors in the cytosol attach to the microtubule~and subsequently move towards the microtubule~tip. The unidirectional movement of motors towards the microtubule~tip leads to an accumulation of motors, and the motor density increases from the minus- to the plus-end of the~microtubule, which finally results in an \emph{antenna-like} steady state profile of molecular motors. Therefore, there are more motors present at the tip for longer microtubules~than for shorter ones, which in turn leads to the observed length-dependence in the depolymerisation speed. In combination with microtubule~polymerisation, which is either spontaneous or catalysed by XMAP215, this is a promising starting point to achieve microtubule~regulation~\cite{Howard2007,Melbinger2012}. In this work, we elaborate on two possible molecular mechanisms by which molecular motors~could interact with the microtubule~tip. We specifically distinguish two scenarios, one where molecular motors~prevent the addition of tubulin heterodimers at the microtubule~tip (inhibition scenario), and another neutral scenario where microtubule~growth is possible irrespective of whether the microtubule~tip is occupied by motors or not. These differences in the interaction of motors with the microtubule~tip give rise to rich microtubule~length dynamics, ranging from accurate length control to intermittent behaviour. This article is organised as follows. In section~\ref{sec:model} we introduce a model for the dynamics of molecular motors on a microtubule. Further we define different possible molecular scenarios for how kinesin-8 interacts with the microtubule~tip during the depolymerisation process, including the case when XMAP215 acts as a polymerase. In section \ref{sec:dynamics} we present our main results: We begin with an outline of the theoretical framework, and then employ it to study microtubule~length dynamics. Our analytical calculations are complemented by stochastic simulations. Taken together, this allows us to identify the parameter regimes where length regulation is possible, and to provide a comprehensive analysis of how the ensuing stationary length depends on biochemical rates and protein concentrations. Moreover, we investigate the role of stochastic effects in length regulation, and discuss why there are dramatic differences between the considered scenarios. Finally, we conclude in section~\ref{sec:discussion} by discussing our results in terms of their possible biological relevance and their importance for driven diffusive lattice gases.
\begin{figure*}[th] \centering \includegraphics[scale=1]{figure1.png} \caption{ Illustrations of motors on a microtubule (MT) and different regulation scenarios at the MT plus-end. Starting from an empty MT lattice, motors accumulate on the MT lattice by Langmuir kinetics (with rates $\omega_\tn{on}$ and $\omega_\tn{off}$) and subsequent transport with rate $\nu$ to the plus-end. The combined effect of Langmuir kinetics and steric exclusion between the motors leads to an antenna-like profile which saturates at the Langmuir density $\rho_\tn{La}$. At the MT tip, kinesin-8 depolymerises the MT lattice and blocks MT growth in the \emph{inhibition scenario}, while it does not affect MT growth in the \emph{neutral scenario}. \label{fig:cartoon}} \end{figure*} \section{Model definition} \label{sec:model}\vspace{-.3cm} We consider a one-dimensional lattice gas model of finite length $L$~\cite{Krapivsky2010,Chou2011} as illustrated in Fig.~\ref{fig:cartoon}: Motor proteins (kinesin-8), present at a constant bulk concentration $c$, are assumed to randomly attach to and detach from the microtubule (MT) lattice with rates $\omega_\text{on}$ and $\omega_\text{off}$, respectively, defining the binding constant $K=c \, \omega_\text{on}/\omega_\text{off}$. Once bound, these motors move towards the plus-end at a constant hopping rate $\nu$; we fix the time scale by setting $\nu = 1$ (corresponding to approximately $6.35$ steps/sec in the case of kinesin-8~\cite{Varga2009}). As these motors hinder each other sterically, each binding site on the MT can be occupied at most once. This lattice gas model is known as the \emph{totally asymmetric simple exclusion process} (TASEP) with \emph{Langmuir kinetics} (LK)~\cite{Lipowsky2001,Parmeggiani2003,Parmeggiani2004}. The right boundary is considered to be dynamic: When a kinesin-8 motor arrives at the MT plus-end, which is the boundary in our model, it acts as a depolymerase, \emph{i.e.}\ it removes the last MT subunit at rate $\delta$~\cite{Reese2011}. In addition, the MT is assumed to polymerise through the attachment of single tubulin heterodimers. Unfortunately, there is insufficient experimental information on the detailed molecular cycle for MT growth in the presence of kinesin-8 motors. We hypothesise the following different but equally plausible mechanisms for MT growth: \begin{enumerate}[label=(\roman{enumi})] \renewcommand{\labelenumi}{(\roman{enumi})} \renewcommand{\theenumi}{(\roman{enumi})} \item The MT only grows at rate~$\eta$ if the last site at the plus-end is \emph{not} occupied by a kinesin-8 motor. Because kinesin-8 inhibits MT growth we call this the \emph{inhibition scenario}; cf. Fig.~\ref{fig:cartoon}. \label{eta} \item The MT grows at rate~$\gamma$ independently of whether the tip is occupied or not. This \emph{neutral scenario} has been considered previously in Ref.~\cite{Melbinger2012}; cf. Fig.~\ref{fig:cartoon}. \label{gamma} \item MT polymerisation is facilitated by a second protein species, like for instance XMAP215. This enzyme, in the absence of kinesin-8, attaches to and detaches from the MT tip with rates $k_\tn{on}$ and $k_\tn{off}$, respectively. Once bound, XMAP215 prevents kinesin-8 from reaching the tip, and processively polymerises the MT at rate $\eta_\tn{x}$, \emph{i.e.}\ the enzyme immediately binds to the newly formed tip site after polymerisation has occurred.
\end{enumerate} We use the remainder of this section to give a concise summary of the results obtained recently for the \emph{neutral scenario}~\cite{Melbinger2012}: The combined effect of motor attachment in proximity of the minus-end (left boundary) and subsequent movement towards the plus-end (right boundary) leads to an accumulation of motors, which results in an antenna-like steady state profile~\cite{Varga2009,Reese2011}. At a certain distance from the minus-end the density profiles saturate to the equilibrium Langmuir density $\rho_\text{La}=K/(K+1)$~\cite{Leduc2012}. The resulting density profile in the vicinity of the minus-end is position-dependent, $\rho_-(x)$, and can be described by Lambert-$W$ functions~\cite{Parmeggiani2004}. Moving further towards the MT plus-end, the density profile is determined by the interplay of the motor current and the boundary conditions at the plus-end, which gives rise to a particular tip density $\rho_+(L)$. In a mean-field description~\cite{Chou2011}, this determines the length dynamics \begin{equation} \partial_t L (t)=-\delta\rho_+(L)+\gamma\,. \label{eq:dtL} \end{equation} A steady state is reached at a critical density $\rho_+^c={\gamma}/{\delta}$, where $\partial_t L(t)=0$. Depending on whether the tip density $\rho_+(L)$ is smaller or larger than $\rho_+^c$, the MT grows or shrinks. Because the motor current to the tip depends on the accumulation of motors along the MT, $\rho_-(x)$, the tip density depends on the actual length $L(t)$ of the MT. As a consequence a mechanism for MT regulation emerges: On a short MT, where the accumulated motor density is low, the tip density is also low and the MT grows, because the tip density lies below the critical threshold density, $\rho_+(L)<\rho_+^c$. This is in contrast to the case of a long MT, where a higher density of motors accumulates along the MT and the tip density is correspondingly higher. Once the tip density exceeds the critical threshold value, $\rho_+(L)>\rho_+^c$, the MT depolymerises. Figure~\ref{fig:cartoon_length} illustrates this mechanism. Shaded areas indicate density profiles for MTs of different length and also schematically account for the fact that the tip density is length-dependent and has a spike-like shape. The dashed line shows a threshold value for the tip density, above and below which the MT shrinks and grows, respectively. \begin{figure*}[th] \centering \includegraphics[scale=1.0]{figure2.png} \caption{Illustration of a linear motor density profile (shaded areas) and the threshold density $\rho_+^c$ (dashed line) for MT regulation. A low tip density $\rho_+(L)$ leads to a growing MT, and a high tip density results in a shrinking MT. Note that the density at the tip generally has a spike-like shape~\cite{Pierobon2006}.\label{fig:cartoon_length}} \end{figure*} \begin{figure*}[t] \centering \includegraphics[scale=1.0]{figure3.png} \caption{ Kymographs of how molecular motors regulate an MT. In the neutral case (a) the system displays a higher accuracy in length regulation ($\delta=0.2$, $\gamma=0.1316$). This is in contrast to the case of growth inhibition (b), where the system displays intermittent dynamics ($\delta=0.2$, $\eta=0.385$). A close look reveals different patterns of motor accumulation at the MT tips in the two scenarios.
Attachment and detachment rates are $c \, \omega_\tn{on}=0.001$ and $\omega_\tn{off}=0.003$.\label{fig:trajectories}\label{fig:phen}} \end{figure*} \section{Motor and Microtubule Dynamics}\label{sec:dynamics}\vspace{-.3cm} Though at first sight the \emph{neutral} and the \emph{inhibition} scenarios as introduced above appear very similar, there are actually strong qualitative differences in the ensuing length dynamics. Fig.~\ref{fig:phen}(a) and (b) show kymographs for the neutral and the inhibition scenario, respectively, as obtained from stochastic simulations employing Gillespie's algorithm~\cite{Gillespie2007}. While in the neutral scenario the overall length of the MT stays approximately constant with only small fluctuations, the length dynamics for the inhibition scenario is intermittent, with extended episodes of filament growth and shrinkage, reminiscent of the dynamic instability~\cite{Mitchison1984}. Note that for the inhibition scenario there is significant accumulation of motors at the MT tip during periods of depolymerisation. To understand how the system alternates between periods of growth and shrinkage, let us turn to a mathematical description of the dynamics. As already noted in the previous section, the length change of the MT is determined by the tip density $\rho_+$, \emph{i.e.} the probability that the MT tip is occupied by a molecular motor, \begin{equation} \partial_t L = \begin{cases} -\delta \rho_+ + \gamma & \text{(neutral scenario)}\, , \\ -\delta \rho_+ + \eta(1-\rho_+) & \text{(inhibition scenario)}\,. \end{cases} \label{eqn:drift} \end{equation} Here the first term on the right stands for depolymerisation, and the second term describes the polymerisation dynamics of the neutral and the inhibition scenario, respectively. Equation~\eqref{eqn:drift} shows that depending on the magnitude of the tip density, $\rho_+$, the MT either grows or shrinks: For large tip densities, depolymerisation is strong and the MT shrinks, while the MT grows for small tip densities; see Fig.~\ref{fig:cartoon_length}. The critical tip densities, $\rho_+^c$, where the filament length becomes stationary, read: \begin{equation} \rho_{+}^c= \begin{cases} \frac{\gamma}{\delta} & \text{(neutral scenario)}\, , \\ \frac{\eta}{\delta+\eta} & \text{(inhibition scenario)}\,. \end{cases} \label{eqn:rhoc_ex} \end{equation} To make further progress one needs to determine the actual tip densities employing a mean-field approach for the motor dynamics along the MT~\cite{Melbinger2012}. \begin{table}[t] \centering \setlength{\extrarowheight}{2.pt} \begin{tabular*}{8.4cm}{@{\extracolsep{\fill}}lll} \hline \hline {\bf Kinesin-8} & model & experiment~\cite{Varga2009}\\ \hline speed &$\nu=1$& $6.35$ steps/sec\\ attachment &$\omega_\tn{on}$& $ 24\, (\tn{nM}\, \tn{min}\, \mu\tn{m})^{-1}$\\ detachment & $\omega_\tn{off}$ & $4.8 \cdot 10^{-3}\,\tn{sec}^{-1}$\\ depolymerisation&$\delta$&n/k~ \cite{Reese2011}\\ tip-detachment$\dagger$&$\beta$& $0.01-0.1\,\tn{sec}^{-1}$\\ \hline \hline {\bf MT growth}& model& experiment$\ddagger$\\ \hline neutral &$\gamma$ & n/k\\ inhibition & $\eta$ & n/k\\ \hline \hline {\bf XMAP215} & model& experiment~\cite{Brouhard2008}\\ \hline attachment & $k_\tn{on}$& $ 0.1\, (\tn{nM}\, \tn{sec}\, \mu\tn{m})^{-1}$\\%& $0.0016\, \nu/\tn{nM}$\\ detachment &$k_\tn{off}$& $ 3.8\,\tn{sec}^{-1}$\\% & $0.6\, \nu$\\ polymerisation &$\eta_\tn{x}$& $6.6\, \tn{dimers}/\tn{sec}$ \\%& $\nu$ \hline \end{tabular*} \caption{Quantification of model parameters for kinesin-8 and XMAP215.
$\dagger$~Tip dwell times of different kinesin-8 constructs are: $10-55\,\tn{sec}$~\cite{Stumpff2011}; $20-40\,\tn{sec}$~\cite{Su2011}; $80\,\tn{sec}$~\cite{Varga2009}. In Ref.~\cite{Varga2009} it is shown that dwell times at the tip depend on motor concentration, suggesting cooperative effects of motors at the tip. A theoretical analysis is given in Ref.~\cite{Reese2011}. $\ddagger$~{MT growth speeds in the presence of kinesin-8s in vivo are $1.3\, \mu\tn{m}/\tn{min}$~\cite{Gupta2006}; $2\, \mu\tn{m}/\tn{min}$~\cite{Stumpff2008,Tischer2009}. Rate constants of individual growth events, however, are not available to our knowledge and the complexity of the process~\cite{Gardner2011} renders it difficult to quantify the damping effects of kinesin-8~\cite{Du2010}.}\label{tab:rates}} \end{table} \subsection{Phase behaviour and tip densities}\label{sec:simplified} \vspace{-.3cm} For biologically relevant parameter ranges, the time scales of the tip dynamics and the motor dynamics are comparable, cf. Table~\ref{tab:rates}. Therefore, the motor density profile quickly adapts to changes in the tip density and one can readily assume that the tip density and the bulk density are adiabatically coupled~\cite{Melbinger2012}. Moreover, experimental data also show that both the attachment and the detachment rates, $\omega_\tn{on}$ and $\omega_\tn{off}$, are very small \cite{Varga2009}. This suggests considering a \emph{simplified model} where one neglects the attachment and detachment kinetics, and assumes that a constant density $\rho_-$ serves as a particle reservoir at the left end of a lattice with fixed size $L$; see Fig.~\ref{fig:simplified}. This allows us to focus on the dynamics at the plus-end and unravel how it depends on the reservoir density $\rho_-$. Due to the adiabatic coupling between boundary and bulk, the results for the full model can be inferred from the simplified model upon replacing $\rho_-$ by the actual spatially varying profile $\rho_-(x)$. \begin{figure*}[t] \centering \includegraphics[scale=1.0]{figure4.png} \caption{ (a) Illustration of the \emph{simplified model} with a constant particle reservoir $\rho_-$ at the minus-end, and where Langmuir kinetics (LK) is not accounted for. (b) Mean-field phase diagram for the simplified model (inhibition scenario) for two different values of the reservoir density: $\rho_-=0.5$ (solid black), and $\rho_-=0.25$ (dotted). Dashed line indicates the phase boundary obtained from stochastic simulations of the simplified model including LK with on- and off-rates $c\omega_\text{on}=\omega_\text{off}=0.005$. (c) Mean-field solutions for tip densities at various growth rates $\eta$ indicated in the graph compared to simulation data with and without LK. Different phases (IN/EX/MC) are indicated by symbols and lines and refer to analytic results; cf. Eqs.~\eqref{eqn:exclusive_IN}, and \eqref{eqn:exclusive_MC}. The dotted line indicates a discontinuous transition between the EX and MC phase. The lattice was initiated with random configurations of motors with bulk density $\rho_\tn{b}=0.5$.\label{fig:simplified}\label{fig:phasediagram}\label{fig:tip_density}} \end{figure*} Since there is particle conservation, the dynamics of the tip density is given by \begin{equation} \partial_t \rho_+ =J_\text{b} (\rho_b, \rho_+) -J_\text{exit} (\rho_+)\, , \label{eqn:tip_dynamics} \end{equation} where $J_\text{b}$ and $J_\text{exit} = \delta \rho_+$ denote the bulk current and the loss rate of motors due to depolymerisation, respectively.
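Before turning to the mean-field analysis, the following minimal Monte Carlo sketch of the simplified model may be useful for orientation (Python; a shuffled sequential update rather than the Gillespie algorithm used for our figures, with illustrative parameter values, and assuming that the depolymerising motor detaches together with the removed subunit):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters, chosen inside the IN phase (delta > rho_minus)
delta, eta, rho_minus = 0.6, 0.2, 0.2  # depolymerisation, growth, reservoir
W, sweeps = 200, 20000                 # comoving window size, update sweeps

occ = rng.random(W) < rho_minus        # occupation array; occ[-1] is the tip
tip_sum = 0.0

for sweep in range(sweeps):
    # left boundary: a reservoir of density rho_minus refills the first site
    if not occ[0] and rng.random() < rho_minus:
        occ[0] = True
    # bulk: hops towards the plus-end (shuffled sequential update, nu = 1)
    for i in rng.permutation(W - 1):
        if occ[i] and not occ[i + 1]:
            occ[i], occ[i + 1] = False, True
    # tip kinetics, written in the frame comoving with the tip
    if occ[-1] and rng.random() < delta:      # depolymerisation: shift right
        occ = np.concatenate(([rng.random() < rho_minus], occ[:-1]))
    elif not occ[-1] and rng.random() < eta:  # growth (inhibition rule)
        occ = np.append(occ[1:], False)       # shift left, empty tip site
    if sweep >= sweeps // 2:                  # discard the transient
        tip_sum += occ[-1]

rho_in = rho_minus*(1 - eta - rho_minus)/(delta*(1 - rho_minus) - eta*rho_minus)
print("simulated tip density:", tip_sum/(sweeps - sweeps//2))
print("mean-field rho_+^IN  :", rho_in)  # IN-phase result derived below
\end{verbatim}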
Calculations are conveniently performed in a frame comoving with the MT tip. Then the bulk currents for the neutral (N) and the inhibition (I) scenario read in a mean-field approximation~\cite{Melbinger2012}: \begin{subequations}\begin{eqnarray} J_\text{b}^\text{N} &=& \rho_\text{b}(1-\rho_\text{b})-\gamma\rho_\tn{b}+\delta \rho_+ \rho_\text{b} \label{current_nonex} \, ,\\ J_\text{b}^\text{I} &=& \rho_\text{b}(1-\rho_\text{b})-\eta \rho_\text{b} (1-\rho_+)+\delta \rho_{+} \rho_\text{b} \, . \label{current_ex} \end{eqnarray}\end{subequations} Here $\rho_\text{b}$ denotes the motor density in the bulk of the MT, and the first term describes the current due to hopping processes accounting for particle exclusion on neighbouring sites. The remaining terms describe polymerisation and depolymerisation currents, which in a comoving frame simply correspond to simultaneous movement of all particles on the MT lattice to the left and right end, respectively. The stationary state of the model is determined by a balance of currents, or, in other words, the fixed point of Eq.~\eqref{eqn:tip_dynamics}:~$\rho_+ \delta = J_\text{b} (\rho_\text{b}, \rho_+)$. Solving for the tip density one finds \begin{equation} \rho_{+} = \rho_{+}(\rho_\tn{b},\delta,\eta)=\frac{\rho_\tn{b} (1-\eta -\rho_\tn{b})}{\delta (1-\rho_\tn{b})-\eta \rho_\tn{b}}\, ,\label{eqn:tip_exclusive} \end{equation} for the inhibition scenario. The tip density is determined by the bulk particle flux towards the tip, and, at the same time, the bulk density depends on the molecular processes at the MT tip. To make progress with the analytical calculations, it is necessary to have some phenomenological knowledge about the nature of the density profiles and their stability with respect to fluctuations. For exclusion processes, there are in general three distinct phases, each of which corresponds to different bulk densities $\rho_\tn{b}$ and ensuing bulk currents~\cite{Derrida1992,Schuetz1993}: \begin{itemize} \item {\it IN phase}: In this phase the particle current that enters the system at the minus-end determines the bulk density. For TASEP this phase is also called the \emph{low density phase}. \item {\it EX phase}: The bulk density is determined by the current of particles that leave the system at the right boundary (TASEP: \emph{high density phase}). \item {\it MC phase}: In this phase the \emph{maximal current} (MC) through the system determines the bulk density. It corresponds to a local maximum in the current density relation $J_\text{b} (\rho_\text{b})$. In contrast to the two other phases, the bulk density in the MC phase is independent of the boundary conditions. \end{itemize} Moreover, for exclusion processes, there are two possibilities to account for the boundary conditions at the left and right end. Either there is a domain wall (DW) delineating a low density region, $\rho_-$, from a high density region, $\rho_+$, or there are boundary layers~\cite{Hager2001} at one of the MT ends. \subsubsection{Density perturbations and domain wall theory} \label{sec:dwtheory} \vspace{-.3cm} To make progress on the phase diagram we need to investigate the stability of the aforementioned DWs and bulk densities. To this end we introduce two important criteria that allow us to analyse the stability of perturbations in exclusion processes, known as DW theory and the extremal current principle (ECP)~\cite{Krug1991,Kolomeisky1998,Popkov1999}. First we consider the stability of a bulk density $\rho_\tn{b}$ against a density perturbation.
Such a perturbation travels at the collective velocity~\cite{Krug1991,Kolomeisky1998} \begin{equation} u_\text{coll}=\partial_{\rho} J_\tn{b} (\rho, \rho_+) \mid_{\rho= \rho_\tn{b}}\, . \end{equation} Since for $u_\tn{coll} < 0$ density perturbations move towards the minus-end, they do not affect the tip density, and thereby the EX phase remains stable. In contrast, for $u_\tn{coll} > 0$, perturbations move towards the plus-end, which renders the IN phase stable against density fluctuations. Note that the collective velocity $u_\tn{coll} = 0$ in the MC phase (by definition). Second, we consider the stability of DWs. A DW between a left density $\rho^\text{left}$ and a right density $\rho^\text{right}$ travels at the velocity \begin{align} u_{\tn{DW}}=\frac{J_\tn{b}(\rho^\text{left}, \rho_+)-J_\tn{b}(\rho^\text{right}, \rho_+)}{\rho^\text{left}-\rho^\text{right}}\, . \label{eq:vshock} \end{align} Depending on the sign of this velocity, the phase corresponding to $\rho^\text{left}$ or $\rho^\text{right}$ is stable~\cite{Kolomeisky1998}. Taken together, $u_\tn{coll}$ and $u_\tn{DW}$ lead to analytic results for bulk and tip densities in the various phases; see Table~\ref{tab:exact}. \subsubsection{Phase diagram for the inhibition scenario}\label{sec:phasediagram} \vspace{-.3cm} With the methods introduced in the previous section it is a straightforward task to derive the densities and the ensuing phase behaviour of the simplified model. Since the neutral scenario has already been discussed previously~\cite{Melbinger2012}, we here focus on the inhibition scenario. In the IN phase the bulk density is (by definition) given by the reservoir density at the left boundary: $\rho_\text{b}^\text{IN}=\rho_-$. With the stationarity condition, Eq.~\eqref{eqn:tip_exclusive}, one finds that the tip density is a function of the reservoir density \begin{equation} \rho_{+}^{\tn{IN}}(\rho_-,\delta,\eta)=\frac{\rho_- (1-\eta -\rho_-)}{\delta (1-\rho_-)-\eta \rho_-}\, .\label{eqn:exclusive_IN} \end{equation} Note, however, that this is a stable solution of Eq.~\eqref{eqn:tip_dynamics} only outside the shaded area indicated in the phase diagram shown in Fig.~\ref{fig:phasediagram}(b). In the EX phase, the bulk density is given by the right boundary, $\rho_\text{b}^\text{EX}=\rho_+$, and Eq.~\eqref{eqn:tip_exclusive} leads to the striking result that the MT tip is \emph{always} occupied by a molecular motor, \begin{equation} \rho_{+}^{\tn{EX}}=1 \,, \label{eqn:exclusive_EX} \end{equation} in stark contrast to the corresponding result in the neutral scenario; see Table~\ref{tab:exact}. It implies that an MT always depolymerises for those parameter regimes where the system is in the EX phase. As in Ref.~\cite{Reese2011}, we attribute this behaviour to the slow depolymerisation rate in the EX phase, $\delta < \rho_-$. It implies that motors leave the tip more slowly than they arrive. Then the MT tip acts as a bottleneck for molecular transport and induces a traffic jam with $\rho_{+}^{\tn{EX}}=1$ at the plus-end. For the MC phase, the bulk density is given by the maximum of the bulk current $J_\tn{b}^\tn{I}$: \begin{equation} \rho_{\text{b}}^\text{MC}=\frac{\delta -\sqrt{\delta \eta (\delta +\eta -1)}}{\delta +\eta }\, .
\end{equation} Using this bulk density in Eq.~\eqref{eqn:tip_exclusive} gives a constant value for the tip density in the MC phase, which is independent of the reservoir density \begin{equation} \rho_{+}^{\tn{MC}}=\frac{\delta+\eta (\eta +\delta -1)-2 \sqrt{\eta \delta (\eta +\delta -1)} }{(\delta+\eta)^2}\, .\label{eqn:exclusive_MC} \end{equation} Knowing the tip densities, we can now use the domain wall theory explained above (see section \ref{sec:dwtheory}) to determine the transition lines between the various phases. The DW velocity gives the direction in which a DW between two densities, one from the left and one from the right, travels. To employ this criterion, we first have to identify the respective densities. Let us start with $\rho_\text{left}$: The density at the minus-end is in general determined by the entering current, corresponding to a tip density $\rho_{+}^{\tn{IN}}$, Eq.~\eqref{eqn:exclusive_IN}. This tip density, however, is only stable against small perturbations if $u_\text{coll}\geq0$. For parameters where $u_\text{coll}<0$ the density from the left is decreased to $\rho_\text{left}=\rho_{+}^{\tn{MC}}$. This sign-change of the collective velocity defines the phase boundary between the IN and MC phase: $\eta = \delta(\rho_- -1)^2/(\delta -\rho_-^2)$. Taken together, the density on the left of the DW is given by $\rho_\text{left}=\text{Min}[\rho_{+}^{\tn{IN}},\rho_{+}^{\tn{MC}}]$. The density at the right of the DW, $\rho_\text{right}$, is determined analogously. Since in that regime the collective velocity is strictly negative, we simply have $\rho_\text{right}=\rho_{+}^{\tn{EX}}=1$. Using the above expressions for $\rho_\text{left}$ and $\rho_\text{right}$ in Eq.~\eqref{eq:vshock} gives the remaining phase boundaries: With $\rho^\text{left}=\rho_-$, $\rho^\text{right}=1$, and $\rho_+=1$ one obtains $u_{\tn{DW}}= \delta - \rho_-$, implying that the phase boundary between the IN and EX phase is given by $\delta = \rho_-$. The boundary line $\delta + \eta = 1$ signifies that above this line the stationary solution given by Eq.~\eqref{eqn:exclusive_IN} becomes unstable. This instability gives rise to interesting motor dynamics, in particular, a subtle dependence of the ensuing stationary profile on the initial condition. While these effects are certainly worth studying, they are irrelevant for our main focus, namely MT regulation, and, hence, we refrain from further analysing this regime here. Taken together, the above analysis gives the phase diagram shown in Fig.~\ref{fig:phasediagram}(b) for two different values of the reservoir density $\rho_-$. The general trend is that with decreasing reservoir density the parameter domain where the IN phase is stable expands.
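These domain-wall arguments are easy to check numerically. The short sketch below (Python; function and variable names are ours and purely illustrative) evaluates the inhibition-scenario bulk current of Eq.~\eqref{current_ex} together with the collective and DW velocities, and reproduces $u_{\tn{DW}}=\delta-\rho_-$ for $\rho^\text{right}=\rho_+=1$:
\begin{verbatim}
def J_inhibition(rho, rho_tip, delta, eta):
    # bulk current in the comoving frame, Eq. (current_ex)
    return rho*(1 - rho) - eta*rho*(1 - rho_tip) + delta*rho_tip*rho

def u_coll(rho_b, rho_tip, delta, eta):
    # collective velocity: derivative of the bulk current at rho_b
    return 1 - 2*rho_b - eta*(1 - rho_tip) + delta*rho_tip

def u_dw(rho_l, rho_r, rho_tip, delta, eta):
    # domain-wall velocity between a left and a right density, Eq. (eq:vshock)
    return (J_inhibition(rho_l, rho_tip, delta, eta)
            - J_inhibition(rho_r, rho_tip, delta, eta))/(rho_l - rho_r)

# IN/EX boundary: with rho_right = rho_tip = 1 the DW velocity reduces
# to delta - rho_minus, so the DW reverses direction at delta = rho_minus
delta, eta, rho_minus = 0.4, 0.3, 0.25
print(u_dw(rho_minus, 1.0, 1.0, delta, eta))   # 0.15 = delta - rho_minus
\end{verbatim}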
\begin{table*}[t] \setlength{\extrarowheight}{2.pt} \begin{tabular*}{172.5mm}{@{\extracolsep{\fill}}lllll} \hline \hline \bf Tip density\\ \hline Inhibition & $\rho_{+}^{\tn{IN}}=\frac{{\rho_- (1-\eta -\rho_- )}}{{\delta-\rho_- (\eta +\delta ) }}$& $\rho_{+}^{\tn{EX}}=1$& $\rho_{+}^{\tn{MC}}=\frac{\eta (\eta +\delta -1)+\delta -2 \sqrt{\delta\eta (\eta +\delta -1)} }{{(\eta +\delta )^2}}$\\ \hline Neutral~\cite{Melbinger2012} & $\rho_{+}^{\tn{IN}}=\frac{\rho_- (1-\gamma -\rho_- )}{\delta (1- \rho_-)}$& $\rho_{+}^{\tn{EX}}=1-\frac{\gamma }{1-\delta }$& $\rho_{+}^{\tn{MC}}=\frac{1-\sqrt{\gamma} }{\delta}$\\ \\ \hline \hline \bf Critical growth\\% / critical XMAP density \hline Inhibition& $\eta_c^\tn{IN}=\frac{\delta \rho_-(1-\rho_- ) }{\delta - \rho_-(1-\rho_- ) }$ & $\eta_c^\tn{EX}=1-\delta$ [see Eq.~\eqref{rho_c_EX}]& $\eta_c^\tn{MC}=\frac{\delta }{4 \delta -1}$\\ \hline Neutral~\cite{Melbinger2012}& $\gamma_c^\tn{IN}=\rho_-(1 - \rho_-)$ & $\gamma_c^\tn{EX}=\delta (1 -\delta )$ & $\gamma_c^\tn{MC}=\frac{1}{4}$\\ \hline \end{tabular*} \caption{Analytic results for the tip densities $\rho_+$ in the different phases IN/EX/MC and the critical growth rates $\eta_c$ and $\gamma_c$ for the inhibition and neutral scenario, respectively. Note that $\eta_c^\tn{EX}$ is obtained from the phase boundary of the EX phase as derived in the main text.\label{tab:exact}} \end{table*} The analytical results obtained from mean-field theory agree nicely with the stochastic simulations when LK is neglected; see Fig.~\ref{fig:phasediagram}(c). For a growth rate $\eta = 0.3$, concomitant with the phase transition from the IN to the EX phase, the tip density increases upon lowering the depolymerisation rate $\delta$ and then continuously saturates at $\rho_+=1$ as the EX phase is reached. In contrast, for $\eta \geq 0.5$, there is a \emph{discontinuous jump} in the tip density as one passes from the MC into the EX phase; see discussion above. The stochastic simulations with LK show a quite significant increase in the magnitude of the tip density in the MC phase, in particular in the shaded area of the phase diagram, Fig.~\ref{fig:phasediagram}(b). We attribute this to the fact that the Langmuir density in bulk, $\rho_\text{La}$, acts as a source for kinesin-8 motors which tends to increase the motor density on the MT and at the tip. Though these effects are interesting and worth studying, they are not important for our main concern here, namely regulation of MT length. As discussed previously~\cite{Melbinger2012} and elaborated on later in section \ref{sec:length}, MT regulation is possible only if the density profile is determined by the particle current at the minus-end, \emph{i.e.}\ if the system in its stationary state is in the IN phase. In that case, even adding LK in the simulations has only a minor effect on the magnitude of the tip density, and we can safely use the analytical mean-field results to further analyse the stationary MT length. \begin{figure*}[t] \centering \includegraphics[scale=1.0]{figure5.png} \caption{Drift velocity of the MT tip, $v=\partial_t L$, as a function of the polymerisation and depolymerisation rates for the \emph{inhibition} (a) and the \emph{neutral} (b) scenario obtained from stochastic simulations for the simplified model with LK; colour code indicates the magnitude of the drift. Solid lines indicate where the MT velocity is zero, $\eta_c (\delta)$ and $\gamma_c (\delta)$, as obtained from the analytical calculations; see Table~\ref{tab:exact}.
The dotted line is obtained from the analytical theory; it coincides with the boundary line $v=0$ and agrees well with numerical simulations where LK has been turned off. The dashed line is the numerically determined boundary $\eta_c$ when LK is turned on. Stochastic simulations including LK were performed with motor attachment and detachment rates $c\omega_\text{on}=\omega_\text{off}=0.005$, reservoir density $\rho_-=0.5$, and system size $200$. \label{fig:drift}\label{fig:comparison}} \end{figure*} \subsection{Dynamics of the microtubule length}\label{sec:speed}\vspace{-.3cm} Figure \ref{fig:drift} shows the results of our stochastic simulations with LK for the MT drift velocity, $v=\partial_t L$, as a function of the depolymerisation and the polymerisation rates for both the inhibition and the neutral scenario. There are well defined boundaries, $\eta_c (\delta, \rho_-)$ and $\gamma_c (\delta, \rho_-)$, separating regimes in which MTs grow and shrink, respectively. Since the tip density, $\rho_+$, dictates MT dynamics, see Eq.~\eqref{eqn:drift}, those boundaries can be readily calculated upon comparing the tip densities listed in Table~\ref{tab:exact} with the critical tip density, Eq.~\eqref{eqn:rhoc_ex}. For the inhibition scenario we find that for $\delta < \rho_-$ the critical tip density coincides with the phase boundary of the EX phase \begin{equation} \eta_c = 1 -\delta \, , \label{rho_c_EX} \end{equation} while for $\delta > \rho_-$ it lies either within the MC or the IN phase: \begin{equation} \eta_c = \begin{cases} \frac{\delta \rho_-(1-\rho_- )}{\delta-\rho_-(1-\rho_-)} & \text{for} \quad \rho_- < 1/2 \, ,\\ \frac{\delta/4 }{\delta -1/4} & \text{for} \quad \rho_- > 1/2 \, ;\\ \end{cases} \label{rho_c_MC} \end{equation} see Table~\ref{tab:exact} for a summary together with the results for the neutral scenario. These analytical results are in perfect accordance with our stochastic simulations [Fig.~\ref{fig:drift}], with one interesting exception for the inhibition scenario, namely the boundary line of the EX phase for $\delta < \rho_-$. Since we recover agreement between stochastic simulations and analytical calculations by switching off LK in our stochastic simulations, we can fully attribute this difference to the effect of attachment and detachment of motors in bulk, as discussed in section~\ref{sec:phasediagram}; cf. dotted and dashed lines in Fig.~\ref{fig:drift}(a). Further, the differences between the two scenarios are significant; cf. Figs.~\ref{fig:drift}(a) and (b). In the inhibition scenario, the regime where MTs shrink -- and hence regulation becomes possible -- is much broader since kinesin-8 inhibits MT growth when bound to the tip: For a small depolymerisation rate $\delta$, motors reside at the MT end for a relatively long time, which dramatically broadens the regime of MT shrinkage. \begin{figure}[t] \centering \includegraphics[scale=1.0]{figure6.png} \caption{Microtubule dynamics with kinesin-8 and XMAP215: (a) Simplified model with motor detachment from the tip (rate $\beta$) and tip-binding of XMAP215. (b) MT growth velocity as a function of kinesin-8 and XMAP215 density. An XMAP215 density of $\rho_\tn{x}^\tn{eq}=0.5$ corresponds to $c_\tn{x}\approx 10\,\mu$M, and the concentration of kinesin-8 is approximately $c\approx 1.5\,\tn{nM}$ for a half-filled lattice $\rho_\tn{b}=0.5$, see Table~\ref{tab:rates}.
\label{fig:xmap_dynamics}} \end{figure} \subsection{Interplay between kinesin-8 and polymerase XMAP215}\label{sec:xmap} \vspace{-.3cm} In this section we compare the dynamics of the inhibition scenario with a model which explicitly accounts for a second protein, XMAP215, that enzymatically facilitates MT growth; see Fig.~\ref{fig:xmap_dynamics}(a) and section \ref{sec:model}. Since XMAP215 and kinesin-8 mutually exclude each other at the MT tip, one expects strong similarities between those scenarios. In order to compare with an analytically tractable lattice gas model we performed the stochastic simulations for the simplified model without LK~\footnote{Note that in order to achieve a more realistic description of the tip-related processes, we also include tip-detachment of kinesin-8 at a rate of $\beta=0.02$ as suggested by experiments; cf.\ Table~\ref{tab:rates}.}. Figure~\ref{fig:xmap_dynamics} shows the regimes of MT growth and shrinkage as a function of kinesin-8 and XMAP215 densities for a set of depolymerisation rates $\delta$. The general trend is that the regime where MTs shrink is enlarged with smaller depolymerisation rates. At the mean-field level, the equilibrium density of XMAP215 at the MT tip is given by the product $\rho_\tn{x}=\rho_\tn{x}^{\tn{eq}}(1-\rho_+)$, where $1-\rho_+$ is the probability that kinesin-8 is not bound and $\rho_\tn{x}^{\tn{eq}}$ denotes the Langmuir isotherm for XMAP215 binding: \begin{equation} \rho_\tn{x}^{\tn{eq}}=\frac{c_\tn{x}k_\tn{on}}{c_\tn{x}k_\tn{on}+k_\tn{off}}\,.\label{eqn:xmap_density} \end{equation} Here $c_\tn{x}$ is the XMAP215 concentration in solution, and $k_\tn{on}$ and $k_\tn{off}$ are the attachment and detachment rates of the enzyme to and from the MT tip, respectively. This mean-field approximation neglects that the presence of XMAP215 at the MT tip influences the current of kinesin-8 to the MT end because it could block the motor particles~\cite{Wood2009}. Fortunately, since the polymerisation rate of XMAP215, $\eta_\tn{x}$, and the walking speed, $\nu$, of kinesin-8 are almost the same~\cite{Brouhard2008,Varga2009}, the two molecules rarely interact. This implies that a model explicitly accounting for XMAP215 can be reduced to the inhibition scenario with an effective polymerisation rate given by \begin{equation} \eta= \eta_\tn{x} \, \rho_\tn{x}^{\tn{eq}}\, . \label{eqn:xmap_poly} \end{equation} Indeed, as can be inferred from Fig.~\ref{fig:xmap_dynamics}(b), the predictions of the effective inhibition scenario agree nicely with the numerical simulations. Taken together, this implies that the inhibition scenario may serve as a minimal model to include other MT-associated proteins that antagonise the depolymerisation activity of kinesin-8. \subsection{Microtubule regulation}\label{sec:length}\vspace{-.3cm} We now consider the full model for an MT of finite length $L$, where LK leads to an accumulation of kinesin-8 motors along the MT. As discussed in section~\ref{sec:model}, the ensuing antenna-like profile $\rho_-(x)$ can be calculated within the framework of the TASEP/LK model~\cite{Parmeggiani2003,Parmeggiani2004}; these theoretically predicted profiles have recently been confirmed by in vitro experiments~\cite{Leduc2012}. Now length regulation becomes possible if this spatially varying profile translates into a length-dependent velocity $v(L)$ of the MT tip~\cite{Howard2007,Melbinger2012}.
This requires that the tip density $\rho_+ (L)$ depends on $\rho_-(L)$, which is the case only in the IN phase; see Eq.~\eqref{eqn:exclusive_IN}. Then the tip density reads \begin{equation} \rho_{+} (L) = \rho_{+}^{\tn{IN}} (\rho_-(L),\delta,\eta) \, . \label{eqn:rhoplusL} \end{equation} Upon inserting the ensuing length-dependent tip density into Eq.~\eqref{eqn:drift} one obtains a length-dependent velocity $v(L)$. It is instructive to define an \emph{effective potential} \begin{equation} U_\text{eff} (L) = - \int_0^L dx \; v(x) \, , \label{eqn:Ueff} \end{equation} whose minimum defines the stationary MT length $L^*$ via \begin{equation} \rho_{+} (L^*) = \rho_{+}^{\tn{IN}} (\rho_-(L^*),\delta,\eta) =\rho_+^c\, ,\label{eqn:Lstar} \end{equation} as illustrated in Figs.~\ref{fig:potential}(a-c). Tight length regulation is restricted to the regime where the critical density $\rho_-^c:= \rho_-(L^*)$ falls well into the linearly increasing antenna profile. The closer $\rho_-^c$ is to the Langmuir plateau $\rho_\tn{La}$, the less well defined the stationary length is; note that the effective spring coefficient \begin{equation} k(L):=U_\text{eff}^{''} (L) = \begin{cases} \delta \, \rho'_+ (L) & \text{(neutral scenario)} \\ (\delta + \eta) \, \rho'_+ (L) & \text{(inhibition scenario)} \end{cases} \, , \label{eqn:spring_constant} \end{equation} is proportional to the slope of the profile, where the prime denotes the derivative with respect to $L$; see also Fig.~\ref{fig:potential}(c). \begin{figure*}[t] \centering \includegraphics[scale=1.0]{figure7.png} \caption{Effective potentials for the inhibition and the neutral scenario, (a) and (b), for the trajectories shown in Fig.~\ref{fig:trajectories}. The diagram in (c) illustrates how threshold densities for the tip density $\rho_+(x)$ and minus-end density $\rho_-(x)$ are defined, and how both quantities together set the MT length $L^*$. Panels (d) and (e) show data for the accumulated density, and the ensuing probability of tip occupation $p_+ (L)$. For the inhibition scenario $p_+ (L)$ is constant, while in the neutral scenario it is length-dependent and thus samples the effective potential, \emph{i.e.} $p_+ (L) = \rho_+ (L)$. \label{fig:potential}} \end{figure*} As can be inferred from Figs.~\ref{fig:length}(a,b), the stochastic simulations agree nicely with the above analytical results for the stationary MT length $L^*$ in both scenarios, neutral and inhibition. Previous studies~\cite{Melbinger2012} have shown that the variance of the length can be captured well by a van Kampen expansion for the stochastic dynamics of the MT length $L(t)$, which assumes that the tip density is adiabatically coupled to the motor density along the MT. This essentially amounts to saying that the MT length performs a random walk in the effective potential $U_\text{eff}(L)$. Such a picture is fully consistent with results obtained from our stochastic simulations: The observed stochastic trajectories resemble those of random walks in confinement; see Fig.~\ref{fig:phen}(a). More importantly, the numerically observed value for the probability that the MT tip is occupied, $p_+(L)$, agrees well with the mean-field tip density $\rho_+(x)$; see Fig.~\ref{fig:potential}(d). This implies that the stochastic trajectory samples the values of MT length $L(t)$ with a statistical weight determined by the effective potential $U_\text{eff} (L)$.
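To make the construction of the effective potential concrete, here is a minimal numerical sketch (Python; it assumes an idealised piecewise-linear antenna profile for $\rho_-(x)$ in place of the Lambert-$W$ solution, with illustrative parameter values):
\begin{verbatim}
import numpy as np

delta, eta = 0.6, 0.2                  # inhibition scenario, IN phase
# eta could equally be the effective rate of Eq. (eqn:xmap_poly),
# eta = eta_x * rho_x^eq, when XMAP215 acts as the polymerase
rho_c = eta/(delta + eta)              # critical tip density, Eq. (eqn:rhoc_ex)
rho_la, slope = 0.4, 0.002             # Langmuir plateau, antenna slope

x = np.arange(1, 1001)                 # MT length in units of lattice sites
rho_m = np.minimum(slope*x, rho_la)    # idealised antenna profile rho_-(x)
rho_p = rho_m*(1 - eta - rho_m)/(delta*(1 - rho_m) - eta*rho_m)  # IN phase

v = -delta*rho_p + eta*(1 - rho_p)     # drift velocity, Eq. (eqn:drift)
U_eff = -np.cumsum(v)                  # effective potential, discrete integral
L_star = x[np.argmin(U_eff)]           # stationary length at the minimum
print("L* =", L_star,
      " rho_+(L*) =", round(float(rho_p[np.argmin(U_eff)]), 3),
      " rho_+^c =", rho_c)
\end{verbatim}
For these (made-up) numbers the minimum of $U_\text{eff}$ lies at $L^*\approx 92$ sites, where the tip density indeed crosses the critical value $\rho_+^c=0.25$.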
Surprisingly, as can be inferred from Fig.~\ref{fig:potential}(e), this correspondence between $p_+(L)$ and the effective potential does not hold for the inhibition model, which immediately invalidates a description of the stochastic dynamics in terms of a continuous random walk in the potential landscape shown in Fig.~\ref{fig:potential}(b). The latter would actually give rise to stochastic trajectories strongly confined to the stationary value $L^*$. In contrast, the actual stochastic trajectories for the inhibition scenario shown in Fig.~\ref{fig:phen}(b) rather resemble an intermittent dynamics similar to the behaviour of MT dynamic instability, with abrupt transitions between growing and shrinking states~\cite{Mitchison1984}. \begin{figure*}[t] \centering \includegraphics[scale=1.0]{figure8.png} \caption{Comparison of the MT length $L^*$ for the inhibition (solid) and neutral (dashed) scenario with respect to polymerisation (a) and depolymerisation rates (b). The data was obtained from single trajectories $L(t)$. Data points correspond to the most probable length of the process $L^*$; error bars denote the standard deviation of $L(t)$. Motor attachment and detachment rates are $c \, \omega_\text{on}=0.001$ and $\omega_\text{off}=0.003$. In (c), probability distributions of the times during which MTs shrink and grow are shown for parameter values as in Fig.~\ref{fig:phen}. The exponential tails of the distributions support the view that the inhibition scenario follows dichotomous switching dynamics (see main text for details). \label{fig:length}} \end{figure*} The key to understanding this anomalous dynamics lies in realising that the stochastic length dynamics in the inhibition scenario is a \emph{dichotomous process} with only two states: While the MT grows at rate $\eta$ if the tip is empty, it shrinks at rate $\delta$ if the tip is occupied by a kinesin-8 protein. In other words, depending on whether the MT tip is occupied or not, it is either in a shrinking or a growing state, respectively~\cite{Dogterom1993}. Consider a configuration where the tip is empty, and, hence, the MT is in a growing state (with average speed $\eta$). Then it will remain in this state for some time $\tau_\tn{grow}$ until the motor closest to the tip actually reaches the tip. Figure~\ref{fig:length}(c) shows the probability distribution of $\tau_\tn{grow}$ for the same parameters as in Figure~\ref{fig:length}(b). The distribution is clearly exponential with a typical time of the order $\sim 23 / \nu$. On the other hand, if the MT tip is occupied by a kinesin-8 protein, it will remain in this state and not depolymerise the tip for a time of the order of $\delta^{-1}$. During this time the filament neither grows nor shrinks, and the kinesin-8 protein at the MT tip acts as a strict bottleneck. As a consequence, an extended traffic jam may emerge at the MT tip by motors queuing up behind this bottleneck. These traffic jams can be clearly seen in Fig.~\ref{fig:phen}(b) as black clusters. The formation of such clusters is a nucleation process, and the duration of the shrinking state is determined by a subtle interplay between particles gained by stochastic arrival at the left end and depolymerisation dynamics. Interestingly, the probability distribution of $\tau_\tn{shrink}$ shows two typical time scales, and, in particular, a broad exponential tail with a typical time of the order $\sim 112 / \nu$. We leave a more detailed investigation of these interesting stochastic effects for future work.
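Given the exponential dwell times, the mean lengths covered per episode follow directly; a short numerical check (Python; episode statistics only, not the full lattice dynamics, using the rates of Fig.~\ref{fig:phen} and the dwell times quoted above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

tau_grow, tau_shrink = 23.0, 112.0   # typical dwell times (units of 1/nu)
eta, delta = 0.385, 0.2              # growth and shrinkage speeds

# exponentially distributed episode durations and the lengths covered
grow = eta*rng.exponential(tau_grow, size=100000)
shrink = delta*rng.exponential(tau_shrink, size=100000)
print("mean growth per episode   :", grow.mean())    # ~ 8.9 sites
print("mean shrinkage per episode:", shrink.mean())  # ~ 22.4 sites
\end{verbatim}
These numbers are consistent with the average excursion lengths quoted in the next paragraph.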
The main result we would like to emphasize here is that we have identified two distinct time scales characteristic of prolonged growing and shrinking states. These time scales are macroscopic in the sense that they are much larger than the hopping time of individual motors (which we have set to $1$). This implies that the typical lengths covered during the growing and the shrinking states are also rather large; for the examples shown in Figure~\ref{fig:length}(c) they are on average approximately $8$ and $22$ lattice sites during polymerisation and depolymerisation, respectively. These large length scales explain why the probability that the MT tip is occupied, as obtained from the stochastic simulations, is only weakly dependent on MT length. Taken together, we find that in the neutral scenario MT length is tightly controlled. The variance in MT length is mainly determined by the width of the effective potential, or, equivalently, by the effective spring coefficient $k(L) = \delta \, \rho'_+ (L)$. Hence, the slope of the antenna profile is the key determinant of length fluctuations. In contrast, for the inhibition scenario, extended periods of MT growth and shrinkage lead to large length fluctuations, as can be seen from the kymographs in Fig.~\ref{fig:phen}. These large fluctuations give rise to characteristic exponential tails in the filament's length distribution; the characteristic width of this distribution is shown as error bars in Fig.~\ref{fig:length}(b). Note also the different dependence of the two models on the depolymerisation rate: While in the neutral scenario the length of the MT is independent of the depolymerisation rate $\delta$, the latter strongly affects MT length in the inhibition scenario. \section{Discussion} \label{sec:discussion} \vspace{-.3cm} We have analysed distinct molecular mechanisms of MT regulation by proteins which are able to catalyse growth and shrinkage of MTs. Specifically, our interest was in the interplay between kinesin-8 motors, acting as depolymerases when bound to the MT tip, and microtubule growth processes which are either spontaneous or catalysed by proteins like XMAP215. We investigated two distinct scenarios: In a \emph{neutral scenario} MTs grow independently of whether a kinesin-8 motor is bound to the tip or not. In contrast, in an \emph{inhibition scenario}, the MT only grows if the MT tip is not occupied by a depolymerase. Experiments with a microtubule polymerising enzyme, XMAP215~\cite{Brouhard2008}, suggest a high binding rate to the microtubule tip through facilitated diffusion. Then, to a first approximation, one may model XMAP215 as a tip-binding protein which excludes binding of kinesin-8. As we have shown, this tip site exclusion leads to a dynamics which is equivalent to the inhibition scenario. The results obtained here show how interactions between individual proteins and the microtubule tip play an important role for microtubule regulation. There are three main findings: (i) Microtubule regulation is directly affected by motor traffic. It is influenced by the microtubule growth rate, and the attachment and detachment kinetics of motors to and from the microtubule. Both parameters can be tuned in experiments through the tubulin concentration and the motor and salt concentrations~\cite{Leduc2012}, respectively. (ii) Regimes of microtubule growth and shrinkage critically depend on the probability that a kinesin-8 motor is bound to the microtubule tip.
(iii) Protein-microtubule interactions at the microtubule tip are key to distinguishing different mechanisms of microtubule regulation, like for example intermittent dynamics or tight length control. The parameter regimes where motor traffic constrains microtubule growth differ dramatically between the two scenarios; cf.\ Fig.~\ref{fig:comparison}. For the neutral scenario, this parameter regime is relatively small and, in particular, limited to slow growth rates. It is characterised by relatively tight length control~\cite{Melbinger2012}. In contrast, for the inhibition scenario, the regime where length regulation is possible is extremely broad and includes high growth rates, however, at the cost of accurate length control: microtubule dynamics is intermittent, with extended periods of microtubule growth and shrinkage reminiscent of microtubule dynamic instability. Therefore, in view of the regulation of microtubule length, these findings suggest the inhibition scenario as a mechanism for large length fluctuations, while the neutral scenario provides a mechanism for precise length control. To test these theoretical ideas, we suggest experiments which vary the protein concentrations of kinesin-8, tubulin, and XMAP215. The specific predictions of our theory will make it possible to discern between different molecular mechanisms at the microtubule tip simply by analysing how changes in the concentrations affect macroscopic quantities like the microtubule length and the speed of microtubule growth and shrinkage. Besides its biological relevance, our study also contributes to the field of driven diffusive systems. We not only show how systems with a dynamic length can be treated analytically, but the technique we propose also gives conceptual insight into the determination of boundary-induced phases. This is achieved by extending the extremal current principle~\cite{Krug1991} to dynamic systems. For instance, we found that a shock forming dynamically at the right boundary (not in bulk) determines whether the system is in the IN or EX phase. In addition, we could identify an unstable region in the phase diagram (between the EX and MC phase for the inhibition scenario), where the system depends not only on the boundaries, but also on the initial conditions. This behaviour is to our knowledge not common for driven diffusive systems, and an interesting topic for future studies. Even though the main dynamic behaviour, such as the microtubule length, is governed by currents which are determined by the boundaries, bulk phenomena, as observed in Refs.~\cite{Lipowsky2001, Parmeggiani2003, Leduc2012}, can also play a role, especially for lattice length fluctuations. We restricted our analysis to boundary-induced transitions, leaving it as a challenge for the future to capture also the bulk dynamics of motors on the microtubule. From a broader perspective, the presented findings support the view that length-dependent disassembly and/or assembly rates due to molecular motor transport are likely to constitute a general mechanism to influence the length of one-dimensional structures in biology regardless of mechanistic details~\cite{Marshall2004}. Specifically, microtubule tips are \enquote{crowded} spots in the cell, where space limitations for protein binding, implying mutual exclusion, are relevant factors. Future experimental work needs to study dwell times of molecules at microtubule tips with the highest possible accuracy, because dwell times encode important information about the underlying molecular interaction networks~\cite{Li2013b}.
Similarly it will be important to learn more about interactions of molecular motors with the microtubule~\cite{Vilfan2001,Roos2008} during dynamic instability~\cite{Gardner2011a,Kuan2013,Li2013} and with networks of microtubules~\cite{Neri2013,Greulich2012}. \section*{Acknowledgments}\vspace{-.3cm} We thank Matthias Rank for helpful comments on the manuscript. This project was supported by the Deutsche Forschungsgemeinschaft in the framework of the SFB~863.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} According to the investigation conducted in \cite{tassi2015resource}, global video traffic will dominate Internet usage in the future. Since heterogeneous devices request different kinds of video formats, resolutions, and bitrates, the existing video contents may need to be transformed to fit the network condition and the usage of different mobile devices. Therefore, transcoding technology is necessary for transforming the current video version into a suitable one that can be played on the device and matches its screen size. However, such a transcoding procedure is computation-intensive, so that it can hardly be executed on mobile devices with limited resources. Therefore, a novel computing platform is desirable.\\ \indent \emph{Mobile edge computing} (MEC) is recognized as a promising paradigm in next generation wireless networks, enabling cloud-computing capabilities in close proximity to mobile devices \cite{kumar2016vehicular}. With this physical proximity, MEC realizes a low-latency connection to a large-scale resource-rich computing infrastructure by offloading the computation task to an adjacent computing server/cluster instead of relying on a remote cloud \cite{chen2015efficient}. Therefore, MEC is envisioned to provide computation services for mobile devices at anytime and anywhere by endowing radio access networks (RANs) with powerful computing capabilities \cite{zhang2016energy}.\\ \indent Since the prodigious amount of videos and the wide variety of video versions will certainly result in a large-scale distribution of video contents calling for tremendous resources, it is essential to have storage resources at some of the intermediate nodes within the network \cite{jin2015optimal}. \emph{In-network caching} can facilitate efficient distribution of contents in wireless networks \cite{FYH15}. Compared to traditional network paradigms with a general lack of content distribution information, cache-enabled heterogeneous networks (HetNets) can reduce the backhaul cost of the popular contents, increase the delivery probability of contents to mobile users, and support a highly efficient and scalable content retrieval.\\ \indent In view of the benefits from MEC and in-network caching, a novel framework integrated with these promising techniques is necessary to be designed for efficiently delivering the massive video contents in HetNets. In this paper, \emph{wireless network virtualization} is considered as a candidate technique for simplifying network management \cite{LY15,LY15m}. Through virtualization, wireless network infrastructure can be decoupled from the services it provides, and various users with differentiated service requirements can dynamically share the same infrastructure, thereby maximizing the system utilization \cite{wang2016information}.\\ \indent Thanks to MEC and in-network caching, the computing and caching functions can be achieved in close proximity to mobile devices. However, although some excellent works have been done on MEC and in-network caching, these two areas have been addressed separately. Thus, how to integrate these two techniques, and efficiently allocate the limited resources to jointly optimize the utilities of computing, caching, and communication, remains an urgent issue.\\ \indent In this paper, we investigate virtualized HetNets with MEC and in-network caching.
Specifically, we design a novel virtualized HetNets framework aiming at enabling content caching and computing, in which the resources of communication, computing, and caching can be shared among users from different virtual networks. In this framework, we formulate the virtual resource allocation strategy as a joint optimization problem, where the gains of not only virtualization but also caching and computing are taken into consideration in the proposed HetNets virtualization architecture. A distributed algorithm, based on the alternating direction method of multipliers (ADMM) \cite{boyd2011distributed}\cite{leinonen2013distributed}, is presented to solve the formulated problem with a lower computational complexity and a reduced signaling overhead. Simulation results are presented to show the performance improvements of the proposed scheme. \indent The rest of this paper is organized as follows. In Section II, we introduce the proposed framework and formulate the virtual resource allocation scheme as an optimization problem. In Section III, we address the problem via a distributed ADMM-based algorithm. Simulation results are discussed in Section IV. Finally, we conclude this study in Section V. \section{Virtual Resources Allocation with Mobile Edge Computing and In-Network Caching} \subsection{System Model} \begin{figure} \centering \includegraphics[width=3.7in]{figure//myfigure2.eps} \caption{Virtualized HetNets Framework.} \label{fig1} \end{figure} \indent\textit{1): Virtual Heterogeneous Networks Model}\\ \indent As shown in Fig. 1, the virtual networks are generated according to the requests of service providers (SPs) by the mobile virtual network operator (MVNO), since the quality-of-service (QoS) requirements may be different for each mobile user. In particular, some users, who want to compute tasks such as face recognition, prefer to access a virtual network without the in-network caching function (i.e., SP2), because the contents of these kinds of computation tasks may be private and have very low reuse probability. In contrast, some computation tasks, like video transcoding, are better executed in a virtual network with the in-network caching function (i.e., SP1). The virtual network with the in-network caching function offers an opportunity to cache the popular contents before or after the execution of the computation tasks, which will significantly reduce the operation cost for delivering the video contents. It is assumed that the whole virtualization process is realized and controlled by a virtual wireless network controller \cite{ETSI2013NFV}. For simplicity, user mobility \cite{YL01} and handover \cite{MYL04,MYL07,YK07} are not considered in this paper. Each mobile user can connect to the virtual wireless networks logically, and subscribe to the required services from these virtual networks, while actually connecting to the physical networks.\\ \indent Considering HetNets with multiple macro base stations (MBSs) and small base stations (SBSs) serving multiple users, let $\mathcal{N}_m$ and $\mathcal{N}_s$ be the sets of MBSs and SBSs, and $\mathcal{N}=\mathcal{N}_m\cup\mathcal{N}_s=\{1,...,N\}$ and $\mathcal{U}=\{1,...,U\}$ be the sets of all BSs and users, respectively. It is assumed that each BS belongs to a different infrastructure provider (InP), and the licensed spectrum of each InP is orthogonal, so that there is no interference among them. In addition, let $\mathcal{S}=\{1,...,S\}$ be the set of SPs.
For each SP $s$, each assigned user is denoted by $u_s$, and $\mathcal{U}_s$ is the set of users belonging to SP $s$, where $\mathcal{U}=\cup_s\mathcal{U}_s$ and $\mathcal{U}_s\cap\mathcal{U}_{s'}=\emptyset$, $\forall {s'}\neq s$.\\ \indent In our business model, the MVNO leases radio resources (e.g., spectrum) and backhaul bandwidth (e.g., data rate) from InPs, and slices them to virtual SPs. On the revenue side, the MVNO charges users an access fee for the virtual network, defined as $\alpha_{u_s}$ per bps. Users who have paid the fee can access the virtual network to offload their computation tasks. Besides, the fee for user $u_s$ to compute a task at BS $n$ is defined as $\phi_{u_s}$ per bps. Since the contents of the computation tasks may be worth caching, the backhaul cost, paid by the MVNO and defined as $\gamma_n$ per bps, can be saved when users request contents that have already been cached at BS $n$. On the spending side, the MVNO needs to dynamically pay InPs for the usage of spectrum, priced at $\beta_n$ per Hz. Furthermore, the MVNO also needs to pay computation and caching fees to InPs whenever a computation task needs to be executed at the MEC server, or the contents before and after the computation are valuable enough to be cached at BSs. In addition, with increasingly strict environmental standards and rising energy costs, there is great interest in energy issues in wireless networks \cite{XYJL12,BYC12,YZX11,BYY15}. Therefore, we define the unit price of computation energy at BS $n$ as $\psi_n$ per J. The prices per unit of space to cache the contents before and after the computation at BS $n$ are denoted by $\Psi^n_{z_{u_s}}$ and $\Psi^n_{z_{u_s}'}$, where $z_{u_s}$ and $z_{u_s}'$ represent the contents before and after the computation, respectively.\\ \indent\textit{2): Computing Model}\\ \indent Assume each user has a computation task to be completed under a certain computation rate requirement. Let $a_{u_s,n}$ denote the association indicator, where $a_{u_s,n}=1$ means that user $u_s$ associates with BS $n$ to compute the offloaded task. Each user can associate with only one BS; thus \begin{equation} \sum\limits_{n\in\mathcal{N}}a_{u_s,n}=1, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s. \end{equation} Let $b_{u_s,n}$ denote the bandwidth allocated by BS $n$ to user $u_s$; then \begin{equation} \sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}a_{u_s,n}b_{u_s,n}\leq B_n, \forall n\in\mathcal{N}, \end{equation} where $B_n$ denotes the spectrum bandwidth allocated to BS $n$. In order to ensure the data rate requirement of each user, we have \begin{equation} \sum\limits_{n\in\mathcal{N}}a_{u_s,n}b_{u_s,n}r_{u_s,n}\geq R^{\text{cm}}_{u_s}, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s, \end{equation} where $R^{\text{cm}}_{u_s}$ is user $u_s$'s communication rate requirement in the corresponding QoS class. According to the Shannon bound, $r_{u_s,n}$, the achievable spectrum efficiency of user $u_s$ associating with BS $n$, can be easily obtained.\\ \indent Assume each computation task can be described by the four-tuple $T_{u_s}=\{z_{u_s},z'_{u_s},c_{u_s},R^{\text{cp}}_{u_s}\}, \forall s,u_s$. For the task $T_{u_s}$, $z_{u_s}$ and $z'_{u_s}$ respectively represent the sizes of the contents before and after the computation, while $c_{u_s}$ denotes the computing resource required to accomplish this task, quantified by the number of CPU cycles \cite{chen2015efficient}.
$R^{\text{cp}}_{u_s}$ is the minimum computation rate required by user $u_s$.\\ \indent Let $e_{n}$ be the energy consumption of one CPU cycle at BS $n$. We denote by $f_{u_s,n}$ the computation capability of BS $n$ assigned to user $u_s$, quantified by the number of CPU cycles per second \cite{chen2015efficient}. The computation execution time of the task at BS $n$ is then $t_{u_s,n}=\frac{c_{u_s}}{f_{u_s,n}}$. Therefore, the computation rate (i.e., the number of bits computed per second) of BS $n$ for task $T_{u_s}$ is $R_{u_s,n}=\frac{z_{u_s}}{t_{u_s,n}}=\frac{f_{u_s,n}z_{u_s}}{c_{u_s}}$, and the total energy consumed to compute task $T_{u_s}$ at BS $n$ is $E_{u_s,n}=c_{u_s}e_{n}$.\\ \indent Since each user has a computation rate requirement, we have \begin{equation} \sum\limits_{n\in\mathcal{N}}a_{u_s,n}R_{u_s,n}\geq R^{\text{cp}}_{u_s}, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s. \end{equation} Moreover, the computation capability at each BS is limited; thus \begin{equation} \sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}a_{u_s,n}\leq D_{n}, \forall n\in\mathcal{N}, \end{equation} where $D_n$ is the maximum number of tasks simultaneously executed on the MEC server of BS $n$.\\ \indent\textit{3): Caching Model}\\ \indent Each BS can determine whether to cache the content sent by a user before or after the computation, according to the popularity distribution of each content. The caching strategy is controlled by two binary parameters $x^1_{u_s,n}$ and $x^2_{u_s,n}$. If BS $n$ caches the original content, $x^1_{u_s,n}=1$; otherwise $x^1_{u_s,n}=0$. If BS $n$ caches the computed content, $x^2_{u_s,n}=1$; otherwise $x^2_{u_s,n}=0$. It should be noted that the storage of BS $n$ may be limited; thus, the cached contents cannot exceed the remaining space $Z_n$ of BS $n$, which can be expressed as \begin{equation} \sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}a_{u_s,n}(x^1_{u_s,n}z_{u_s}+x^2_{u_s,n}z_{u_s}')\leq Z_{n}, \forall n\in\mathcal{N}. \end{equation} \indent In this paper, it is assumed that the popularity distribution is represented by a vector $\bm p=[p_1,p_2,...,p_F]$, where $F$ types of contents with diverse popularity are distributed in the networks. That is, each content $f$ is requested by each mobile user independently with probability $p_f$. Generally, $\bm p$ is modeled as a Zipf distribution \cite{li2016pricing}, which can be expressed as \begin{equation} p_f=\frac{1/{f^\epsilon}}{\sum\limits_{f=1}^{F}1/{f^\epsilon}}, \forall f, \end{equation} where the exponent $\epsilon$ is a positive value that characterizes the content popularity. For our business model, $p_{z_{u_s}}$ and $p_{z_{u_s}'}$ can be directly derived from $p_f$ once the content sent by user $u_s$ is known. Afterwards, the gains of the expected saved backhaul bandwidth from caching contents $z_{u_s}$ and $z_{u_s}'$ can be respectively calculated as $g_{z_{u_s}}=\frac{p_{z_{u_s}}z_{u_s}}{T_{z_{u_s}}}$ and $g_{z_{u_s}'}=\frac{p_{z_{u_s}'}z_{u_s}'}{T_{z_{u_s}'}}$, where $T_{z_{u_s}}$ and $T_{z_{u_s}'}$ are the time durations for downloading the required contents through the backhaul.
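\\ \indent To make the computing and caching models concrete, the following minimal numerical sketch evaluates the execution time $t_{u_s,n}$, the computation rate $R_{u_s,n}$, the energy $E_{u_s,n}$, the Zipf popularity in (7), and the expected backhaul saving $g_{z_{u_s}}$. All parameter values below are illustrative assumptions, not values used later in our evaluation.
\begin{verbatim}
import numpy as np

# Computing model (illustrative values)
z   = 2e6    # content size z before computation [bits]
c   = 500e6  # required CPU cycles
f   = 5e9    # CPU cycles/s assigned by the BS
e_n = 1e-9   # energy per CPU cycle [J]

t    = c / f      # execution time t = c / f
R_cp = z / t      # computation rate R = f * z / c [bit/s]
E    = c * e_n    # computation energy E = c * e_n [J]

# Caching model: Zipf popularity, Eq. (7)
F, eps = 100, 0.8
ranks  = np.arange(1, F + 1)
p = (1.0 / ranks**eps) / np.sum(1.0 / ranks**eps)

rank_z     = 3     # hypothetical popularity rank of content z
T_backhaul = 0.5   # backhaul download time [s]
g_z = p[rank_z - 1] * z / T_backhaul   # expected saved backhaul

print(R_cp, E, g_z)
\end{verbatim}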
\subsection{Problem Formulation} In this subsection, an optimization problem is formulated to maximize the aggregate utility of the MVNO system. The optimization problem is mathematically modeled as \begin{eqnarray*} &&OP1:\max\limits_{\mbox{\tiny$\begin{array}{c}\{a_{u_s,n},b_{u_s,n},\\ x^1_{u_s,n},x^2_{u_s,n}\}\end{array}$}} \sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}\sum\limits_{n\in\mathcal{N}}U_{u_s,n}\\ &&s.t.:(1)(2)(3)(4)(5)(6)\\ \end{eqnarray*} where $U_{u_s,n}$ is the potential utility of user $u_s$ associating with BS $n$, defined as \begin{equation} \begin{array}{r} U_{u_s,n}=a_{u_s,n}(\alpha_{u_s}b_{u_s,n}r_{u_s,n}-\beta_{n}b_{u_s,n})\\ +a_{u_s,n}(\phi_{u_s}R_{u_s,n}-\psi_n E_{u_s,n})\\ +a_{u_s,n}x^1_{u_s,n}(\gamma_ng_{z_{u_s}}-\Psi^n_{z_{u_s}}z_{u_s})\\ +a_{u_s,n}x^2_{u_s,n}(\gamma_ng_{z_{u_s}'}-\Psi^n_{z_{u_s}'}z_{u_s}'). \end{array} \end{equation} Here, $\alpha_{u_s}b_{u_s,n}r_{u_s,n}$ denotes the gain from the user's data rate, $\beta_{n}b_{u_s,n}$ is the cost of the consumed radio bandwidth, $\phi_{u_s}R_{u_s,n}$ denotes the gain from the computation rate, $\psi_n E_{u_s,n}$ is the cost of the consumed computation energy, $\gamma_ng_{z_{u_s}}$ and $\gamma_ng_{z_{u_s}'}$ are the gains from the saved backhaul bandwidth due to caching the contents $z_{u_s}$ and $z_{u_s}'$, and $\Psi^n_{z_{u_s}}z_{u_s}$ and $\Psi^n_{z_{u_s}'}z_{u_s}'$ are the costs of caching the contents $z_{u_s}$ and $z_{u_s}'$, respectively.
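\\ \indent As a sanity check on the utility in (8), the short sketch below evaluates $U_{u_s,n}$ for one user--BS pair; the argument names are hypothetical and simply mirror the prices and gains defined in the system model.
\begin{verbatim}
def utility(a, b, x1, x2, alpha, beta, phi, psi, gamma,
            r, R, E, g_z, g_zp, Psi_z, Psi_zp, z, zp):
    """Per-user, per-BS utility U_{u_s,n} of Eq. (8)."""
    comm   = alpha * b * r - beta * b        # rate gain - spectrum cost
    comp   = phi * R - psi * E               # computing gain - energy cost
    cache1 = x1 * (gamma * g_z - Psi_z * z)     # cache original content
    cache2 = x2 * (gamma * g_zp - Psi_zp * zp)  # cache computed content
    return a * (comm + comp + cache1 + cache2)
\end{verbatim}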
\subsection{Problem Reformulation} The formulated mixed discrete and non-convex optimization problem is NP-hard \cite{fooladivanda2013joint}. A relaxation of the binary conditions on $a_{u_s,n}$, $x^1_{u_s,n}$, and $x^2_{u_s,n}$ constitutes the first step in solving problem OP1, where $a_{u_s,n}$, $x^1_{u_s,n}$, and $x^2_{u_s,n}$ are relaxed to real-valued variables satisfying $0\leq a_{u_s,n}\leq 1$, $0\leq x^1_{u_s,n}\leq 1$, and $0\leq x^2_{u_s,n}\leq 1$. The relaxed $a_{u_s,n}$ can be meaningfully interpreted as a time-sharing factor, representing the fraction of time during which user $u_s$ associates with BS $n$ to offload and compute its task. The relaxed $x^1_{u_s,n}$ and $x^2_{u_s,n}$ can likewise be interpreted as time fractions for sharing one unit of cache at BS $n$.\\ \indent However, even after the relaxation of the variables, the problem is still non-convex due to the multiplication of the variables. Thus, a second step is necessary to further simplify the problem and make it tractable.\\ \indent\textit{Proposition 1:} If we define $\widetilde x^1_{u_s,n}=a_{u_s,n}x^1_{u_s,n}$, $\widetilde x^2_{u_s,n}=a_{u_s,n}x^2_{u_s,n}$, and $\widetilde b_{u_s,n}=a_{u_s,n}b_{u_s,n}$, there exists an equivalent formulation of problem OP1 as follows: \begin{eqnarray*} &&OP2:\max\limits_{\mbox{\tiny$\begin{array}{c}\{a_{u_s,n},\widetilde b_{u_s,n},\\ \widetilde x^1_{u_s,n},\widetilde x^2_{u_s,n}\}\end{array}$}} \sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}\sum\limits_{n\in\mathcal{N}}\widetilde U_{u_s,n}\\ &&s.t.:C1:\sum\limits_{n\in\mathcal{N}}a_{u_s,n}=1, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s\\ &&C2:\sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}\widetilde b_{u_s,n}\leq B_n, \forall n\in\mathcal{N}\\ &&C3:\sum\limits_{n\in\mathcal{N}}\widetilde b_{u_s,n}r_{u_s,n}\geq R^{\text{cm}}_{u_s}, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s\\ &&C4:\sum\limits_{n\in\mathcal{N}}a_{u_s,n}R_{u_s,n}\geq R^{\text{cp}}_{u_s}, \forall s\in\mathcal{S},u_s\in\mathcal{U}_s\\ &&C5:\sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}a_{u_s,n}\leq D_{n}, \forall n\in\mathcal{N}\\ &&C6:\sum\limits_{s\in\mathcal{S}}\sum\limits_{u_s\in\mathcal{U}_s}(\widetilde x^1_{u_s,n}z_{u_s}+ \widetilde x^2_{u_s,n}z_{u_s}')\leq Z_{n}, \forall n\in\mathcal{N}\\ \end{eqnarray*} where $\widetilde U_{u_s,n}$ is obtained from (8) via the above change of variables.\\ \indent The relaxed problem OP1 can be directly recovered by substituting the variables $\widetilde x^1_{u_s,n}=a_{u_s,n}x^1_{u_s,n}$, $\widetilde x^2_{u_s,n}=a_{u_s,n}x^2_{u_s,n}$, and $\widetilde b_{u_s,n}=a_{u_s,n}b_{u_s,n}$ into problem OP2. If $a_{u_s,n}=0$, then $b_{u_s,n}=0$ must hold at optimality: there is no need for BS $n$ to allocate any resource to a user who does not associate with it.\\ \indent Problem OP2 is now a convex problem. However, the signaling overhead would be prohibitively large if a centralized algorithm were used to solve it, because finding the optimal solution requires all the channel state information (CSI) and content distribution information. Therefore, a distributed optimization algorithm executed on each BS needs to be designed for practical implementation. However, because of the constraints $C1$, $C3$, and $C4$, problem OP2 cannot be decomposed across BSs. Thus, the coupling has to be handled appropriately, which will be discussed in Section III. To lighten the notation, from now on, $u$ is used to denote each user instead of $u_s$. \section{Resource Allocation via Alternating Direction Method of Multipliers} In order to decouple the coupled variables, local copies of $\{a_{u,n}\}$ and $\{\widetilde b_{u,n}\}$ at BS $n$ are introduced as $\{\widehat a^n_{u,k}\}$ and $\{\widehat b^n_{u,k}\}$, respectively.
With the local vectors $\{\widehat a^n_{u,k}\}$ and $\{\widehat b^n_{u,k}\}$, a feasible local variable set for each BS $n$ can be defined as \begin{equation} \label{feasible set} \mathcal{X}_n=\left\{ {\begin{array}{*{20}{c}} \{\widehat a^{n}_{u,k}\}\\ \{\widehat b^n_{u,k}\} \end{array}\left| \begin{array}{l} \sum\limits_{k\in\mathcal{N}}\widehat a^n_{u,k}=1, \forall u\\ \sum\limits_{u\in\mathcal{U}}\widehat b^n_{u,k}\leq B_k, \forall k\\ \sum\limits_{k\in\mathcal{N}}\widehat b^n_{u,k}r_{u,k}\geq R^{\text{cm}}_{u}, \forall u\\ \sum\limits_{k\in\mathcal{N}}\widehat a^n_{u,k}R_{u,k}\geq R^{\text{cp}}_{u}, \forall u\\ \sum\limits_{u\in\mathcal{U}}\widehat a^n_{u,k}\leq D_{k}, \forall k\\ \sum\limits_{u\in\mathcal{U}}(\widetilde x^{1}_{u,k}z_{u}+ \widetilde x^{2}_{u,k}z_{u}')\leq Z_{k}, \forall k\\ \end{array} \right.} \right\}, \end{equation} and an associated local utility function can be expressed as \begin{equation} y_n=\left\{ \begin{aligned} &-\sum\limits_{u\in\mathcal{U}}\widehat U_{u,n}, && \text{if } (\{\widehat a^{n}_{u,k}\}, \{\widetilde x^{1}_{u,k}\}, \{\widetilde x^2_{u,k}\}, \{\widehat b^n_{u,k}\})\in\mathcal{X}_n \\ &0, && \text{otherwise} \end{aligned} \right. \end{equation} With this notation, the global consensus form of problem OP2 can be written as follows: \begin{eqnarray*} &&OP3:\min \mathcal{Y}(\{\widehat a^{n}_{u,k}\}, \{\widetilde x^{1}_{u,k}\}, \{\widetilde x^2_{u,k}\}, \{\widehat b^n_{u,k}\})=\\ &&\quad\quad\quad\sum\limits_{n\in\mathcal{N}}y_n(\{\widehat a^{n}_{u,k}\}, \{\widetilde x^{1}_{u,k}\}, \{\widetilde x^2_{u,k}\}, \{\widehat b^n_{u,k}\})\\ &&s.t.: \{\widehat a^{n}_{u,k}\}=\{a_{u,k}\}, \{\widehat b^{n}_{u,k}\}=\{\widetilde b_{u,k}\}, \forall n,u,k \end{eqnarray*}\\ \indent Obviously, the objective function is now separable across BSs. The initial step of ADMM for solving problem OP3 is the formulation of an augmented Lagrangian $\mathcal{L}_{\rho}(\{\widehat {\bm a},\widetilde {\bm x}^1,\widetilde {\bm x}^2,\widehat {\bm b}\},\{\bm a,\widetilde {\bm b}\},\{\bm\mu,\bm\nu\})$ with the corresponding global consensus constraints. Here, $\widehat {\bm a}=\{\widehat a^{n}_{u,k}\}$, $\widetilde {\bm x}^1=\{\widetilde x^{1}_{u,k}\}$, $\widetilde {\bm x}^2=\{\widetilde x^{2}_{u,k}\}$, $\widehat {\bm b}=\{\widehat b^n_{u,k}\}$, $\bm a=\{a_{u,k}\}$, and $\widetilde {\bm b}=\{\widetilde b_{u,k}\}$.
The augmented Lagrangian can be derived as \cite{boyd2011distributed} \begin{equation} \begin{array}{r} \mathcal{L}_{\rho}(\{\widehat {\bm a},\widetilde {\bm x}^1,\widetilde {\bm x}^2,\widehat {\bm b}\},\{\bm a,\widetilde {\bm b}\},\{\bm\mu,\bm\nu\})=\sum\limits_{n\in\mathcal{N}}y_n(\widehat {\bm a}^{n}, \widetilde {\bm x}^1, \widetilde {\bm x}^2, \widehat {\bm b}^n)+\\ \sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\mu^n_{u,k}(\widehat a^n_{u,k}- a_{u,k})+\frac{\rho}{2}\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat a^n_{u,k}- a_{u,k})^2+\\ \sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\nu^n_{u,k}(\widehat b^n_{u,k}-\widetilde b_{u,k})+\frac{\rho}{2}\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat b^n_{u,k}-\widetilde b_{u,k})^2, \end{array} \end{equation} where $\rho$ is the penalty parameter, and $\bm\mu=\{\mu^n_{u,k}\}$ and $\bm\nu=\{\nu^n_{u,k}\}$ are the dual variables.\\ \indent Following the ADMM iteration with consensus constraints, the process for solving problem OP3 consists of the following steps:\\ \indent\textit{Step 1: $\{\widehat {\bm a}^{n},\widetilde {\bm x}^{1},\widetilde {\bm x}^{2},\widehat {\bm b}^{n}\}-update$:} In this step, problem OP3 is completely decoupled into $N$ subproblems, each of which can be solved locally and separately at a BS. BS $n$ solves the following optimization problem at iteration $[i]$: \begin{equation} \begin{array}{r} \{\widehat {\bm a}^n,\widetilde {\bm x}^{1},\widetilde {\bm x}^{2},\widehat {\bm b}^{n}\}^{[i+1]}_{n\in\mathcal{N}}:=\arg\min\{y_n(\widehat {\bm a}^{n}, \widetilde {\bm x}^{1}, \widetilde {\bm x}^{2}, \widehat {\bm b}^n)\\ +\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\mu^{n[i]}_{u,k}(\widehat a^n_{u,k}- a^{[i]}_{u,k})+\frac{\rho}{2}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat a^n_{u,k}-a^{[i]}_{u,k})^2\\ +\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\nu^{n[i]}_{u,k}(\widehat b^n_{u,k}-\widetilde b^{[i]}_{u,k})+\frac{\rho}{2}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat b^n_{u,k}-\widetilde b^{[i]}_{u,k})^2\}. \end{array} \end{equation} In this paper, the primal-dual interior-point method, which provides an efficient way to solve convex problems \cite{boyd2004convex}, is used to find the optimal solution of each subproblem. Due to space limitations, the details of the procedure are omitted here.\\ \indent\textit{Step 2: $\{{\bm a}, \widetilde {\bm b}\}-update$:} In the second step, $\bm a$ and $\widetilde {\bm b}$ are updated according to \begin{equation} \begin{array}{r} {\bm a}^{[i+1]}:=\arg\min\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\mu^{n[i]}_{u,k}(\widehat a^{n[i+1]}_{u,k}-a_{u,k})\\ +\frac{\rho}{2}\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat a^{n[i+1]}_{u,k}-a_{u,k})^2,\\ \widetilde {\bm b}^{[i+1]}:=\arg\min\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}\nu^{n[i]}_{u,k}(\widehat b^{n[i+1]}_{u,k}-\widetilde b_{u,k})\\ +\frac{\rho}{2}\sum\limits_{n\in\mathcal{N}}\sum\limits_{\substack{u\in\mathcal{U}\\k\in\mathcal{N}}}(\widehat b^{n[i+1]}_{u,k}-\widetilde b_{u,k})^2.
\end{array} \end{equation} Since the quadratic regularization terms were added to the augmented Lagrangian (11), the unconstrained problems (13) are strictly convex with respect to ${\bm a}$ and $\widetilde {\bm b}$.\\ \indent\textit{Step 3: $\{\bm\mu,\bm\nu\}-update$:} This step updates the dual variables, which can be represented as \begin{equation} \begin{array}{r} \bm\mu^{n[i+1]}:=\bm\mu^{n[i]}+\rho(\widehat {\bm a}^{n[i+1]}-{\bm a}^{[i+1]}),\\ \bm\nu^{n[i+1]}:=\bm\nu^{n[i]}+\rho(\widehat {\bm b}^{n[i+1]}-\widetilde {\bm b}^{[i+1]}). \end{array} \end{equation} Here, the augmented Lagrangian parameter $\rho$ is used as the step size for updating the dual variables.\\ \indent\textit{Step 4: Algorithm Stopping Criterion:} According to \cite{boyd2011distributed}, the residual of the primal feasibility condition of BS $n$ at iteration $[i]$ should be small enough that \begin{equation} \begin{array}{r} ||\widehat {\bm a}^{n[i+1]}-{\bm a}^{[i+1]}||_2\leq\upsilon_{pri},\quad ||\widehat {\bm b}^{n[i+1]}-\widetilde {\bm b}^{[i+1]}||_2\leq\upsilon_{pri}. \end{array} \end{equation} Moreover, the residual of the dual feasibility condition at iteration $[i+1]$ should be small enough that \begin{equation} \begin{array}{r} ||{\bm a}^{[i+1]}-{\bm a}^{[i]}||_2\leq\upsilon_{dual},\quad ||\widetilde {\bm b}^{[i+1]}-\widetilde {\bm b}^{[i]}||_2\leq\upsilon_{dual}. \end{array} \end{equation} Here, $\upsilon_{pri}>0$ and $\upsilon_{dual}>0$ are the feasibility tolerances of the primal and dual feasibility conditions, respectively. Finally, after the optimal solution is obtained, the binary variables can be recovered by computing the marginal benefit for each user $u$ \cite{LYJ16}.
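\\ \indent The four steps above can be summarized in the following compact sketch of the consensus-ADMM iteration. The function \texttt{solve\_local} is a placeholder for the Step-1 subproblem solver at each BS (e.g., a primal-dual interior-point method); all names and shapes are illustrative.
\begin{verbatim}
import numpy as np

def admm_consensus(solve_local, N, shape, rho=2.0,
                   tol=1e-4, max_iter=200):
    # Global copies, local copies, and dual variables
    a = np.zeros(shape); b = np.zeros(shape)
    a_hat = np.zeros((N,) + shape); b_hat = np.zeros((N,) + shape)
    mu = np.zeros((N,) + shape); nu = np.zeros((N,) + shape)

    for i in range(max_iter):
        # Step 1: local updates, decoupled across the N BSs
        for n in range(N):
            a_hat[n], b_hat[n] = solve_local(n, a, b,
                                             mu[n], nu[n], rho)
        # Step 2: global update (closed form: averaging)
        a_old, b_old = a.copy(), b.copy()
        a = np.mean(a_hat + mu / rho, axis=0)
        b = np.mean(b_hat + nu / rho, axis=0)
        # Step 3: dual ascent with step size rho
        mu += rho * (a_hat - a)
        nu += rho * (b_hat - b)
        # Step 4: primal and dual residual stopping criterion
        r_pri  = max(np.linalg.norm(a_hat - a),
                     np.linalg.norm(b_hat - b))
        r_dual = max(np.linalg.norm(a - a_old),
                     np.linalg.norm(b - b_old))
        if r_pri <= tol and r_dual <= tol:
            break
    return a, b
\end{verbatim}
Note that the $\{{\bm a},\widetilde{\bm b}\}$-update in (13) reduces to averaging $\widehat{\bm a}^{n[i+1]}+\bm\mu^{n[i]}/\rho$ (respectively $\widehat{\bm b}^{n[i+1]}+\bm\nu^{n[i]}/\rho$) over the $N$ BSs, which is the closed-form minimizer of the quadratic terms and is what the sketch exploits.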
\section{Simulation Results and Discussions} \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Parameter Values} \label{tab:1} \centering \begin{tabular}{c|c||c|c} \hline Parameter & Value & Parameter & Value\\ \hline $R_u^{cm}$ & $10^5$ bps & $R_u^{cp}$ & $10^5$ bps \\ $e_n$ & $1$ W/GHz & $D_n$ & $10$\\ $\beta_n$ & $[1,3]$ units/kHz & $\psi_n$ & $[40,80]\times10^{-6}$ units/J\\ $\Psi^n_{z_{u_s}}$ & $[10,20]$ units/Mb & $\Psi^n_{z'_{u_s}}$ & $[10,20]$ units/Mb\\ $\alpha_n$ & $10$ units/Mbps & $\phi_{u_s}$ & $100$ units/bps\\ $\gamma_1$ & $10$ units/Mbps & $\gamma_2$ & $12$ units/Mbps\\ Noise & $-174$ dBm & Transmit power & $27$ dBm\\ \hline \end{tabular} \end{table} We assume that the SBSs and users are randomly distributed within the coverage area of the MBS, and all channel coefficients are distributed as $\mathcal {CN}(0,\frac{1}{(1+d)^{\alpha}})$ with a path-loss exponent $\alpha=4$, where $d$ is the distance between each mobile user and the BS. In addition, there are two SPs, two BSs, and one MVNO, and the bandwidth of each BS is normalized. The values of the remaining parameters are summarized in Table I.\\ \begin{figure} \centering \includegraphics[width=3.7in]{figure//1-convergence42.eps} \caption{Convergence of the algorithms. (The total number of users is 8. The size of each content is randomly distributed between 1 Mb and 4 Mb. The required computing resource is distributed between 100 Megacycles and 1300 Megacycles. The computation capabilities of the two BSs are 10 GHz and 5 GHz, and the cache spaces of the two BSs are 10 Mb and 5 Mb.)} \label{fig2} \end{figure} \indent Fig. 2 shows the convergence of the proposed scheme under different values of $\rho$. All schemes converge rapidly to a stable solution, and the proposed scheme with different values of $\rho$ eventually converges to the same total utility. However, a higher value of $\rho$ results in a higher rate of convergence. Thus, in the following simulations, we set $\rho=2$. Furthermore, we can observe that the proposed scheme performs better than the distributed scheme without the caching function. Although there is a performance gap with respect to the centralized scheme, the advantage of the proposed scheme is the reduced signaling overhead for exchanging the content distribution information and the CSI.\\ \begin{figure} \centering \includegraphics[width=3.7in]{figure//2-utility2.eps} \caption{Total utility of the MVNO with different numbers of users. (The size of each content is randomly distributed between 1 Mb and 4 Mb. The required computing resource is distributed between 100 Megacycles and 1300 Megacycles. The computation capabilities of the two BSs are 10 GHz and 5 GHz, and the cache spaces of the two BSs are 10 Mb and 5 Mb.)} \label{fig3} \end{figure} \indent Fig. 3 illustrates the total utility of the different schemes with respect to the total number of users. As the number of users increases, the total utilities of all schemes continue to grow. The main reason the distributed scheme without the caching function performs worse than the proposed scheme is that popular contents cannot be cached at the BSs, so there is no caching revenue when users request previously delivered contents. On the other hand, in the proposed scheme, if the computed contents required by users have already been cached at the associated BSs, the BSs do not need to recompute the offloaded contents, which contributes to increasing the computation revenue.\\ \balance \section{Conclusion and Future Work} In this paper, we studied virtual resource allocation for communication, computing, and caching in the designed virtualized HetNets framework. The allocation strategy was formulated as a joint optimization problem, considering the gains of not only virtualization but also caching and computing. In addition, a distributed ADMM-based algorithm was introduced to decouple the coupled variables and split the optimization problem into several subproblems. Simulation results were presented to show the convergence and performance of the proposed scheme. Future work is in progress to consider software-defined networking (SDN) in the proposed framework.\\ \section*{Acknowledgment} This work is jointly supported by the National Natural Science Foundation of China (Grant No. 61601347) and the `111' Project of China (Grant No. B38038). \bibliographystyle{IEEEtran}
\section{Introduction} Essential to human cognition is the ability to group stimuli into meaningful categories. The emergence of such categories is accompanied by the development of boundaries encoded in the activity patterns of neural circuitry \cite{Freedman312}. Exactly how new information about objects is mapped into the correct boundaries, such that relevant associations become grouped together, remains incompletely understood. Furthermore, it is not generally known how far apart such groups should be, and what kind of space efficiently embeds such a mapping. These concepts and questions are reminiscent of studies of coding efficiency in neural responses to low-level sensory stimuli \cite{articleBarlow,Atick:1990:TTE:1351041.1351047,OLSHAUSEN19973311,Lewicki2002} -- a notion quantifying a system's information processing given biophysical and metabolic constraints. An open question is whether similar principles of efficiency play a role in higher-level processes such as cognition \cite{Poldrack201512}. What goals and constraints must be balanced to enable such \textit{cognitive coding efficiency} \cite{Buzsaki2012,ZIMMER201244,HAIER1992415,Gold387,Heinzel1224}, and how might such efficiency support accurate perceptions and decisions? To formalize intuitive notions of space and organization in neural activity during the building of such mental categories, we use a geometric perspective adapted from machine learning \cite{Rigotti2013}. Specifically, we represent distributed neural responses as points in a multidimensional space. Applied to neuron-level data, such representations have been shown to be very effective in isolating an intrinsic low-dimensional subspace relevant to ongoing cognitive processes \cite{Rigotti2013,Ganguli200815,joshgold}. Here we extend these tools to the examination of large-scale neural responses in humans as they integrate information across many areas to form representations of novel objects \cite{doi:10.1093/cercor/1.1.1-a,doi:10.1146/annurev.neuro.27.070203.144220}, appreciate abstract properties of those objects \cite{Bartra2013412}, and both prepare and execute associated motor responses. Despite our growing understanding of the regions activated by such learning \cite{doi:10.1146/annurev.neuro.27.070203.144220,Bartra2013412,OpdeBeeck11796,Grill-Spector2014,COHEN200561}, a significant gap in knowledge lies in delineating how spatiotemporal patterns of neural responses in these activated regions allow for effective behavioral choices. Our approach complements multivoxel pattern analysis and related techniques -- which enable a local quantification of regional representations of objects or concepts \cite{kahnt2017decade} -- by offering tools that synthesize information across all brain regions simultaneously. Fundamentally, these tools allow us to hypothesize that the dimension of a geometric representation of neural responses is related to the effective identification of learned categories. The simple intuition behind this hypothesis is that a higher dimension allows for an easier grouping of neural responses according to different objects in the geometric space. To test this hypothesis, we examine blood oxygen level dependent (BOLD) magnitudes at the regional and voxel level, in a cohort of 20 healthy adult human subjects as they learn the values of twelve novel objects over the course of four consecutive days for a total of 80 experimental imaging sessions.
We spatially average these indirect measurements of neural activity in 83 regions of interest (ROIs) defined by a whole-brain anatomical parcellation. Next, we use a generalized linear model to deconvolve the hemodynamic response function to obtain approximate neural responses to each stimulus at the time point at which it was presented. We ask how the dimension of the geometric representation of such neural responses reflects the speed with which participants learn the objects' monetary values \cite{doi:10.1162/NETN_a_00021}. To answer this question, we study three aspects of the geometric organization of these neural responses: the task-relevant and embedding dimensions, and label assortativity. We demonstrate that fast learners indeed have higher dimensional task-relevant geometric representations, allowing for an easier development of boundaries between neural responses to different stimuli. However, a potential disadvantage of this high dimensionality is that the brain might utilize more resources for the embedding of the information. To assess the presence or absence of this potential tradeoff, we study the \emph{embedding dimension}: the geometric representation of each subject's neural responses, with the map between stimulus and neural response shuffled uniformly at random. Surprisingly, we find that the embedding dimension of a fast learner is more compact than that of a slow learner, suggesting that their neural responses form a more contained underlying subspace within the higher dimensional ROI space. The large ratio between the \emph{task-relevant dimension} and the embedding dimension is indicative of efficient coding, and is observed most commonly in the participants who learned rapidly. To enhance our understanding of the anatomy driving these observations, we identify brain regions that most contribute to the emergence of high dimensional patterns in quick learners, and we further implement a voxel-level analysis to examine finer-scale structure in neural responses. Lastly, we use the complementary metric \emph{label assortativity} to characterize how easy it is to distinguish between neural responses. Our results confirm our prior analysis that fast learners have more distinguishable neural responses. Taken together, our approach provides novel insights into the geometry of neural responses supporting learning, and offers a suite of computational heuristics to intuitively describe cognitive processes more generally. \begin{figure}[tb] \includegraphics[width=0.99\linewidth]{f1.pdf} \caption{\textbf{Neural responses from fMRI data; separability dimension and assortativity. a.} We measure the regional fMRI-BOLD activation over 1 hour of task practice. \textbf{b.} Using a generalized linear model to deconvolve the hemodynamic response function from the BOLD time series, we obtain approximate neural responses, $\{\beta_i\}$, to each stimulus at the time point when it was presented. \textbf{c.} We assign binary labels (denoted by color) to the neural data (denoted by shapes). When the data are arranged in a low-dimensional manner (top row), some binary assignments will result in poor separability, whereas in a higher dimension, these binary assignments can be more easily separated. The average performance of separability over all possible binary assignments gives the separability dimension. 
\textbf{d.} Label assortativity does not depend strictly on dimension, and can measure a different geometric aspect of the same data.} \label{fig:intro} \end{figure} \section{Results} \subsection{Quick learners develop higher dimensional task-relevant representations of neural responses} We seek to understand how the neural responses of subjects are distributed according to the task-relevant stimuli, and how this distribution reflects their learning ability. The dimensionality of the fMRI BOLD evoked responses (Fig. \ref{fig:intro}a-b) can be estimated based on the performance of a linear classifier in distinguishing assigned binary labels on the data \cite{Rigotti2013}. Intuitively, the spatial arrangement of these responses determines how easily any one stimulus category can be distinguished from the others, and this distinction is easier when the data are arranged in a higher-dimensional manner. Specifically, for $n$ categories there are $\binom {n} {n/2}-2$ ways to assign binary labels to these categories. When the data are arranged in a low-dimensional manner, some binary assignments will result in poor separability, whereas in a higher dimension, these binary assignments will result in high separability (see Fig. \ref{fig:intro}c). By exhaustively examining all $\binom {n} {n/2}-2$ choices of binary labelings and recording the resulting separability, the average performance over this combinatorial number of assignments yields the \textit{task-relevant separability dimension} (see Methods). An advantage of this process of averaging over many separating hyperplanes is a robustness of the results to noise: while the result in any particular plane might be sensitive to perturbation, the average result will be stable. We apply this method to the evoked neural responses of participants learning the value of twelve arbitrary computer-generated shapes (see Fig. \ref{fig:expt}a; \cite{doi:10.1162/NETN_a_00021}). Each shape was assigned a distinct and fixed monetary value. During learning trials, participants were shown a pair of shapes simultaneously and asked to select which shape had the higher value, after which they received feedback based on their response (see Fig. \ref{fig:expt}b). There were three such learning sessions per day across four days (see Fig. \ref{fig:expt}c), as well as additional sessions where retention was assessed by having subjects judge the value of individual shapes (see Methods). We focus on these latter value judgment sessions conducted at the end of each day, where stimuli were presented singly and hence we can isolate the neural responses to each shape. As the sessions progressed, each subject more accurately identified and categorized the value of the shapes presented; by the conclusion of the second day of practice, all subjects reached a generally high level of performance (see Fig. \ref{fig:expt}c). We observe the greatest individual variability in performance at the end of the first day, and therefore use this response accuracy from the value judgment session as a metric for the learning speed of participants. \begin{figure}[tb] \includegraphics[width=0.99\linewidth]{f2.pdf} \caption{\textbf{Experimental protocol and behavioral results. a.} Stimulus set and corresponding values. Twelve abstract shapes were computer generated, and an integer value between \$1 and \$12 was assigned to each. On each trial, the empirical value of each shape was drawn from a Gaussian distribution with fixed mean, and standard deviation of \$0.50. \textbf{b.} Task paradigm.
Participants were presented with two shapes side-by-side on the screen and asked to choose the shape with the higher monetary value. Once a selection was made, feedback on their selection was provided. Each trial lasted 2.75 s (250 ms inter-stimulus interval). \textbf{c.} The experiment was conducted over four consecutive days, with three experimental scans (396 trials) on each day, for a total of 1584 trials. Participants' accuracy in selecting the shape with higher expected value improved steadily over the course of the experiment, increasing from chance level in the first few trials to approximately 95\% in the final few trials.}\label{fig:expt} \end{figure} We seek to explore how the geometric representation of each subject's neural responses is related to their learning speed. Hence we investigate the task-relevant separability dimension of each subject that emerges by the end of the experiment: that is, on the fourth and final day of training. For $n=12$ however, calculating $ \binom {n} {n/2}-2$ binary assignments is computationally expensive. Thus, in practice we choose a subset of $m=4$ stimuli over which to calculate this separability dimension. To ensure that our results do not depend on the particular subset of stimuli chosen, we repeat the calculation on 20 different combinations (roughly 7\%) out of the $\binom{n}{m}$ available choices, making sure that each shape was represented a roughly equal number of times throughout these sets. We find that the response accuracy of participants at the end of the first day of training is significantly correlated with their separability dimension at the end of the last day of training (Pearson's correlation coefficient $r=0.56$, see Fig. \ref{fig:dim}a). To assess the statistical significance of this relation, we construct a null model by permuting the task (or object exemplar) labels of the neural responses uniformly at random, and then we calculate the separability dimension on these permuted data. Across subjects, we calculate the correlation between their response accuracy and the dimension of these null data in 1000 bootstrapped samples (gold bars in Fig. \ref{fig:dim}b). We see that the true task-relevant data falls significantly outside this distribution with non-parametric $p<0.001$. These findings suggest that participants who learn more quickly also display a larger task-relevant separability dimension of their representations, which allows for easier distinguishability between stimuli associated with different values. In a sensitivity analysis, we also examine the separability dimension of neural responses from the other days, but find that this correlation is the strongest with data from the final day (see Supplementary Information), suggesting that these higher dimensional responses emerge most clearly over time and the course of the experiment. \begin{figure*}[tb] \includegraphics[width=0.80\linewidth]{f3_2.pdf} \caption{\textbf{Quick learners show higher dimensional and more efficient representations. a.} Relation between task-relevant separability dimension ($m=4$) and learning accuracy across participants (Pearson's correlation coefficient $r=0.56$). We compare this correlation value with that observed in a null model where task labels are shuffled uniformly at random, and the separability dimension is recalculated. We find that the true correlation is significantly greater than that expected under this null model with non-parametric $p<0.001$. 
\textbf{b.} Histogram of 1000 bootstrapped estimates of the correlation value across subjects between their response accuracy and the dimension of their null data (gold bars); the correlation value estimated from the true task-relevant data is shown in red. \textbf{c.} Relation between the separability dimension and learning accuracy across subjects, for $m$ from 2 to 10; the true data is shown in red while the null data is shown in gold (same color scheme as in panel \textbf{b}). The estimates become more reliable as $m$ increases. We see that the true data displays a positive correlation with a magnitude far outside the error bars of the null model, which by contrast displays a negative correlation. These observations jointly suggest that fast learners have a large task-relevant dimension but small embedding dimension, overall forming an efficient representation of neural responses. \textbf{d.} Schematics of (i) an efficient representation (left), characterized by a high task-relevant but low embedding dimension, (ii) an inefficient representation (right), characterized by a low task-relevant but high embedding dimension, as well as (iii) the two other possible combinations. Our findings suggest that fast learners possess efficient neural representations; the axes $x$, $y$, and $z$ denote an effective ROI measurement space. \textbf{e.} A virtual lesioning experiment shows the brain regions that most weaken the correlation between task-relevant separability dimension and learning accuracy upon removal ($z$-score $<-2$).}\label{fig:dim} \end{figure*} \subsection{Quick learners have a lower embedding dimension and hence overall more efficient representations} Intuitively, a high-dimensional response provides flexibility in coding for task-relevant information but is naturally more computationally intensive. In contrast, a low-dimensional response is simpler but rigid. How might quick learners potentially balance these two competing factors to develop efficient neural responses? To address this question, we extend our calculations from the previous section across a range of values of $m$, which gives the cardinality of the subset of shapes from which the dimension is estimated. We calculate the correlation between separability dimension and response accuracy for the true task-based data and for the null data in 100 bootstrapped samples, up to $m=10$ (see Fig. \ref{fig:dim}c; Methods). Firstly, we notice that the true data are consistently positively correlated (red points) and fall far outside the error bars of the null data (gold points), confirming that across a range of $m$ the true data reflects the fact that quick learners have a higher separability dimension of their representations. In fact, the results at large $m$ are particularly instructive, as the combinatorial number of labelings, $\binom {m} {m/2}-2$, averaged over for each calculation leads to a strong convergence of the results, as reflected in very small error bars. Lastly, while the positive correlation between task-relevant dimension and learning accuracy holds over a range of $m$ values, $m=4$ provides the strongest signal and remains computationally feasible to calculate in large quantities; further investigations into the task-relevant dimension are therefore done using $m=4$. Across subjects, we also observe that the correlation between separability dimension and learning accuracy is negative in the null data, particularly for large $m$ (see Fig. \ref{fig:dim}c).
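As a concrete illustration of this null construction, the following minimal sketch shuffles the stimulus labels before re-estimating dimension, and computes bootstrapped correlations across subjects. Here \texttt{dim\_fn} stands in for any separability-dimension estimator (one possible form is sketched in the Discussion), and \texttt{betas} is assumed to be a trials-by-regions matrix of deconvolved responses; both are illustrative assumptions rather than our exact pipeline.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def embedding_dimension(betas, labels, dim_fn, n_shuffles=100):
    """Separability dimension after destroying the
    stimulus-response map (the 'embedding' dimension)."""
    dims = [dim_fn(betas, rng.permutation(labels))
            for _ in range(n_shuffles)]
    return float(np.mean(dims))

def bootstrap_corr(x, y, n_boot=1000):
    """Bootstrapped Pearson correlations across subjects."""
    x, y = np.asarray(x), np.asarray(y)
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return np.array([np.corrcoef(x[i], y[i])[0, 1]
                     for i in idx])
\end{verbatim}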
Intuitively, these shuffled data are the geometric distribution of neural responses without task-relevant labels, and thus reflect the underlying embedding of neural activity used in the task. We therefore refer to the separability dimension of these null data as the \textit{embedding dimension}. Surprisingly, the negative correlation between subjects' learning accuracy and embedding dimension shows that fast learners have a lower embedding dimension, complementing their higher task-relevant dimension. This large ratio of task-relevant dimension to embedding dimension for fast learners suggests an efficient cognitive coding: the use of a smaller amount of embedding or effective resources from which a more informative set of task-relevant features can be constructed. We provide a low-dimensional schematic of such geometric arrangements in Fig. \ref{fig:dim}d. While the use of efficiency as a construct in cognitive science has been debated \cite{Poldrack201512}, here we provide a mathematical definition that contrasts the coding for meaningful content with the neural activity involved \textit{per se}, via the ratio between task-relevant and embedding dimensions. \subsection{Regional drivers of the relationship between representation and behavior} Next, we perform several \emph{post-hoc} analyses to better understand the main effects reported in the previous sections. Specifically, we first seek to determine which regions contribute the most to the higher task-relevant dimension observed in quick learners. To address this question, we conduct an exploratory analysis using a virtual lesioning approach in which we remove brain regions one at a time, and then recalculate the separability dimension of the modified representation. Here we report the regions whose absence causes the largest change in the observed correlation between separability dimension and response accuracy across subjects (magnitude of $z$-score $>2$ or $p<0.023$; uncorrected for multiple comparisons). We find that removal of the left hippocampus and of the right temporal pole causes the largest decreases in the observed correlation (see Fig. \ref{fig:dim}e). In other words, in subjects that learn quickly, the left hippocampus and right temporal pole seem to contribute to a higher separability dimension and \emph{vice versa}. A possible explanation for these results is that learning to perform this task requires effective separability of stimulus dimensions mediated by these regions. Such an interpretation is in line with the known role of the hippocampus in the rapid learning of stimulus associations \cite{squire1992memory}, and the role of the temporal pole in representing information about abstract conceptual properties of objects (such as object value) \cite{peelen2012conceptual}. In contrast, the removal of regions such as the left rostral middle frontal cortex and left supramarginal gyrus most strongly enhances the observed correlation, suggesting that their activity is orthogonal to or does not directly contribute to the large separability dimension that characterizes quick learners. \subsection{Quick learners show high dimensional task-relevant representations within local brain regions} Up to this point, we have studied neural activity across the whole brain and the separability dimension of such neural activity.
It is natural to ask if this relationship between learning ability and the dimension of neural responses can also be found in the multivoxel patterns of single brain regions hypothesized to be relevant for task performance. To address this question, we adapt our approach to examine ten regions of interest composed of 300 (or fewer) voxels (see Methods and Supplement). Following the prior analyses, we examine the correlation between separability dimension in the neural data in each local region and the participants' learning accuracy. Overall, we note that none of the regions show a negative correlation between their separability dimension and learning accuracy. Moreover, we find that three regions show a significant positive correlation, greater in magnitude than expected in the null model of shuffled data (non-parametric $p\leq0.05$; see Fig. \ref{fig:voxel}): the left anterior cingulate and primary visual cortices, as well as the right posterior fusiform cortex. We note that only the non-parametric test for the left anterior cingulate displayed $p\leq0.05$ after correcting for multiple comparisons (see Table \ref{tab:voxels}). Notably, the anterior cingulate cortex is thought to play a role in reward-based learning \cite{bush2002dorsal}, while the visual areas V1 and posterior fusiform are involved in the representation of lower-level and higher-level features of objects, respectively \cite{grill2003neural}. Our findings therefore suggest that these regions are comparatively more engaged in the creation of a value-related heuristic at a local level. \begin{figure}[tb] \includegraphics[width=0.92\linewidth]{voxel.pdf} \caption{\textbf{Quick learners show a larger dimension of responses in certain task-relevant regions at the voxel level. a.} We study regions of 300 (or fewer) voxels that we hypothesize to be involved in the processing of value and the learning of shapes. We see that three regions show a positive correlation between learning accuracy and separability dimension, with non-parametric $p\leq0.05$ compared to the null model of shuffled data. \textbf{b.} Topographical representation of these three regions on the surface of the brain: the left anterior cingulate cortex, left primary visual area, and right posterior fusiform. We note that the laterality of this latter effect is consistent with prior work demonstrating that the right and left posterior fusiform exhibit differential responses during object recognition \cite{Vuilleumier2002,KOUTSTAAL2001184,SIMONS2003613}. }\label{fig:voxel} \end{figure} \begin{table}[!hb] \centering \caption {\textbf{Brain regions where a higher dimensional representation is correlated with learning ability.} Pearson's correlation coefficient values and non-parametric $p$-values are given from comparison with the null model. The left anterior cingulate passes $p<0.005$ corrected for multiple comparisons (marked with $^*$).} \begin{tabular}{|c|c|c|c|c|} \hline No. of voxels & Brain region & Hemisphere & $r$ & $p$ \\ \hline 300& Anterior cingulate & Left & 0.54 & 0.003$ ^*$ \\ 300 & Primary visual & Left & 0.49 & 0.016 \\ 300 & Posterior fusiform & Right & 0.61& 0.050 \\ \hline \end{tabular} \label{tab:voxels} \end{table} \subsection{Quick learners develop more assortative representations} Besides separability dimension, a complementary geometric measure is that of \textit{label assortativity}, which simply identifies how easily distinguishable the neural responses are from each other according to all labels, and not just binarized labels. 
These two metrics provide distinct and potentially independent information regarding how data are organized (see Fig. \ref{fig:intro}d). We hypothesize that quick learners should show a more assortative representation, in addition to having a higher task-relevant dimension (see Fig. \ref{fig:sep}a). Here, we calculate assortativity using a linear support vector machine, chosen because of its simple interpretability. Examining the same neural data from the value judgment session at the end of the fourth day, we find that the resulting assortativity estimate is positively correlated with the response accuracy of participants on the first day. Comparing this correlation to that observed in the null model in which labels are randomly permuted, we find that this correlation of $r=0.55$ is significant with non-parametric $p=0.012$ (see Fig. \ref{fig:sep}b). Intuitively, these data suggest that participants who learn more quickly have a more assortative pattern of neural responses than participants who learn less quickly. To verify that the metrics of separability dimension and label assortativity do not have a strict overlap, we note that one metric explains approximately $r^2=34\%$ of the variance of the other metric. \begin{figure}[tb] \includegraphics[width=0.99\linewidth]{f5.pdf} \caption{\textbf{Dimension and assortativity provide a geometric depiction of neural data. a.} As different cognitive processes can exhibit typified geometric changes in the neural responses to various stimuli, we hypothesize that learning performance is associated with both higher dimension and higher assortativity. \textbf{b.} The distinct metric of label assortativity (according to all labels; see Fig. \ref{fig:intro}d) across the whole brain shows that quick learners display a higher assortativity ($r=0.55$; red markers), compared to the shuffled data in gold with non-parametric $p=0.012$.} \label{fig:sep} \end{figure} \section{Discussion} Here we develop and apply a novel computational framework to reveal how the high-dimensional neural responses of quick learners allow for greater distinguishability of meaningful stimuli while requiring fewer informational resources. Our observations are enabled by emerging methods from machine learning and data science \cite{Rigotti2013}, which can be used to estimate the intrinsic dimension of a representation despite pervasive measurement noise. In a cohort of 20 healthy adult humans learning the value of novel objects over the course of four days, we find that participants who learn most quickly display uniquely optimized neural responses to encode the cognitive processes associated with the task. We introduce two intuitive and interpretable metrics -- the task-relevant and embedding dimensions -- which allow us to show that quick learners achieve a delicate and efficient balance of a large task-relevant dimension and small embedding dimension. We complement this examination with supporting studies of finer neuroanatomy (assessing multivoxel patterns) and computation (assessing local assortativity). Broadly, our work offers a suite of tools to characterize response geometry, thereby offering a simple and intuitive explanation for how individuals learn to successfully distinguish between relevant stimuli in their environment over time.
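To make these two metrics concrete, the following sketch estimates the dichotomy-averaged separability dimension and the label assortativity from a trials-by-regions response matrix using a linear support vector machine. It follows the logic described above but is only schematic: the cross-validation scheme, the restriction to balanced dichotomies, and all parameter choices are simplifying assumptions rather than the exact procedure of our Methods.
\begin{verbatim}
import itertools
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def separability_dimension(betas, labels, m=4,
                           n_subsets=20, seed=0):
    """Average linear separability over balanced binary
    labelings of m-class subsets (cf. Rigotti et al.)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    scores = []
    for _ in range(n_subsets):
        subset = rng.choice(classes, size=m, replace=False)
        mask = np.isin(labels, subset)
        X, y = betas[mask], labels[mask]
        for pos in itertools.combinations(subset, m // 2):
            # one balanced binary relabeling of the m classes
            y_bin = np.isin(y, pos).astype(int)
            acc = cross_val_score(LinearSVC(max_iter=5000),
                                  X, y_bin, cv=3).mean()
            scores.append(acc)
    return float(np.mean(scores))

def label_assortativity(betas, labels):
    """Multi-class decodability of all stimulus labels."""
    return cross_val_score(LinearSVC(max_iter=5000),
                           betas, labels, cv=3).mean()
\end{verbatim}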
~\\ ~\\ \noindent \emph{A notion of cognitive coding efficiency.} The concept of coding efficiency has been exercised at smaller spatial scales to characterize the (often unexpectedly low) dimension of neural representations. For example, neuronal spiking patterns measured in the lateral intraparietal area as macaques engage in a visual spatial attention task map onto a one-dimensional dynamical trajectory \cite{Ganguli200815}. The simplicity and low-dimensionality of these dynamics mark disparate cognitive processes from decision-making and attentional shifting, to biased representations that arise from associative learning \cite{joshgold}. Indeed, such low-dimensionality is almost ubiquitous in neuronal measurements \cite{Cunningham2014,Machens350,SSolla} and may usefully characterize neural task complexity \cite{Gao214262}. Within this low-dimensional manifold, temporal variation in this ``effective'' dimension of neural activity can also indicate temporal variation in behavior \cite{Sadtler2014}. For example, as macaques engage in a recall task, the estimated task-relevant dimension from neural spiking activity in the prefrontal cortex is higher during correct responses than during incorrect responses \cite{Rigotti2013}. Extending previous methods, we introduce two complementary types of dimension (task-relevant and embedding) that allow insight into learning capacity and cognitive flexibility. Our results are consistent with the notion that the substantially different use of these two types of dimension allows the efficient encoding of contextually relevant data, potentially supporting optimal learning strategies. The compression of a large amount of information or content into a restricted number of channels has been studied in other cognitive domains such as sensory processing \cite{articleBarlow,Atick:1990:TTE:1351041.1351047,OLSHAUSEN19973311,Lewicki2002}. In light of these historical contributions, our results suggest that similar principles of geometric efficiency may extend to higher-order cognitive processes in humans. Further work could directly investigate commonalities in such principles across different scales of space and time. Such an investigation is in principle made possible by the fact that while the absolute value of these geometric metrics depends on the particular measurement technique, relative changes in value could be used to compare between data collected \emph{across} wholly different measurement techniques. The concept of efficiency has been applied to large-scale human neuroimaging data in several contexts, predominantly to describe situations where the behavior of subjects appears similar but neural activation is greater for one group (which is taken to be the ``less efficient'' group) than for the other \cite{ZIMMER201244,HAIER1992415,Gold387,Heinzel1224}. For instance, in an experiment involving working memory, less neural activity was needed for trained items as compared to new items \cite{ZIMMER201244}. The authors interpret this difference as a correlate of a gain in neural efficiency, concluding that training causes a more efficient neural representation. However, it has been pointed out that this interpretation does not shed light on the relationship between these two facts \cite{Poldrack201512}. In our study, we show that a more compact dimension of neural activation is simultaneously tied to larger information content in the same neural activation, leading to the idea of efficiency in the representation itself.
This notion is more akin to how the concept of efficiency is used in other contexts in the neuroscience literature, such as in studies of efficient coding in sensory systems \cite{articleBarlow,Atick:1990:TTE:1351041.1351047,OLSHAUSEN19973311,Lewicki2002} or in studies of network efficiency \cite{Bullmore2012,10.1371/journal.pcbi.1000748}, where a maximal amount of information is conveyed through a fixed (or smaller) feature or basis set. That the efficient cognitive coding we observe also appears differentially in subjects who learn faster is consistent and intuitive, but is not in itself required for our definition of efficiency. Hence our calculations of the dimension of representation provide a rigorous framework for quantifying and reasoning about the efficiency of cognitive coding, which can be measured and compared in other cognitive processes. In our experiment, subjects were presented with a set of shapes designed to have no visual features that correlated with their monetary value (see Methods). Each subject was required to flexibly reassign new values to these shapes through the course of the experiment. In general, humans can be guided to act according to what has been previously reinforced, or to move towards promising sources of future reward \cite{10.2307/1884852, SHIZGAL1997198,doi:10.1146/annurev.neuro.29.051605.112903}. Our work examines the neural basis that supports this flexible assignment of new value to existing objects, and how such objects become distinguished from each other in the representation of neural activity according to task-relevant cues. Indeed, upon investigating data from sessions where subjects are asked to evaluate the size and not the value of each shape, fast learners show no particular difference in the dimension of their neural responses (see Supplementary Information). Our results complement previous investigations into the relevance of cognitive flexibility for effective learning \cite{10.2307/2785779,Bassett03052011} and the underlying processes of executive function \cite{shine,Braun15092015}, while illuminating the emergent geometric architecture of the neural responses of effective learners. Future work could study whether efficient geometric representations arise in individuals who exhibit higher degrees of cognitive flexibility and dynamic reorganization of neural responses. ~\\ ~\\ \noindent \emph{Role of single regions within a broader whole-brain geometry.} Geometry and topology can be investigated across multiple scales of any complex system or its emergent dynamics \cite{betzel2016multi}. While some systems can display heterogeneity in geometric principles across spatial and temporal scales, others display greater scale-invariance, with the principles at one scale being recapitulated at other scales \cite{Khaluf20170662}. We find that neural activity patterns elicited by value judgments of learned stimuli display similar geometric principles whether assessed at the level of the whole brain, or at the level of multivoxel patterns in single brain areas. Indeed, at this smaller scale we find that the left primary visual and anterior cingulate cortices, and right posterior fusiform of quick learners display a differential increase in dimension. While we focus on just ten local regions, each of which is hypothesized to play an important role in the cognitive processes elicited by this task, it would be of interest to expand the study to additional regions or sets of regions defined with other methods.
Then, using the computational techniques that we introduce, one could begin to bridge the regional drivers of whole-brain simplicity and complexity in response geometry. ~\\ ~\\ \noindent \emph{Methodological considerations.} We note that there are several methodological considerations that are pertinent to our study. First, while the GLM extracts neural responses from the time series averaged across entire regions, it could also be useful to perform this extraction on time series at the voxel level before averaging, which may also decrease noise from irrelevant signals. Second, dimension and assortativity constitute starting points for a deeper analysis, and further work could identify the exact topology of the response. Third, the broad geometric methods that we develop and utilize here could be complemented by a dynamical study to assess how this geometry evolves across time. Fourth, while our cohort of twenty subjects already demonstrates significant evidence for geometric features that distinguish quick from slow learners, these results could well be verified across larger samples. Fifth, we find a significantly higher dimension in the neural responses of quick learners on the last day (more than in the previous days, see Supplementary Information), suggesting that this higher-dimensional and more efficient representation emerges most clearly over time and training. However, our results remain correlative and cannot establish a causal link between this high-dimensional representation and effective learning. Finally, we study a single cognitive task, and future work could extend these notions to other cognitive domains during different experiments, or as different cognitive processes are engaged. In a previous experiment examining recall performance in trained macaques, the two estimates of dimension and decoding accuracy (analogous to assortativity) were differentially related to behavior \cite{Rigotti2013}. Specifically, while the dimension of the macaque's neural representation was predictive of the macaque's performance, the decoding accuracy of the same neural data instead remained constant in both error and correct trials. These observations raise fundamental questions about whether different cognitive processes can exhibit typified geometric changes in the neural responses. In humans, a particularly interesting context in which to study such differences is the mental states engendered by ``explore'' \emph{versus} ``exploit'' behaviors common in general human experience \cite{Addicott2017}, which are thought to give rise to diffuse \emph{versus} structured neural representations. ~\\ ~\\ \noindent \emph{Conclusion.} Here we offer a computational framework for quantifying and understanding the geometry of neural responses in humans. The tools that we develop and exercise hold promise for the analysis of other complex cognitive tasks due to their general applicability to non-invasive neuroimaging and notable robustness to noise. We illustrate the utility of these tools in characterizing the organization of neural activity associated with effective cognitive performance and efficient cognitive coding during the learning of abstract values associated with novel objects. Our results suggest that effective learners are marked by a type of cognitive coding efficiency characterized by high-dimensional geometric representations in concert with a compact embedding of the task-relevant information.
Our observations motivate future work in cognitive and clinical neuroscience examining the generalizability of this notion of efficiency, and its relevance for disease. ~\\ ~\\ \paragraph{Acknowledgments} We thank Brett Falk for helpful discussions and Sarah Solomon for helpful comments on an earlier version of this manuscript. This work was supported by the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Laboratory and the Army Research Office through contract numbers W911NF-10-2-0022 and W911NF-14-1-0679, the National Institutes of Health (2-R01-DC-009209-11, 1R01HD086888-01, R01-MH107235, R01-MH107703, R01MH109520, 1R01NS099348 and R21-MH-106799), the Office of Naval Research, and the National Science Foundation (BCS-1441502, CAREER PHY-1554488, BCS-1631550, and CNS-1626008). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The accretion of smaller systems is an integral part of galaxy formation. The accretion history of a galaxy is perhaps most clearly encoded in its stellar halo, due to the combination of a relative scarcity of stars formed ``in-situ'', and long dynamical times which allow orbital information to persist. Motivated by the large apparent differences in mass and metallicity between the stellar halos of the Milky Way (MW) and M~31, \cite{2005MNRAS.363L..16R,2006ApJ...646..886F} proposed a picture---supported by cosmological simulation work---in which the metallicity of accreted material reflects the assembly history of the galaxy. The median metallicity of a stellar halo is now thought to be a reflection of the mass of the most massive accreted object \cite{2005ApJ...632..872R,2016ApJ...821....5D,2017arXiv170508442D}, a notion supported by the recent first observation of the stellar halo mass--metallicity relation \cite{2017MNRAS.466.1491H}. Below, we propose a simple method to explore, within a theoretical framework, the possible link between the accreted halo of a galaxy and present-day dwarf galaxies which may resemble those disrupted to form it. \section{Materials and Methods} We use the {\sc apostle}\footnote{A Project Of Simulating The Local Environment}~suite of cosmological hydrodynamical simulations \cite{2016MNRAS.457.1931S,2016MNRAS.457..844F}. These comprise twelve volumes, each containing two halos with masses, separations, kinematics, and local environment consistent with the MW, M~31, and the Local Group of galaxies. Each volume is simulated at multiple resolution levels, with gas particle masses varying from $\sim$$10^6\,{\rm M}_\odot$ at the lowest (L3) resolution level to $\sim$$10^4\,{\rm M}_\odot$ at the highest (L1) level. In this study, we use the intermediate (L2) level with gas particles of $\sim$$10^5\,{\rm M}_\odot$, dark matter particle mass $\sim$$6\times 10^5\,{\rm M}_\odot$, and $\sim$$300\,{\rm pc}$ force softening. This is the highest resolution at which all twelve volumes have been integrated to the present day. Each volume samples a region extending to radii $\gtrsim$$2\,{\rm Mpc}$ around the barycentre of the two central objects. We assume the WMAP7 cosmological parameters \cite{2011ApJS..192...18K}. {\sc apostle}\ {uses} the same hydrodynamics and galaxy formation prescriptions as the {\sc {eagle}} project~\cite{2015MNRAS.446..521S,2015MNRAS.450.1937C}---specifically, the model labelled ``Ref'' by \cite{2015MNRAS.446..521S}. The hydrodynamics are solved using the pressure--entropy formulation of smoothed particle hydrodynamics \cite{2013MNRAS.428.2840H}, and the {\sc {anarchy}} collection of numerical methods (for a brief description, see \cite{2015MNRAS.446..521S}) is used. The model includes prescriptions for radiative cooling \cite{2009MNRAS.393...99W}, star formation \cite{2004ApJ...609..667S,2008MNRAS.383.1210S}, stellar and chemical enrichment \cite{2009MNRAS.399..574W}, stellar feedback \cite{2012MNRAS.426..140D}, and cosmic reionization \cite{2001cghr.confE..64H,2009MNRAS.399..574W}. The model is calibrated to reproduce the galaxy mass--size relation and galaxy stellar mass function of $M_\star > 10^8\,{\rm M}_\odot$ objects \cite{2015MNRAS.450.1937C}. Structures are identified in the simulation output using the friends-of-friends (FoF) \cite{1985ApJ...292..371D} and {\sc {subfind}} \cite{2001MNRAS.328..726S,2009MNRAS.399..497D} algorithms.
The former iteratively links particles separated by less than $0.2\times$ the mean inter-particle separation; the latter then identifies self-bound substructures, termed ``subhalos'', separating them along saddle points in the density distribution. The most massive object in each FoF group is labelled the ``central'' object, and other objects in the same group are ``satellites''. In this study we focus on the stellar halo component of the $24$ roughly MW- and M~31-like galaxies (two per volume) in the {\sc apostle}\ {suite}. For each simulated galaxy we identify the progenitor system at earlier times using the merger tree procedure described in \cite{2003MNRAS.338..903H}. We explicitly check that the progenitor ``tracks'' are smooth in position and mass (i.e., that the primary progenitor is accurately traced through time). For each system we define the ``accreted halo'' as the collection of star particles in the {\sc {subfind}} group (i.e., gravitationally bound to the host) whose FoF group at their time of formation is not the FoF group hosting the progenitor of the MW- or M~31-like galaxy at the same time. Typically, a satellite galaxy will become FoF-associated with its host before any substantial disturbances to the stellar and gas components of the satellite due to the massive host begin. Our sample therefore does not include stars formed in tidal tails after accretion, ``in-situ'' halo stars, or stars ejected from the central galaxy. We~note that our definition of ``accreted halo'' includes \emph{{all accreted stars}}. Many of these are located in the central regions of the object they were accreted by and might be observationally characterized as part of a bulge, rather than a halo, component. \section{Results} In Figure~\ref{fig1} we show the metallicity and formation time\footnote{the age of the Universe minus the age of the star} distributions of the accreted halo stars. The distributions are normalized to the total stellar mass of each system before they are combined, such that each system contributes equal weight to the distribution. The accreted halos as defined here are significantly more metal-rich (median $-0.31$) than recent estimates for the MW (median $-1.78$~\cite{2013ApJ...763...65A}), M~31 ($\lesssim$$-0.7$ \cite{2014ApJ...780..128I}), and other galaxies with similar stellar masses \cite{2017MNRAS.466.1491H}.~The chemical enrichment prescriptions adopted in the {\sc {eagle}}-Ref model---and therefore used in the {\sc apostle}\ simulations---result in a galaxy stellar mass--median metallicity relation offset to higher metallicities than observed \cite{2013ApJ...779..102K} for galaxies with $M_\star\lesssim10^8\,{\rm M}_\odot$. The disruption of these unusually metal-rich satellite galaxies unsurprisingly results in unusually metal-rich stellar haloes. This is fundamentally a shortcoming of the model. A direct comparison with the measurements cited above is further hindered by our selection of accreted particles, which inevitably includes many stars which would not usually be present in observed samples of ``halo stars'', especially toward the centre of each system. In the context of our analysis below, these differences are of limited concern since our argument concerns mainly relative---rather than absolute---metallicities. 
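To make the selection of accreted particles concrete, the following minimal sketch (in Python, with hypothetical array names; the actual bookkeeping is done with the merger trees and {\sc {subfind}}/FoF catalogues described in the Methods) illustrates the membership test we apply to each star particle bound to the host at the present day:

\begin{verbatim}
import numpy as np

# Minimal sketch of the accreted-halo selection described above.
# Hypothetical inputs, per star particle bound to the host at z = 0:
#   formation_snapshot[i] : snapshot at which particle i formed
#   fof_at_formation[i]   : FoF group of particle i at that snapshot
#   progenitor_fof[s]     : FoF group of the host's primary progenitor
#                           at snapshot s (from the merger tree)
def accreted_mask(formation_snapshot, fof_at_formation, progenitor_fof):
    """True for stars formed outside the host progenitor's FoF group."""
    return fof_at_formation != progenitor_fof[formation_snapshot]

# Example: the progenitor occupies groups [0,0,1,2,2,2] at snapshots 0-5;
# a star formed at snapshot 3 in group 7 counts as accreted, while one
# formed at snapshot 5 in group 2 (the progenitor's own group) does not.
prog = np.array([0, 0, 1, 2, 2, 2])
print(accreted_mask(np.array([3, 5]), np.array([7, 2]), prog))
\end{verbatim}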
\begin{figure}[H] \centering \includegraphics[width=.7\columnwidth]{figs/fig1} \caption{The upper panel shows the formation time distribution for stars in the accreted halos of the $24$~Milky Way (MW)- and M~31-like galaxies from the {\sc apostle}\ simulation suite (filled histogram) at resolution level L2. The coloured lines show the same distribution for field (central) objects in $4$ consecutive 1-dex bins of stellar mass from $10^6$ to $10^{10}\,{\rm M}_\odot$ (red, green, blue, purple in order of increasing $M_\star$). The lower panel shows the metallicity distribution for the same classes of objects. The accreted halos have a metallicity distribution similar to that of $10^9$--$10^{10}\,{\rm M}_\odot$ field objects, but relatively older stellar populations. For all curves, star particles with metallicities $<$$-4$ contribute to the lowest metallicity bin shown. \label{fig1}} \end{figure} With its tail of recently-formed stars, the formation time distribution may at first glance seem unusual: the MW stellar halo does not have such a population of young stars \cite{2016NatPh..12.1170C}. However, the M~31 stellar halo has a star formation history which, though it still peaks at ages of $5$--$13\,{\rm Gyr}$, has a clear tail to much younger ages \cite{2006ApJ...652..323B}. Furthermore, there is no attempt in the selection of systems for the {\sc apostle}\ simulations to choose halos with merger histories similar to those of the MW and M~31, or even to match the morphology of the galaxies: a few of the systems have had recent major mergers and still show clear morphological disturbances. These systems make the largest (but not the entire) contribution to the tail of young stars. How the {\sc apostle}\ sample of stellar haloes compares to the similarly-sized GHOSTS sample of stellar haloes recently observed in detail \cite{2016MNRAS.457.1419M,2017MNRAS.466.1491H} (see also the compilation in \cite{2017ApJ...837L...8B}) is a topic we hope to pursue in a future contribution. In Figure~\ref{fig1} we also show the formation time and metallicity distributions for other isolated (central) galaxies within the same {\sc apostle}\ simulation volumes, binned by stellar mass in 1~dex bins from $10^6$--$10^{10}\,{\rm M}_\odot$. The metallicity distribution of the accreted halos is roughly similar to that of the field objects in the highest mass bin ($M_\star\sim10^{9.5}\,{\rm M}_\odot$), but their formation time distribution is biased to earlier times (older ages). This is unsurprising: $10^{10}\,{\rm M}_\odot$ galaxies in the field are typically actively star forming up to the present day, whereas recently formed stars are excluded from the accreted halo sample, which is dominated by disrupted ``quenched'' satellites, as enforced by our selection process. The metallicity distributions in Figure~\ref{fig1} hint that the stellar populations in the accreted halos must be dominated by relatively massive accreted objects---the high median metallicity simply cannot be reached via the accretion of many low mass objects. We now explore this point further. For illustrative purposes, we use a single {\sc apostle}\ galaxy from the 7th volume, which we {label}\footnote{AP-[resolution level]-[volume number]-[FoF group number]-[subgroup number]} AP-L2-V7-1-0. This~galaxy was chosen ``by eye'' to be representative of the sample. The formation time and metallicity distributions of this galaxy are shown in Figure~\ref{fig2}.
The total stellar mass of accreted halo stars in this system is $10^{9.4}\,{\rm M}_\odot$ (for comparison, the MW stellar halo mass is $\sim$$5.5\times10^8\,{\rm M}_\odot$ \cite{2016ARAnA..54..529B}, that of M~31 is $\sim$$1.5\times10^{10}\,{\rm M}_\odot$ \cite{2014ApJ...780..128I}; see also \cite{2017ApJ...837L...8B}). We also show the formation time and metallicity distributions for present-day dwarf galaxies in the field which have stellar masses slightly ($0.2$--$0.5$~dex; the choice of this particular interval is explained below) larger than this accreted halo. The stellar populations in these field galaxies are more metal-rich and younger than those in the accreted halo. However, if we re-weight the star particles in the same field objects such that the age distribution of the accreted halo is exactly matched, the resulting metallicity distribution is a close match to the metallicity distribution of the accreted halo, both in terms of the median (offset by less than $0.05$~dex, compared to $0.3$~dex before re-weighting) and the shape. \begin{figure}[H] \centering \includegraphics[width=.7\columnwidth]{figs/fig2} \caption{The upper panel shows the formation time distribution for stars in the accreted halo of AP-L2-V7-1-0, one MW or M~31-like galaxy from the {\sc apostle}\ suite (filled histogram); the lower panel is similar but for the metallicity distribution. The dotted purple histogram shows the same distributions for field (central) objects which have stellar masses in the range $10^{9.6}$--$10^{9.9}\,{\rm M}_\odot$, i.e., $0.2$--$0.5$~dex more massive than the accreted halo of AP-L2-V7-1-0. The result of re-weighting the stellar populations of the field objects, weighted by the formation time distribution of the accreted halo, is shown with the solid purple histogram (by construction, the formation time distribution is then a perfect match). This enforced bias toward older stars in the field objects results in a re-sampled metallicity distribution which resembles much more closely that of the accreted halo. Arrows mark the median of the distribution with corresponding line style.\label{fig2}} \end{figure} In Figure~\ref{fig3} we show the result of applying the same process illustrated in Figure~\ref{fig2} to all $24$~accreted halos in our sample. We use the same offset of $0.2$--$0.5$~dex in stellar mass for all galaxies and show the median metallicity before and after re-weighting by the formation time distribution of the accreted halo. In most cases, the median of the re-weighted distribution approaches that of the accreted halo, though with significant scatter. The mass offset interval was chosen based on purely empirical considerations, by systematically exploring a range of possibilities covering the full mass range of field objects present in the simulations, and various widths for the interval. The $0.2$--$0.5$~dex window is the one which minimizes the scatter in the right panel of Figure~\ref{fig3}, without introducing a systematic offset from the line of 1:1 agreement. \begin{figure}[H] \centering \includegraphics[width=.7\columnwidth]{figs/fig3} \caption{Result of applying the stellar population re-weighting illustrated in Figure~\ref{fig2} to the $24$ MW- and M~31-like galaxies in the {\sc apostle}\ suite (AP-L2-V7-1-0, the example from Figure~\ref{fig2}, is marked with a star). In each case, the field objects in the stellar mass interval between $0.2$ and $0.5$~dex more massive than the stellar mass of the accreted halo are selected. 
Before the re-weighting (\textbf{left panel}), the field objects typically have a median metallicity greater than that of the corresponding accreted halo (the~dotted line indicates 1:1 agreement). After re-weighting (\textbf{right panel}), the medians of the metallicity distributions of the field objects and accreted halos agree, albeit with substantial scatter.\label{fig3}} \end{figure} \section{Discussion} The ``accreted halos'' from the {\sc apostle}\ simulation suite, as we have defined them here, should not be taken as direct detailed models of the MW, M~31, or other galactic stellar halos as defined observationally. They are, however, robust and internally self-consistent models of the assembly of such systems, and simultaneously of the nearby field objects which survive to the present day. The above results suggest that the mass in the accreted halo of a galaxy is usually dominated by the disrupted content of one or a handful of relatively massive objects. If these had continued to grow and evolve in isolation instead of being accreted and destroyed, we would expect them to resemble present-day dwarf galaxies in the field with masses a factor of $\lesssim$$3$ greater than that of the accreted halo. Older stellar populations in relatively massive dwarf galaxies are roughly the surviving analogs of the most massive ``building blocks'' of stellar halos. Though it seems that $1$--$2$ disrupted massive systems make up the bulk of most stellar halos, the remains of many lower-mass systems are also expected to be present. These make a nearly insignificant contribution (by mass) to the halo as a whole, but their signature may be detectable as a radial gradient---less massive systems are subject to weaker dynamical friction, and are destroyed at larger radii---and/or as overdense features such as shells or streams. The simulations and method used above offer a means of studying the link between the properties of such features and the types of objects which were destroyed to create them. \vspace{6pt} \acknowledgments{We thank the other members of the {\sc apostle}\ team, especially T. Sawala, A. Fattahi, M.~Schaller, and C. Frenk, for their efforts in creating the simulations used in this work. We thank the {\sc {eagle}} simulation collaboration for providing the galaxy formation model, software and calibration. KAO thanks J. Helly for support in using the merger tree software. We thank the anonymous referees for their useful comments and suggestions. This work was supported by the Science and Technology Facilities Council (grant number ST/F001166/1). This~work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. This~research has made use of NASA's Astrophysics Data System.} \authorcontributions{K.A.O., E.S. and J.F.N. conceived and designed the experiments; K.A.O. performed the experiments; K.A.O., E.S. and J.F.N. analyzed the data; K.A.O. wrote the paper.} \conflictsofinterest{The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The inflationary paradigm is highly successful and an attractive way of resolving some of the puzzles of standard cosmology. During inflation, the early universe undergoes an accelerated expansion, stretching quantum fluctuations to super-horizon scales which we observe today as CMB anisotropy \cite{Komatsu:2008hk}. Since Einstein's equations are highly non-linear, comparison of the predictions of inflation with the observations requires one to expand the equations order-by-order. At linear order, the predictions of inflation are consistent with the CMB. However, the linear order observables, like the scalar spectral index and the tensor-to-scalar ratio, cannot rule out models of inflation; physically measurable observables corresponding to higher-order quantities, like the bispectrum or trispectrum, will help to rule out some models of inflation \cite{Lidsey:1995np,Lyth:1998xn,Lyth:2007qh,Mazumdar:2010sa}. In the standard inflationary models, inflation is driven by scalar field(s). Canonical scalar fields are the simplest, and obtaining a sufficient amount of inflation requires a flat potential. During canonical `slow-roll' evolution, the potential energy dominates over the kinetic energy and drives a quasi-exponential expansion of the universe; such flat potentials are often difficult to obtain within particle physics models \cite{Lyth:1998xn, Lidsey:1995np}. Non-canonical scalar fields are generalizations of canonical scalar fields and reduce the dependence on the potential. In the case of a non-canonical scalar field, even in the absence of a potential energy term, a general class of non-quadratic kinetic terms can drive inflationary evolution. Such models satisfy two crucial requirements of inflationary scenarios: the scalar perturbations are well-behaved during inflation, and there exists a natural mechanism for exiting inflation in a graceful manner. Non-canonical scalar field models also contain extra parameters compared to the canonical scalar field, such as the speed of sound. Unlike in canonical scalar field models, the speed of propagation of the scalar perturbations in these inflationary models can be time-dependent \cite{Armendariz-Picon1999, Garriga1999, Armendariz-Picon2001}. Recently, in order to seek more generalized fields, models with higher time derivatives, like Horndeski scalar fields and Kinetic Gravity Braiding models \cite{Horndeski1974,Kobayashi2010,Kobayashi2011,Nicolis2008,Deffayet2009,Deffayet:2010qz}, have been considered. Besides these, there are plenty of other models that lead to an accelerated universe. Since inflation takes place at high energies, quantum field theory is the best description of the matter at these energies. Hence, the evaluation of any physical quantity, like the $n$-point correlation functions, requires us to either promote the effective field variables to operators (Heisenberg picture) or integrate over all possible field configurations on all of space--time (path-integral picture). Since it is unclear what effective field configurations to integrate over, the Heisenberg picture is the preferred approach. In other words, we obtain the effective Hamiltonian operator and evaluate the correlation functions of the relevant operators. In the context of cosmological perturbation theory, there are currently two approaches in the literature to evaluate the effective Hamiltonian --- the Lagrangian formalism and the Hamiltonian formalism.
In the case of the Lagrangian formalism, the Lagrangian is expanded up to a particular order, i.e., if we are interested in obtaining the third order interaction Hamiltonian, the effective Lagrangian needs to be expanded up to third order and the constraints are systematically removed from the system to obtain the effective perturbed Lagrangian. Then, the momentum $\pi$ corresponding to $\varphi$ is obtained as a polynomial of $\dot{\varphi}$ and, using order-by-order approximations, $\dot{\varphi}$ is expressed as a polynomial of $\pi$. Next, using the Legendre transformation, the Hamiltonian is expressed in terms of $\pi$ and $\varphi$. In order to express the Hamiltonian in terms of $\dot{\varphi}$ and $\varphi$, only the leading order relation between $\pi$ and $\dot{\varphi}$ is used \cite{Huang:2006eha, Seery:2005wm, Chen:2006nt, Bartolo:2001cw, Bartolo:2003bz, Bartolo:2004if, Maldacena2003, Seery:2008ax}. There are some difficulties with this method: \begin{enumerate} \item In the case of cosmological perturbations, $\pi$ and $\dot{\varphi}$ are perturbed quantities (curvature perturbations), so expressing one as a polynomial of the other is an approximation. \item At the end, to express the Hamiltonian in terms of the configuration-space variable, we use only the leading order relation between $\pi$ and $\dot{\varphi}$, not the polynomial relation. Hence, several approximations are employed to convert the effective Lagrangian to the effective Hamiltonian. \item The above method is also very restrictive, and it is difficult to extend it to a generalized constrained system. \item It is also difficult to use this method for higher order perturbations. \end{enumerate} In the context of cosmological perturbations, the above approach leads to consistent results; however, a consistent Hamiltonian formulation is always preferred over the previous approach, as it makes the calculations simpler and keeps better track of the technical details. Langlois \cite{Langlois1994} first introduced a consistent Hamiltonian formulation of the canonical scalar field. However, Langlois' approach is difficult to extend to higher orders of perturbations or to different types of fields, due to the fact that it requires the construction of a gauge-invariant conjugate momentum. In our recent work \cite{Nandi:2015ogk}, henceforth referred to as I, we introduced a different Hamiltonian approach that can address and deal with all the issues in the previous methods and provides an effective and robust way to obtain the interaction Hamiltonian for any model at any order of perturbations. Also, in the case of calculating the mixed-mode (e.g., scalar-tensor) interaction Hamiltonian \cite{Maldacena2003, Seery:2008ax}, our approach is simpler than the previous one.
The table below provides a bird's eye view of both formulations and of the advantages of the Hamiltonian formulation that is proposed in this work \cite{Nandi:2015ogk}: \begin{center} \begin{tabular}{|m{2cm}|m{6cm}|m{6cm}|} \hline & Lagrangian formulation & Hamiltonian formulation \\ \hline Gauge conditions and {gauge-invariant equations} & At any order, choose a gauge {which} does not lead to gauge-artifacts & Choose a gauge {with no gauge-artifacts}; however, the momentum corresponding to an unperturbed quantity is non-zero, leading to consistent equations of motion.\\ \hline Dynamical variables & {Counting true dynamical degrees of freedom is difficult.} & Using Dirac's procedure, constraints can easily be obtained and it is easy to determine the degrees of freedom.\\ \hline Quantization at all orders & Difficult to quantize constrained systems. & Since constraints are obtained systematically and the reduced phase space contains only true degrees of freedom, it is straightforward to quantize the theory using the Hamiltonian formulation.\\ \hline Calculating the observables & Requires inverting the expressions at each order; hence it is non-trivial to compute higher-order correlation functions. & Once the relation between $\varphi$ and the curvature perturbation\footnotemark[1] is known, calculating the correlation functions from the Hamiltonian is simple and straightforward.\\ \hline \end{tabular} \end{center} \footnotetext[1]{It is important to note that, in the case of first order, the relation between $\varphi$ and the three-curvature is straightforward. However, it is more subtle in the case of higher-order perturbations \cite{Malik2009}.} In I, we applied our new method to the canonical and a specific higher derivative (Galilean) scalar field model, and showed explicitly that the method can efficiently obtain the Hamiltonian at all orders. In the case of certain non-canonical scalar field models, if $\dot{\varphi}$ can be expressed uniquely in terms of the canonical conjugate momentum, it is possible to obtain the Hamiltonian and the results of I can be extended. However, for a general non-canonical scalar field, we do not have a way to rewrite $\dot{\varphi}$ in terms of the canonical conjugate momentum and, hence, it is not possible to obtain the Hamiltonian in this manner. In this work, we explicitly obtain the Hamiltonian for general non-canonical scalar fields and obtain the interaction Hamiltonian up to fourth order. This work is divided into two parts. In the first part, we provide the procedure to obtain the Hamiltonian for the non-canonical scalar field by introducing a new phase-space variable. Then, by choosing different models, we explicitly show that the Hamiltonian leads to consistent equations of motion as well as the perturbed interaction Hamiltonian. We also find a new definition of the speed of sound in terms of phase-space variables. In the second part, in order to retrieve the generalized equations of motion in configuration-space from phase-space, we provide a systematic way to invert generalized non-canonical phase-space variables to configuration-space variables and vice versa, and show that all equations are consistent. Finally, we extend the method to generalized higher derivative scalar fields. A flow-chart below illustrates the method for non-canonical scalar fields: {\tiny \begin{center} \begin{tikzpicture}[node distance=2.5cm] \node (start)[st] {Non-canonical scalar field \\ $P(X, \varphi)$.
See eq.(\ref{ADM-Non-canonical})}; \node (defmom) [st, below of=start] {Define momenta, see eqs (\ref{piijgeneral}), (\ref{piphigeneral})}; \node (nonham) [st, below of=defmom]{Hamiltonian cannot be obtained\\$H \stackrel{?}{=} f(\varphi, \pi_\varphi) $}; \node (ham) [st, below of=nonham ]{Hamiltonian, see eq. (\ref{Hamfull})}; \node (Appndix) [st, below of=nonham, xshift= -6cm] {Appendix \ref{Inversion}}; \node (Inv) [st, below of=defmom, xshift=5cm]{Inverse Legendre transformation}; \node (EL) [st, below of=ham, yshift=8cm, xshift=5cm]{Euler-Lagrange equations of motion}; \draw [arrow] (start) -- (defmom); \draw [arrow] (defmom) --node[anchor=west, yshift=.3cm]{Legendre transformation} (nonham); \draw [arrow] (defmom) --node[anchor=west]{in general not possible} (nonham); \draw [arrow] (defmom) --node[anchor=west, yshift=-.3cm]{$[\dot{\varphi} \stackrel{?}{=} f(\pi_\varphi)$]} (nonham); \draw [arrow] (nonham) -- node[anchor=west]{Define $G$, see eq. (\ref{G})}(ham); \draw [arrow] (ham) -- node[anchor=south]{Specific}(Appndix); \draw [arrow] (ham) -- node[anchor=north]{non-canonical models}(Appndix); \draw [arrow] (ham) -| node[anchor=south, xshift=-1.7cm]{consistency check} (Inv); \draw [arrow] (ham) -| node[anchor=north, xshift=-.7cm, yshift=1.6cm]{see section \ref{InversionNC}.} (Inv); \draw [arrow] (ham) -| node[anchor=south, xshift=-1.2cm, yshift=1.5cm]{Define Inverse of G's} (Inv); \draw [arrow] (start) -| node[anchor=west, xshift=-3cm, yshift=.25cm]{variation of action} (EL); \draw [arrow] (Inv) -- (EL); \end{tikzpicture} \end{center} } In the next section, we briefly discuss non-canonical scalar fields. We also discuss the gauge choices and the corresponding gauge-invariant variables. In section \ref{Hamiltonian-nonC}, the Hamiltonian formulation of the generalized non-canonical scalar field is introduced by defining a new phase-space function which provides consistent equations of motion. In section \ref{first}, we extend the results of I to the non-canonical scalar field in the flat-slicing gauge to obtain the perturbed equations of motion. In section \ref{InversionNC}, we provide a partial inversion method between phase-space variables and configuration-space variables, and in section \ref{InteractionHam}, we provide the third and fourth order interaction Hamiltonian for the non-canonical scalar field. In section \ref{ExGal}, we briefly discuss the application of our method to the generalized Galilean scalar field model. Finally, in section \ref{Conclu}, we end with discussions and conclusions of the results. In Appendix \ref{Inversion}, the functional form of the new variable is obtained for different scalar field models. In Appendix \ref{Langlois-nonC}, we implement Langlois' approach for the non-canonical scalar field model. In this work, we use the $(-, +, +, +)$ metric signature. We also denote by $\prime$ the derivative with respect to conformal time. \section{Model and gauge choices}\label{BasicModel} The action for a non-canonical scalar field minimally coupled to gravity is \begin{equation} \label{GravityAction} \mathcal{S} = \int d^4 x \sqrt{-g} \left[\frac{1}{2 \kappa} R + \mathcal{L}_m(\varphi, \partial\varphi) \right], \end{equation} where $R$ is the Ricci scalar and the matter Lagrangian, $\mathcal{L}_m$, is of the form \begin{equation}\label{boxx} \mathcal{L}_m = P(X, \varphi) ,~~~ ~X \equiv \frac{1}{2} g^{\mu \nu} \partial_{\mu}{\varphi} \partial_{\nu}{\varphi}.
\end{equation} {$P(X, \varphi)$ corresponds to a non-standard kinetic term, hence the name non-canonical scalar field \cite{Armendariz-Picon1999,Garriga1999,Armendariz-Picon2001}. Further, fixing $ P = - X - V(\varphi)$, where $V(\varphi)$ is the potential, one can retrieve the well-known canonical scalar field model.} Varying the action (\ref{GravityAction}) with respect to the metric gives Einstein's equation \begin{equation} \label{EinsteinEquation} R_{\mu \nu} - \frac{1}{2} g_{\mu \nu}~ R = \kappa~ T_{\mu \nu}, \end{equation} where the stress tensor $(T_{\mu \nu})$ for the non-canonical scalar field is \begin{equation} \label{EMTensor} T_{\mu \nu} = - P_X \partial_{\mu}{\varphi} \partial_{\nu}{\varphi} + g_{\mu \nu} P. \end{equation} Varying the action (\ref{GravityAction}) with respect to the scalar field `$\varphi$' leads to the following equation of motion \begin{equation} \label{EG} P_X \Box \varphi + P_{XX}\, \partial_{\mu}{X}\, \partial^{\mu}{\varphi} + 2X P_{X \varphi} - P_\varphi = 0, \end{equation} which can also be obtained from the conservation of the Energy-Momentum tensor, $\nabla_{\mu}{T^{\mu \nu}} = 0$. The four-dimensional line element in the ADM form is given by \begin{eqnarray} ds^2 &=& g_{\mu \nu} dx^{\mu} dx^{\nu} \nonumber \\ \label{line} &=& -(N^2 - N_{i} N^{i} ) d\eta^2 + 2 N_{i} dx^{i} d\eta + \gamma_{i j} dx^{i} dx^{j}, \end{eqnarray} where $N(x^{\mu})$ and $N_i(x^{\mu})$ are the Lapse function and the Shift vector, respectively, and $\gamma_{i j}$ is the 3-D space metric. The action (\ref{GravityAction}) for the line element (\ref{line}) takes the form \begin{equation} \label{ADM-Non-canonical} \mathcal{S}_{NC} = \int d^4x\, N\, \sqrt{\gamma}\, \left[ \frac{1}{2 \kappa} \left(^{(3)}R + K_{i j} K^{i j} - K^2\right) + P(X, \varphi) \right], \end{equation} where $K_{i j}$ is the extrinsic curvature tensor, defined by \begin{eqnarray} && K_{i j} \equiv \frac{1}{2N} \left( \partial_{0}{\gamma_{i j}} - N_{i|j} - N_{j | i} \right), \nonumber \\ && K \equiv \gamma^{i j} K_{i j}. \nonumber \end{eqnarray} Perturbatively expanding the metric only in terms of scalar perturbations and the scalar field about the flat FRW spacetime in conformal coordinates, we get \begin{eqnarray} && g_{0 0} = - a(\eta)^2(1 + 2 \epsilon \phi_1 + \epsilon^2 \phi_2 + ...) \\ && g_{0 i} \equiv N_{i} = a(\eta)^2 (\epsilon \partial_{i}{B_1} + \frac{1}{2} \epsilon^2 \partial_{i}{B_2} + ...) \\ && g_{i j} =a(\eta)^2 \left((1 - 2 \epsilon \psi_1 - \epsilon^2 \psi_2 -...)\delta_{i j} + 2 \epsilon E_{1 i j} + \epsilon^2 E_{2 i j} + ...\right)\\ &&\varphi = \varphi_0(\eta) + \epsilon \varphi_1 + \frac{1}{2} \epsilon^2 \varphi_2 + ... \end{eqnarray} where $\epsilon$ denotes the order of the perturbation. To determine the dynamics, we need five scalar functions ($\phi, B, \psi, E$ and $\varphi$) at each order. Since there are two arbitrary gauge-freedoms for scalar perturbations, one can fix two of the five scalar functions. In this work, we derive all equations by choosing a specific gauge --- the flat-slicing gauge, i.e., $\psi = 0, E = 0$ --- at all orders: \begin{eqnarray} \label{00pmetric} && g_{0 0} =- a(\eta)^2(1 + 2 \epsilon \phi_1 + \epsilon^2 \phi_2 + ...) \\ \label{0ipmetric} && g_{0 i} \equiv N_{i} = a(\eta)^2 (\epsilon \partial_{i}{B_1} + \frac{1}{2} \epsilon^2 \partial_{i}{B_2} + ...) \\ \label{3metric} && g_{i j} =a(\eta)^2 \delta_{i j}\\ \label{field} &&\varphi = \varphi_0(\eta) + \epsilon \varphi_1 + \frac{1}{2} \epsilon^2 \varphi_2 + ...
\end{eqnarray} It can be shown that the perturbed equations in the flat-slicing gauge coincide with the gauge-invariant equations of motion (in a generic gauge, $\varphi_1$ corresponds to the gauge-invariant combination $\varphi_1 + \frac{\varphi_0{}^\prime}{H} {}\psi_1 \equiv \frac{\varphi_0{}^\prime}{H} \mathcal{R}$, where $\mathcal{R}$ is called the curvature perturbation \cite{Malik2009}). Similarly, one can choose another suitable gauge with no coordinate artifacts to obtain the gauge-invariant equations of motion. Such gauges are the Newtonian-conformal gauge $(B = 0,\, E = 0)$, the constant density gauge $(E = 0, \, \delta\varphi = 0)$, etc. Before we proceed with the Hamiltonian formulation, it is important to clarify an issue related to quantization in cosmological perturbation theory: while the field variable $\varphi_i$ and the metric variables $\phi_i, B_i, \psi_i$ (where $i$ takes values $1, 2, \cdots$) are expanded perturbatively, the operators corresponding to these variables (i.e., $\varphi_1, \varphi_2, \cdots$) cannot be treated as independent operators, as higher orders in perturbation theory do not lead to independent degrees of freedom. Otherwise, the unperturbed theory would have infinitely many local degrees of freedom. In a canonical quantization, there is one operator for the field and one for its momentum, on which the quantum Hamiltonian depends\footnote{We thank Martin Bojowald for discussion regarding this point.}. \section{Hamiltonian formulation}\label{Hamiltonian-nonC} In our last work \cite{Nandi:2015ogk}, we provided an efficient way of obtaining the consistent perturbed Hamiltonian for any gravity model. However, it works only if the form of the action is specified in such a way that the Legendre transformation $\dot{\varphi} \rightarrow \pi_\varphi$ is invertible both ways. For non-canonical scalar fields, the momenta corresponding to the action (\ref{ADM-Non-canonical}) are \begin{eqnarray} \label{piijgeneral} \pi^{i j} \equiv \frac{\delta \mathcal{S}_{NC}}{\delta {\gamma^\prime_{i j}}} &=& \frac{\sqrt{\gamma}}{2 \kappa}\, ({\gamma}^{i j} {\gamma}^{k l} - {\gamma}^{i k} {\gamma}^{j l}) {K}_{k l} \\ \label{piphigeneral} \pi_\varphi \equiv \frac{\delta \mathcal{S}_{NC}}{\delta {\varphi^\prime}} &=& - \sqrt{\gamma}P_X \sqrt{- 2X + Y}, \quad \mbox{where}\quad Y \equiv \gamma^{i j} \partial_i \varphi \partial_j \varphi. \end{eqnarray} As one can see, equation (\ref{piijgeneral}) is invertible, and the inversion relation is given by \begin{equation}\label{InvPiij} {\gamma}^\prime_{m n} = \gamma_{n k} N^k_{|m} + \gamma_{m k} N^k_{|n} - 2 N K_{m n}, ~~~ K_{i j} = \frac{\kappa}{\sqrt{\gamma}} \left(\gamma_{i j} \gamma_{k l} - 2 \gamma_{i k} \gamma_{j l}\right) \pi^{k l}, \end{equation} but equation (\ref{piphigeneral}) is non-invertible for an arbitrary function $P(X, \varphi)$. However, if $P(X, \varphi)$ is specified, it may be possible to invert the equation and write $X$ in terms of $\pi_\varphi$. Inversion relations for commonly used non-canonical models are given in Appendix \ref{Inversion}.
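As a quick illustration of a case where the inversion \emph{is} possible, consider the canonical choice $P = - X - V(\varphi)$ (so that $P_X = -1$); equation (\ref{piphigeneral}) then reads $\pi_\varphi = \sqrt{\gamma}\, \sqrt{-2X + Y}$ and can be inverted in closed form, \begin{equation} X = \frac{1}{2} \left( Y - \frac{\pi_\varphi^2}{\gamma} \right), \end{equation} whereas for a generic $P(X, \varphi)$ no such closed-form expression for $X(\pi_\varphi, \gamma, Y, \varphi)$ exists.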
Using equation (\ref{InvPiij}), we can write the Hamiltonian density as \begin{eqnarray} \mathcal{H}_{NC} &=& \pi^{i j} \gamma_{i j}^\prime + \pi_\varphi \varphi^\prime - \mathcal{L}_{NC} \nonumber \\ &=& 2 {\gamma}_{i j} {\partial}_{k}{{N}^{j}}\, {\pi}^{i k} + {N}^{i} {\partial}_{i}{{\gamma}_{j k}}\, {\pi}^{j k} - \frac{N \kappa}{\sqrt{\gamma}} \left(\gamma_{i j} {\gamma}_{k l} - 2\gamma_{i k} \gamma_{j l}\right) {\pi}^{i j} {\pi}^{k l} - \frac{N \sqrt{\gamma}}{2 \kappa} \,{}^{(3)}R -\nonumber \\ \label{fullhamiltonian} && N \sqrt{\gamma}~\tilde{G}(X, \gamma, Y, \varphi) + {N}^{i} \pi_\varphi {\partial}_{i}{\varphi}, \quad\mbox{where}\quad\tilde{G}\equiv \left( P - P_X \left(2X - Y\right)\right). \end{eqnarray} Note that the above expression is still not the Hamiltonian, since $\tilde{G}$ is not a phase-space variable: equation (\ref{piphigeneral}) is not invertible for an arbitrary form of $P(X, \varphi)$, so $\tilde{G}$ cannot, in general, be expressed in terms of $\pi_\varphi$. Hence, a natural question that arises is: \emph{How to invert configuration-space variables to phase-space variables so that we can obtain the generalized consistent Hamiltonian for the non-canonical scalar field?} In this section, we show that, by defining a new phase-space function, the above problem can be resolved. The new phase-space quantity is defined as \begin{equation} \label{G} G(\pi_\varphi, \gamma, Y, \varphi) = \tilde{G}(X, \gamma, Y, \varphi) \equiv P - P_X \left(2X - Y\right). \end{equation} Since the momenta corresponding to $N$ and $N^i$ vanish, i.e., $\pi_N = \pi_i = 0$, using the above defined variable, the Hamiltonian constraint can be written as \begin{equation} \label{HamitonianConstraint} \mathcal{H}_N \equiv \{\pi_N, \mathcal{H}_{NC}\} = \frac{\delta \mathcal{H}_{NC}}{\delta N} = - \frac{ \kappa}{\sqrt{\gamma}} \left(\gamma_{i j} {\gamma}_{k l} - 2\gamma_{i k} \gamma_{j l}\right) {\pi}^{i j} {\pi}^{k l} - \frac{ \sqrt{\gamma}}{2 \kappa} \,{}^{(3)}R - \sqrt{\gamma}\,\, G(\pi_\varphi, \gamma, Y, \varphi) = 0, \end{equation} and the Momentum constraint is given by \begin{equation} \label{MomentumConstraint} \mathcal{H}_i \equiv \{\pi_i, \mathcal{H}_{NC}\} = \frac{\delta \mathcal{H}_{NC}}{\delta N^i}= -2 \partial_{n}\left(\gamma_{i m}\pi^{m n}\right) + \pi^{k l} \partial_{i}\gamma_{k l} + \pi_\varphi\partial_i \varphi = 0. \end{equation} Due to diffeomorphism invariance, the total Hamiltonian density can be written as \begin{equation}\label{Hamfull} \mathcal{H}_{NC} = N \mathcal{H}_N + N^i \mathcal{H}_i = 0. \end{equation} Instead of defining $G$, one can define other phase-space variable(s) and express the Hamiltonian in a different form; the possibilities are infinite. However, since $\tilde{G}(X, \gamma, Y, \varphi)$ appears directly in the Hamiltonian, this is the simplest and most effective way to express the Hamiltonian for the non-canonical scalar field. $G$ not only resolves the issue of expressing the Hamiltonian for the non-canonical scalar field, but is also uniquely defined for each non-canonical scalar field. Hence, $G$ carries the signature of the non-canonical scalar field in phase-space. Explicit forms of $G(\pi_\varphi, \gamma, Y, \varphi)$ for different types of scalar fields are given in Appendix \ref{Inversion}.
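For instance, for the canonical case $P = - X - V(\varphi)$, the closed-form inversion quoted above gives \begin{equation} G = P + \left(2X - Y\right) = X - Y - V = - \frac{\pi_\varphi^2}{2 \gamma} - \frac{Y}{2} - V(\varphi), \end{equation} so that the term $-\sqrt{\gamma}\, G$ in the Hamiltonian constraint (\ref{HamitonianConstraint}) reproduces the familiar scalar field contribution $\frac{\pi_\varphi^2}{2 \sqrt{\gamma}} + \frac{\sqrt{\gamma}}{2}\, \gamma^{i j} \partial_{i}{\varphi}\, \partial_{j}{\varphi} + \sqrt{\gamma}\, V(\varphi)$.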
\subsection{Zeroth order}\label{zero} At zeroth order, since $\gamma_{i j} = a^2 \delta_{i j}$ and all quantities are independent of the spatial coordinates, we get \begin{equation} \pi_0^{i j} = \frac{1}{6 a} \pi_a \delta^{i j}, \end{equation} and the Hamiltonian density at zeroth order becomes \begin{equation} \label{zerothH} \mathcal{H}_{0} = - \frac{N_0 \kappa}{12 a} \pi_a^2 - G \,N_0\, a^3. \end{equation} Variation of the Hamiltonian (\ref{zerothH}) with respect to the momenta leads to two equations, given by \begin{eqnarray} \label{Zerotha'} a^\prime &=& - \frac{N_0 \kappa}{6 a} \pi_a \\ \label{Zerothphi'} \varphi_0^\prime &=& - N_0 \,a^3\, G_{ \pi_\varphi}. \end{eqnarray} Variation with respect to $N$ leads to the Hamiltonian constraint, which at zeroth order is given by \begin{equation}\label{ZerothHam} \mathcal{H}_{N0} \equiv - \frac{\kappa}{12 a} \pi_a^2 - G\, a^3 = 0. \end{equation} The equations of motion are obtained by varying the Hamiltonian (\ref{zerothH}) with respect to the field variables. Hence, the equation of motion of $a$ is obtained from the relation \begin{equation}\label{ZerothEoMa} \pi_a^\prime = - \frac{\delta \mathcal{H}_{0}}{\delta a} = - \frac{N_0 \kappa}{12 a^2} \pi_a^2 + 3\, G\, N_0\, a^2 + G_{a}\, N_0\, a^3. \end{equation} Similarly, the equation of motion of $\varphi_0$ can be obtained from \begin{equation}\label{ZerothEoMphi} \pi_{\varphi0}^\prime = N_0\, a^3\,G_{\varphi}. \end{equation} \subsection{First order}\label{first} As we have mentioned in the introduction, there are two ways to obtain the Hamiltonian --- Langlois' approach \cite{Langlois1994}, and the approach used in I \cite{Nandi:2015ogk}. In this work, we use both approaches and explicitly show that it is possible to obtain a consistent Hamiltonian for non-canonical scalar fields. In Appendix \ref{Langlois-nonC}, we extend Langlois' approach to the non-canonical scalar field, and in the rest of this section, we extend I to obtain a consistent Hamiltonian for the non-canonical scalar field.
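Before turning to the perturbations, a quick consistency check of the zeroth order equations above: for the canonical case ($G = -\pi_\varphi^2/(2\gamma) - V(\varphi)$, with $Y = 0$ and $\gamma = a^6$ on the background), equations (\ref{Zerothphi'}) and (\ref{ZerothEoMphi}) give $\varphi_0^\prime = N_0\, \pi_{\varphi0}/a^3$ and $\pi_{\varphi0}^\prime = - N_0\, a^3\, V_\varphi$. Eliminating $\pi_{\varphi0}$ in the conformal-time gauge $N_0 = a$ yields the standard Klein-Gordon equation \begin{equation} \varphi_0^{\prime\prime} + 2 \frac{a^\prime}{a}\, \varphi_0^\prime + a^2 V_\varphi = 0. \end{equation}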
The field variables and their corresponding momenta can be separated into unperturbed and perturbed parts as \begin{eqnarray} \label{perturbfm1} && N = N_0 + \epsilon N_1, ~~~N^{i} = \epsilon N_1^i,~~~\varphi = \varphi_0 + \epsilon \varphi_1 \\ \label{perturbfm2} && \pi^{i j} = {}\pi_0{}^{i j} + \epsilon \pi_1^{i j}, ~~~\pi_\varphi = {}\pi_{\varphi0} + \epsilon \pi_{\varphi1}, \end{eqnarray} and, using the Taylor expansion of the phase-space function $G(\pi_\varphi, \gamma, Y, \varphi)$, the second order perturbed Hamiltonian density is given by \begin{eqnarray} \label{firstH} && \mathcal{H}_{2} = {\delta}_{i j} {\partial}_{k}{{N_1}^{j}}\, ({\pi_1}^{i k} + \pi_1^{k i}) a{}^{2} - N_0 \kappa a\,({\delta}_{i j} {\delta}_{k l} - 2 \delta_{i k} \delta_{j l}){\pi_1}^{i j} {\pi_1}^{k l} - 2\, N_1 \kappa a \,({\delta}_{i j} {\delta}_{k l} - 2 \delta_{i k} \delta_{j l}){\pi_0}^{i j} {\pi_1}^{k l} \nonumber \\ &&~~~~~ - G_\varphi N_1 a{}^{3} \varphi_1 - G_{\pi_\varphi} N_1 \,\pi_{\varphi 1} \,a{}^{3} - \frac{1}{2}\, G_{\varphi\varphi} N_0\, \varphi_1{}^{2} a{}^{3} - \frac{1}{2}\, G_{\pi_\varphi \pi_\varphi} N_0\, \pi_{\varphi 1} {}^{2} a{}^{3} - G_{\varphi \pi_\varphi } N_0 \pi_{\varphi 1} a{}^{3} \varphi_1 \nonumber \\ &&~~~~~- G_Y N_0 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, a + {N_1}^{i} \pi_{\varphi0} {\partial}_{i}{\varphi_1}.\, \end{eqnarray} Note that, as we have pointed out in I \cite{Nandi:2015ogk}, a perturbed momentum corresponding to an unperturbed variable may arise due to the presence of other perturbed phase-space variables; thus $\pi^{ij}_1$ is non-zero and can be obtained by varying the Hamiltonian (\ref{firstH}) with respect to $\pi^{ij}_1$: \begin{equation} \frac{\delta \mathcal{H}_2}{\delta \pi_1^{i j}} = 0 \Rightarrow \pi^{i j}_1 = \frac{a}{2 N_0 \kappa} \delta^{i j} \partial_k N_1^k - \frac{a}{4 N_0 \kappa} \delta^{k i} \partial_k N_1^j - \frac{a}{4 N_0 \kappa} \delta^{k j} \partial_k N_1^i - \frac{N_1}{N_0} \pi_0^{i j}. \end{equation} Varying the perturbed Hamiltonian (\ref{firstH}) with respect to $\pi_{\varphi1}$ leads to the following equation \begin{eqnarray} \varphi_1^\prime &=& - G_{\pi_\varphi} N_1 a^3 - G_{\pi_\varphi \pi_\varphi}N_0\, \pi_{\varphi1} a^3 - G_{\varphi \pi_\varphi} N_0\, \varphi_1 a^3 \\ \Rightarrow \pi_{\varphi1} &=& - \frac{1}{N_0\, a^3\,G_{\pi_\varphi \pi_\varphi} }\left(\varphi_1^\prime + G_{\pi_\varphi} N_1 a^3 + G_{\varphi \pi_\varphi} N_0\, \varphi_1 a^3 \right). \end{eqnarray} The Hamiltonian constraint is obtained by varying the Hamiltonian with respect to the Lapse function; varying (\ref{firstH}) with respect to $N_1$ leads to the first order Hamiltonian constraint, \begin{equation} \label{FirstHamiltonian} -2 \delta_{i j} \delta_{k l} \,\pi_0^{i j}\, \pi_1^{k l} + 4 \delta_{i j} \delta_{k l} \,\pi_0^{i k}\, \pi_1^{j l} - G_\varphi a^2 \varphi_1 - G_{\pi_\varphi} \pi_{\varphi1}\, a^2 = 0.\\ \end{equation} Similarly, by varying the Hamiltonian with respect to $N_1^i$, we get the following Momentum constraint, \begin{equation} \label{FirstMomentum} \pi_{\varphi0} \, \partial_i \varphi_1 - 2 a^2\, \delta_{i j}\,\partial_k \pi_1^{k j}=0. \end{equation} Finally, the equation of motion of $\varphi_1$ is obtained by varying the Hamiltonian with respect to $\varphi_1$, i.e., \begin{equation} \label{FirstEoMvarphi} \pi_{\varphi1}^\prime = a^3 \,G_\varphi N_1 + a^3\,G_{\varphi\varphi}N_0 \varphi_1 + a^3\,G_{\varphi \pi_\varphi} N_0 \pi_{\varphi1} - 2\,a\, G_Y N_0 \nabla^2 \varphi_1 + \pi_{\varphi0}\,\partial_{i}{N_1^{i}}.
\end{equation} Since the perturbed scalar field equation is linear and of wave type, the speed of sound is defined as minus the ratio of the coefficient of $\nabla^2 \varphi_1$ to that of $\varphi_1^{\prime\prime}$; in phase-space, it takes the form \begin{equation} c_s^2 = 2 \,N_0^2\,a^4 \, G_{\pi_\varphi \pi_\varphi}\, G_Y, \end{equation} which, in conformal coordinates ($N_0 = a$), can be expressed as \begin{equation}\label{sonic-conf} c_s^2 = 2 \,a^6 \, G_{\pi_\varphi \pi_\varphi}\, G_Y. \end{equation} The relation between the generalized phase-space derivatives of $G$ ($G_\varphi,~ G_Y,~G_{\varphi \pi_\varphi}$, etc.) and the configuration-space derivatives of $P(X, \varphi)$ ($P,~P_{\varphi},~P_{\varphi X}$, etc.) is not yet known; hence, it is not possible to invert the above Hamilton's equations to the Euler-Lagrange equations and compare the two formalisms. However, for a particular scalar field, the exact form of $G$ is known to us (see Appendix \ref{Inversion}), and hence, for those models it is possible to write down the equations of motion in configuration space and verify that the Hamiltonian formulation of the non-canonical scalar field is consistent. \section{Inversion of non-canonical terms}\label{InversionNC} In the last section, we showed that it is possible to obtain the Hamiltonian for a non-canonical scalar field by defining a new variable $G$ (see eqs. (\ref{G}) and (\ref{Hamfull})). In order to understand the importance of this new function $G$, we ask the following question: \emph{Starting from the Hamiltonian (\ref{zerothH}) and (\ref{firstH}), can we invert the expressions leading to generalized equations in configuration-space?} In this section, we show that this inversion can be established. To invert the equations, one needs to invert coefficients like $G_\varphi,~ G_Y,~G_{\varphi \pi_\varphi}$ from phase-space to configuration-space. Since the form of $G$ in configuration space is known, it is apparent from the equations that only the phase-space derivatives of $G$ need to be inverted, which, in general, is not possible directly. To begin with, let us take a phase-space function $F \equiv F(\pi_\varphi, \gamma, Y, \varphi) = \tilde{F}(X, \gamma, Y, \varphi)$, i.e., \begin{eqnarray} F &=& F(\pi_\varphi, \gamma, Y, \varphi) \nonumber \\ \Rightarrow dF &=& F_{\pi_\varphi} d\pi_\varphi + F_\gamma d\gamma + F_Y dY + F_\varphi d\varphi. \nonumber \end{eqnarray} Note that the tilde is used for configuration-space functions only. The invertibility of the Legendre transformation implies that, if $X = X(\pi_\varphi, \gamma, Y, \varphi)$, then $\pi_\varphi = \pi_\varphi (X, \gamma, Y, \varphi)$, i.e., \begin{eqnarray} && \pi_\varphi = \pi_\varphi ( X, \gamma, Y, \varphi) \nonumber \\ && d\pi_\varphi = \frac{\partial \pi_\varphi}{\partial X} dX + \frac{\partial \pi_\varphi}{\partial \gamma}d\gamma + \frac{\partial \pi_\varphi}{\partial Y} dY + \frac{\partial \pi_\varphi}{\partial \varphi} d \varphi, \nonumber \end{eqnarray} implying that \begin{eqnarray} d\tilde{F}(X, \gamma, Y, \varphi) &=& F_{\pi_\varphi} \frac{\partial \pi_\varphi}{\partial X} dX + \left( F_\gamma + F_{\pi_\varphi} \frac{\partial \pi_\varphi}{\partial \gamma}\right) d\gamma + \left( F_Y + F_{\pi_\varphi}\frac{\partial \pi_\varphi}{\partial Y}\right) dY \nonumber \\ &&+\left( F_\varphi + F_{\pi_\varphi}\frac{\partial \pi_\varphi}{\partial \varphi}\right) d\varphi.
\end{eqnarray} Hence, the relations between the phase-space and configuration-space derivatives are \begin{eqnarray} F_{\pi_\varphi} &=& \frac{\tilde{F}_X}{\frac{\partial \pi_\varphi}{\partial X}}, \\ F_\gamma &=& \tilde{F}_\gamma - F_{\pi_\varphi} \frac{\partial \pi_\varphi}{\partial \gamma}, \\ F_Y &=& \tilde{F}_Y - F_{\pi_\varphi} \frac{\partial \pi_\varphi}{\partial Y},\\ F_\varphi &=& \tilde{F}_\varphi - F_{\pi_\varphi} \frac{\partial \pi_\varphi}{\partial \varphi}. \end{eqnarray} In our case, for an arbitrary non-canonical scalar field, we do not know the exact form of $G(\pi_\varphi, \gamma, Y, \varphi)$; however, we know $\tilde{G}(X, \gamma, Y, \varphi) = P - P_X \left(2X - Y\right)$. Using equation (\ref{piphigeneral}) and the relations established above, we get \begin{eqnarray} &&G_{\pi_\varphi} = - \frac{\sqrt{-2X}}{a^3},~~~G_{\pi_\varphi \pi_\varphi} = \frac{1}{a^6 \left(P_X + 2X P_{XX}\right)},\nonumber \\ &&G_{\pi_\varphi \pi_\varphi \pi_\varphi} = -\frac{\sqrt{-2X} \left(3P_{XX} + 2X P_{XXX}\right)}{a^9 \left(P_X + 2X P_{XX}\right)^3}, \nonumber \\ &&G_\varphi = P_\varphi, ~~~~~G_{\varphi \pi_\varphi} = \frac{\sqrt{-2X}\,P_{X\varphi}}{a^3 \left(P_X + 2X P_{XX}\right)},\nonumber \\ && G_{\varphi \varphi} = P_{\varphi\varphi} - \frac{2X\,P_{X\varphi}^2}{ \left(P_X + 2X P_{XX}\right)}. \end{eqnarray} Using these definitions, it is possible to invert all phase-space quantities to the corresponding ones in configuration-space. To start with, we consider the speed of sound. In conformal coordinates, it is given by equation (\ref{sonic-conf}); using the above relations, we get \begin{equation} c_s^2 = \frac{P_X}{P_X + 2XP_{XX}}, \end{equation} which matches the conventional configuration-space definition of the speed of sound. Similarly, the zeroth order equations (\ref{ZerothHam}), (\ref{ZerothEoMa}) and (\ref{ZerothEoMphi}), after inversion, become \begin{eqnarray} \label{ZeroHamP} && H^2 =- \frac{\kappa}{3} (P_X {\varphi^{\prime}_0}^{2} + P a^2), ~~~ \mbox{conformal Hubble parameter:}~ H \equiv \frac{a^\prime}{a} \\ \label{ZeroEomaP} &&- 2 \frac{a^{\prime \prime}}{a} + H^2 = \kappa P a^2, \\ \label{ZeroEoMphiP} && P_X {\varphi_0^{\prime \prime}} - P_{XX} {\varphi_0^{\prime \prime}} {\varphi_0^{\prime}}^{2} a^{-2} + P_{X\varphi} {\varphi_0^{\prime}}^{2} + 2 P_X {\varphi_0^{\prime}} H + P_{XX} H {\varphi_0^{\prime}}^{3} a^{-2} + P_{\varphi} a^{2} = 0, \end{eqnarray} respectively. At first order, $N_1 = a \phi_1$ and $N^i = \delta^{i j} \partial_j B_1$, which helps to reduce the first order perturbed Hamiltonian constraint (\ref{FirstHamiltonian}) to \begin{eqnarray} && \frac{H}{\kappa} \nabla^2 B_1 + \frac{3\, H^2}{\kappa} \phi_1 + \frac{G^2_{\pi_\varphi}}{2\,G_{\pi_\varphi \pi_\varphi}} \phi_1 a^2 + \frac{G_{\pi_\varphi}}{2\, G_{\pi_\varphi \pi_\varphi} a^2} \varphi_1^\prime + \frac{G_{\pi_\varphi}\,G_{\varphi \pi_\varphi}}{2\, G_{\pi_\varphi \pi_\varphi}} \varphi_1 a^2 \nonumber \\ &&~~~~~~~~ - \frac{G_\varphi}{2} \varphi_1 a^2 = 0, \end{eqnarray} which can be further inverted back to configuration-space, again by using the above relations, as \begin{eqnarray} \label{firstHamP} &&H \nabla^2{B_1} = \frac{\kappa}{2} \Big[ P_X \phi_1 {\varphi_0^{\prime}}^{2} + 2 P a^2 \phi_1 + P_X \varphi_0^{\prime} \varphi_1^{\prime} + P_{XX} \phi_1 {\varphi_0^{\prime}}^{4} a^{-2} - P_{XX} \varphi_1^{\prime} {\varphi_0^{\prime}}^{3} a^{-2} + \nonumber \\ && ~~~~~~~~~~~~~P_{X \varphi} {\varphi_0^{\prime}}^{2} \varphi_1 + P_\varphi \varphi_1 a^2 \Big].
\end{eqnarray} Similarly, the first-order momentum constraint becomes \begin{equation} \label{fisrtMomP} \partial_i\phi_1= - \frac{\kappa}{2 \,H}P_X \varphi_0^{\prime} \partial_i\varphi_1 \end{equation} and the equation of motion of the scalar field $\varphi_1$, i.e., equation (\ref{FirstEoMvarphi}), takes the form \begin{eqnarray} \label{FirstEoMP} &&- P_X {\varphi_1^{\prime \prime}} a^{2} - P_{XX} {\phi_1^{\prime}}{\varphi_0^{\prime}}^{3} + P_{XX} {\varphi_1^{\prime \prime}} {\varphi_0^{\prime}}^{2} - P_{XX\varphi} \phi_1 {\varphi_0^{\prime}}^{4} + P_{XX\varphi} {\varphi_1^{\prime}} {\varphi_0^{\prime}}^{3} -P_{\varphi} \phi_1 a^{4}- \nonumber\\ && P_{\varphi \varphi} a^{4} \varphi_1 + P_X \phi_1 {\varphi_0^{\prime \prime}} a^{2} + P_X \nabla^2{\varphi_1} a^{2} + P_X {\phi_1^{\prime}}{\varphi_0^{\prime}} a^{2} - 2 P_X {\varphi_1^{\prime}} H a^2 - 4 P_{XX} \phi_1 {\varphi_0^{\prime \prime}}{\varphi_0^{\prime}}^{2} +\nonumber\\ && 3 P_{XX} {\varphi_0^{\prime}} {\varphi_1^{\prime}}{\varphi_0^{\prime \prime}} + P_{XX\varphi} {\varphi_0^{\prime \prime}}{\varphi_0^{\prime}}^{2} \varphi_1 - P_{X\varphi} {\varphi_0^{\prime}} {\varphi_1^{\prime}} a^{2} - P_{X\varphi} {\varphi_0^{\prime \prime}} a^{2} \varphi_1 -P_{X\varphi\varphi} {\varphi_0^{\prime}}^{2} a^{2} \varphi_1 + \nonumber\\ && 2 P_X \phi_1 {\varphi_0^{\prime}} H a^2 + P_X {\varphi_0^{\prime}} \nabla^2{B_1} a^{2} + P_{XX} \phi_1 H {\varphi_0^{\prime}}^{3} - P_{XX} {\varphi_1^{\prime}} H {\varphi_0^{\prime}}^{2} - P_{XXX} \phi_1 H {\varphi_0^{\prime}}^{5} a^{-2} +\nonumber \\ && P_{XXX} \phi_1 {\varphi_0^{\prime \prime}}{\varphi_0^{\prime}}^{4} a^{-2} + P_{XXX} {\varphi_1^{\prime}}H{\varphi_0^{\prime}}^{4} a^{-2} - P_{XXX}{\varphi_1^{\prime}}{\varphi_0^{\prime \prime}}{\varphi_0^{\prime}}^{3} a^{-2} - P_{XX\varphi} H {\varphi_0^{\prime}}^{3} \varphi_1 - \nonumber\\ && 2 P_{X\varphi} {\varphi_0^{\prime}} H \varphi_1 a^2 = 0. \end{eqnarray} Equations (\ref{ZeroHamP}), (\ref{ZeroEomaP}), (\ref{ZeroEoMphiP}), (\ref{firstHamP}), (\ref{fisrtMomP}) and (\ref{FirstEoMP}) are consistent with the zeroth and first order perturbed Euler-Lagrange equations of motion. \section{Interaction Hamiltonian}\label{InteractionHam} Higher-order physical observables such as the bispectrum and trispectrum are related to higher-order correlation functions; in order to compute these, we need the higher-order interaction Hamiltonian. In this section, we obtain the interaction Hamiltonian of the non-canonical field.
The third-order perturbed generalized interaction Hamiltonian for the non-canonical scalar field, in terms of phase-space variables, is obtained directly by expanding the Hamiltonian (\ref{Hamfull}) up to third order in perturbations \cite{Nandi:2015ogk}, and it takes the form \begin{eqnarray} \label{ThirdIntHam} \mathcal{H}_3 &=& - N_1 {\delta}_{i j} {\delta}_{k l} \kappa {\pi_1}^{i j} {\pi_1}^{k l} a + 2\, N_1 {\delta}_{i j} {\delta}_{k l} \kappa {\pi_1}^{i k} {\pi_1}^{j l} a - \frac{1}{2}\, G_{\varphi\varphi} N_1 \varphi_1{}^{2} a{}^{3} - \frac{1}{2}\, G_{\pi_\varphi \pi_\varphi} N_1 \pi_{\varphi1}{}^{2} a{}^{3} -\nonumber \\ && G_{\varphi\pi_\varphi} N_1 \pi_{\varphi1} a{}^{3} \varphi_1 - G_Y N_1 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, a - G_{Y \pi_\varphi} N_0 \pi_{\varphi1} {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, a -\nonumber \\ && G_{\varphi Y} N_0 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, \varphi_1 a - \frac{1}{6}\, G_{\pi_\varphi \pi_\varphi \pi_\varphi} N_0 \pi_{\varphi1}{}^{3} a{}^{3} - \frac{1}{6}\, G_{\varphi \varphi \varphi} N_0 \varphi_1{}^{3} a{}^{3} - \nonumber \\ &&\frac{1}{2}\, G_{\varphi \pi_\varphi \pi_\varphi} N_0 \pi_{\varphi1}{}^{2} a{}^{3} \varphi_1 - \frac{1}{2}\, G_{\varphi\varphi\pi_\varphi} N_0 \pi_{\varphi1} \varphi_1{}^{2} a{}^{3} + {N_1}^{i} \pi_{\varphi1} {\partial}_{i}{\varphi_1} \end{eqnarray} and similarly, the fourth-order interaction Hamiltonian takes the form \begin{eqnarray} \label{FourthIntHam} \mathcal{H}_4 &=& - \frac{1}{2}\, G_{YY} N_0 {\delta}^{i j} {\delta}^{k l} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, {\partial}_{k}{\varphi_1}\, {\partial}_{l}{\varphi_1}\, a{}^{-1} - G_{\pi_\varphi Y} N_1 \pi_{\varphi1} {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, a - \nonumber \\ && G_{\varphi Y} N_1 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, \varphi_1 a - \frac{1}{6}\, G_{\pi_\varphi \pi_\varphi \pi_\varphi} N_1 \pi_{\varphi1}{}^{3} a{}^{3} - \frac{1}{6}\, G_{\varphi\varphi\varphi} N_1 \varphi_1{}^{3} a{}^{3} - \nonumber \\ &&\frac{1}{2}\, G_{\varphi \pi_\varphi \pi_\varphi} N_1 \pi_{\varphi1}{}^{2} a{}^{3} \varphi_1 - \frac{1}{2}\, G_{\pi_\varphi \pi_\varphi Y} N_0 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, \pi_{\varphi1}{}^{2} a - \frac{1}{2}\, G_{\varphi\varphi \pi_\varphi} N_1 \pi_{\varphi1} \varphi_1{}^{2} a{}^{3} - \nonumber \\ &&\frac{1}{2}\, G_{\varphi\varphi Y} N_0 {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, \varphi_1{}^{2} a - G_{\varphi \pi_\varphi Y} N_0 \pi_{\varphi1} {\delta}^{i j} {\partial}_{i}{\varphi_1}\, {\partial}_{j}{\varphi_1}\, \varphi_1 a - \nonumber \\ &&\frac{1}{24}\, G_{\pi_\varphi \pi_\varphi \pi_\varphi \pi_\varphi} N_0 \pi_{\varphi1}{}^{4} a{}^{3} - \frac{1}{24}\, G_{\varphi\varphi\varphi\varphi} N_0 \varphi_1{}^{4} a{}^{3} - \frac{1}{6}\, G_{\varphi\pi_\varphi\pi_\varphi\pi_\varphi} N_0 \pi_{\varphi1}{}^{3} a{}^{3} \varphi_1 - \nonumber \\ &&\frac{1}{6}\, G_{\varphi\varphi\varphi\pi_\varphi} N_0 \pi_{\varphi1} \varphi_1{}^{3} a{}^{3} - \frac{1}{4}\, G_{\varphi\varphi\pi_\varphi\pi_\varphi} N_0 \pi_{\varphi1}{}^{2} \varphi_1{}^{2} a{}^{3}. \end{eqnarray} Again, using the inversion formulae of the previous section, the phase-space form of the interaction Hamiltonian can be written in terms of configuration-space variables.
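The inversion formulae used above can also be checked symbolically on the background. The following \texttt{sympy} snippet is a minimal sketch, not part of the formalism itself: it assumes, for illustration, the background form $\pi_\varphi = -a^3\sqrt{-2X}\,P_X$ (an assumption consistent with $G_{\pi_\varphi} = -\sqrt{-2X}/a^3$; the general relation is eq.~(\ref{piphigeneral}), which is not reproduced here) and evaluates $\tilde{G} = P - P_X(2X - Y)$ at $Y = 0$. With these assumptions it reproduces the expressions for $G_{\pi_\varphi}$, $G_{\pi_\varphi \pi_\varphi}$, $G_{\pi_\varphi \pi_\varphi \pi_\varphi}$ and $G_\varphi$ quoted in the previous section.
\begin{verbatim}
import sympy as sp

a   = sp.Symbol('a', positive=True)
X   = sp.Symbol('X', negative=True)   # X < 0, so sqrt(-2X) is real
phi = sp.Symbol('varphi')
P   = sp.Function('P')(X, phi)
PX  = sp.diff(P, X)

# Assumed background momentum (consistent with G_pi = -sqrt(-2X)/a^3):
pi     = -a**3 * sp.sqrt(-2*X) * PX
dpi_dX = sp.diff(pi, X)

# tilde-G evaluated on the background (Y -> 0):
Gt = P - PX*2*X

def d_dpi(F):
    # phase-space derivative via the chain rule: F_pi = F_X/(dpi/dX)
    return sp.simplify(sp.diff(F, X) / dpi_dX)

G_pi     = d_dpi(Gt)
G_pipi   = d_dpi(G_pi)
G_pipipi = d_dpi(G_pipi)
G_phi    = sp.simplify(sp.diff(Gt, phi) - G_pi*sp.diff(pi, phi))

PXX, PXXX = sp.diff(P, X, 2), sp.diff(P, X, 3)
print(sp.simplify(G_pi + sp.sqrt(-2*X)/a**3))          # expect 0
print(sp.simplify(G_pipi - 1/(a**6*(PX + 2*X*PXX))))   # expect 0
print(sp.simplify(G_pipipi + sp.sqrt(-2*X)*(3*PXX + 2*X*PXXX)
                  /(a**9*(PX + 2*X*PXX)**3)))          # expect 0
print(sp.simplify(G_phi - sp.diff(P, phi)))            # expect 0
\end{verbatim}
Each printed difference should simplify to zero, confirming that the chain-rule inversion reproduces the tabulated phase-space derivatives for this assumed form of $\pi_\varphi$.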
\section{Extension to generalized higher-derivative models}\label{ExGal} As we have shown above, for an arbitrary non-canonical scalar field it is possible to define the canonical conjugate momentum and the Hamiltonian. In this section, we extend our method to generalized higher-derivative models. First, we extend the analysis to the G-inflation model \cite{Kobayashi2010, Kobayashi2011} with generalized functions $P(X, \varphi)$ and $K(X, \varphi)$. The action for a G-inflation scalar field minimally coupled to gravity is given by \begin{equation}\label{galaction} \mathcal{S}_G = \int d^4 x \sqrt{-g} \left[\frac{1}{2 \kappa} R + P(X, \varphi) + K(X, \varphi) \Box \varphi \right]. \end{equation} Directly obtaining the Hamiltonian for the above action is difficult since it contains second-order derivatives of the scalar field. However, using the approach of Deffayet \emph{et al.} \cite{Deffayet:2015qwa, Nandi:2015ogk}, action (\ref{galaction}) can be re-written as \begin{eqnarray}\label{GalileanAction} \mathcal{S}_G &=& \int d^4 x \sqrt{-g} \left[\frac{1}{2 \kappa} R + P(X, \varphi) + K(X, \varphi)\, S \right] + \int d^4 x \,\lambda\, (S - \Box \varphi) \nonumber \\ &=& \int d^4 x \sqrt{-g} \left[\frac{1}{2 \kappa} R + P(X, \varphi) + K(X, \varphi)\, S \right]+ \int d^4x [\, \lambda S + \lambda~ g^{\mu \nu}~\Gamma^{\alpha}_{\mu \nu} ~\partial_{\alpha}{\varphi} + \nonumber \\ && ~~~~~~~~g^{\mu \nu} \partial_{\mu}{\varphi} \partial_{\nu}\lambda + \lambda \, \partial_{\nu}{g^{\mu \nu}} \partial_{\mu}{\varphi}\,]. \end{eqnarray} Linearizing the action costs two extra variables in configuration-space, and thus four extra phase-space variables. We discussed this issue in I \cite{Nandi:2015ogk} and proved that those variables are not dynamical, so no extra degrees of freedom are introduced. Since the action (\ref{GalileanAction}) now contains only first derivatives of the fields, it is possible to define momenta in terms of the time derivatives of the fields. However, the action still contains two generalized configuration-space functions, $P(X, \varphi)$ and $K(X, \varphi)$. Hence, using the above approach for the generalized non-canonical scalar field, a consistent perturbed Hamiltonian formalism for the generalized Galilean scalar field can be established. The approach can also be extended to other higher-derivative models such as Horndeski scalar field models, modified gravity models, or an arbitrary higher-derivative theory. The above case corresponds to the quadratic and cubic parts of Horndeski's scalar field model \cite{Horndeski1974}. In the case of the general Horndeski scalar field model, the action depends on $R_{\mu \nu}, \nabla_{\mu \nu}\varphi, \partial_{\mu}{\varphi}, g_{\mu \nu}$ and $\varphi$. The metric part can be written in terms of the extrinsic curvature tensor $K_{i j}$ and the 3-Ricci scalar $^{(3)}R$, along with the lapse function $N$ and the shift vector $N^i$. Since the action contains $\nabla_{\mu \nu} \varphi$ instead of only $\Box \varphi$, the above method of linearizing the action will not work. Instead, we have to linearize the action by replacing it with $$ S_H + \int d^4x~\lambda^{\mu \nu}\left(S_{\mu \nu} - \nabla_{\mu \nu}\varphi\right) $$ for the general Horndeski model \cite{Deffayet:2015qwa}. Horndeski's full action contains four unknown functions, $G_n(X, \varphi),\, n=2\cdots5$; hence, using the approach for the non-canonical scalar field, we can also deal with the Hamiltonian formulation of Horndeski's theory.
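To make the statement that the auxiliary variables are non-dynamical explicit in the G-inflation case, note that varying the first line of (\ref{GalileanAction}) with respect to $\lambda$ and $S$ gives purely algebraic equations, $$ \frac{\delta \mathcal{S}_G}{\delta \lambda} = 0 \;\Rightarrow\; S = \Box \varphi, \qquad \frac{\delta \mathcal{S}_G}{\delta S} = 0 \;\Rightarrow\; \lambda = - \sqrt{-g}\, K(X, \varphi), $$ so that substituting $S = \Box\varphi$ back reproduces the original action (\ref{galaction}); the extra variables are determined algebraically and carry no new degrees of freedom. The same mechanism operates for the $\lambda^{\mu \nu}\left(S_{\mu \nu} - \nabla_{\mu \nu}\varphi\right)$ term in the Horndeski case.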
By the same argument and method, it is possible to obtain a Hamiltonian beyond Horndeski's model, i.e., for any higher-order derivative gravity model with arbitrary functions. To deal with the arbitrary functions, using the above approach for generalized non-canonical scalar fields, we can define new phase-space variable(s), write down the corresponding canonical Hamiltonian of the system, and use the inversion formulae to pass from phase-space variables to configuration-space variables and vice-versa. Once the Hamiltonian is obtained, we can use the approach in I to obtain a consistent Hamiltonian formulation of cosmological perturbations at any perturbative order for the specific model. The Hamiltonian approach in I is independent of how we construct the Hamiltonian and is readily applicable once we successfully write down a consistent Hamiltonian for a specific model. Hence, the Hamiltonian approach for higher-derivative theories is not restricted to Deffayet's approach \cite{Deffayet:2015qwa}. Recently, Langlois and Noui \cite{Langlois:2015cwa, Langlois:2015skt} have also provided a simpler way to obtain the Hamiltonian for higher-derivative theories, and the Hamiltonian approach for perturbations can be extended to these models as well. \section{Conclusion and discussion}\label{Conclu} In this work, we have explicitly provided the Hamiltonian formulation of cosmological perturbation theory for generalized non-canonical scalar fields. The following procedure was adopted: first, we provided the essential information regarding gauge choices and the related gauge-invariant quantities. Next, we performed the Legendre transformation for generalized non-canonical scalar fields and showed that, since the ($\varphi^\prime \rightarrow \pi_\varphi$) relation cannot, in general, be inverted, the Hamiltonian for generalized non-canonical scalar fields cannot be obtained by the conventional method. We introduced a new generalized phase-space variable $G(\pi_\varphi, \gamma, Y, \varphi)$, unique for each non-canonical scalar field, and obtained the Hamiltonian of a non-canonical scalar field. We showed that this is the simplest and most efficient way to obtain the Hamiltonian. We extended the approach in I to generalized non-canonical scalar fields in the flat-slicing gauge, which does not lead to gauge artifacts, and obtained the perturbed Hamilton's equations in terms of phase-space variables. In parallel, we also extended Langlois' approach to the generalized non-canonical scalar field and showed that both approaches lead to an identical speed of sound. In order to compare the Hamiltonian approach with the Lagrangian approach, Hamilton's equations have to be converted to Euler-Lagrange equations; in doing so, we provided explicit forms of $G(\pi_\varphi, \gamma, Y, \varphi)$ for different non-canonical scalar field models and showed that the Hamiltonian formulation is consistent. Since we do not know how, in general, the phase-space derivatives of $G(\pi_\varphi, \gamma, Y, \varphi)$ transform to configuration-space derivatives, for an arbitrary field it is not possible to directly invert the generalized phase-space Hamilton's equations to Euler-Lagrange equations. In order to overcome this, we prescribed an inversion mechanism from generalized phase-space variables to generalized configuration-space variables (and vice versa) and showed that all generalized phase-space equations lead to consistent Euler-Lagrange equations. We also retrieved the conventional form of the speed of sound in configuration-space.
We also obtained the interaction Hamiltonian in terms of phase-space variables for the generalized non-canonical scalar field at third and fourth order in scalar perturbations. These can also be expressed in terms of $(\varphi^\prime,~ \varphi)$ using the general inversion formulae. Note that we considered only first-order scalar perturbations. Vector and tensor modes can similarly be incorporated by considering $\delta\gamma_{i j} \neq 0$ and decomposing the metric into vector and tensor modes; the Hamiltonian as well as the equations of motion for vector and tensor modes change accordingly. At linear order, the three types of modes decouple, and $\delta\pi^{i j}$ can also be decomposed as ${\delta\pi_{S}^{i j} + \delta\pi_{V}^{ij} + \delta\pi_{T}^{i j}}$, and so can the equations of motion. However, at higher orders in perturbation theory, the modes are highly coupled to one another, and a similar decomposition is not possible. Finally, we briefly discussed the Hamiltonian formulation for generalized higher-derivative scalar fields. The method is not restricted to gravity-related models; it can also be applied to any other model whose Lagrangian is not specified explicitly. Throughout the work, we assumed that the field admits a Legendre transformation, which most known models do. However, if a model is specified in such a way that $\varphi^\prime$ cannot be written in terms of $\pi_\varphi$, or the mapping is one-to-many, then the current formalism cannot be applied to obtain a unique form of $G(\pi_\varphi, \gamma, Y, \varphi)$, and for such models this approach is not applicable. \section{Acknowledgements} We thank Swastik Bhattacharya for useful discussions. This work is supported by the Max Planck-India Partner Group on Gravity and Cosmology. DN is supported by a CSIR fellowship. We also thank Kasper Peeters for his useful program Cadabra \cite{Peeters:2007wn,DBLP:journals/corr/abs-cs-0608005}, which was used for the algebraic calculations.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Brauer-Thrall Conjectures first appeared in a 1957 paper by Thrall's student J.\ P.\ Jans~\cite{Jans}. They say, roughly speaking, that if a finite-dimensional algebra $A$ over a field $\mathsf{k}$ has infinite representation type, then $A$ has lots of big indecomposable finitely generated modules. Recall that $A$ has \emph{finite representation type} provided there are only finitely many indecomposable finitely generated $A$-modules up to isomorphism, \emph{bounded representation type} provided there is a bound on the $\mathsf{k}$-dimensions of the indecomposable finitely generated $A$-modules, and \emph{strongly unbounded representation type} provided there is an infinite sequence $n_1<n_2<\cdots$ of positive integers such that $A$ has, for each $i$, infinitely many non-isomorphic indecomposable modules of $\mathsf{k}$-dimension $n_i$. Here are the conjectures: \begin{conj}[First Brauer-Thrall Conjecture (BT1)]\label{conj:BT1} If $A$ has bounded representation type, then $A$ has finite representation type. \end{conj} \begin{conj}[Second Brauer-Thrall Conjecture (BT2)]\label{conj:BT2} If $A$ has unbounded representation type and $\mathsf{k}$ is infinite, then $A$ has strongly unbounded representation type. \end{conj} Under mild hypotheses, both conjectures are now theorems. Ro\u\i ter~\cite{Roiter:1968} verified (BT1), and Nazarova and Ro\u\i ter \cite{Nazarova-Roiter:1973} proved (BT2) for perfect fields $\mathsf{k}$. See~\cite{Ringel:Report} or \cite{Gustafson:1982} for some history on these results. When we move from Artinian rings to local rings $(R,\mathfrak{m},\mathsf{k})$ of positive dimension, the first thing we need to do is to decide on the right class of modules. If $R$ is not a principal ideal ring, constructions going back to Kronecker \cite{Kronecker:1874} and Weierstra\ss\ \cite{Weierstrass:1868} show that $R$ has indecomposable modules requiring arbitrarily many generators. Moreover, if $\mathsf{k}$ is infinite, then for every $n$ there are $|\mathsf{k}|$ non-isomorphic indecomposable modules each of which requires exactly $n$ generators. (See \cite[Theorem 3.3 and Exercise 3.25]{BOOK}.) Thus imposing finiteness or boundedness conditions on the class of \emph{all} modules does not lead to anything interesting. Restricting to torsion-free modules yields a more robust theory, at least in dimension one. In the 1960's Jacobinski \cite{Jacobinski:1967} and, independently, Drozd and Ro\u\i ter \cite{Drozd-Roiter} studied orders in algebraic number fields and, more generally, rings essentially module-finite over the ring of integers, and classified the rings having only finitely many indecomposable finitely generated torsion-free modules up to isomorphism. In dimensions greater than one, there are just too many torsion-free modules. Indeed, Bass \cite{Bass:1962} proved in 1962 that every local domain of dimension two or more has indecomposable finitely generated torsion-free modules of arbitrarily large rank. The maximal Cohen-Macaulay (MCM) property, a higher-dimensional form of torsion-freeness, turns out to give a fruitful class of modules to study. The equality of a geometric invariant (dimension) with an arithmetic one (depth) makes MCM modules easy to work with, simultaneously ensuring that in some sense they faithfully reflect the structure of the ring. For example, a Cohen-Macaulay local ring has no non-free MCM modules if and only if it is a regular local ring, so the rings that are the simplest homologically are also simple in this sense. 
Imposing finiteness or boundedness conditions on MCM modules over a Cohen-Macaulay local ring leads to classes of rings that are large enough to include interesting examples, but small enough to study effectively. The seminal work of Herzog \cite{Herzog:1978}, Artin and Verdier \cite{Artin-Verdier:1985}, Auslander \cite{Auslander:rationalsing}, and Buchweitz, Greuel, Kn\"orrer, and Schreyer \cite{BGS, Knorrer} supports this assertion. For example, the main result of \cite{BGS, Knorrer} is that a complete equicharacteristic hypersurface singularity over an algebraically closed field of characteristic zero has only finitely many indecomposable MCM modules up to isomorphism if and only if it is a simple singularity in the sense of V.\ I.~Arnol$'$d, that is, one of the (A$_n$), (D$_n$), (E$_6$), (E$_7$), or (E$_8$) hypersurface singularities. Next, we have to decide what invariant should be used to measure the size of a finitely generated module $M$. Two obvious choices are $\mu_R(M)$, the minimal number of generators required for $M$, and $\e_R(M)$, the multiplicity of $M$. We choose multiplicity. \begin{dfn}\label{dfn:bdd} Let $(R,\mathfrak{m},\mathsf{k})$ be a local ring. \begin{enumerate}[label=(\roman{*})] \item\label{item:finite} $R$ has \emph{finite} CM type provided $R$ has, up to isomorphism, only finitely many indecomposable MCM modules. \item\label{item:bdd} $R$ has \emph{bounded} CM type provided there is a bound on the multiplicities of the indecomposable MCM $R$-modules. \item\label{item:str-unbdd}$R$ has \emph{strongly unbounded} CM type provided there is an increasing sequence $n_1<n_2<\cdots$ of positive integers such that, for every $i$, there are infinitely many indecomposable MCM modules of multiplicity $n_i$. \end{enumerate} \end{dfn} Here, then, are the Brauer-Thrall Conjectures for MCM modules: \begin{conj}[First Brauer-Thrall Conjecture for MCM modules (BTM1)]\label{conj:BTM1} If a local ring $(R,\mathfrak{m},\mathsf{k})$ has bounded CM type, then $R$ has finite CM type. \end{conj} \begin{conj}[Second Brauer-Thrall Conjecture for MCM modules (BTM2)]\label{conj:BTM2} If a local ring $(R,\mathfrak{m},\mathsf{k})$ has unbounded CM type and $\mathsf{k}$ is infinite, then $R$ has strongly unbounded CM type. \end{conj} For MCM modules, multiplicity and number of generators enjoy a linear relationship: \begin{equation}\label{eq:mu-mult} \mu_R(M)\le \e_R(M) \le \e(R)\cdot\mu_R(M)\,, \end{equation} for every MCM $R$-module $M$. (See \cite[Corollary A.24]{BOOK} for a proof of the first inequality.) It follows that we could replace multiplicity by number of generators in Definition~\ref{dfn:bdd} without changing the class of rings satisfying bounded (respectively, strongly unbounded) CM type. In fact, Conjecture~\ref{conj:BTM1} is false, the designation ``conjecture'' being merely a convenient nod to history. The first counterexample was given by Dieterich in 1980 \cite{Dieterich:1980}. Let $\mathsf{k}$ be a field of characteristic two, let $A = \mathsf{k}[\![x]\!]$, and let $G$ be the two-element group. Then $AG$ has bounded but infinite CM type. Of course $AG$ is isomorphic to $\mathsf{k}[\![x,y]\!]/(y^2)$, which, as we will see in the next section, has bounded but infinite CM type for \emph{any} field $\mathsf{k}$. \begin{conv-not} Throughout, $R$ will be a local ring (always assumed to be commutative and Noetherian). The notation $(R,\mathfrak{m},\mathsf{k})$ indicates that $\mathfrak{m}$ is the maximal ideal of $R$ and that $\mathsf{k}$ is the residue field $R/\mathfrak{m}$.
All modules are assumed to be finitely generated. The $\mathfrak{m}$-adic completion of $R$ is $\widehat R$, and the integral closure of $R$ in its total quotient ring $K:=\{\text{non-zerodivisors}\}^{-1}R$ is $\overline R$. A module $M$ is \emph{maximal Cohen-Macaulay} (abbreviated ``MCM'') provided $\depth(M) = \dim(R)$. We will denote the multiplicity $\e(\mathfrak{m},M)$ of the maximal ideal on $M$ simply by $\e_R(M)$, and we write $\e(R)$ instead of $\e_R(R)$. (See \cite[Chapter 14]{Matsumura}.) The modifier ``Cohen-Macaulay'', when applied to the ring $R$, will often be abbreviated ``CM''. Our standard reference for matters commutative-algebraic will be~\cite{Matsumura}, and for representation theory we refer to~\cite{BOOK} or~\cite{Yoshino:book}. \end{conv-not} \section{Dimension One}\label{sec:BTM-dim1} Before getting started, let's observe that both conjectures are true for local Artinian rings. In this case all finitely generated modules are MCM modules. If $(R,\mathfrak{m})$ is an Artinian principal ideal ring with $\mathfrak{m}^t=0$, the indecomposable modules are $R/\mathfrak{m}^i$, $1\le i \le t$. We have already observed that if $R$ is not a principal ideal ring, then there exist, for each $n\ge1$, indecomposable modules requiring exactly $n$ generators, and, if $\mathsf{k}$ is infinite, $|\mathsf{k}|$ of them. Now, on to dimension one! We recall the characterization of one-dimensional rings of finite CM type: \begin{thm}\label{thm:frt-dim1} Let $(R,\mathfrak{m},\mathsf{k})$ be a Cohen-Macaulay local ring of dimension one. Then $R$ has finite CM type if and only if \begin{enumerate}[label=(\roman{*})] \item $R$ is reduced, \item $\mu_R(\overline R) \le 3$, and \item $\frac{\mathfrak{m} \overline R + R}{R}$ is cyclic as an $R$-module. \end{enumerate} \end{thm} Items (i) and (ii) are equivalent to the condition that $\widehat R$ is reduced and $\e(R) \le 3$. Conditions (ii) and (iii) are often called the ``Drozd-Ro\u\i ter conditions'' \cite{CWW} to recognize the 1966 paper \cite{Drozd-Roiter} where they first appeared and were shown to characterize the rings of finite CM type among local rings essentially module-finite over $\mathbb{Z}$. The work of Drozd and Ro\u\i ter was clarified considerably in 1978 by Green and Reiner \cite{Green-Reiner}, who used explicit matrix reductions to verify finiteness of CM type in the presence of the Drozd-Ro\u\i ter conditions. In 1989 R.~Wiegand~\cite{Wiegand:1989} adapted constructions in \cite{Drozd-Roiter} to prove the ``only if'' direction in general. A separable base-change argument in \cite{Wiegand:1989} and the matrix decompositions of Green and Reiner verified the ``if'' direction, except in the case of an imperfect residue field of characteristic two or three. In \cite{Wiegand:1994} Wiegand took care of the case of characteristic three. Finally, \c Cimen, in his Ph.D.\ dissertation \cite{Cimen:thesis}, completed the proof of Theorem~\ref{thm:frt-dim1} via difficult matrix reductions. (Cf.\ \cite{Cimen:paper}.) Although we won't say much about non-CM rings, we record the following result from \cite{Wiegand:1994}, which, together with Theorem~\ref{thm:frt-dim1}, characterizes the one-dimensional local rings with finite CM type: \begin{thm}\label{non-CM}Let $(R,\mathfrak{m},\mathsf{k})$ be a one-dimensional local ring, not necessarily Cohen-Macaulay, and let $N$ be the nilradical of $R$. 
Then $R$ has finite CM type if and only if \begin{enumerate}[label=(\roman{*})] \item $R/N$ (which \emph{is} CM) has finite CM type, and \item $N\cap \mathfrak{m} ^n = 0$ for some positive integer $n$. \end{enumerate} \end{thm} The proof of the ``only if'' direction in Theorem~\ref{thm:frt-dim1} (necessity of the Drozd-Ro\u\i ter conditions) in \cite{Wiegand:1989} actually shows more and confirms BTM1 in the analytically unramified case. We will say that a finitely generated module $M$ over a CM local ring $R$ has \emph{constant rank} $n$ provided $K\otimes_RM\cong K^{(n)}$, where $K$ is the total quotient ring. Equivalently, $M_\mathfrak{p}$ is a free $R_\mathfrak{p}$-module of rank $n$ for every minimal prime ideal $\mathfrak{p}$ of $R$. In this case $\e(M) = n\e(R)$. \begin{thm}[BTM1 when $\widehat R$ is reduced, \cite{Wiegand:1989}]\label{thm:BTM1-red} Let $(R, \mathfrak{m},\mathsf{k})$ be a one-dimensional local ring with reduced completion. If $R$ has infinite CM type then for every $n$ there is an indecomposable MCM $R$-module of constant rank $n$. In particular, $R$ has unbounded CM type. \end{thm} We have already seen that BTM1 can fail if there are nilpotents. We showed in 2005 \cite[Theorem 2.4]{Leuschke-Wiegand:bcmt} that there are essentially only three counterexamples to BTM1 in dimension one. Recall that the complete (A$_\infty$) and (D$_\infty$) curve singularities are, respectively, the rings $\mathsf{k}[\![x,y]\!]/(y^2)$ and $\mathsf{k}[\![x,y]\!]/(xy^2)$. They arise as the respective limits of the (A$ _n$) singularities $\mathsf{k}[\![x,y]\!]/(y^2+x^{n+1})$ and the (D$_n$) singularities $\mathsf{k}[\![x,y]\!]/(xy^2+x^{n-1})$ as $n\to\infty$. \begin{thm}[Failure of BTM1, \cite{Leuschke-Wiegand:bcmt}]\label{thm:BTM1-fails} Let $(R,\mathfrak{m},\mathsf{k})$ be an equicharacteristic, one-dimensional, Cohen-Macaulay local ring, with $\mathsf{k}$ infinite. Then $R$ has bounded but infinite CM type if and only if the completion $\widehat R$ is isomorphic to one of the following: \begin{enumerate}[label=(\roman{*})] \item $\mathsf{k}[\![x,y]\!]/(y^2)$, the (A$_\infty$) singularity; \item $T:= \mathsf{k}[\![x,y]\!]/(xy^2)$, the (D$_\infty$) singularity; \item $E :=\End_T(\mathfrak{m}_T)$, the endomorphism ring of the maximal ideal of $T$. \end{enumerate} The ring $E$ has a presentation $E\cong \mathsf{k}[\![X,Y,Z]\!]/(XY,YZ,Z^2)$. \end{thm} The assumption that $\mathsf{k}$ be infinite is annoying. It's tempting to try to eliminate this assumption via the flat local homomorphism $R \to S:=R[z]_{\mathfrak{m} R[z]}$, where $z$ is an indeterminate. The problem would be to show that if $R$ has unbounded CM type then so has $S$. While it is rather easy to show that finite CM type descends along flat local homomorphisms (as long as the closed fiber is CM) \cite[Theorem 1.6]{Wiegand:1998}, it's not known (at least to us) whether an analogous result holds for descent of \emph{bounded} CM type. In fact, it is not even known, in higher dimensions, whether bounded CM type descends from the completion. Such descent was a crucial part of the proof of Theorem~\ref{thm:BTM1-fails}, but the proof of descent was based not on abstract considerations, but on the precise presentations, in \cite{BGS}, of the indecomposable $\widehat R$-modules in each of the three cases. Using these presentations, we were able to say exactly which MCM $\widehat R$-modules are extended from $R$-modules, and thereby deduce that $R$ itself has bounded CM type. 
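To see how bounded but infinite CM type arises concretely, consider case (i) of Theorem~\ref{thm:BTM1-fails}. Following the presentations in \cite{BGS}, the ideals \[ I_n = (y,\, x^n) \subseteq \mathsf{k}[\![x,y]\!]/(y^2)\,, \qquad n\ge 1\,, \] form an infinite family of pairwise non-isomorphic indecomposable MCM modules, each of multiplicity at most two; thus the (A$_\infty$) singularity has bounded but infinite CM type.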
Part of the difficulty in proving a general statement of this form is that there may be no uniform bound on the number of indecomposable MCM $\widehat R$-modules required to decompose the completion of an indecomposable MCM $R$-module (see~\cite[Example 17.11]{BOOK}). \medskip At this point we have shown, for CM local rings of dimension one, that BTM1 holds in the analytically unramified case but fails (just a little bit) in general. We turn now to BTM2 for CM local rings of dimension one. \begin{thm}\label{BTM2-dim1}Let $(R,\mathfrak{m},\mathsf{k})$ be a one-dimensional local Cohen-Macaulay ring with unbounded CM type, and with $\mathsf{k}$ infinite. Assume either \begin{enumerate}[label=(\roman{*})] \item $\widehat R$ is reduced, or \item $R$ contains a field. \end{enumerate} Then, for each positive integer $n$, $R$ has $|\mathsf{k}|$ pairwise non-isomorphic indecomposable MCM modules of constant rank $n$. In particular, BTM2 holds for one-dimensional CM local rings that satisfy either (i) or (ii). \end{thm} Karr and R.~Wiegand \cite[Theorem 1.4]{Karr-Wiegand:BT2} proved this in the analytically unramified case (i). Later Leuschke and Wiegand modified that proof, using ideas from \cite{Leuschke-Wiegand:hyperbrt} and \cite{Leuschke-Wiegand:bcmt}, to prove the result in the equicharacteristic case (ii). See \cite[Theorem 17.10]{BOOK}. The rest of this section is devoted to a sketch of the main ideas of the proof of Theorem~\ref{BTM2-dim1}. Assume, for the rest of this section, that $(R,\mathfrak{m},\mathsf{k})$ is a one-dimensional CM local ring satisfying the hypotheses of Theorem~\ref{BTM2-dim1}. In particular, $\mathsf{k}$ is infinite and $R$ has unbounded CM type. The first step, proved by Bass \cite{Bass:ubiquity} in the analytically unramified case, appears as Theorem 2.1 of \cite{Leuschke-Wiegand:hyperbrt}: \begin{lem}\label{lem:mult2}Suppose $\e(R) \le 2$. Then every indecomposable MCM $R$-module is isomorphic to an ideal of $R$ and hence has multiplicity at most two. \end{lem} Thus we may assume that $\e(R)\ge 3$, and in this case $R$ has a finite birational extension $S$ (an intermediate ring between $R$ and its total quotient ring $K$ such that $S$ is finitely generated as an $R$-module) with $\mu_R(S) = \e(R)$. Although we will need to choose $S$ with some care, we note here that $S := \bigcup_{n\ge 1}\End_R(\mathfrak{m}^n)$ has the right number of generators. (See \cite[Lemma 2.6]{Leuschke-Wiegand:hyperbrt}.) In the analytically unramified case, one typically takes $S = \overline R$. (Notice that none of this works if $R$ is not CM, since $R=K$ in that case!) Let $\mathfrak{f}$ be the conductor, that is, the largest ideal of $S$ that is contained in $R$. Putting $A=R/\mathfrak{f}$, $B=S/\mathfrak{f}$, and $D=B/\mathfrak{m} B$, we obtain a commutative diagram \begin{equation}\label{cond-sq} \begin{gathered} \xymatrix{ R \ar[r] \ar[d] & S \ar[d] \\ A \ar[r] \ar[d] & B \ar[d] \\ \mathsf{k} \ar[r] & D } \end{gathered} \end{equation} in which the top square is a pullback and $D$ is a $\mathsf{k}$-algebra of dimension $\e(R)$. Now let $n$ be a fixed positive integer, and let $t\in \mathsf{k}$. We wish to build a family, parametrized by $t$, of indecomposable MCM $R$-modules of constant rank $n$. The following construction \cite[Construction 2.5]{Wiegand:1989}, \cite[Construction 3.13]{BOOK} is based on work of Drozd and Ro\u\i ter \cite{Drozd-Roiter}. 
Let $I$ be the $n\times n$ identity matrix and $H$ be the nilpotent $n\times n$ Jordan block with $1$'s on the superdiagonal and $0$'s elsewhere. Let $\alpha$ and $\beta$ be elements of $D$ such that $\{1,\alpha,\beta\}$ is linearly independent over $\mathsf{k}$. (Eventually we will have to impose additional restrictions on $\alpha$ and $\beta$.) Let $V_t$ be the $\mathsf{k}$-subspace of $D^{(n)}$ spanned by the columns of the $n \times 2n$ matrix \begin{equation}\label{eq:Psi-def} \Psi_t:= \left[I \quad \alpha I + \beta(tI+H)\right]\,. \end{equation} Let $\pi\colon S^{(n)} \twoheadrightarrow D^{(n)}$ be the canonical surjection, and define $M_t$ by the following pullback diagram. \begin{equation}\label{CD:M_t} \begin{gathered} \xymatrix{ M_t \ar[r] \ar[d] & S^{(n)} \ar[d]^\pi \\ V_t \ar@{^{(}->}[r] & D^{(n)} } \end{gathered} \end{equation} Then $M_t$ is an MCM $R$-module of constant rank $n$, and it is indecomposable if the \emph{pair} $V_t\subseteq D^{(n)}$ is indecomposable in the following sense: There is no idempotent endomorphism $\varepsilon$ of $D^{(n)}$, other than $0$ and the identity, such that $\varepsilon(V_t)\subseteq V_t$. Moreover, if $t,u\in \mathsf{k}$ and $M_t\cong M_u$, then the pairs $(V_t\subseteq D^{(n)})$ and $(V_u\subseteq D^{(n)})$ are isomorphic, in the sense that there is an automorphism $\varphi$ of $D^{(n)}$ such that $\varphi(V_t)\subseteq V_u$. Our goal, then, is to choose $\alpha$ and $\beta$ so that we get $|\mathsf{k}|$ non-isomorphic indecomposable pairs $(V_t\subseteq D^{(n)})$. Suppose first that $\e(R) = 3$. We need to choose a finite birational extension $R\subset S$ such that \begin{equation}\label{eq:mult-3} \mu_R(S)=3 \quad \text{and} \quad \mu_R\left(\frac{\mathfrak{m} S + R}{R}\right) \ge 2\,. \end{equation} If $R$ is analytically unramified, the assumption that $R$ has unbounded (hence infinite) CM type implies failure of the second Drozd-Ro\u\i ter condition (iii) in Theorem~\ref{thm:frt-dim1}, and we can take $S = \overline R$. If $R$ is analytically ramified but contains a field, the fact that $\widehat R$ is \emph{not} one of the three exceptional rings of Theorem~\ref{thm:BTM1-fails} leads, after substantial computation, to the right choice for $S$. (See the proof of \cite[Theorem 1.5]{Leuschke-Wiegand:bcmt} or the proofs of Theorems 17.6 and 17.9 in \cite{BOOK}.) Now, with our carefully chosen birational extension $R\to S$, we have \begin{equation}\label{eq:mult-3-art} \dim_\mathsf{k}(B/\mathfrak{m} B) = 3 \quad \text{and} \quad \dim_\mathsf{k}\left(\frac{\mathfrak{m} B +A}{\mathfrak{m}^2 B+A}\right) \ge 2\,, \end{equation} for the Artinian rings $A$ and $B$ in the diagram~\eqref{cond-sq}. Put $C = \mathfrak{m} B+A$, and choose elements $x,y\in \mathfrak{m} B$ so that their images in $\frac{\mathfrak{m} B +A}{\mathfrak{m}^2 B+A}$ are linearly independent. Since $C/\mathfrak{m} C$ maps onto $\frac{\mathfrak{m} B +A}{\mathfrak{m}^2 B+A}$, the images $\alpha$ and $\beta$ of $x$ and $y$ in $C/\mathfrak{m} C$ are linearly independent. By \cite[Lemmas 3.10 and 3.11]{BOOK} it suffices to build the requisite pairs $(V_t\subseteq (C/\mathfrak{m} C)^{(n)})$, since these will yield, via extension, non-isomorphic indecomposable pairs $(V_t\subseteq D^{(n)})$. Moreover, with this choice of $\alpha$ and $\beta$, we have the relations \begin{equation}\label{eq:very-short} \alpha^2 = \alpha\beta = \beta^2 = 0\,. 
\end{equation} Returning to the general case $\e(R)\ge 3$, we may assume that either $\dim_\mathsf{k}(D) \ge 4$ or else $D$ contains elements $\alpha$ and $\beta$ satisfying \eqref{eq:very-short}. In order to show that there are enough values of $t$ that produce non-isomorphic indecomposable pairs $(V_t\subseteq D^{(n)})$, we let $t$ and $u$ be elements of $\mathsf{k}$, not necessarily distinct, and suppose that $\varphi$ is a $\mathsf{k}$-endomorphism of $D^{(n)}$ that carries $V_t$ into $V_u$. We regard $\varphi$ as an $n\times n$ matrix with entries in $D$. Recalling that $V_t$ is the column space of the matrix $\Psi_t$ in \eqref{eq:Psi-def}, we see that the condition $\varphi V_t\subseteq V_u$ yields a $2n\times 2n$ matrix $\theta$ over $\mathsf{k}$ satisfying the equation \begin{equation}\label{eq:commute} \varphi\Psi_t =\Psi_u\theta\,. \end{equation} \noindent Write $\theta = \left[ \begin{smallmatrix} E&F\\ P&Q \end{smallmatrix}\right]$, where $E$, $F$, $P$, and $Q$ are $n\times n$ blocks. Then~\eqref{eq:commute} gives the following two equations: \begin{equation}\label{eq:blarg} \begin{aligned} \varphi &=E+\alpha P+\beta(u I+H)P\\ \alpha\varphi + \beta\varphi(t I+H) &=F+\alpha Q+\beta(u I+H)Q\,. \end{aligned} \end{equation} Substituting the first equation into the second and combining terms, we get the following equation: \begin{multline}\label{eq:mess} -F + \alpha(E-Q) + \beta(t E-u Q + E H - HQ) + (\alpha+t\beta)(\alpha+u\beta)P \\ + \alpha\beta(HP + P H) + \beta^2(H P H + t H P+u P H) = 0\,. \end{multline} Suppose there exist elements $\alpha$ and $\beta$ satisfying Equation~\eqref{eq:very-short}. With this choice of $\alpha$ and $\beta$, \eqref{eq:mess} collapses: \begin{equation}\label{eq:gleep} -F + \alpha(E-Q) + \beta(t E-u Q + E H - HQ) =0\,. \end{equation} Since all capital letters in \eqref{eq:gleep} represent matrices over $\mathsf{k}$, and since $\{1,\alpha,\beta\}$ is linearly independent over $\mathsf{k}$, we get the equations $$ F=0\,, \qquad E=Q\,, \qquad \text{and} \qquad (t-u) E + EH-HE = 0\,. $$ After a bit of fiddling (see \cite[Case 3.14]{BOOK} for the details) we reach two conclusions: \begin{enumerate}[label=(\roman{*})] \item If $\varphi$ is invertible, then $t = u$. Thus the modules are pairwise non-isomorphic. \item If $t=u$ and $\varphi$ is idempotent, then $\varphi$ is either $0$ or $I$. Thus all of the modules are indecomposable. \end{enumerate} The key issue in these computations is that the matrix $H$ is non-derogatory, so that its commutator in the full matrix ring is just the local ring $\mathsf{k}[H]\cong \mathsf{k}[X]/(X^n)$. We may therefore assume that $\dim_\mathsf{k}(D) \ge 4$. With a little luck, the algebra $D$ might contain an element $\alpha$ that does \emph{not} satisfy a non-trivial quadratic relation over $\mathsf{k}$. In this case, we choose any element $\beta\in D$ so that $\{1,\alpha,\alpha^2,\beta\}$ is linearly independent, and we set \[ G = \{t\in \mathsf{k}\mid \{1,\alpha,\beta,(\alpha+t\beta)^2\} \text{ is linearly independent}\}\,. \] This set is non-empty and Zariski-open, hence cofinite in $\mathsf{k}$. For $t\in G$, put \[ G_t = \{u\in G \mid \{1,\alpha,\beta,(\alpha+t\beta)(\alpha+u\beta)\} \text{ is linearly independent}\}\,. \] Then $G_t$ is cofinite in $G$ for each $t\in G$. Moreover, one can check the following, using \eqref{eq:mess}: \begin{enumerate}[label=(\roman{*})] \item If $t$ and $u$ are distinct elements of $G$ with $u\in G_t$, then $\varphi$ is not an isomorphism. 
\item If $t=u\in G$ and $\varphi$ is idempotent, then $\varphi$ is either $0$ or $I$. \end{enumerate} \noindent The desired conclusions follow easily. (See \cite[Case 3.16]{BOOK} for the details.) The remainder of the proof \cite[(3.17)--(3.21)]{BOOK} is a careful analysis of the $\mathsf{k}$-algebras $D$ in which every element is quadratic over $\mathsf{k}$. (The fact that $\mathsf{k}$ is infinite obviates consideration of the last case \cite[Case 3.22]{BOOK}, where our construction does not work and Dade's construction \cite{Dade:1963} is used to produce \emph{one} indecomposable of rank $n$.) \medskip In studying direct-sum decompositions over one-dimensional local rings, it is important to know about indecomposable MCM modules of non-constant rank. (See \cite{Wiegand:1991}, where Wiegand determined exactly how badly Krull-Remak-Schmidt uniqueness can fail.) If $(R,\mathfrak{m},\mathsf{k})$ is a one-dimensional, analytically unramified local ring with minimal prime ideals $\mathfrak{p}_1,\dots,\mathfrak{p}_s$, we define the \emph{rank} of a module $M$ to be the $s$-tuple $(r_1,\dots,r_s)$, where $r_i$ is the dimension of $M_{\mathfrak{p}_i}$ as a vector space over the field $R_{\mathfrak{p}_i}$. Crabbe and Saccon \cite{Crabbe-Saccon} have recently proved the following: \begin{thm}\label{thm:Crabbe-Saccon} Let $(R,\mathfrak{m},\mathsf{k})$ be an analytically unramified local ring of dimension one, with minimal prime ideals $\mathfrak{p}_1,\dots,\mathfrak{p}_s$. Assume that $R/\mathfrak{p}_1$ has infinite CM type. Let $\underline r := (r_1,\dots,r_s)$ be an arbitrary $s$-tuple of non-negative integers with $r_1\ge r_i$ for each $i$, and with $r_1>0$. Then there is an indecomposable MCM $R$-module $M$ with $\rank(M) = \underline r$, and $|\mathsf{k}|$ non-isomorphic ones if $\mathsf{k}$ is infinite. \end{thm} \section{Brauer-Thrall I for Hypersurfaces} In Theorem~\ref{thm:BTM1-fails} we saw that there are just two plane curve singularities that contradict BTM1. Here we promote this result to higher-dimensional hypersurfaces, with the following theorem from \cite{Leuschke-Wiegand:hyperbrt} (cf.\ \cite[Theorem 17.5]{BOOK}): \begin{thm}\label{thm:hyperbrt} Let $\mathsf{k}$ be an algebraically closed field of characteristic different from two, and let $R = \mathsf{k}[\![x_0,\dots,x_d]\!]/(f)$, where $f$ is a non-zero element of $(x_0,\dots,x_d)$ and $d\ge 2$. Then $R$ has bounded but infinite CM type if and only if $R\cong \mathsf{k}[\![x_0,\dots, x_d]\!]/(g+x_2^2+\cdots+x_d^2)$, where $g$ is a polynomial in $\mathsf{k}[x_0,x_1]$ defining either an (A$_\infty$) or (D$_\infty$) curve singularity. \end{thm} This theorem and its proof are modeled on the beautiful result of Buchweitz, Greuel, Kn\"orrer, and Schreyer, where ``bounded but infinite'' is replaced by ``finite'', and the singularities in the conclusion are the \emph{simple} or \emph{ADE} singularities, \cite[\S4.3]{BOOK}. The ``if'' direction of Theorem~\ref{thm:hyperbrt} hinges on the following result (see \cite[Theorem 17.2]{BOOK}): \begin{lem}[Kn\"orrer, \cite{Knorrer}]\label{lem:Knorrer} Let $\mathsf{k}$ be a field, and put $S=\mathsf{k}[\![x_0,\dots,x_d]\!]$. Let $f$ be a non-zero non-unit of $S$, $R=S/(f)$, and $R^\# = S[\![z]\!]/(f+z^2)$. \begin{enumerate}[label=(\roman{*})] \item If $R^\#$ has finite (respectively, bounded) CM type, so has $R$. \item Assume $R$ has finite (respectively, bounded) CM type and $\car(\mathsf{k})\ne2$. Then $R^\#$ has finite (respectively, bounded) CM type.
More precisely, if $\mu_R(M) \le B$ for every indecomposable MCM $R$-module $M$, then $\mu_{R^\#}(N) \le 2B$ for every indecomposable MCM $R^\#$-module $N$. \end{enumerate} \end{lem} For the ``only if'' direction, we need Lemma~\ref{lem:Knorrer} and the following result due to Kawasaki \cite[Theorem 4.1]{Kawasaki:1996}: \begin{lem}\label{Kawasaki} Let $(R,\mathfrak{m})$ be a $d$-dimensional abstract hypersurface (a local ring whose completion $\widehat R$ has the form $S/(f)$, where $(S,\mathfrak{n})$ is a regular local ring and $f\in \mathfrak{n}$). Let $n$ be any positive integer, and let $M$ be the $(d+1)^{\text{st}}$ syzygy of $R/\mathfrak{m}^n$. If $\e(R) > 2$, then $M$ is an indecomposable MCM $R$-module, and $\mu_R(M) \ge \binom{d+n-1}{d-1}$. In particular, if $d\ge 2$ then $R$ has unbounded CM type. \end{lem} If, now, $d\ge 2$ and $R$ (as in Theorem~\ref{thm:hyperbrt}) has bounded but infinite CM type, then $\e(R) \le 2$. Using the Weierstra\ss\ Preparation Theorem and a change of variables, we can put $f$ into the form $g+x_d^2$, with $g\in \mathsf{k}[\![x_0,\dots,x_{d-1}]\!]$. Then $\mathsf{k}[\![x_0,\dots,x_{d-1}]\!]/(g)$ has bounded but infinite CM type, by Lemma~\ref{lem:Knorrer}. We repeat this process till we get down to dimension one, and then we invoke Theorem~\ref{thm:BTM1-fails}. \section{Brauer-Thrall I for Excellent Isolated Singularities} The starting point here is the Harada-Sai Lemma \cite[Lemmas 11 and 12]{Harada-Sai}, sharpened by Eisenbud and de la Pe\~na in 1998 \cite{Eisenbud-delaPena:1998}. By a \emph{Harada-Sai sequence} we mean a sequence \[ M_1\overset{f_1}{\to}M_2\overset{f_2}{\to}\cdots\overset{f_{s-1}}{\to}M_s \] of $R$-homomorphisms satisfying \begin{enumerate}[label=(\roman{*})] \item each $M_i$ is indecomposable of finite length; \item no $f_i$ is an isomorphism; and \item the composition $f_{s-1}f_{s-2}\cdots f_1$ is non-zero. \end{enumerate} \begin{lem}[Harada-Sai Lemma] With the notation above, suppose $\ell_R(M_i) \le b$ for each $i$. Then $s\le 2^b - 1$. \end{lem} In fact, Eisenbud and de la Pe\~na \cite{Eisenbud-delaPena:1998} characterized the integer sequences that can occur in the form $(\ell_R(M_1),\dots, \ell_R(M_s))$ for some Harada-Sai sequence over some $R$. In order to apply Harada-Sai to MCM modules, we need to reduce modulo a suitable system of parameters to get down to the Artinian case. Of course, an arbitrary system of parameters won't work, since indecomposability and non-isomorphism won't be preserved. What we need is a \emph{faithful system of parameters}, that is, a system of parameters $\underline x = x_1,\dots, x_d$ such that $\underline x\Ext^1_R(M,N) = 0$ for every MCM $R$-module $M$ and every finitely generated $R$-module $N$. Here are some useful properties of faithful systems of parameters (where we write $\underline x^2$ for the system of parameters ($x_1^2,\dots,x_d^2$)): \begin{lem}\label{lem:faithful-sop} Let $\underline x$ be a faithful system of parameters for a CM local ring $R$. \begin{enumerate}[label=(\roman{*})] \item Let $M$ and $N$ be MCM $R$-modules, and suppose $\varphi\colon M/\underline x^2M \to N/\underline x^2N$ is an isomorphism. There is an isomorphism $\tilde\varphi\colon M\to N$ such that $\tilde\varphi\otimes_R(R/(\underline x) )= \varphi\otimes_R(R/(\underline x))$. \item Let $s:\ 0\to N \to E \to M \to 0$ be a short exact sequence of MCM $R$-modules. Then $s$ splits if and only if $s\otimes_R(R/(\underline x^2))$ splits. \item Assume $R$ is Henselian, and let $M$ be an indecomposable MCM $R$-module.
Then $M/\underline x^2 M$ is indecomposable. \end{enumerate} \end{lem} Using these properties, one obtains the Harada-Sai Lemma for MCM modules \cite[Theorem 15.19]{BOOK}: \begin{lem}\label{Harada-Sai-MCM} Let $(R,\mathfrak{m},\mathsf{k})$ be a CM, Henselian local ring and $\underline x$ a faithful system of parameters. Let \[ M_1\overset{f_1}{\to}M_2\overset{f_2}{\to}\cdots\overset{f_{s-1}}{\to}M_s \] be a sequence of $R$-homomorphisms, with each $M_i$ indecomposable and MCM\@. Assume that \[ (f_{s-1}f_{s-2}\cdots f_1)\otimes_R(R/(\underline x^2)) \ne 0\,. \] If $\ell_R(M_i/\underline x M_i)\le b$ for all $i$, then $s\le 2^b-1$. \end{lem} Suppose, instead, that we have a bound, say, $B$, on the multiplicities $\e(M_i)$. Choosing $t$ such that $\mathfrak{m}^t\subseteq (\underline x^2)$, we get a bound $b:=Bt^d$ on the lengths of the modules $M_i/\underline x^2M_i$. A walk around the AR quiver of $R$ then proves BTM1. (See \cite[Chapter 6]{Yoshino:book} or \cite[\S15.3]{BOOK}.) Of course, none of this does any good unless the ring $R$ \emph{has} a faithful system of parameters. The big theorem here is due to Yoshino \cite{Yoshino:1987} (cf.\ \cite[Theorem 15.8]{BOOK}): \begin{thm}\label{faithful-exist}Let $(R,\mathfrak{m},\mathsf{k})$ be a complete CM local ring containing a field. Assume $\mathsf{k}$ is perfect and that $R$ has an isolated singularity. Then $R$ has a faithful system of parameters. \end{thm} Putting all of this stuff together, we obtain the following theorem, proved independently by Dieterich \cite{Dieterich:1987} and Yoshino \cite{Yoshino:1987}: \begin{thm}\label{complete-BTM1} Let $(R,\mathfrak{m},\mathsf{k})$ be a complete, equicharacteristic local ring with perfect residue field $\mathsf{k}$. Then $R$ has finite CM type if and only if \begin{enumerate}[label=(\roman{*})] \item $R$ has bounded CM type, and \item $R$ has an isolated singularity. \end{enumerate} \end{thm} The main thrust is the ``if'' direction, the converse being a consequence of Auslander's famous theorem \cite{Auslander:isolsing} that complete CM rings with finite CM type must be isolated singularities. In 2005, Leuschke and Wiegand used ascent and descent techniques to prove the following generalization \cite[Theorem 3.4]{Leuschke-Wiegand:bcmt}: \begin{thm}\label{excellent-BTM1} Let $(R,\mathfrak{m},\mathsf{k})$ be an excellent, equicharacteristic local ring with perfect residue field $\mathsf{k}$. Then $R$ has finite CM type if and only if \begin{enumerate}[label=(\roman{*})] \item $R$ has bounded CM type, and \item $R$ has an isolated singularity. \end{enumerate} \end{thm} This time, for the ``only if'' direction, one needs the Huneke-Leuschke version \cite{Huneke-Leuschke:2002} of Auslander's theorem, stating that \emph{every} CM ring of finite CM type has an isolated singularity. Without the word ``excellent'', Theorem~\ref{excellent-BTM1} would be false. For example, the ring $\mathbb C[\![x,y]\!]/(y^2)$ is the completion of an integral domain $(R,\mathfrak{m})$, by Lech's Theorem \cite{Lech}. Theorem~\ref{thm:BTM1-fails} implies that $R$ has bounded but infinite CM type, and of course $R$ has an isolated singularity. \section{Brauer-Thrall II} In Section~\ref{sec:BTM-dim1} we proved a strong form of BTM2 for one-dimensional CM local rings, assuming only that the ring is either analytically unramified or equicharacteristic. In higher dimensions, no such general results are known.
One problem, already mentioned, is that there is no general result showing descent of bounded CM type along flat local homomorphisms. Typically, one restricts to complete (or at least excellent Henselian) isolated singularities with algebraically closed residue field, in order to make use of the Auslander-Reiten quiver. The following result was proved by Dieterich \cite[Theorem 20]{Dieterich:1987} in 1987, for characteristic different from two. The case $\car(\mathsf{k}) = 2$ was proved by Popescu and Roczen \cite{Popescu-Roczen:1991} in 1991. \begin{thm}\label{thm:BTM2-hyper}Let $R = \mathsf{k}[\![x_0,\dots,x_d]\!]/(f)$ be a hypersurface isolated singularity, with $\mathsf{k}$ algebraically closed. If $R$ has infinite CM type, then $R$ has strongly unbounded CM type. \end{thm} Using Elkik's theorem \cite{Elkik} on modules extended from the Henselization, one can generalize this result to excellent Henselian rings (cf.\ \cite{Popescu-Roczen:1990}): \begin{cor}\label{cor:BTM2-hensel}Let $(R,\mathfrak{m},\mathsf{k})$ be an excellent, equicharacteristic, Henselian local ring whose completion is a hypersurface. Assume that $R$ has an isolated singularity and that $\mathsf{k}$ is algebraically closed. If $R$ has infinite CM type, then $R$ has strongly unbounded CM type. (In particular, both BTM1 and BTM2 hold for these rings.) \end{cor} Excellence guarantees that the completion $\widehat R$ is an isolated singularity too. (In fact, all one needs is that the inclusion $R\to \widehat R$ be a regular homomorphism (see \cite[Proposition 10.9]{BOOK}).) If $N$ is an MCM $\widehat R$-module, then $N$ is free on the punctured spectrum of $\widehat R$ and hence, by \cite{Elkik}, is extended from an $R$-module. This means that the map $M\mapsto \widehat M$, from MCM $R$-modules to MCM $\widehat R$-modules, is bijective on isomorphism classes. Since $\e_R(M) = \e_{\widehat R}(\widehat M)$, the corollary follows from the theorem. \medskip The main thing we want to talk about in this section is Smal\o's remarkable result \cite{Smalo:1980} that produces, from an infinite family of indecomposable MCM modules of \emph{one fixed} multiplicity $n$, an integer $n' > n$ and an infinite family of indecomposable MCM modules of multiplicity $n'$. In principle, this ought to make proofs of BTM2 lots easier. We will give two such applications and also point out some limitations to this approach. Here is Smal\o's theorem, proved in 1980 for Artin algebras: \begin{thm}\label{Smalo} Let $(R,\mathfrak{m},\mathsf{k})$ be a complete CM isolated singularity, with $\mathsf{k}$ algebraically closed. Suppose $\{M_i\}_{i\in I}$ is an infinite family of pairwise non-isomorphic indecomposable MCM $R$-modules, all of the same multiplicity $n$. There exist an integer $n'> n$, a subset $J$ of $I$ with $|J| = |I|$, and a family $\{N_j\}_{j\in J}$ of pairwise non-isomorphic indecomposable MCM $R$-modules, each of multiplicity $n'$. \end{thm} The basic ideas of Smal\o's proof survive transplantation to the MCM context remarkably well. The proof uses the Harada-Sai Lemma~\ref{Harada-Sai-MCM} as well as a couple of lemmas that control multiplicity as one wanders around the AR quiver. One of these~\cite[Lemma 4.2.7]{Avramov:6lectures} bounds the growth of the Betti numbers $\beta_i(M)$ of an MCM module $M$ over a CM local ring of multiplicity $e$: $\beta_{i+1} \le (e-1) \beta_i$ for all $i$.
Another gives a linear bound between the multiplicities of the source and target of an irreducible homomorphism: With $R$ as in the theorem, there is a positive constant $c$ such that $\e_R(M)\le c\e_R(N)\le c^2 \e_R(M)$ whenever $M\to N$ is an irreducible homomorphism of indecomposable MCM $R$-modules. We refer the reader to~\cite[Section 15.4]{BOOK} for the details. Here is an obvious corollary of Smal\o's theorem: \begin{cor}\label{cor:uncountable} Let $(R,\mathfrak{m},\mathsf{k})$ be a complete CM isolated singularity, with $\mathsf{k}$ algebraically closed. If $R$ has uncountable CM type, then there is a sequence $n_1<n_2<n_3<\dots$ of positive integers such that $R$ has, for each $i$, uncountably many non-isomorphic indecomposable MCM modules of multiplicity $n_i$. \end{cor} As another application, one can give a proof of BTM2 in dimension one that is much less computational than the one given in Section~\ref{sec:BTM-dim1}, at least in an important special case. Suppose that $(R,\mathfrak{m},\mathsf{k})$ is a complete, reduced local ring of dimension one, and assume $R$ has infinite CM type. Then the Drozd-Ro\u\i ter conditions ((ii) and (iii) of Theorem~\ref{thm:frt-dim1}) fail. It is now a comparatively simple matter (see \cite[\S4]{Wiegand:1989}) to show that $R$ has an infinite family of pairwise non-isomorphic ideals. We decompose each of these ideals into indecomposable summands, noting that $\e(R)$ bounds the number of summands of each ideal. This yields infinitely many pairwise non-isomorphic indecomposable MCM modules, each with multiplicity bounded by $\e(R)$, and hence an infinite subfamily consisting of modules of fixed multiplicity. Now Smal\o's theorem shows that BTM2 holds for these rings. Don't be misled by this example. In higher dimensions there is no hope of starting the inductive hypothesis with modules of rank one, in view of the following theorem due to Bruns \cite[Corollary 2]{Bruns:1981}: \begin{thm}\label{thm:Bruns} Let $A$ be any commutative Noetherian ring and $M$ a finitely generated $A$-module of constant rank $r$. Let $N$ be a second syzygy of $M$, and let $s$ be the (constant) rank of $N$. If $M$ is not free, then the codimension of its non-free locus is at most $r+s+1$. \end{thm} \begin{cor}\label{cor:Bruns-hyper} Let $(R,\mathfrak{m})$ be a $d$-dimensional isolated singularity whose completion is a hypersurface. Let $M$ be a non-free MCM $R$-module of constant rank $r$. Then $r\ge \frac{1}{2}(d-1)$. \end{cor} This bound is probably much too low. In fact, Buchweitz, Greuel, and Schreyer \cite{BGS} conjecture that $r\ge 2^{d-1}$. Nonetheless, the bound given in the corollary rules out MCM ideals once the dimension exceeds three. \section{Open Questions}\label{sec:questions} Here we list a few open questions, some of which have already been mentioned at least implicitly. \begin{ques}\label{ques:BRT2} Are there \emph{any} counterexamples to BTM2\@? Of course this is the same as asking whether BTM2 is true, but let's not even assume that $(R,\mathfrak{m},\mathsf{k})$ is CM\@. What if $\dim(R) = 1$? What if $\dim(R) = 1$ and $R$ is not CM\@? The list goes on\dots. \end{ques} \begin{ques}\label{ques:infinite-BTM1} Can one delete the assumption, in Theorem~\ref{thm:BTM1-fails}, that $\mathsf{k}$ be infinite? \end{ques} \begin{ques}\label{ques:BCMT-descends} If $(R,\mathfrak{m})$ is a local CM ring whose completion $\widehat R$ has bounded CM type, must $R$ have bounded CM type? More generally, let $R\to S$ be a flat local homomorphism with CM closed fiber.
If $S$ has bounded CM type, must $R$ have bounded CM type? \end{ques} \begin{ques}\label{ques:imperfect-BTM1} Can one delete the assumption, in Theorem~\ref{excellent-BTM1}, that $\mathsf{k}$ be perfect? \end{ques} \begin{ques} Can we improve Corollary~\ref{cor:Bruns-hyper}, getting better lower bounds for the rank, or multiplicity, of a non-free MCM module? \end{ques}
\section{Introduction}
The India-based Neutrino Observatory (INO) \cite{Athar:2006yb} is the proposed underground facility that will house a magnetized Iron CALorimeter detector (ICAL) designed to study neutrino oscillations with atmospheric muon neutrinos. In particular, ICAL will focus on precisely measuring the neutrino oscillation parameters, including the sign of the 2--3 mass-squared difference $\Delta m_{32}^{2}$ ($= m_3^2 - m_2^2$), and hence help to determine the neutrino mass hierarchy through matter effects. Oscillation signatures for neutrinos and anti-neutrinos differ in the presence of matter effects, which become important in the few-GeV energy region. Sensitivity to these parameters depends on the momentum $P$ and the zenith angle $\cos\theta$ (through the path length travelled) of the neutrinos. Reconstruction of these quantities in turn depends on the energy and direction of the muons and hadrons \cite{hadronresponse} produced in charged-current interactions of the neutrinos in the detector; hence studies of muon energy and direction resolutions are crucial.

Since ICAL is designed to be mostly sensitive to muons, the main physics issues that ICAL will probe will be studied through charged current (CC) muon neutrino (anti-neutrino) interactions, with muons (anti-muons) produced in the final state. Two types of interactions are relevant. In the first, the neutrino enters the detector and interacts with (dominantly) nucleons in the iron nuclei; these events are identified by vertices which are inside the detector (the tracks begin inside the detector). In the second type of interaction, the neutrino interacts with the rock around the detector, and the produced muons lose energy while propagating through the rock (these are the so-called rock muons or upward-going muons); these events have vertices outside the detector, with their tracks starting at an edge of the detector. The first type of interaction dominantly gives low-energy (few-GeV) muons due to the rapidly falling atmospheric neutrino flux. In the case of rock muons, by contrast, most of the lower-energy muons are stopped in the rock itself, so the fraction of higher-energy ($>$ 10 GeV) muons is larger in this sample. In addition, cosmic ray muons also enter the detector from above.

In an earlier paper \cite{central}, the response of ICAL to few-GeV muons, with respect to both momentum magnitude and direction reconstruction, was studied through simulations for muons generated in the central region of ICAL, where the magnetic field is largely uniform in both direction and magnitude. Here we present for the first time a GEANT4-based simulations study of the muon response in the peripheral region of ICAL, where the magnetic field is not only highly non-uniform in both magnitude and direction, but where there are also significant edge effects, as most of the tracks are only partially contained. Note that a substantial fraction of rock muon events traverses such regions of the detector; hence it is important to understand the muon response in these regions for such physics studies. The inclusion of the muon response in the peripheral region of the detector can significantly change the oscillation and mass hierarchy results, since it provides resolutions and efficiencies over the energy range 1--50 GeV.
The first set of simulation results for the precision measurement of the neutrino oscillation parameters and the mass hierarchy was produced using only the central-region resolutions of about 9--14\% and efficiencies of about 80\% (see Refs.~\cite{physics}, \cite{physics1} for more details) in the energy range 1--20 GeV.

The paper is organised as follows: in Section 2 we briefly discuss some relevant details of the GEANT4-based simulation of the detector geometry and magnetic field, as well as the methodology of hit and track generation and track reconstruction. In Section 3, we discuss the general features of muon propagation in the different regions of ICAL. In Section 4, we discuss the selection criteria used. We then present in Section 5 the results, with these selection criteria, for the muon efficiencies and resolutions in the peripheral region of ICAL. A comparison of the response in all the regions of ICAL (central and peripheral) is given in Section 6. We conclude in Section 7 with a discussion.

\section{The ICAL Detector Simulation Framework}
Details of the ICAL detector can be found elsewhere \cite{central}. Here we briefly review the relevant simulation inputs for the geometry and magnetic field. The ICAL detector geometry is simulated \cite{central} using the GEANT4 software \cite{geant}. It comprises three identical modules of dimension 16 m $\times$ 16 m $\times$ 14.45 m. The direction along which the modules are placed is chosen to be the $x$-direction (and is labelled with the azimuthal angle $\phi=0$), and the direction perpendicular to the $x$-axis in the horizontal plane is the $y$-direction. The vertical direction is the $z$-direction, with the $z$-axis pointing upwards (so the polar angle is also the zenith angle). Each module comprises a stack of 151 layers of 5.6 cm thick magnetized iron, separated by 4 cm air gaps in which the active detector elements, the resistive plate chambers (RPCs), are placed. The $y$-direction is in the plane of the iron plates, parallel to the slots of the magnet coil, as shown in Fig.~\ref{fig:magfield}, with the origin of the coordinate system being the centre of the central module. Apart from the coil slots (thin 8 m long slots centred around $y=0$ at $x= x_0 \pm 4$ m, where $x_0$ is the central $x$-coordinate of each module), there are vertical steel support structures placed every 2 m along the whole detector in both the $x$ and $y$ directions, to maintain the air gap between the plates. These are dead spaces that affect the muon reconstruction. In addition, the magnetic field is not uniform everywhere, so the quality of reconstruction depends on the region in which the event is located.

\subsection{The Magnetic Field}
The magnetic field has been simulated in a single iron plate using the MAGNET6 software \cite{magnetcode}. The magnetic field map is generated at the centre ($z=0$) of the plate and is assumed to be uniform over the entire thickness of the plate. The magnetic field lines in a single iron plate in the central module are shown in Fig.~\ref{fig:magfield}. The field is generated by currents circulating in copper coils that pass through slots in the magnetized plates. The slots can be seen as thin white vertical lines at $x=\pm 4$ m in the figure. The direction and length of the arrows denote the direction and magnitude of the magnetic field.
\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \begin{center}\includegraphics[width=0.55\textwidth]{mg_new.eps} \end{center} \caption{Magnetic field map as generated by the MAGNET6 \cite{magnetcode} software in the central iron layer of the central module. Points A = $(0, -650, 0)$ cm, B = $(300, -650, 0)$ cm and C = $(-2270, 0, 0)$ cm (in the $1^{st}$ module of ICAL) are defined for later use. Notice that C is actually in the left-most module of the detector and is simply marked here for convenience.} \label{fig:magfield} \end{figure}

The ``central region'' is defined as the volume contained within the region $\vert x \vert \le 4$ m, $\vert y \vert \le 4$ m, with $z$ unconstrained, that is, the region within the coil slots in each module. The central region has the highest magnetic field, which is uniform in both magnitude and direction ($B_y$), while the region labelled ``peripheral region'' (outside the central region in the $y$ direction, $\vert y \vert > 4$ m) has the most rapidly varying magnetic field in both direction and magnitude, with the field falling to zero at the corners of the module. In an earlier paper \cite{central}, the muon response was studied in the central region; here we study the peripheral region where, apart from the changing magnetic field, the reconstruction is also affected by edge effects. In addition, there are two small regions outside the coil slots in the $x$ direction ($\vert x \vert > 4$ m), labelled as the ``side region'' in Fig.~\ref{fig:magfield}, where the magnetic field is about 15\% smaller than in the central region and points in the opposite direction. The side regions of the central module are contiguous with the side regions of the adjacent modules, and the quality of reconstruction here is expected to be similar to that in the central region. However, the left (right) side region of the left (right)-most module will suffer from edge effects, and we shall consider them separately in this study.

\subsection{Event Reconstruction}
In each analysis, 10,000 muons are propagated in the detector with fixed momentum and direction, with the starting points of the muons in either the peripheral or side regions. In contrast to the earlier study in the central region \cite{central}, here the muon momenta vary from 1--50 GeV/c, with the higher energies being of interest for rock muon studies. The muon tracks are reconstructed based on a Kalman filter \cite{kalman} algorithm. The RPCs that signal the passage of muons through them have a position sensitivity of about $\pm 1$ cm in the $x$- and $y$-directions (the RPC strip width is 1.96 cm) and about $\pm 1$ mm in the $z$-direction (the gas gap in the RPCs being 2 mm). In addition, ``hits'' or signals can be generated in adjoining RPC strips as well; the RPC efficiency and time resolution are about 95\% and 1 ns, respectively \cite{rpc_char}. For more details regarding the generation of hits and tracks and their reconstruction, see Ref.~\cite{central}.

\section{General Features of Muon Response in the Peripheral and Side Regions}
We first discuss the general expectations, based on the detector geometry and the orientation of the magnetic field. The Lorentz force is $\vec{F} = q (\vec{v} \times \vec{B})$, where $\vec{B}$ is the magnetic field, which is confined to the $x$-$y$ plane, and $q$ is the charge of the particle with momentum $\vec{P}$ and energy $E$, so that its velocity $\vec{v}$ is directed along the momentum with magnitude $v = Pc^2/E$.
Since $q = -1$ for $\mu^{-}$, the components of the force in the peripheral region for an upward-going muon, momentarily ignoring energy loss, are
\begin{eqnarray}
F_{x} & = & v_{z} B_{y}~; \nonumber \\
F_{y} & = & - v_{z} B_{x}~; \nonumber \\
F_{z} & = & v_{y} B_{x} - v_{x} B_{y}~,
\label{eq:periforce}
\end{eqnarray}
whereas the analogous components of the force in the side region are given by
\begin{eqnarray}
F_{x} & = & - v_{z} B_{y}~; \nonumber \\
F_{y} & = & 0~; \nonumber \\
F_{z} & = & v_{x} B_{y}~.
\label{eq:sideforce}
\end{eqnarray}
It is seen that in both regions $F_{x}$ and $F_{y}$ are independent of $\phi$ (that is, independent of the momentum components in the plane of the field) and depend on $P_{z}$ (i.e., on the zenith angle $\theta$ alone), while $F_z$ depends on both $\theta$ and $\phi$. Depending on the components $B_{x}$ and $B_{y}$ of the magnetic field, Eqs.~\ref{eq:periforce} and \ref{eq:sideforce} give the net force in the different regions of ICAL.

Consider the in-plane ($x$, $y$ components) forces in the regions denoted 1--10 in Fig.~\ref{fig:map}. It can be seen that in regions $1,2,7,8$, $F_{y}$ is such that it causes an upward-going muon ($\cos\theta>0$) to be bent in a direction going out of the detector, thus lowering the reconstruction efficiency. The effect is just the opposite in regions $3,4,5,6$. If $F_{z} > 0$ as well, the already upward-going muon traverses more iron layers, as discussed in Ref.~\cite{central}, and hence gives good resolution; thus $F_y$ affects the reconstruction efficiency, while $F_z$ determines the quality of reconstruction. Since $F_z$ depends on both $(\theta, \phi)$ as well as on the magnetic field, the sign of $F_z$ (for negatively charged upward-going muons) is shown in Fig.~\ref{fig:map} on a circle in $\phi$ in each of regions $1,2,3,4$, for $\vert B_x \vert \sim \vert B_y \vert$, with purple (cyan) regions denoting $F_z > (<)\, 0$.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \begin{center}\includegraphics[width=0.55 \textwidth]{map_detail.eps} \end{center} \caption{Magnetic field map with the net force directions in the peripheral and side regions. The thick black arrows indicate the direction of the magnetic field, with labels $B(i,j)$, $i,j=+,-$, that denote the signs of the $B_x$, $B_y$ components in each region. The brown arrows indicate the direction of the $F_x$ or $F_y$ force components that act on a negatively charged upward-going muon. The small coloured circles indicate the direction of $F_z$ in each region, with purple (cyan) denoting $F_z > 0 (< 0)$. (Note that side regions 9 and 10 are in the $1^{st}$ and $3^{rd}$ modules of the detector respectively and are shown together in the same module for convenience.)} \label{fig:map} \end{figure}

Hence the magnetic field, which determines the quality of reconstruction, breaks the azimuthal symmetry, so that muons in different $\phi$ directions (for the same momentum and polar angle $\cos\theta$) see a different detector response. This was discussed in detail in Ref.~\cite{central}. Guided by the force equations, we therefore analyse the muon response in the peripheral region in four different sets of $\phi$ bins, as shown in Fig.~\ref{fig:phi_choice}: bin I: $\vert \phi \vert \le \pi/4$, bin II: $\pi/4 \le \phi < 3\pi/4$, bin III: $-3\pi/4 \le \phi < -\pi/4$, and bin IV: $3\pi/4 < \vert \phi \vert \le \pi$.
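To make the sign analysis above concrete, the following minimal Python sketch evaluates the force components of Eqs.~\ref{eq:periforce} and \ref{eq:sideforce} for an upward-going $\mu^-$; it is an illustration only, with placeholder field values rather than the simulated MAGNET6 field map.
\begin{verbatim}
# Minimal sketch of the bending force F = q (v x B) on an upward-going
# mu- (q = -1), with B confined to the x-y plane.  Field values are
# illustrative placeholders, not the MAGNET6 field map.
import numpy as np

def lorentz_force(q, v, B):
    return q * np.cross(v, B)

theta, phi = np.arccos(0.65), np.pi / 2.0   # cos(theta) = 0.65, phi in bin II
v = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])               # unit velocity vector

# Peripheral region: both B_x and B_y non-zero (signs depend on regions 1-8).
B_peri = np.array([0.8, 1.0, 0.0])          # placeholder values (Tesla)
Fx, Fy, Fz = lorentz_force(-1.0, v, B_peri)
# F_x = v_z B_y and F_y = -v_z B_x depend only on v_z (on theta alone),
# while F_z = v_y B_x - v_x B_y depends on both theta and phi:
assert np.isclose(Fx, v[2] * B_peri[1]) and np.isclose(Fy, -v[2] * B_peri[0])

# Side region: B_x = 0, so F_y vanishes and only F_x and F_z survive.
B_side = np.array([0.0, -0.85, 0.0])        # reversed direction, ~15% weaker
print(lorentz_force(-1.0, v, B_side))
\end{verbatim}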
\begin{figure}[btp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.35\textwidth]{phi_peripheral.eps} \hspace{0.5cm} \includegraphics[width=0.35\textwidth]{phi_centre.eps} \caption{The choice of $\phi$ bins in the peripheral (left) and side (right) regions.} \label{fig:phi_choice} \end{figure}

For muons with starting points in the negative-$y$ peripheral region (regions marked $1,2,3,4$ in Fig.~\ref{fig:map}), this implies that most of the muons with momenta such that $\phi$ lies in bin III (but otherwise having the same magnitude and $\cos\theta$) are prone to exit the detector from the side; however, there will be marked differences in the quality of reconstruction between regions $(1,2)$ and $(3,4)$ due to the different $F_y$ force, which turns the track back into the detector in regions $3,4$. Hence the average detector response in this region is an average over these two different behaviours. In addition, $F_z > 0~(< 0)$ for bin II muons in regions $1,2$ $(3, 4)$, and this helps to improve the reconstruction in regions $1,2$, so that bin II muons can be expected to have the best quality of reconstruction among regions 1--4. A similar analysis can be done for muons which begin in the positive-$y$ peripheral region (regions marked $5,6,7,8$). Of course, tracks at the edge of a region, or of high-energy muons, may move from one region to another where a different magnetic field applies; however, we simply bin the events according to the region in which the muon originates.

In side region 9, $F_{x}$ causes the particle to go out of the detector, but $F_{z} > 0$, so this region has good resolution but worse efficiency. The results are the opposite in side region 10, since $B_{y} < 0$ there. We therefore define the same $\phi$ bins for the side region as were used for the central region, viz., bin I: $\vert \phi \vert \le \pi/4$, bin II: $\pi/4 < \vert \phi \vert \le \pi/2$, bin III: $\pi/2 < \vert \phi \vert \le 3\pi/4$, and bin IV: $3\pi/4 < \vert \phi \vert \le \pi$. The difference lies in the definition of the second and third bins (see Fig.~\ref{fig:phi_choice}) and is more appropriate from the point of view of the geometrical configuration of this region. Region 9 (10) will have the worst reconstruction in $\phi$ bin IV (I). However, in region 10, the direction of the $F_x$ force is expected to improve the results, just as in the case of regions $3,4$. The results will be the same for downward-going $\mu^+$ (with $\cos\theta < 0$), while those for downward-going muons or upward-going anti-muons can be obtained by symmetry. In our analysis, therefore, we study the muon response in the peripheral regions 1--4 and in side region 9.

It is important to keep in mind that the support structures and coil gaps also break this azimuthal symmetry in a non-trivial way, and these geometrical effects may alter the trends of the distributions discussed above. The net effect of the detector geometry and magnetic field can be seen in the muon resolutions and efficiencies, which we now discuss. As discussed above, we study the response in the following peripheral and side regions.
\paragraph{In the Peripheral Region}: Here, 10,000 muons ($\mu^-$) were propagated with fixed input momentum $P_{\rm in}$ and direction $\cos\theta$ (and smeared over the entire azimuthal angle $\phi$), with their starting points uniformly smeared over the region centred at $(0, -600, 0)$ cm and extending up to $\pm (2400, 200, 720)$ cm from it; this comprises the whole peripheral region along the three modules of the detector in the {\em negative $y$ region}, where the magnetic field is non-uniform.

\paragraph{In the Side Region}: In the side region, muons ($\mu^-$) were propagated with the same procedure as above, but with points of origin centred around $(-2200, 0, 0)$ cm and $(2200, 0, 0)$ cm (which are in the $1^{st}$ and $3^{rd}$ modules of the detector, respectively) and smeared uniformly within $\pm (200, 400, 720)$ cm around these.

\section{Selection Criteria Used}
All tracks which satisfy the loose selection criterion $\chi^2/\hbox{ndf} \le 10$ are used in the analysis, where $\chi^2$ is the standard chi-squared of the fit and ndf is the number of degrees of freedom, $\hbox{ndf} = 2 \times N_{hits} - 5$; here $N_{hits} \ge 5$ is the number of hits in the event, and the Kalman filter involves the fitting of 5 parameters. Further selection criteria are used to obtain reasonable fits and hence resolutions. Two major constraints have been applied in both the peripheral and side regions to remove low-energy tails.

The first is similar to that applied when analysing tracks in the central region \cite{central}: the Kalman filter algorithm may generate more than one track. While this may be correct and useful in the case of genuine neutrino CC interactions, where one or more hadrons accompany the muon, it is a problem for a single-muon analysis and arises because of detector dead spaces (for instance, two portions of a track on either side of a support structure may be reconstructed as two different tracks). This problem will be mitigated by the identification of a vertex in a genuine neutrino interaction; here, we place a constraint and analyse only those events for which exactly one track is reconstructed, with a consequent loss in reconstruction efficiency. The second selection criterion is specific to the peripheral and side regions and is described below.

Initially, events were generated at fixed points of origin to understand the effect of the magnetic field. In the peripheral region, the starting point was chosen to be either at point A (in a region of nearly zero magnetic field) or at point B (large magnetic field, with both $x$- and $y$-components non-zero), while in the side region a generic point C was studied (see Fig.~\ref{fig:magfield}). The results of this study clearly indicated that a large fraction of the events whose tracks were truncated because the particle exited the detector (so-called partially contained events) were relatively poorly reconstructed. These could not be eliminated by tightening the constraint on the $\chi^2$ of the fits; however, they could largely be removed by demanding that the track contain a minimum number of hits, such that either $N_{hits} > n_0$ or $N_{hits}/\cos\theta > n_0$ (note that there may be multiple hits per layer), where $n_0$ needed to be carefully optimised. It was found that, for a given momentum and direction of the muon, $n_0$ needed to be larger (smaller) in regions where the magnetic field strength is small (large).
When the muon does not leave the detector, so that the entire track is contained within it (fully contained events), no constraint on $N_{hits}$ is needed. With this understanding, the generic peripheral and side region response was studied.

\subsection{Effect of Selection Criteria}
The effect of $N_{hits} > n_0$ or $N_{hits}/\cos\theta > n_0$ can be seen in Fig.~\ref{fig:fixedcuts-per} for the peripheral region. If the event is fully contained, there is no constraint on $N_{hits}$; the effect of $n_0 = 15$ is shown in the left-hand panels of Fig.~\ref{fig:fixedcuts-per} for $(P_{in}, \cos\theta) = (5 \hbox{ GeV/c}, 0.65)$ and $(9 \hbox{ GeV/c}, 0.85)$, where the histogram of the magnitude of the reconstructed momentum $P_{rec}$ is plotted. For $P_{in} = 5$ GeV/c, it is noticed that the $N_{hits}$ constraint does not affect the $P_{rec}$ momentum distribution much, as most of the events are fully contained; however, it gives a better (more symmetrical) shape to the distribution by reducing the low-energy tail. On the other hand, the effect for $P_{in} = 9$ GeV/c is stronger, with the hump at lower $P_{rec}$ being eliminated by the $N_{hits}$ selection criterion.

Fig.~\ref{fig:nhitsdist-per} shows the effect of $N_{hits} > n_0$ or $N_{hits}/\cos\theta > n_0$ on the $N_{hits}$ distributions in the peripheral region. In all cases, the few events surviving below the constraint are fully contained events, on which no constraint is placed. These cause the histograms to remain non-zero in the region $N_{hits} \le n_{0}$ or $N_{hits}/\cos\theta \le n_{0}$, as seen in Fig.~\ref{fig:nhitsdist-per}. However, these events are relatively few in number, being less than 2\% (3\%) of the total reconstructed events for $n_{0}$ = 15 (20) in Fig.~\ref{fig:nhitsdist-per}. The constraint $N_{hits}/\cos\theta > 15$ is the most conservative one, with a loss of only about 10\% of the reconstructed events, and is found to be preferable to $N_{hits} > n_0$.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{E5_cth65_nhits15cut.eps} \includegraphics[width=0.48\textwidth]{E5_cth65_nhits20cut.eps}\\ \includegraphics[width=0.48\textwidth]{E9_cth85_nhits15cut.eps} \includegraphics[width=0.48\textwidth]{E9_cth85_nhits20cut.eps} \caption{Top (bottom) figures show the reconstructed momenta $P_{rec}$ using the selection criterion $N_{hits}>n_0$ for partially contained events in the peripheral region, with ($P_{in}$, $\cos\theta$) = (5 GeV/c, 0.65) (top) and (9 GeV/c, 0.85) (bottom), and with $n_0 = 15~(20)$ in the left (right) panels. Fully contained events have no $N_{hits}$ constraint. In each figure, the black curve is without constraints on $N_{hits}$, red is with $N_{hits}/\cos\theta > n_0$, and blue is for $N_{hits} > n_0$.} \label{fig:fixedcuts-per} \end{figure}

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{dist_nhits15_E5cth65_per.eps} \includegraphics[width=0.48\textwidth]{dist_nhits20_E5cth65_per.eps}\\ \includegraphics[width=0.48\textwidth]{dist_nhits15_E9cth85_per.eps} \includegraphics[width=0.48\textwidth]{dist_nhits20_E9cth85_per.eps} \caption{Top (bottom) figures show the $N_{hits}$ distributions using the selection criterion $N_{hits}>n_0$ for partially contained events in the peripheral region, with ($P_{in}$, $\cos\theta$) = (5 GeV/c, 0.65) (top) and (9 GeV/c, 0.85) (bottom), and with $n_0 = 15~(20)$ in the left (right) panels. Fully contained events have no $N_{hits}$ constraint.
In each figure, the black curve is without constraints on $N_{hits}$, red is with $N_{hits}/\cos\theta > n_0$, and blue is for $N_{hits} > n_0$.} \label{fig:nhitsdist-per} \end{figure}

Different choices of $n_0$ can be used. We have shown the effects of (a) no constraint, (b) $N_{hits} > 15$, and (c) $N_{hits}/\cos\theta > 15$ in the left-hand panels of Fig.~\ref{fig:fixedcuts-per}. The last choice is motivated by the fact that a slant-moving muon (in the absence of a magnetic field) travels a distance $d/\cos\theta$, in comparison to the distance $d$ traversed by a vertically upward-going muon of the same momentum. The corresponding panels on the right use the choice $n_0 = 20$; the more stringent requirement gives distributions whose root-mean-square (RMS) widths are smaller by about 7--8\%, but decreases the total number of reconstructed events by 10--15\%. Note also that increasing $n_0$ eventually leads to the removal of well-reconstructed events: for the lower momentum $P_{in} = 5$ GeV/c with $n_0 = 20$, events are visibly lost from the peak rather than just trimmed from the tails.

Similarly, the effect of the selection criteria on the reconstruction in the side regions is shown in Fig.~\ref{fig:fixedcuts-side}. Here the effect of the constraint on the partially contained events is not as strongly marked as in the peripheral region: while there is certainly a decrease in the RMS width of the distribution and in the number of selected events when the constraint is applied, a larger fraction of events is lost due to the constraint, with a number of ``good'' events being lost from the peak of the distribution as well, unlike in the peripheral region.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{E5_cth65_nhits15cut_side9.eps} \includegraphics[width=0.48\textwidth]{E5_cth65_nhits15_side10.eps} \caption{The figures show the reconstructed momenta $P_{\rm rec}$ using the selection criterion $N_{hits}>n_0$ for partially contained events in the side regions 9 (left) and 10 (right) for ($P_{\rm in}$, $\cos\theta$) = (5 GeV/c, 0.65) with $n_0 = 15$. Fully contained events have no $N_{hits}$ constraint. In each figure, the black curve is without constraints on $N_{hits}$, red is with $N_{hits}/\cos\theta > n_0$, and blue is for $N_{hits} > n_0$.} \label{fig:fixedcuts-side} \end{figure}

The final choice of selection criteria will be guided by the physics study. If the requirement is good momentum resolution, then the choice $n_0=20$ may be appropriate (that is, either $N_{hits}>20$ or $N_{hits}/\cos\theta>20$). However, since the shape of the distribution is already reasonable for $n_0=15$, this choice may be used when the focus is not so much on precision reconstruction as on higher event reconstruction rates. In the rest of this paper, we shall apply the constraint $N_{hits}/\cos\theta > 15$ as being appropriate and sufficient. This choice also improves the reconstruction efficiency of large-angle (small $\cos\theta$) events, whose tracks naturally contain fewer hits and are harder to reconstruct. In the next section, we present the results on the muon resolutions and efficiencies in the peripheral and side regions using these selection criteria; a compact sketch of the full selection logic is given below.
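This sketch assumes a simple per-event record with illustrative variable names; it is not taken from the actual ICAL reconstruction code.
\begin{verbatim}
# Sketch of the track selection: chi^2/ndf <= 10, exactly one reconstructed
# track, and N_hits/cos(theta) > n0 for partially contained tracks only.
def select_event(n_tracks, chi2, n_hits, cos_theta,
                 fully_contained, n0=15):
    ndf = 2 * n_hits - 5                  # Kalman filter fits 5 parameters
    if n_tracks != 1:                     # single-track events only
        return False
    if n_hits < 5 or chi2 / ndf > 10.0:   # loose fit-quality requirement
        return False
    if fully_contained:                   # no N_hits constraint if contained
        return True
    return n_hits / cos_theta > n0        # partially contained events

# Partially contained track at cos(theta) = 0.65: 12 hits pass
# (12/0.65 ~ 18.5 > 15), while 9 hits fail (9/0.65 ~ 13.8 < 15).
print(select_event(1, 40.0, 12, 0.65, fully_contained=False))  # True
print(select_event(1, 40.0, 9, 0.65, fully_contained=False))   # False
\end{verbatim}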
\section{Muon Response in the Peripheral and Side Regions}
\subsection{Momentum Reconstruction Efficiency}
The reconstruction efficiency is defined as the ratio of the number of reconstructed events $n_{\rm rec}$ (irrespective of charge) to the total number of events $N_{\rm total}$. We have
\begin{eqnarray}
\epsilon_{\rm rec} & = & \frac{n_{\rm rec} }{N_{\rm total}}~, \\ \nonumber
\hbox{with error } \delta \epsilon_{\rm rec} & = & \sqrt{\epsilon_{\rm rec}(1-\epsilon_{\rm rec})/N_{\rm total}}~.
\end{eqnarray}
Fig.~\ref{fig:recoeff-avrg} shows the reconstruction efficiency averaged over $\phi$ as a function of the input momentum for different $\cos\theta$ values in the peripheral and side regions.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{per_reco_eff15.eps} \includegraphics[width=0.48\textwidth]{recoeff-side15.eps} \caption{Reconstruction efficiency averaged over all $\phi$ bins as a function of the input momentum $P_{in}$ (GeV/c) for different zenith angles $\cos\theta$ in the peripheral (left) and side 9 (right) regions. For a discussion of the selection criteria, see the text.} \label{fig:recoeff-avrg} \end{figure}

The reconstruction efficiency increases with momentum for all angles, starting from $P_{in} = 1$ GeV/c, since the number of hits increases as the particle crosses more layers. Since there are fewer hits for more slanted muons, the efficiency at a given momentum is better for larger values of $\cos\theta$. The reconstruction efficiency is also very similar across all the peripheral and side regions. In all cases, the slight worsening of the efficiency for $\cos\theta = 0.85$ at higher momenta is spurious and is due to the selection criterion that the event should reconstruct to exactly one track: for such tracks, it is more likely that two portions of a track on either side of a dead space, such as a support structure, are reconstructed as two separate tracks. Efforts are under way to recover such events by improving the reconstruction code \cite{Kolahal}. When these tracks are correctly reconstructed, the efficiency is expected to saturate rather than fall off at these momentum values. Again, such tracks are not expected to be troublesome in genuine neutrino events, as discussed earlier.

\subsection{Relative Charge Identification Efficiency}
The charge identification (cid) of the particle is critical in many studies, since it distinguishes events initiated by neutrinos from those initiated by anti-neutrinos; these undergo different matter effects as they propagate through the Earth and hence give the required sensitivity to the neutrino mass hierarchy. The charge of the particle is determined from the direction of curvature of the track in the magnetic field. The relative charge identification efficiency is defined as the ratio of the number of events with correct charge identification, $n_{\rm cid}$, to the total number of reconstructed events, $n_{\rm rec}$, i.e.,
\begin{eqnarray}
\epsilon_{\rm cid} & = & \frac{n_{\rm cid} } {n_{\rm rec}}~,
\end{eqnarray}
where the errors in $n_{\rm cid}$ and $n_{\rm rec}$ are correlated, so that the error in the ratio is calculated as
\begin{eqnarray}
\delta \epsilon_{\rm cid} & = & \sqrt{\epsilon_{\rm cid}(1-\epsilon_{\rm cid})/n_{\rm rec}}~. \nonumber
\end{eqnarray}
Fig.~\ref{fig:cideff-avrg} shows the relative charge identification efficiency as a function of the input momentum for different $\cos\theta$ values in the peripheral region and side region 9. (Similar results apply for side region 10.)
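These efficiency definitions amount to simple binomial statistics; the following minimal Python sketch, with purely illustrative event counts, shows the computation.
\begin{verbatim}
# Efficiency and binomial error: eps = k/n, d(eps) = sqrt(eps(1-eps)/n).
import math

def efficiency(k, n):
    eps = k / n
    return eps, math.sqrt(eps * (1.0 - eps) / n)

N_total, n_rec, n_cid = 10000, 6800, 6600     # illustrative counts only
eps_rec, d_rec = efficiency(n_rec, N_total)   # reconstruction efficiency
eps_cid, d_cid = efficiency(n_cid, n_rec)     # relative cid efficiency
print(f"eps_rec = {eps_rec:.3f} +/- {d_rec:.4f}")
print(f"eps_cid = {eps_cid:.3f} +/- {d_cid:.4f}")
\end{verbatim}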
The muon undergoes multiple scattering while propagating through the detector; at small momenta, since the number of layers traversed is small, this may lead to an incorrectly reconstructed direction of bending, resulting in wrong charge identification. Hence the charge identification efficiency is relatively poor at lower energies, but it improves as the energy increases. At very high input momenta, the bending due to the magnetic field is smaller. For partially contained events, only the initial, relatively straight portion of the track is contained within the detector; this leads to a large momentum uncertainty as well as to mis-identification of the charge. Overall, the relative charge identification efficiency is marginally smaller than in the central region because of the smaller magnetic field.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{per_cid_eff15.eps} \includegraphics[width=0.48\textwidth]{cideff-side15.eps} \caption{Charge identification (cid) efficiency averaged over all $\phi$ bins as a function of the input momentum $P_{in}$ (GeV/c) for different zenith angles $\cos\theta$ in the peripheral (left) and side 9 (right) regions. For a discussion of the selection criteria, see the text.} \label{fig:cideff-avrg} \end{figure}

\subsection{Direction (up/down) Reconstruction}
The reconstructed zenith angle distributions for $P_{in}$ = 1 GeV/c at $\cos\theta$ = 0.35 and $\cos\theta$ = 0.85 in the peripheral region and side region 9 are shown in Figs.~\ref{fig:direction_per} and \ref{fig:direction_side9}, respectively.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{theta_per_1_cth35.eps} \includegraphics[width=0.48\textwidth]{theta_per_1_cth85.eps} \caption{Reconstructed zenith angle distributions for $P_{in}$ = 1 GeV/c at $\cos\theta$ = 0.35 (left) and 0.85 (right), respectively, in the peripheral region. Note that the y-axis scales are different for the two plots.} \label{fig:direction_per} \end{figure}

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{theta_side9_1_cth35.eps} \includegraphics[width=0.48\textwidth]{theta_side9_1_cth85.eps} \caption{Reconstructed zenith angle distributions for $P_{in}$ = 1 GeV/c at $\cos\theta$ = 0.35 (left) and 0.85 (right), respectively, in side region 9. Note that the y-axis scales are different for the two plots.} \label{fig:direction_side9} \end{figure}

From Fig.~\ref{fig:direction_per}, it is noticed that a few events are reconstructed in the downward direction (wrong direction), with $\theta_{rec} > \pi/2$. For $P_{in}$ = 1 GeV/c with $\cos\theta$ = 0.35 (0.85), this fraction is about 0.48 (0.89)\%, and it drops to a negligible value at higher energies for all $\cos\theta$. Similar results are obtained for side region 9, as can be seen from Fig.~\ref{fig:direction_side9}. This small fraction also contributes to wrong cid, since the relative bending in the magnetic field is measured with respect to the muon momentum direction. The direction determination depends on the time resolution, while the charge identification also depends on the strength of the magnetic field. A 1 GeV/c muon with $\cos\theta \sim 1$ traverses about 12 layers; this corresponds to a time difference between the first and last hits of about 4 ns. Since the RPCs have a time resolution of 1 ns, this explains why the fraction of muons whose direction is wrongly determined is small.
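The timing argument can be checked with one line of arithmetic: with a layer pitch of $5.6+4.0$ cm (iron plus air gap, Section 2), a relativistic muon crossing 12 layers spans about 4 ns.
\begin{verbatim}
# Time-of-flight check for a near-vertical 1 GeV/c muon (beta ~ 1).
c_cm_per_ns = 29.98            # speed of light in cm/ns
layer_pitch = 5.6 + 4.0        # cm of iron plus air gap per layer
n_layers = 12
print(n_layers * layer_pitch / c_cm_per_ns)   # ~3.8 ns >> 1 ns resolution
\end{verbatim}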
\subsection{Zenith Angle Resolution}
Those events which are successfully reconstructed (for all $\phi$) are analysed for their zenith angle resolution. The event distribution as a function of the reconstructed zenith angle $\theta_{rec}$ is shown in Fig.~\ref{fig:theta_histo} for a sample input $(P_{\rm in}, \cos\theta) = (5\hbox{ GeV/c}, 0.65)$ for the peripheral region and side region 9, respectively.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \begin{center} \includegraphics[width=0.48\textwidth,height=0.33\textwidth]{theta_rec_per_deg.eps} \includegraphics[width=0.48\textwidth,height=0.33\textwidth]{theta_rec_side9_deg.eps} \end{center} \caption{Reconstructed distribution $\theta_{rec}$ for input $(P_{\rm in}, \cos\theta) = (5\hbox{ GeV/c}, 0.65)$ in the peripheral region (left) and side region 9 (right). The selection criteria are the same as before.} \label{fig:theta_histo} \end{figure}

The angular resolution is good in both regions and is in fact better than about a degree for input momenta greater than a few GeV, as seen in Fig.~\ref{fig:theta}, with the resolution being marginally better in the side region. Similar results are obtained in side region 10 as well. In addition, the fraction of events reconstructed in the wrong direction (wrong quadrant of $\cos\theta$) is negligibly small, being less than 0.5\% for $P_{\rm in} \ge 2$ GeV/c.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \begin{center} \includegraphics[width=0.48\textwidth,height=0.33\textwidth]{theta_resol15_deg.eps} \includegraphics[width=0.48\textwidth,height=0.33\textwidth]{theta_resol15_side9_deg.eps} \end{center} \caption{Resolution, $\sigma_\theta$, of the reconstructed angle $\theta_{rec}$ as a function of the input momentum $P_{\rm in}$ (GeV/c) for different values of input $\cos\theta$ in the peripheral region (left) and side region 9 (right). The selection criteria are the same as before.} \label{fig:theta} \end{figure}

\subsection{Muon Momentum Response}
While the cid efficiency and zenith angle resolution are insensitive to the azimuthal angle $\phi$, for the reasons given above we analyse the muon momentum response in different $\phi$ bins. The response is shown in Fig.~\ref{fig:f5_65_4phi_nhit15_per} for the peripheral region, with the constraint $N_{hits}/\cos\theta > 15$ applied as usual to the partially contained events, for the sample input values $(P_{in}, \cos\theta) = (5 \hbox{ GeV/c}, 0.65)$. The histograms in $P_{rec}$ have been fitted with Gaussian functions. The widths of the distributions in the four bins differ, while the means remain similar. As expected, $\phi$ bin III (with most muons exiting the detector from the side) has the smallest number of reconstructed events and the worst resolution. Bin II has the best resolution, while bins I and IV have a similar response. This is in contrast to the response in the central region \cite{central}, where the reconstruction efficiencies were roughly equal in all $\phi$ bins.
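For readers who wish to reproduce the fit procedure, a minimal Python sketch is given below; the reconstructed momenta here are synthetic stand-ins drawn from a Gaussian, whereas the actual analysis fits the simulated $P_{rec}$ histograms. The resolution $R=\sigma/P_{\rm in}$ is defined formally in the next subsection.
\begin{verbatim}
# Sketch of the Gaussian fit to a P_rec sample and the resulting resolution.
import numpy as np
from scipy.stats import norm

P_in = 5.0                                          # GeV/c
rng = np.random.default_rng(0)
p_rec = rng.normal(loc=4.9, scale=0.6, size=5000)   # synthetic stand-in data

mu, sigma = norm.fit(p_rec)      # fitted mean and RMS width
R = sigma / P_in                 # momentum resolution, R = sigma / P_in
print(f"mean = {mu:.2f} GeV/c, sigma = {sigma:.2f} GeV/c, R = {R:.1%}")
\end{verbatim}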
\begin{figure}[bhp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15phi1.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15phi2.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15phi3.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15phi4.eps} \caption{Gaussian fits to the reconstructed momentum distributions $P_{rec}$ (GeV/c) for muons with fixed $(P_{in}, \cos\theta) = (5 \hbox{ GeV/c}, 0.65)$ in four different bins of azimuthal angle in the peripheral region. See the text for details on the bins and the selection criteria used.} \label{fig:f5_65_4phi_nhit15_per} \end{figure}

Similar histograms are shown in Fig.~\ref{fig:side9-f5_65_4phi} for side region 9. As discussed earlier, $\phi$ bin IV has both the worst reconstruction and the worst resolution, while bin I has the best. Unlike the peripheral case, where bins I and IV had a similar response, here bins II and III are not similar, because the side region is not symmetric between these two bins: muons in bin III are more prone to exit the detector, and hence the detector response is worse in both efficiency and quality of reconstruction. The results in region 10 are similar to those in region 9 with the interchange of bins I and IV, and of bins II and III, as can easily be understood from Fig.~\ref{fig:map}. However, overall the quality of reconstruction is better in region 10 by about 15\%, due to the nature of the forces in this region, as discussed earlier; see Fig.~\ref{fig:map}. We shall show results for side region 9 everywhere, as being the more conservative case, and simply remark on the similarities/differences to be expected in region 10.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15side9phi1.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15side9phi2.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15side9phi3.eps} \includegraphics[width=0.44\textwidth]{abstrkmmModMN150T65CT5GeVnhit15side9phi4.eps} \caption{Gaussian fits to the reconstructed momentum distributions $P_{rec}$ (GeV/c) for muons with fixed $(P_{in}, \cos\theta) = (5 \hbox{ GeV/c}, 0.65)$ in four different bins of azimuthal angle in side region 9. See the text for details on the bins and the selection criteria used.} \label{fig:side9-f5_65_4phi} \end{figure}

Fig.~\ref{fig:resol-nhits15-20} shows the momentum resolution as a function of $P_{in}$ in the peripheral region for the four $\phi$ bins, using the selection criterion $N_{hits}/\cos\theta > n_{0}$ with $n_0$ = 15, 20, for $\cos\theta$ = 0.65.

\begin{figure}[tbp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{res_nhits15_per_cth65.eps} \includegraphics[width=0.48\textwidth]{res_nhits20_per_cth65.eps} \caption{Muon resolution in the peripheral region as a function of the input momentum $P_{\rm in}$ (GeV/c) for $\cos\theta$ = 0.65 in different bins of $\phi$, with the $N_{hits}/\cos\theta > n_0$ cut, where $n_0$ = 15 (20) in the left (right) panel.} \label{fig:resol-nhits15-20} \end{figure}

\subsection{Momentum Resolution as a Function of $(\theta, \phi)$}
Gaussian fits to the reconstructed momentum distributions in these regions give the reconstructed mean and the RMS width $\sigma$.
The momentum resolution ($R$) is defined from these fits as
\begin{eqnarray}
R & = & \sigma /P_{\rm in}, \\ \nonumber
\hbox{with error } \delta R & = & \delta\sigma/P_{\rm in}~.
\end{eqnarray}
Fig.~\ref{fig:per-resol-reg} shows the variation of the resolution as a function of $P_{in}$ from 1 to 50 GeV/c for different values of $\cos\theta$ from 0.35 to 0.85 in the different $\phi$ bins of the peripheral region. In all bins, the momentum resolution improves with increasing energy up to about $P_{in} \sim 6$ GeV/c, as the number of hits increases, but worsens at higher momenta, since the particle then begins to exit the detector. This effect is considerable in $\phi$ bin III, which therefore has the worst resolution, while $\phi$ bin II has the best resolution, as expected from the earlier discussion. In general, the resolution improves for more vertical angles (larger $\cos\theta$), as the number of hits in a track increases. Fig.~\ref{fig:side9-resol-reg} shows similar results for side region 9. Again, it is observed that for all angles and energies, $\phi$ bin I has the best response, while the resolutions worsen in bins III and IV. The results in region 10 are similar to those in region 9 with the interchange of the responses in $\phi$ bins (I, IV) and (II, III), but with a few percent better resolution in all cases.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{per_res15_phi1.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{per_res15_phi2.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{per_res15_phi3.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{per_res15_phi4.eps} \caption{Muon resolution in the peripheral region as a function of the input momentum $P_{\rm in}$ (GeV/c) for different values of $\cos\theta$ in different bins of $\phi$.} \label{fig:per-resol-reg} \end{figure}

\begin{figure}[tbp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{resolside15_phi1.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{resolside15_phi2.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{resolside15_phi3.eps} \includegraphics[width=0.44\textwidth,height=0.27\textwidth]{resolside15_phi4.eps} \caption{Muon resolution in side region 9 as a function of the input momentum $P_{\rm in}$ (GeV/c) for different values of $\cos\theta$ in different bins of $\phi$.} \label{fig:side9-resol-reg} \end{figure}

The resolution for a given $P_{in}$ is marginally better in the side region than in the peripheral region, due to the somewhat larger and more uniform magnetic field. A detailed comparison of the response in the different regions is presented in the next section.

\section{Comparison of Muon Response in Different Regions of ICAL}
We compare the muon response in the peripheral and side regions with that in the central region, as presented in Ref.~\cite{central}. For all choices of selection criteria, the reconstruction and cid efficiencies in the central region are better than in either the peripheral or the side regions, as shown in Fig.~\ref{fig:comp-eff} for $\cos\theta=0.65$; however, for input momenta up to $P_{in} \sim 8$ GeV/c, the central and side region cid efficiencies are comparable.
Note that applying more stringent selection criteria in order to improve the momentum resolution in the peripheral and side regions (and hence the overall resolution of the detector) would further worsen the reconstruction efficiencies in these regions.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} \centering \includegraphics[width=0.48\textwidth]{comparisonrecoeff15_cen_per_side_65.eps} \includegraphics[width=0.48\textwidth]{comparisoncideff15_cen_per_side_65.eps} \caption{Comparison of the reconstruction (left) and cid (right) efficiencies of the central, peripheral and side regions as a function of $P_{in}$ (GeV/c) at $\cos\theta = 0.65$. Note that the y-axis scales are different for the two plots.} \label{fig:comp-eff} \end{figure}

In addition, the angular resolutions are very similar between the peripheral and side regions, as can be seen from Fig.~\ref{fig:theta}, and are in fact similar to those obtained earlier in the central region \cite{central}. The comparison of the $\phi$-averaged peripheral and side region momentum resolutions as a function of the input momentum $P_{in}$ from 1 to 15 GeV/c is shown in Fig.~\ref{fig:comp} for $\cos\theta = 0.45, 0.65, 0.85$. We also show the $\phi$-averaged central region results \cite{central} in the same plots. The criterion of a single reconstructed track was also applied to the central region, but no constraint was placed on $N_{hits}$. While the side region resolutions are only marginally better than those in the peripheral region, the central region gives the best resolution, as expected. However, we note that the results are $\phi$-averaged, so the resolutions can be much better in the peripheral and side regions depending on the $\phi$ bin chosen. The peripheral and side region resolutions can also be improved by tightening the selection criteria, at the cost of reconstruction efficiency. The resolutions in all regions are comparable at low momenta, $P_{\rm in} \le 3$ GeV/c, since almost all tracks are fully contained in this case.

\begin{figure}[htp] \renewcommand{\figurename}{Fig.} ~\hspace{-0.5cm} \includegraphics[width=0.35\textwidth, height=0.33\textwidth]{comparisonresol15_cen_per_side_45.eps} \hspace{-0.5cm} \includegraphics[width=0.35\textwidth, height=0.33\textwidth]{comparisonresol15_cen_per_side_65.eps} \hspace{-0.5cm} \includegraphics[width=0.35\textwidth, height=0.33\textwidth]{comparisonresol15_cen_per_side_85.eps} \caption{Comparison of the resolutions in the peripheral region and side region 9 as a function of the input momentum $P_{in}$ (GeV/c), along with the earlier results in the central region \cite{central}, for different values of $\cos\theta = 0.45, 0.65, 0.85$.} \label{fig:comp} \end{figure}

\section{Discussions and Conclusion}
The goal of the proposed ICAL detector is to study neutrino oscillations using atmospheric neutrinos. Since it is most sensitive to muons, the physics studies will focus on the charged-current scattering of $\nu_\mu$ ($\overline{\nu}_\mu$) in the detector; hence a simulations study of the response of ICAL to muons is crucial. The ICAL geometry was simulated using the GEANT4 software, and the detector response was studied for muons with momenta from 1 to 50 GeV/c, polar angles $\cos\theta \ge 0.35$, and smeared over all azimuthal angles, $-\pi \le \phi \le \pi$. In the current study, muons were generated in the peripheral and side regions of the ICAL detector, where the magnetic field is non-uniform in both magnitude and direction and where edge effects are important.
The study showed that a selection criterion on the number of hits, $N_{hits}/\cos\theta > n_0$, for partially contained tracks was crucial for removing poorly reconstructed events. The magnetic field and the detector geometry break the azimuthal symmetry; hence the muon response was analysed in different $\phi$ bins. Results using $N_{hits}/\cos\theta > 15$ show that the best momentum resolutions, of about 10--15\%, are obtained in bin II ($\pi/4 \le \phi < 3\pi/4$) at input momenta of $P_{\rm in} \ge 4$ GeV/c in the peripheral region, and in bins I and II ($\vert \phi \vert \le \pi/4$ and $\pi/4 < \vert \phi \vert \le \pi/2$) in side region 9 (see Figs.~\ref{fig:map} and \ref{fig:phi_choice} for the definitions of these regions). In addition, $\phi$-averaged results obtained with $N_{hits}/\cos\theta > 15$ for the reconstruction efficiency, charge identification efficiency and momentum resolution are shown in Figs.~\ref{fig:comp-eff} and \ref{fig:comp} for the peripheral and side regions of the ICAL detector, in comparison with the earlier results in the central region \cite{central}. A reconstruction efficiency of about 60--70\% and correct charge identification for about 97\% of the reconstructed muons were obtained for $P_{\rm in} \ge 4$ GeV/c; the charge identification efficiency decreases to about 90\% at higher momenta, $P_{\rm in} \sim 50$ GeV/c, in both regions. The average (over $\phi$) resolutions obtained are between 15--25\% over $P_{\rm in} = 1$--15 GeV/c in the peripheral region, and marginally better in the side region, with the central region response being the best.

Note that these responses are relevant for studies such as the precision measurement of the neutrino oscillation parameters or the mass hierarchy determination with ICAL. For physics studies such as rock muons or cosmic-ray muons, the response in only certain $\phi$ bins is relevant, since the muons in these cases always enter the detector from outside; for this reason, the performance will be better than the averages shown here. In contrast, a good angular resolution of better than a degree for $P_{\rm in} \ge 4$ GeV/c is obtained in the peripheral and side regions, which is comparable to that in the central region.

The simulations indicate that the detector has a good response to muons, with momentum reconstruction at 15--24\% resolution, direction reconstruction to about a degree for muon energies greater than 4 GeV, and charge identification for about 97\% of the reconstructed muons. While fully contained events are reconstructed with the same efficiency as in the central region, only those partially contained events which satisfy $N_{hits}/\cos\theta > n_0$, with $n_0 \sim 15$, are well reconstructed in the simulations. This implies a loss of reconstruction efficiency due to this criterion. However, the number of events reconstructed in these regions, which is expected to be about 50\% from naive considerations of the detector geometry, is about 60--70\%, due to the effect of the magnetic field, which increases the reconstruction efficiency in the peripheral region.

\paragraph{Acknowledgements}: We thank Naba K Mondal for suggestions and support during this work. We also thank the INO simulations group for their comments and suggestions on the results; Gobinda Majumder and Asmita Redij for code-related discussions; and Shiba Behera for discussions on the magnetic field map. R. Kanishka acknowledges UGC/DST (Govt. of India) for financial support.
\section{Introduction}
In general, the exposure of an image can be adjusted in two different ways. The first is to change the shutter speed while the aperture is fixed, controlling the amount of light falling on the sensor. The other is to keep the shutter speed unchanged while adjusting the aperture size. While the former may cause motion blur if there is any object motion, the latter results in a shallow depth of field (DoF), causing defocus blur to occur in scene regions outside the DoF~\cite{abuolaim2020defocus}. Removing the defocus blur is critical, as it allows us to obtain an image that is captured using a wide aperture but still has everything in focus, ensuring a well-exposed and sufficiently sharp image.

In theory, defocus blur is the result of convolving a sharp region with a spatial point spread function (PSF), so that neighbouring pixels contribute to each blurry pixel~\cite{tang}. As a result, using the dual-pixel data alone may not be sufficient to faithfully recover the original sharp pixels. However, we believe that by employing the large receptive fields provided by stacking convolution layers with max-pooling, a neural network will be able to produce non-blurred outputs given the dual-pixel inputs. Recently, Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} trained a U-Net-like model which takes the two images as input and produces a defocus-blur-free output. U-Net is an encoder-decoder framework which considers all pixels and channels equally, which we think is not suitable, as the blurry pixels are distributed differently in each channel and at each pixel location.

In this work, to intentionally exploit different features of the two input images, we propose an attention-based deep convolutional neural network (CNN) to remove the defocus blur artifact, built upon an encoder-decoder architecture with dual attention modules. As mentioned above, we notice that every pixel and channel of the input images should be considered appropriately, making them contribute to the final output at different levels. As a result, we redesigned the encoder to extract the useful information by adding the dual-attention module to the classical encoder module. Furthermore, the features extracted by the attention encoders are concatenated and put through the triple-local and global non-local modules before being decoded by the decoder modules to generate the sharp output image. We demonstrate the effectiveness of the proposed network through the \textit{NTIRE2021 Defocus Deblurring Challenge~\cite{ntire}}. Using the data provided by the competition, we trained the network and finally achieved an average \textit{PSNR} of $26.4243$ \textit{dB}, standing at the \nth{9} position in the competition.

\section{Related Works}
\subsection{Defocus Deblurring}
While there are many previous works in the deblurring field, the methods that estimate a defocus map and then deblur are the closest to ours, as they all aim to produce a sharp, deblurred output. The most common approach to the defocus deblurring task is to first estimate the blur kernel and then use that kernel as guidance for deblurring. To find the blur kernel, Park \textit{et al.}~\cite{park} fed deep blur features extracted by a pretrained blur classification network, together with hand-crafted features, to a regression network that estimates the amount of blur at edge pixels, which is later used to deblur the image.
Karaali \textit{et al.}~\cite{karaali} extracted the difference between the gradients of the blurry image and the original one. Most recently, Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} introduced a deep learning model, consisting of encoder and decoder modules, that uses the dual-pixel data to directly solve for the defocus blur in a single step, without estimating the defocus map.

\subsection{Attention Modules in Deep Learning}
Recently, attention mechanisms have shown their effectiveness in many computer vision fields, including image restoration, thanks to their ability to help deep neural networks determine where to focus and to improve the representation of interest~\cite{an}. Observing that each pixel and each image channel should be considered separately, we add the dual-attention~\cite{dual} module to the conventional encoder in our proposed network to tackle the defocus deblurring challenge. By incorporating attention mechanisms into the conventional encoder modules, every pixel and channel is dealt with separately, making sure each contributes its useful information at the appropriate level before being merged and decoded. With this mechanism, the proposed architecture yields high-quality results from both the qualitative and quantitative perspectives.

\subsection{Defocus Blur Dataset}
There are several available datasets for the defocus deblurring task. Salvado \textit{et al.}~\cite{salvado} proposed the RTF dataset, which has 22 pairs of blurred images and their corresponding in-focus images. The CUHK~\cite{shi} and DUT~\cite{zhao} datasets provide real blurred images with corresponding binary masks representing the blur/sharp regions; these datasets are more suitable for the blur detection task. Recently, Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} proposed a large dataset with 500 different pairs of non-overlapping scenes. Using the dual pixels, the dataset is extended to 2000 images, comprising the blurred images and the corresponding sharp images. This dataset is used for the \textit{NTIRE2021 Defocus Deblurring Challenge~\cite{ntire}}.

\begin{figure*} \begin{center} \includegraphics[scale=0.43]{images/overall} \end{center} \caption{The overall architecture of the proposed ATTSF.} \label{fig:overall} \end{figure*}

\section{Attention! Stay Focus! (ATTSF)}
We design an attention encoder-decoder network to effectively synthesize the blurry input images and generate a high-quality, blur-free output. Figure~\ref{fig:overall} shows the architecture of the proposed network, which takes two images (the left and right blurry images) as input and reconstructs a sharp image. In detail, our proposed ATTSF consists of several attention encoders, triple-local and global-local blocks, and decoder modules. The attention encoders are used to extract useful features from the blurry input images. The features generated by these encoders are concatenated and transferred in parallel to the triple-local and global non-local modules, and finally decoded to obtain the sharp output image. To ensure that the output image retains the useful features of the input images, we use skip connections between the output features of the encoders and the decoders at every level.

\subsection{Attention Encoder (ATTE)}
The conventional encoder usually consists of several convolution layers followed by activation layers and pooling layers. Encoder modules are good at extracting the high-level features of the input image.
However, all pixels and channels are then treated equally, which, in our opinion, is not adequate for this defocus deblurring challenge. We observe that in the defocus deblurring challenge the blur level is not equally distributed, either across image channels or across pixel locations. As a result, we employ the dual-attention mechanism, which is composed of channel attention and pixel (or position) attention. Figure~\ref{fig:dual} shows the architecture of the dual-attention module. To be specific, the input of each ATTE block first passes through the dual-attention module, which extracts the high-level features selectively. The dual-attention module is followed by a couple of convolution layers with ReLU~\cite{relu} activation functions and max-pooling layers, just as in a conventional encoder.

\begin{figure*} \begin{center} \includegraphics[scale=0.45]{images/dualatt} \end{center} \caption{Dual Attention Module. GAP, GMP are Global Average Pooling and Global Max Pooling, respectively. $\times$ denotes channel-wise multiplication, and $C$ denotes the concatenate operation.} \label{fig:dual} \end{figure*}

\textbf{Channel Attention} As the input of the network is a dual-pixel pair, each image should contribute a different kind of information to the final output image. Based on this observation, we employ the dual-attention module, which includes channel attention and pixel attention. Channel attention selectively extracts features across the channel dimension by computing a channel attention map from the input feature. It applies a convolution layer followed by a sigmoid function, which ensures that the attention map ranges from 0.0 to 1.0 and represents the amount of information each feature channel contributes to the output. By masking the input feature with the computed attention map, we obtain an output feature that collects the useful information from each channel of the input.

\textbf{Pixel Attention} In addition to channel attention, pixel attention (a.k.a.\ position attention) is also crucial for this task. While channel attention operates in the channel domain, pixel attention attends to every pixel of the input feature. The module applies global average pooling (GAP) and global max pooling (GMP) in parallel to the input feature. The GAP and GMP features are then concatenated and followed by a $1\times1$ convolution with a sigmoid activation function to generate a pixel attention mask. The attention mask is then multiplied with every input channel, yielding the output feature.

\textbf{Dual Attention} Given the pixel attention and channel attention blocks, the dual attention module takes the input feature and applies two $3\times3$ convolution layers, each followed by a ReLU~\cite{relu} activation function. The resulting feature is passed through pixel attention and channel attention simultaneously, and the two outputs are concatenated along the channel axis. Finally, a $1\times1$ convolution maps the concatenated feature back to the dimension of the input feature, as shown in Figure~\ref{fig:dual}.

\textbf{Attention Encoder} As mentioned before, the proposed attention encoder is built on the conventional encoder scheme by adding the dual attention module on top of it. Specifically, the input feature first goes through the dual attention module and then through the encoder part, which is composed of several $3\times3$ convolution layers and ReLU activation functions~\cite{relu}.
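To make the module concrete, a minimal TensorFlow sketch of the dual-attention block is given below. The filter counts, the interpretation of GAP/GMP as pooling along the channel axis in the pixel branch, and the per-channel convolutional gate in the channel branch are our assumptions for illustration; this is a sketch of the described design, not the exact training code.

\begin{verbatim}
import tensorflow as tf
L = tf.keras.layers

def channel_attention(x):
    # Per-channel gate: a convolution followed by a sigmoid (values in [0, 1]).
    gate = L.Conv2D(x.shape[-1], 1, activation='sigmoid')(x)
    return x * gate

def pixel_attention(x):
    # Average/max descriptors pooled along the channel axis (assumption),
    # concatenated and turned into one spatial mask by a 1x1 convolution.
    avg = tf.reduce_mean(x, axis=-1, keepdims=True)
    mx  = tf.reduce_max(x, axis=-1, keepdims=True)
    mask = L.Conv2D(1, 1, activation='sigmoid')(tf.concat([avg, mx], -1))
    return x * mask  # the same mask multiplies every channel

def dual_attention(x, filters=64):
    # Two 3x3 conv + ReLU layers, then the two attention branches in parallel.
    f = L.Conv2D(filters, 3, padding='same', activation='relu')(x)
    f = L.Conv2D(filters, 3, padding='same', activation='relu')(f)
    merged = tf.concat([pixel_attention(f), channel_attention(f)], -1)
    # A 1x1 convolution restores the input feature dimension.
    return L.Conv2D(x.shape[-1], 1)(merged)

features = dual_attention(tf.random.normal([1, 64, 64, 32]))
\end{verbatim}

Because the block preserves the feature dimension, it can simply be prepended to each encoder stage of the overall architecture.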
\begin{figure*} \begin{center} \includegraphics[scale=0.45]{images/triplelocal} \end{center} \caption{Triple Local Module} \label{fig:triple} \end{figure*}

\subsection{Triple Local}
Figure~\ref{fig:triple} shows the architecture of the triple-local module. It is inspired by the inception modules, which use multiple convolution kernels of different sizes in order to extract features at different levels. The small filters extract local details of the features, while the large filters cover larger regions of the receiving layers. All the features are concatenated channel-wise and compressed through a convolutional layer.

\begin{figure*} \begin{center} \includegraphics[scale=0.55]{images/globallocal} \end{center} \caption{Global Local Module} \label{fig:short} \end{figure*}

\subsection{Global Local}
Figure~\ref{fig:short} illustrates the architecture of the global-local module. Convolutions capture local features. In this task, although the local features are essential, we do not want to lose the global information, as it keeps the whole image spatially consistent. Here we employ the idea from~\cite{wang} and~\cite{buades}, which computes the correlation between two input signals over the whole image. The global-local module covers large receptive fields, so the network can ensure spatial consistency and avoid hallucination.

\subsection{Implementation Details}
In our implementation, each convolution in the encoder modules is followed by a rectified linear unit (ReLU) activation function~\cite{relu}, while each layer in the decoder modules is followed by a leaky rectified linear unit (Leaky ReLU)~\cite{leakyrelu}. The reason for using Leaky ReLU instead of ReLU is to avoid clipping the lower range of the hidden layers' outputs, which may lead to unwanted artifacts in the reconstructed image. Every layer is initialized following the \textit{He normal} scheme~\cite{henormal}, and all convolution kernels in the encoders and decoders are $3\times3$. In each training batch, we apply several augmentation techniques such as random rotation and horizontal and vertical flipping. All input images are normalized to the range 0.0 to 1.0. We first trained the network using the Adam optimizer~\cite{adam} with a learning rate of \num{1e-4} and a batch size of 4 for 200 epochs. We then switch to the loss function shown in Eq.~\ref{eqn:loss} and train the model with the SGD optimizer for 100 more epochs, with a batch size of 2, a learning rate of \num{5e-5}, and $\alpha=1$, $\beta=0.5$. We also apply a learning rate scheduler that halves the learning rate every 20 epochs. The model is implemented in Tensorflow~\cite{tf} with the help of the Tensorflow Addons package. We found experimentally that fine-tuning the network with the loss function of Eq.~\ref{eqn:loss}, which puts more weight on \textit{SSIM}, yields not only a high \textit{PSNR} but also a high \textit{SSIM} value. By achieving a high \textit{SSIM} score, the final predicted images are more similar to the ground truths, i.e.\ closer to real sharp images. We used the training dataset provided by the \textit{NTIRE2021 Defocus Deblurring Challenge}~\cite{ntire}. The dataset is divided into three parts: training, validation, and testing. We used the training set for training, and the validation and test sets for validating and testing the model, respectively.
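For concreteness, the fine-tuning objective of Eq.~\ref{eqn:loss} can be sketched as follows. This is a minimal TensorFlow sketch; interpreting the \textit{SSIM} loss as $1-\mathrm{SSIM}$ is our assumption.

\begin{verbatim}
import tensorflow as tf

ALPHA, BETA = 1.0, 0.5  # weights used in the fine-tuning stage

def attsf_loss(y_true, y_pred):
    # Images are normalized to [0, 1]; 1 - SSIM turns the structural
    # similarity into a loss term (assumed definition of SSIMLoss).
    ssim_loss = 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    mae_loss  = tf.reduce_mean(tf.abs(y_true - y_pred))
    return ALPHA * ssim_loss + BETA * mae_loss
\end{verbatim}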
Because of memory limitations, we did not use the original images for training; instead, we cropped patches of size $560\times560$ from the training and validation sets by sliding over the images with a stride of $140\times140$. As a result, we end up with more than $2000$ patches for training and $500$ patches for validation. For testing, we keep the original image sizes. Training took approximately three days on a computer with an Intel® Core™ i7 CPU, 32GB of RAM, and an Nvidia V100 GPU. After training and fine-tuning, we use the original test set provided by the competition. The model takes about $0.5$ seconds per image, which is reasonably fast.

\begin{equation} \label{eqn:loss} Loss = \alpha\times SSIMLoss + \beta\times MAELoss \end{equation}

\section{Experimental Results}
\subsection{Quantitative and Qualitative Evaluation}

\begin{table} \begin{center} \begin{tabular}{|l|c|c|c|} \hline Method & \textit{PSNR} & \textit{SSIM} & \textit{MAE} \\ \hline\hline Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} & 25.13 & 78.59 & 0.0406 \\ Ours &\textbf{25.98} & \textbf{81.15} & \textbf{0.0377} \\ \hline \end{tabular} \end{center} \caption{\textit{PSNR}, \textit{SSIM} and \textit{MAE} values of our proposed algorithm, compared with Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}. Bold values indicate the better results. Our network clearly outperforms the state of the art.} \label{table:kysymys} \end{table}

We evaluate the performance of the proposed network using the test sets provided by the \textit{NTIRE2021 Defocus Deblurring Challenge}~\cite{ntire}. The competition provides two different test sets. One of them includes ground truth images, which we use to compare quantitative metrics with recent state-of-the-art methods; the other does not include ground truths, so we use it only for visual comparison, as shown in Figure~\ref{fig:three}. Table~\ref{table:kysymys} compares the \textit{PSNR}, \textit{MAE} and \textit{SSIM} computed on the former test set. We compare only with the method of Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}, since, to our knowledge, it is the only method that addresses the defocus deblurring problem with a deep learning model and is therefore the closest to our work. The proposed method outperforms the state-of-the-art algorithm in these quantitative metrics, as the features of the input images are extracted attentively and contribute to producing sharp output images.
\begin{figure*} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2417_i} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2417_g} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2417_t} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2417_o} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2417_i_rs} \caption{Input} \label{fig:gull} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2417_g_rs} \caption{Ground Truth} \label{fig:gull2} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2417_t_rs} \caption{Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}} \label{fig:tiger} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2417_o_rs} \caption{Proposed} \label{fig:mouse} \end{subfigure} \caption{Visual comparison of the proposed algorithm with Abuolaim \textit{et al.}'s algorithm~\cite{abuolaim2020defocus}. The proposed algorithm outperforms Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}, faithfully recovering the wall region and bringing it close to the ground truth image.}\label{fig:one} \end{figure*} \begin{figure*} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2350_i} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2350_g} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2350_t} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{rs/1P0A2350_o} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2350_i_rs} \caption{Input} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2350_g_rs} \caption{Ground Truth} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2350_t_rs} \caption{Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[width=\linewidth]{cr/1P0A2350_o_rs} \caption{Proposed} \end{subfigure} \caption{Visual comparison of the proposed algorithm with Abuolaim \textit{et al.}'s algorithm~\cite{abuolaim2020defocus}.
The proposed algorithm successfully deblurs the blurry regions, such as the door and the light on the ceiling.}\label{fig:two} \end{figure*} \begin{figure*} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0542_i} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0542_t} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0542_o} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0542_i_rs} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0542_t_rs} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0542_o_rs} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0518_i} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0518_t} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{rs/0518_o} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0518_i_rs} \caption{Input} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0518_t_rs} \caption{Abuolaim \textit{et al.}~\cite{abuolaim2020defocus}} \end{subfigure}% \hspace{\fill} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\linewidth]{cr/0518_o_rs} \caption{Proposed} \label{fig:visual} \end{subfigure}% \hspace{\fill} \caption{Visual comparison of the proposed algorithm with Abuolaim \textit{et al.}'s algorithm~\cite{abuolaim2020defocus}. Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} fails to produce a blur-free output, while the proposed algorithm faithfully removes the blur artifacts and generates sharp images.}\label{fig:three} \end{figure*}

Figure~\ref{fig:one} and Figure~\ref{fig:two} visually compare the defocus deblurring results on the test set for which ground truths are provided. Abuolaim \textit{et al.}~\cite{abuolaim2020defocus} still yields blur artifacts, while ATTSF preserves sharp edges and fine details more faithfully. We also verify the effectiveness of our proposed method using the second test set provided by the competition. As this set does not contain ground truth images, we can only compare the results visually. The results shown in Figure~\ref{fig:three} again demonstrate that our proposed method successfully recovers the blurred regions and outperforms the state-of-the-art algorithm. Although there were no ground truths with which to evaluate the results quantitatively, our \textit{PSNR} on this test set was reported to be $26.4243$~\textit{dB}, the \nth{9} position in the competition.

\section{Conclusion}
In this work, we proposed an attention deep learning network that extends the original encoder-decoder architecture by adding dual-attention modules before every encoder block to attentively extract the features of each blurry input image. Furthermore, at the bottleneck we added the triple-local and global-local modules in parallel to efficiently extract local features at different levels while keeping the global context of the input images. These features are concatenated with the encoded features at every level and decoded by the decoder modules to finally restore the sharp output image.
We demonstrated the effectiveness of the proposed defocus deblurring architecture through the \textit{NTIRE2021 Defocus Deblurring Challenge}~\cite{ntire}. \bibliographystyle{ieee_fullname}
\section{Introduction} Cosmic inflation is the most promising solution to many puzzles surrounding the big bang and offers a mechanism to generate cosmological perturbations from primordial quantum fluctuations. The approximate scale invariance of the corresponding inflationary power spectrum reported by the Planck collaboration \cite{Akrami2020} may also be hinting towards an underlying theory that is scale invariant before dynamical symmetry breaking, a possibility that we will embrace as others have done in the past \cite{Kannike_2015,Farzinnia_2016,Karam2019,Kubo2021,GarciaBellido:2011de,Rinaldi:2015uvu, Ferreira:2016wem, Benisty:2018fja, Barnaveli:2018dxo, Ghilencea:2018thl, Kubo:2018kho, Ishida:2019wkd,Kannike:2014mia,Barrie:2016rnv,Vicentini:2019etr,Gialamas:2020snr,Aoki:2021skm}. Many successful models of inflation are constructed around $f(R)$ gravitational sectors which may generically contain more than one power of the Ricci scalar, e.g.\ the Starobinsky model \cite{Starobinsky:1980te}, where the additional degree of freedom (DOF) due to the $R^2$ term plays the role of the inflaton. More general higher order terms built from the other independent contractions of the Riemann tensor are rarely included in the action; however, these terms are necessarily generated by quantum effects even if they are not included at the classical level from the start \cite{tHooft:2011aa}. While it is usually accepted that such terms contribute to inflation only negligibly at the classical level \cite{Baumann:2015xxa,Salvio:2017xul}, it is not necessarily true that quantum corrections arising from the higher order contractions of the Riemann tensor are also negligible. Indeed, we find that the massive spin-2 ghost that originates from the Weyl tensor squared term ($C^2$) is particularly important as it can generate an inflationary potential that dynamically induces the Planck scale via radiative corrections \`{a} la Coleman-Weinberg\footnote{This fact also implies that modifications to the inflaton potential may be recognized at the level of the beta functions of the quartic couplings (see e.g.\ the beta functions in \cite{Salvio2014}).} \cite{Coleman1973a}. It should be noted that the scalaron degree of freedom originating from the $R^2$ term alone is not sufficient for triggering radiative symmetry breaking in the Jordan frame. In this respect, our considerations will allow for the construction of the most minimal scale invariant model that yields a dynamical generation of the Planck scale and inflationary potential, as no additional bosonic degrees of freedom besides the inflaton scalar and the metric degrees of freedom are necessary. It is well-known that the massive ghost DOFs that appear when one considers the $C^2$ term in the action threaten the unitarity of the resulting quantum theory. This quantum version of the Ostrogradsky instability, usually referred to as the ``ghost problem'', is a subtle and complicated topic that we will not address in this work. Rather, we refer the curious reader to a few of the most interesting attempts to solve this problem, namely, the work of Donoghue and Menezes that centers around the decay of the massive ghost \cite{Donoghue2019,Donoghue2021}, as well as the interesting possibility of $\mathcal{P}\mathcal{T}$ quantization championed by Bender and Mannheim \cite{Bender2007,Bender2008}.
Though the details of these works are beyond the scope of the current paper, one important detail they share is that the massive spin-2 ghost is considered a genuine physical particle and is not merely some calculational relic as one might consider Faddeev-Popov ghosts, for example. As such, if one includes the $C^2$ term in the action, the physical effects of the massive ghost on inflationary predictions should not be neglected as they traditionally are. We begin our investigations in the next section by establishing the full non-linear action, extracting the gravitational degrees of freedom, and deriving the propagators for said DOFs. We then calculate the Coleman-Weinberg one-loop effective potential, including contributions from the scalars and spin-2 ghost, which allows us to identify the dynamically generated Planck scale and establish the inflationary potential after transforming to the Einstein frame. Finally, we perform a numerical analysis of the predicted inflationary parameters and end with a discussion of the results. \section{The model} We consider the following general action describing globally scale invariant quadratic gravity non-minimally coupled to a single additional matter scalar $S(x)$, \begin{gather} S_\text{T} = S_\text{QG} + S_\text{S} \label{ST} \,, \\[0.3em] S_\text{QG} = \int\dd^4x\sqrt{-g}\Big(\gamma R^2 - \kappa C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}\Big) \label{SQG} \,, \\[0.3em] S_\text{S} = \int\dd^4x\sqrt{-g}\bigg(\frac{1}{2}\nabla_\mu S\nabla^\mu S - \frac{\beta}{2}S^2R - \frac{\lambda}{4}S^4\bigg) \,, \label{SS} \end{gather} where $\gamma$, $\kappa$, $\beta$, and $\lambda$ are arbitrary dimensionless constants. As is standard practice in studies of quadratic gravity, the gravitational part of this action is parameterized in terms of the sum of squares of the Ricci scalar and Weyl tensor, which is equivalent to a general combination of the three independent contractions of the Riemann tensor after neglecting total derivatives \cite{Salvio2018}. The complete action (\ref{ST}) is invariant under infinitesimal local diffeomorphisms as well as the global scale transformations \begin{align} &g_{\mu\nu} \enskip\rightarrow\enskip \omega^2g_{\mu\nu} \,, &&S \enskip\rightarrow\enskip \omega^{-1}S \,, \end{align} where $\omega$ is a constant. The presence of this global symmetry is of particular interest because, as laid out in \cite{Kubo2021}, the scalar $S$ may form a condensate $\langle S \rangle = v_S$ that leads to the spontaneous breakdown of scale invariance and the subsequent generation of an Einstein-Hilbert term and identification of the Planck mass $M_\text{Pl} \propto v_S$. Since we are interested in the effects of gravitational DOFs on inflation, we separate out the dynamical part of the metric by linearizing the action around flat space with \begin{align} g_{\mu\nu} \enskip\rightarrow\enskip \eta_{\mu\nu} + h_{\mu\nu} \,, \end{align} where $\eta_{\mu\nu}$ is the Minkowski metric and $h_{\mu\nu}(x)$ is a small perturbation. 
After performing this linearization up to second order in the graviton $h_{\mu\nu}$, integrating by parts, and dropping interaction terms, we arrive at the total action \begin{align} S_\text{T}^{\text{(lin)}} = &\int\dd^4x\bigg[\frac{\beta}{8}S_\text{cl}^2\Big(h^{\mu\nu}\Box h_{\mu\nu} + 2h^{\mu\nu}\partial_\nu\partial^\rho h_{\mu\rho} - \hs{\mu}\Box\hs{\nu} - 2\hs{\mu}\partial_\nu\partial_\rho h^{\nu\rho}\Big) - \frac{\lambda}{4}S_\text{cl}^4 \nonumber \\[0.3em] &+ \gamma\Big(h^{\mu\nu}\partial_\mu\partial_\nu\partial_\rho\partial_\sigma h^{\rho\sigma} + \hs{\mu}\Box^2\hs{\nu} + 2\hs{\mu}\Box\partial_\nu\partial_\rho h^{\nu\rho}\Big) \nonumber \\[0.3em] &+\frac{\kappa}{6}\Big(\!\!-3h^{\mu\nu}\Box^2h_{\mu\nu} - 6h^{\mu\nu}\Box\partial_\nu\partial^\rho h_{\mu\rho} - 2h^{\mu\nu}\partial_\mu\partial_\nu\partial_\rho\partial_\sigma h^{\rho\sigma} \nonumber \\[0.3em] &+ \hs{\mu}\Box^2\hs{\nu} + 2\hs{\mu}\Box\partial_\nu\partial_\rho h^{\nu\rho}\Big)\bigg] \,, \end{align} where $\Box=-\partial_\mu\partial^\mu$. Here we have also set $S$ to its classical (approximately constant) background value $S_\text{cl}$ since quantum fluctuations, in the standard Coleman-Weinberg sense, around $S_\text{cl}$ make only negligible contributions to the inflationary potential at one-loop order \cite{Kubo2021}. We may further separate the gravitational DOFs according to their spin by performing a York decomposition in terms of transverse-traceless tensor modes $\tilde{h}_{\mu\nu}(x)$, transverse vector modes $V_\mu(x)$, the scalar trace $\hs{\mu}(x)$, and an additional scalar mode $a(x)$ as \begin{gather} \label{York} h_{\mu\nu} = \tilde{h}_{\mu\nu} + \partial_\mu V_\nu + \partial_\nu V_\mu + \left(\partial_\mu\partial_\nu - \frac{1}{4}\eta_{\mu\nu}\Box\right)a + \frac{1}{4}\eta_{\mu\nu}\hs{\rho} \,, \end{gather} where $\partial^\mu\tilde{h}_{\mu\nu}=\hts{\mu}=0$ and $\partial_\mu V^\mu=0$ \cite{Antoniadis1991}. It is also instructive to redefine the graviton trace in terms of the gauge-invariant scalar quantity $\phi(x)$, \begin{align} \label{scalaron} \phi = \hs{\mu} - \Box\,a \,, \end{align} which may be identified as the well-known ``scalaron'' degree of freedom \cite{Alvarez-Gaume2016}. After applying these definitions, all of the quadratic terms containing $V_\mu$ and $a$ cancel out, leaving us with the simple action below. \begin{align} \label{SYork} S_\text{Y} = \int\dd^4x\left[\phi\bigg(\frac{9\gamma}{16}\Box^2 - \frac{3\beta}{64}S_\text{cl}^2\Box\bigg)\phi - \tilde{h}_{\mu\nu}\left(\frac{\kappa}{2}\Box^2 - \frac{\beta}{8}S_\text{cl}^2\Box\right)\tilde{h}^{\mu\nu} - \frac{\lambda}{4}S_\text{cl}^4\right] \end{align} In this York-decomposed form it is straightforward to calculate the propagators and mass terms for each of the gravitational degrees of freedom.
To do so, we perform a Fourier transform to identify the inverse propagators as the Hessians of (\ref{SYork}) with respect to each field, which may then be inverted to yield the propagators \begin{align} &i\bra{0}\mathcal{T}\phi\phi\ket{0} = \frac{32}{3\beta S_\text{cl}^2}\bigg(\!\!-\frac{1}{p^2} + \frac{1}{p^2 - m_\phi^2}\bigg) \,, \label{propphi} \\[0.3em] &i\bra{0}\mathcal{T}\tilde{h}_{\mu\nu}\tilde{h}_{\rho\sigma}\ket{0} = \frac{4}{\beta S_\text{cl}^2}\bigg(\frac{1}{p^2} - \frac{1}{p^2 - m_\text{gh}^2}\bigg)\delta_{\mu\nu\rho\sigma} \label{proph} \,, \end{align} where $p^2 = p_\mu p^\mu$, $\delta_{\mu\nu\rho\sigma} = \frac{1}{2}(\eta_{\mu\rho}\eta_{\nu\sigma} + \eta_{\mu\sigma}\eta_{\nu\rho})$, and the masses are given by \begin{align} \label{massdefs} &m_\phi^2 = \frac{\beta}{12\gamma}S_\text{cl}^2 \,, &&m_\text{gh}^2 = \frac{\beta}{4\kappa}S_\text{cl}^2 \,. \end{align} \section{Inflation} \subsection{The effective potential} After neglecting classical background contributions from the Weyl tensor squared term, the effective action for inflation may be written as \begin{align} \label{Seff} S_{\text{eff}} = \int\dd^4x\sqrt{-g}\bigg(\frac{1}{2}S \Box S - \frac{\beta}{2}S^2R + \gamma R^2 - U_{\text{eff}}(S)\bigg) \,, \end{align} where $U_{\text{eff}}$ is the quantum effective one-loop potential. This term receives contributions from the massive spin-0 and spin-2 sectors, each of which may be calculated using standard Coleman-Weinberg (CW) methods \cite{Coleman1973a}. The $\phi$ contribution to the CW potential is calculated by expanding the action (\ref{SYork}) around the field's classical background as $\phi=\phi_{\text{cl}} + \delta\phi$ and integrating out the fluctuations $\delta\phi$. The part of the functional integral that is quadratic in $\delta\phi$ is Gaussian, leading to an effective potential that is proportional to \begin{align} \ln\bigg[\text{det}\bigg(\frac{\partial^2 S_\text{Y}}{\partial\delta\phi\partial\delta\phi}\bigg)\bigg] = \text{Tr}\Big[\ln\Big(\Box - m_\phi^2\Big)\Big] + \cdots \,, \end{align} where the ``$\cdots$'' stand for irrelevant constant terms that are independent of $S$. This trace may be written as a sum of the momentum space eigenvalues of the operator $\ln(\Box - m_\phi^2)$ and evaluated using dimensional regularization to give the scalaron's one-loop contribution to the effective potential. \begin{align} U_\phi(S) &= \int\frac{\dd^4p}{(2\pi)^4}\ln\bigg(\frac{p^2 - m_\phi^2}{p^2}\bigg) = \frac{1}{64\pi^2}m_\phi^4\bigg[\ln\bigg(\frac{m_\phi^2}{\mu^2}\bigg) - \frac{3}{2}\bigg] \end{align} Here, we have employed $\overline{\text{MS}}$, introducing the renormalization scale $\mu$ in the process, and we have absorbed the divergent terms into the renormalized constant $\lambda$. The same calculation for the $S$ contributions, which has already been carried out in \cite{Kubo2021}, yields the analogous result \begin{align} &U_S(S) = \frac{\lambda}{4}S^4 + \frac{1}{64\pi^2}m_S^4\bigg[\ln\bigg(\frac{m_S^2}{\mu^2}\bigg) - \frac{3}{2}\bigg] \,, &&m^2_S = 3\lambda S^2 \,, \end{align} where the tree-level contributions have also been included. Calculation of the spin-2 part follows in much the same way as the spin-0, with the non-zero contributions coming from the term $\tilde{h}^{\mu\nu}\delta_{\mu\nu\rho\sigma}\big(\Box - m_\text{gh}^2\big)\tilde{h}^{\rho\sigma}$, i.e.\ only from the massive part of the inverse propagator.
However, when going to momentum space, we must take advantage of the transverse-traceless nature of $\tilde{h}_{\mu\nu}$ to write \begin{align} \tilde{h}^{\mu\nu}\delta_{\mu\nu\rho\sigma}\tilde{h}^{\rho\sigma} = \tilde{h}^{\mu\nu}P^{(2)}_{\mu\nu\rho\sigma}\tilde{h}^{\rho\sigma} \,, \end{align} where \begin{align} P^{(2)}_{\mu\nu\rho\sigma} = \frac{1}{2}\big(\theta_{\mu\rho}\theta_{\nu\sigma} + \theta_{\mu\sigma}\theta_{\nu\rho}\big) - \frac{1}{d-1}\theta_{\mu\nu}\theta_{\rho\sigma} \qquad\text{with}\qquad \theta_{\mu\nu} = \eta_{\mu\nu} - \frac{p_\mu p_\nu}{p^2} \,, \end{align} is a spin-2 projection operator \cite{VanNieuwenhuizen1973}. Making this replacement ensures that we count the correct number of degrees of freedom, which is five for a massive spin-2 field in four dimensions, after noting that \begin{align} \text{Tr}\big(P^{(2)}_{\mu\nu\rho\sigma}\big) = \delta^{\mu\nu\rho\sigma}P^{(2)}_{\mu\nu\rho\sigma} = \frac{1}{2}(d + 1)(d - 2) \,. \end{align} With these considerations, we find that the massive spin-2 field contributes \begin{align} U_h(S) &= \lim_{d \rightarrow 4}\bigg[\mu^{4-d} \int\frac{\dd^dp}{(2\pi)^d}\frac{1}{2}(d + 1)(d - 2)\ln\bigg(\frac{p^2 - m_\text{gh}^2}{p^2}\bigg)\bigg] \nonumber \\ &= \frac{5}{64\pi^2}m_\text{gh}^4\bigg[\ln\bigg(\frac{m_\text{gh}^2}{\mu^2}\bigg) - \frac{1}{10}\bigg] \end{align} to the effective potential, where we have subtracted the divergent part according to the $\overline{\text{MS}}$ scheme. Finally, the entire effective potential is then given by \begin{align} \label{Ueff3} U_{\text{eff}}(S) = U_\phi(S) + U_S(S) + U_h(S) + U_0 \,, \end{align} where $U_0$ is an arbitrary constant that may be tuned in order to ensure that the classical zero-point energy vanishes, provided that scale invariance is broken spontaneously, which, as we will see in the next subsection, is indeed the case here. \subsection{The inflationary action} To calculate predictions for inflationary parameters, we need the proper inflationary potential in the Einstein frame. We must therefore calculate the symmetry breaking behavior of the Jordan frame potential given in (\ref{Seff}) and (\ref{Ueff3}), ensuring a vanishing zero-point energy in both the Jordan and the Einstein frame in the process. The effective one-loop potential may be written as \begin{align} \label{Ueff0} U_{\text{eff}}(S) = U_0 + \bigg[C_1 + C_2 \ln\bigg(\frac{S^2}{\mu^2}\bigg)\bigg]S^4 \,, \end{align} where \begin{align} &C_1 = \frac{\lambda}{4} + \frac{9\lambda^2}{128\pi^2}\Big(2\ln\big(3\lambda\big) - 3\Big) \nonumber \\[0.3em] &\phantom{C_1 =}- \frac{\beta^2}{2048\pi^2}\bigg[\frac{1}{9\gamma^2}\bigg(2\ln\bigg(\frac{12\gamma}{\beta}\bigg) + 3\bigg) + \frac{1}{\kappa^2}\bigg(10\ln\bigg(\frac{4\kappa}{\beta}\bigg) + 1\bigg)\bigg] \label{C1} \,, \\[0.3em] &C_2 = \frac{9\lambda^2}{64\pi^2} + \frac{\beta^2}{1024\pi^2}\bigg(\frac{1}{9\gamma^2} + \frac{5}{\kappa^2}\bigg) \,, \label{C2} \end{align} are dimensionless constants that depend only on the coupling constants. We may now solve for the vacuum expectation value (VEV) of $S$, $v_S$, which is defined as the minimum of this potential \begin{align} \label{VEV} &\frac{\partial U_{\text{eff}}(S)}{\partial S} \, \bigg\rvert_{S=v_S} = 0 \,, &&v_S = \mu \exp\bigg(\!\!-\frac{1}{4} - \frac{C_1}{2C_2}\bigg) \,. \end{align} The non-zero value of this minimum indicates a spontaneous breakdown of global scale symmetry, as advertised.
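As a numerical cross-check of Eqs.~(\ref{Ueff0})--(\ref{VEV}), the following Python sketch evaluates $C_1$, $C_2$, and $v_S$ for illustrative couplings (chosen close to the benchmark point discussed later, with $\mu=1$; they are not a fit) and verifies that the potential vanishes at its minimum once $U_0$ is tuned as in the explicit expression given below.

\begin{verbatim}
import numpy as np

# Illustrative couplings; mu = 1 fixes the units.
lam, beta, gamma, kappa, mu = 0.005, 5.62e2, 1.22e8, 8.37e2, 1.0

C1 = (lam/4 + 9*lam**2/(128*np.pi**2)*(2*np.log(3*lam) - 3)
      - beta**2/(2048*np.pi**2)*((2*np.log(12*gamma/beta) + 3)/(9*gamma**2)
                                 + (10*np.log(4*kappa/beta) + 1)/kappa**2))
C2 = 9*lam**2/(64*np.pi**2) + beta**2/(1024*np.pi**2)*(1/(9*gamma**2) + 5/kappa**2)

v_S = mu*np.exp(-0.25 - C1/(2*C2))       # VEV from the minimum condition
U0  = 0.5*mu**4*C2*np.exp(-1 - 2*C1/C2)  # tuned so that U_eff(v_S) = 0

def U_eff(S):
    return U0 + (C1 + C2*np.log(S**2/mu**2))*S**4

print(v_S, U_eff(v_S))  # U_eff(v_S) vanishes up to rounding errors
\end{verbatim}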
We may also easily calculate the explicit value of $U_0$ by requiring that the effective potential vanishes in the broken phase, which yields \begin{align} &U_{\text{eff}}(v_S)=0\,, &&U_0 = \frac{\mu^4}{2}C_2 \exp \bigg(\!\!-1 - \frac{2C_1}{C_2}\bigg) \,. \end{align} Finally, we obtain the explicit value of the Planck mass that is generated by the breaking of scale invariance by identifying the canonical Einstein term in (\ref{Seff}) as \begin{align} \label{PlanckIdent} &-\frac{1}{2}\beta S^2 R \, \bigg\rvert_{S=v_S} = -\frac{1}{2} M_\text{Pl}^2 R \,, &&M_{\text{Pl}}^2 = \beta v_S^2 \,. \end{align} In analogy to \cite{Kubo2021}, this relates $M_\text{Pl}$ to the renormalization scale $\mu$ via (\ref{VEV}). In contrast to the aforementioned work \cite{Kubo2021}, we do not need two external scalars to achieve successful Coleman-Weinberg symmetry breaking of scale invariance in the Jordan frame. This is due to the additional contributions from the scalar and tensor degrees of freedom to the effective scalar potential, which is a novel consideration. To calculate predictions of the inflationary parameters that result from spontaneous symmetry breaking, we follow the procedure outlined in \cite{Kubo2021}. Since we have shown that scale invariance is spontaneously broken, we may introduce an auxiliary field to remove the $R^2$ term and then transform to the Einstein frame with a Weyl rescaling. There we find two dynamical scalar fields, $S$ and the scalaron. The corresponding potential exhibits a valley structure \cite{Kubo2021}, i.e.\ a flat direction with steep potential walls perpendicular to it, and the fields will thus always fall into a trajectory along that flat direction. After solving the minimum equations for the scalaron\footnote{Depending on the parameter configuration, solving for $S$ rather than the scalaron may result in a better description of the flat direction. However, in our case the choice of contour has only a minor influence on the inflationary parameter predictions, as both contours are valid for all calculated points. For an extended discussion we refer the reader to section 4.2 and Appendix A of \cite{Kubo2021}.}, the final inflationary potential can be rewritten to depend only on the original external scalar $S$ and the coupling constants $\lambda$, $\beta$, $\gamma$, and $\kappa$. The effective inflationary action in the Einstein frame has the form \begin{align} S_\text{inf}^\text{E} = \int\dd^4x \sqrt{-g} \bigg(\!\!-\frac{1}{2} M_\text{Pl}^2 R + \frac{1}{2} F(S)^2 S \Box S - U_\text{inf}(S)\bigg) \, , \end{align} where $F(S)$ denotes the modification to the kinetic term for $S$ and is given by \begin{align} \label{field_norm} F(S) &= \frac{1}{\big(1 + 4A\big)B}\bigg[\big(1 + 4A\big)B + \frac{3}{2}M_\text{Pl}^2\Big(\big(1 + 4A\big)B' + 4A'B\Big)^2\bigg]^{1/2} \,, \end{align} where $A$ and $B$ are functions of the scalar field $S$ given by \begin{align} &A(S) =\frac{4 \gamma \, U_\mathrm{inf}(S)}{B(S)^2 M_\mathrm{Pl}^4} \,, &&B(S) = \frac{\beta S^2}{M_\text{Pl}^2} \,, \end{align} and primes denote derivatives with respect to $S$. With these definitions, the full inflationary potential $U_\text{inf}(S)$ is determined to be \begin{align} \label{Uinf} U_\text{inf} (S) = \frac{U_\text{eff}(S)}{B(S)^2 + 16 \gamma \, U_\text{eff}(S)/ M_\text{Pl}^4} \,. \end{align} One may also obtain the canonically normalized field $\hat{S}$ via a simple integration.
\begin{align} \label{normRelation} \hat{S}(S) = \int_{v_S}^S \dd x \, F(x) \end{align} \subsection{Numerical analysis of slow-roll inflation} Inflationary CMB observables, namely, the scalar spectral index $n_S$ and the tensor-to-scalar ratio $r$, may be expressed in terms of the slow-roll parameters $\varepsilon$ and $\eta$ as \begin{align} &n_S = 1 + 2\eta_* - 6\varepsilon_* \,, &&r = 16\varepsilon_* \,, \end{align} where the asterisks indicate quantities evaluated at $S=S_*$, the value of $S$ at the time of CMB horizon exit. Since our inflationary potential depends only on the scalar field $S$, we can apply the well-known formulas for $\varepsilon$, $\eta$, and $N_e$ of single-field slow-roll inflation, modified to depend on the non-normalized field $S$ using the relation (\ref{normRelation}). \begin{align} &\varepsilon (S) = \frac{M_{\rm Pl}^2}{2\,F^2(S)}\left(\frac{U'_{\rm inf}(S)}{U_{\rm inf}(S)}\right)^2 \label{epsilon} \\[0.3em] &\eta(S) = \frac{M_{\rm Pl}^2}{F^2(S)} \left(\frac{U''_{\rm inf}(S)}{U_{\rm inf}(S)} -\frac{ F'(S)}{F(S)}\frac{U'_{\rm inf}(S)}{U_{\rm inf}(S)}\right) \label{eta} \\[0.3em] &N_e = \int_{S_*}^{S_\mathrm{end}}\frac{F^2(S)}{M_{\rm Pl}^2}\frac{U_{\rm inf}(S)} {U'_{\rm inf}(S)}\, \dd S \label{efolding} \end{align} Here, $S_\text{end}$ denotes the value of $S$ at the end of inflation, which is defined by $\max\{\varepsilon(S_\text{end}), \lvert\eta(S_\text{end})\rvert\}=1$. With this we may calculate expressions for $n_S$ and $r$ that depend only on the dimensionless couplings $\lambda$, $\beta$, $\gamma$ and $\kappa$, as $\mu$ is fixed after demanding the correct value for $M_\text{Pl}$ as in (\ref{PlanckIdent}). To constrain this model we use the latest data from the Planck satellite mission \cite{Akrami2020} and assume an inflation duration of $N_e \approx 50$--$60$ e-folds. To ensure our predictions are consistent with the Planck data, we constrain the parameter space of the dimensionless couplings so that it ultimately fulfills the scalar power spectrum amplitude $A_s$ constraint below. \begin{align} \label{AsConstraint} &\ln (10^{10} A_s) = 3.044 \pm 0.014 &&A_s = \frac{U_{\text{inf}\, *}}{24 \pi^2 \varepsilon_* M_\text{Pl}^4} \end{align} Predictions corresponding to the resulting coupling values below are displayed in (Fig.\,\ref{ns-r-prediction}). \begin{align} \label{couplingvals} &\lambda = 0.005 &\beta \in [10^3,10^4] &&\gamma \in [10^7,10^9] &&\kappa \in [10^2,10^{3.25}] \end{align} \begin{figure}[h] \centering \includegraphics[width=0.82\textwidth]{Random_ns_r_l0005_Gamma_100k_N50_N55_N60_KPT_BAO_Limits_noTitle.png}\\ \vspace{0.2cm} \noindent \includegraphics[width=0.82\textwidth]{Random_ns_r_l0005_Kappa_100k_N50_N55_N60_KPT_BAO_Limits_noTitle.png} \caption{Predictions for the scalar spectral index $n_\mathrm{s}$ and the tensor-to-scalar ratio $r$ with varying numbers of e-folds $N_e$ are displayed. For the points shown, $\lambda$ is fixed, while $\beta, \gamma$, and $\kappa$ are taken randomly from (\ref{couplingvals}) while satisfying (\ref{AsConstraint}). We include the Planck TT,TE,EE+lowE+lensing+BK15+BAO $68\%$ and $95\%$ CL regions from \cite{Akrami2020}, as well as predictions of the Starobinsky model (green) and linear inflation (red).} \label{ns-r-prediction} \end{figure}% The ranges for the dimensionless couplings in (\ref{couplingvals}) result from incorporating the Planck constraint on $A_s$ (\ref{AsConstraint}).
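The numerical evaluation behind Fig.\,\ref{ns-r-prediction} can be sketched as follows. This is an illustrative Python sketch: $U_\text{inf}$ and $F$ are assumed to be supplied as callables implementing the expressions above, and the finite-difference step sizes are arbitrary choices.

\begin{verbatim}
import numpy as np

M_PL = 1.0  # reduced Planck units

def dfdS(f, S, h=1e-5):
    return (f(S + h) - f(S - h)) / (2*h)

def d2fdS2(f, S, h=1e-5):
    return (f(S + h) - 2*f(S) + f(S - h)) / h**2

def slow_roll(U, F, S):
    # Eqs. (epsilon) and (eta) with numerical derivatives.
    eps = M_PL**2 / (2*F(S)**2) * (dfdS(U, S) / U(S))**2
    eta = M_PL**2 / F(S)**2 * (d2fdS2(U, S) / U(S)
                               - dfdS(F, S) / F(S) * dfdS(U, S) / U(S))
    return eps, eta

def predictions(U, F, S_star):
    eps, eta = slow_roll(U, F, S_star)
    n_s, r = 1 + 2*eta - 6*eps, 16*eps               # spectral index, tensor ratio
    A_s = U(S_star) / (24*np.pi**2 * eps * M_PL**4)  # amplitude constraint
    return n_s, r, A_s
\end{verbatim}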
A larger parameter space was explored but did not yield promising predictions compatible with this constraint. We see that for the full range of possible e-folds, there are points that are compatible with even the tightest Planck constraints. We also see that the upper limit of our predictions for $r$ approaches the upper limit of linear inflation ($m^3 \phi$), while the lower limit matches that of Starobinsky inflation. The circles for linear and Starobinsky inflation in (Fig.\,\ref{ns-r-prediction}) represent the predictions for $N_e=50$ (left) and $N_e=60$ (right) e-folds, respectively. The point labelled ``B$1$'' in (Fig.\,\ref{ns-r-prediction}) corresponds to the following benchmark values. \begin{align} \label{B1} &\mathrm{B}1: \; \lambda = \SI{0.005}{} &\beta=\SI{5.62e2}{} &&\gamma= \SI{1.22e8}{} &&\kappa= \SI{837}{} \end{align} In order to get an order-of-magnitude estimate, we calculate the field masses $m_\phi, \, m_\text{gh}$ via the relations (\ref{massdefs}) evaluated at the non-zero VEV of $S$, \begin{align} &m^{\mathrm{B}1}_\phi(S=v_S) \simeq \SI{6.35e13}{\giga\electronvolt} \,, &&m^{\mathrm{B}1}_\text{gh}(S=v_S) \simeq \SI{4.21e16}{\giga\electronvolt} \,. \end{align} These masses are representative of most points, while the field masses of all points displayed in (Fig.\,\ref{ns-r-prediction}) are roughly contained in the ranges $m_\phi \in \left[ 10^{13} \, \SI{}{\giga\electronvolt} , 10^{16}\, \SI{}{\giga\electronvolt}\right]$ and $m_\text{gh} \in \left[ 10^{16} \, \SI{}{\giga\electronvolt}, 10^{17} \, \SI{}{\giga\electronvolt} \right]$. Here, a high $m_\phi$ goes hand in hand with a small $\gamma$ and therefore with relatively large tensor-to-scalar ratios (see Fig.~\ref{ns-r-prediction}). Additionally, we take into account classical corrections to the inflationary parameters due to the presence of the $C^2$-term. Two different contributions are introduced in Eq.~(2.24) of \cite{Baumann:2015xxa} and Eq.~(7.4) of \cite{Salvio:2017xul}, respectively. To calculate the latter correction we need to use the slow-roll approximation during inflation\footnote{To ensure a field value that is representative for inflation we choose $S=S_*$ to calculate $V(S)$ and $m_\text{gh}(S)$.}, $H^2 \approx V/(3 M_\text{Pl}^2)$. Here we find that the correction is largest for large $\kappa$, with a maximum of $\approx 22\%$, increasing the predicted tensor-to-scalar ratio. For smaller $\kappa$ and large $\gamma$, we get a correction towards smaller $r$ with a maximum of $\approx 11 \%$. Thus even the corrected predictions remain fully compatible with the currently strongest cosmological constraints from Planck18. \section{Conclusion} We have investigated a classically scale invariant framework that dynamically generates the Planck mass via spontaneous symmetry breaking in the Jordan frame in the most minimal way, i.e.\ with only one external scalar in addition to the quantum contributions of the graviton degrees of freedom. Given that higher powers of curvature tensors are necessarily generated via quantum corrections even if these terms are not considered at tree level, we include the Weyl tensor squared term from the start and find that the resulting quantum contributions allow for spontaneous symmetry breaking via the Coleman-Weinberg mechanism in the Jordan frame with only the one external scalar.
Specifically, it is the massive spin-2 ghost DOF originating from the Weyl squared term that allows for spontaneous symmetry breaking, a role that is usually filled by additional external scalars in other scale invariant models. Starting with a scale invariant Lagrangian, we are able to explicitly cancel the cosmological constant in both frames if scale invariance is spontaneously broken in the Jordan frame, which is one of the foremost advantages of our model when compared to other one-scalar models with symmetry breaking in the Einstein frame \cite{Ghilencea2019,Karam2019}. Furthermore, the potential resulting from symmetry breaking in our framework leads to an inflationary potential that is in excellent agreement with the current strongest constraints from the Planck collaboration, as seen in (Fig.\ \ref{ns-r-prediction}). We also include the classical corrections to the predicted inflationary parameters due to the presence of the $C^2$-term, which are calculated in \cite{Baumann:2015xxa,Salvio:2017xul}. These come out to be of order $10\%$--$20\%$ and reduce the available parameter space of predictions in agreement with the current limits, though at the same time they improve the predictions of the parameters close to the limit of Starobinsky inflation. Therefore, though the impact of the $C^2$-term's classical corrections is minor, its quantum contributions turn out to be important, since they lead to a one-loop scalar potential that enables symmetry breaking in the Jordan frame with only one external scalar. To conclude, we note that though the primordial non-Gaussianities in cosmological fluctuations are suppressed in single-field systems of inflation \cite{Maldacena:2002vr}, they can be generated in a multi-field system and appear in the CMB anisotropy as well as in measurements of the large scale structure of the Universe (see for instance \cite{Bartolo:2004if} and \cite{Planck:2019kim}). Future experimental projects such as LiteBIRD \cite{LiteBIRD:2022}, Euclid \cite{Amendola:2016saw}, LSST \cite{LSSTScience:2009jmu}, etc.\ will be able to measure the magnitude of these non-Gaussianities and constrain their existence. Though our model contains only one scalar field at the beginning, the scalaron which originates from the $R^2$ term in the action (\ref{SQG}) makes the system behave as an effectively two-field system. In \cite{Mori:2017caa} it is outlined how one may compute the non-Gaussianities in such models. Furthermore, as we have mentioned with reference to \cite{Salvio:2017xul} in section III.C, the massive spin-2 mode can contribute to the metric perturbations during inflation, thus altering the inflationary parameters. This correction has turned out to be very small in our model, but the size of its contribution to the non-Gaussianities is not yet known. We thus plan to focus our future investigations on the primordial non-Gaussianities in inflationary models based on scale invariance. \acknowledgments We thank Manfred Lindner and Andreas Trautner for many fruitful discussions. J.R.\ and P.S.\ are supported by the IMPRS-PTFS. J.\ Kubo is partially supported by the Grant-in-Aid for Scientific Research (C) from the Japan Society for the Promotion of Science (Grant No.~19K03844). \bibliographystyle{JHEP}
\section{Introduction}\label{Intro} A subset $S$ of a metric space is a {\em $k$-distance set\/} if there are exactly $k$ non-zero distances occurring between points of $S$. We also call a $1$-distance set an {\em equilateral set.} In this paper we find upper bounds for the cardinalities of $k$-distance sets in {\em Minkowski spaces}, i.e.\ finite-dimensional Banach spaces (see Theorems~\ref{thA} to \ref{Up}), and make a conjecture concerning tight upper bounds. In Euclidean spaces $k$-distance sets have been studied extensively; see e.g.\ \cite{Erdos,Erdos2,Kelly,Golomb,BB,BBS,Blokhuis,Blokhuis2,Beck,Chung,CEGSW,CST,HP,Sz}, and the books \cite{Pach} and \cite[sections F1 and F3]{CFG}. For general $d$-dimensional Minkowski spaces it is known that the maximum cardinality of an equilateral set is $2^d$, with equality iff the unit ball of the space is a parallelotope, and that if $d\geq 3$, there always exists an equilateral set of at least $4$ points \cite{Petty}. It is unknown whether there always exists an equilateral set of $d+1$ points; see \cite{LM,Morgan} and \cite[p.\ 129, p.\ 308 problem 4.1.1]{Thompson}. However, Brass \cite{Brass2} recently proved that for each $n$ there is a $d=d(n)$ such that any $d$-dimensional Minkowski space has an equilateral set of at least $n$ points. See \cite{Guy} for problems on equilateral sets in $\ell_p$ spaces. Equilateral sets in Minkowski spaces have been used in \cite{LM} to construct energy-minimizing cones over wire-frames. See also \cite{Morgan}. As far as we know, $k$-distance sets for $k\geq 2$ have not been studied in spaces other than Euclidean. Our main results are the following. \begin{thm} \label{thA} If the unit ball of a $d$-dimensional Minkowski space is a parallelotope, then a $k$-distance set in the space has cardinality at most $(k+1)^d$. This bound is tight. \end{thm} \begin{thm}\label{Cor1} Given any set $S$ of $n$ points in a $d$-dimensional Minkowski space with a parallelotope as unit ball, there exists a point in $S$ from which there are at least $\lceil n^{1/d}\rceil-1$ distinct non-zero distances to points in $S$. This bound is tight. \end{thm} \begin{thm} \label{thB} The cardinality of a $k$-distance set in a $2$-dimensional Minkowski space is at most $(k+1)^{2}$, with equality iff the space has a parallelogram as unit ball. \end{thm} \begin{thm} \label{Cor2} Given any set $S$ of $n$ points in a $2$-dimensional Minkowski space, there exists a point in $S$ from which there are at least $\lceil n^{1/2}\rceil-1$ distinct non-zero distances to points in $S$. \end{thm} \begin{thm} \label{Up} The cardinality of a $k$-distance set in a $d$-dimensional Minkowski space is at most $\min(2^{kd}, (k+1)^{(11^{d}-9^{d})/2})$. \end{thm} In the light of Theorems~\ref{thA} and \ref{thB} and the results of \cite{Petty}, we make the following \begin{conj} The cardinality of a $k$-distance set in any $d$-dimensional Minkowski space is at most $(k+1)^{d}$, with equality iff the unit ball is a parallelotope. \end{conj} As mentioned above, \cite{Petty} shows that this conjecture is true for $k=1$. By Theorem~\ref{thB} the conjecture is true if $d=2$, and by Theorem~\ref{thA} if the unit ball is a parallelotope. In the sequel, $(\mathbb{R}^d, \norm{\cdot})$ is a $d$-dimensional Minkowski space with norm $\norm{\cdot}$, $B(x,r)$ is the closed ball with centre $x$ and radius $r>0$, and $B:= B(0,1)$ the {\em unit ball}\/ of the space.
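As a quick computational illustration of the tightness statement in Theorem~\ref{thA}, the following small Python sketch (the parameters $k$ and $d$ are illustrative) verifies that the grid $\{0,1,\dots,k\}^d$ realizes exactly $k$ distinct non-zero distances in $(\mathbb{R}^d,\norm{\cdot}_\infty)$ while having $(k+1)^d$ points.

\begin{verbatim}
# Sanity check: {0,...,k}^d is a k-distance set of size (k+1)^d
# in the maximum norm (the tightness example of Theorem thA).
from itertools import combinations, product

def linf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def distance_count(points):
    return len({linf(p, q) for p, q in combinations(points, 2)})

k, d = 3, 2
grid = list(product(range(k + 1), repeat=d))
assert len(grid) == (k + 1)**d
assert distance_count(grid) == k   # exactly k non-zero distances
\end{verbatim}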
Recall that two $d$-dimensional Minkowski spaces are isometric iff their unit balls are affinely equivalent (by the Mazur-Ulam Theorem; see e.g.\ \cite[Theorem 3.1.2]{Thompson}). In particular, a Minkowski space has a parallelotope as unit ball iff it is isometric to $(\mathbb{R}^d, \norm{\cdot}_{\infty})$, where $\norm{(\lambda_{1}, \lambda_{2},\dots,\lambda_{d})}_{\infty}:= \max_{i=1,\dots,d}\abs{\lambda_{i}}$. We define a {\it cone} (or more precisely, an {\em acute cone}) $P$ to be a convex set in $\mathbb{R}^d$ that is positively homogeneous (i.e., for any $x\in P$ and $\lambda\geq 0$ we have $\lambda x\in P$) and satisfies $P\cap(-P)=\{0\}$. Recall that such a cone defines a partial order on $\mathbb{R}^d$ by $x\leq y \iff y-x\in P$. We denote the cardinality of a set $S$ by $\#S$. For measurable $S\subseteq\mathbb{R}^{d}$, let $\vol{S}$ denote the Lebesgue measure of $S$. For later reference we state Lyusternik's version of the Brunn-Minkowski inequality (see \cite[Theorem 8.1.1]{BZ}). \begin{lem} If $A,B\subseteq\mathbb{R}^{d}$ are compact, then \[ \vol{A+B}^{1/d} \geq \vol{A}^{1/d}+\vol{B}^{1/d}. \] If equality holds and $\vol{A}, \vol{B} > 0$, then $A$ and $B$ are convex bodies such that $A = v + \lambda B$ for some $\lambda >0$ and $v\in \mathbb{R}^d$.\qed \end{lem} \section{Proofs} \begin{proof}[Proof of Theorem~\ref{thA}] We may assume without loss of generality that the space is $(\mathbb{R}^d,\norm{\cdot}_\infty)$. We introduce partial orders on $\mathbb{R}^d$ following Blokhuis and Wilbrink \cite{BlW}. For each $i=1,\dots,d$, let $\leq_{i}$ be the partial order with cone \[ P_{i}=\bigl\{(\lambda_{1},\dots,\lambda_{d})\in\mathbb{R}^d: \max_{j=1,\dots,d}\abs{\lambda_j}=\lambda_{i}\bigr\}. \] For each $x$ in a $k$-distance set $S$, let $h_{i}(x)$ be the length of the longest descending $\leq_{i}$-chain starting with $x$, i.e.\ $h_{i}(x)$ is the largest $h$ such that there exist $x_{1},x_{2},\dots,x_{h}\in S$ for which $x >_{i} x_{1} >_{i} x_2 >_i \dots >_{i} x_{h}$. Since $\bigcup_{i=1}^{d}(P_{i}\cup -P_{i}) = \mathbb{R}^d$, for all distinct $x,y\in\ell_{\infty}^{d}$ there exists $i$ such that $x <_{i} y$ or $y <_{i} x$. Exactly as in \cite{BlW}, it follows that the mapping $x\mapsto (h_{1}(x),\dots,h_{d}(x))$ is injective, and thus $\#S\leq (h+1)^{d}$, where \[ h:=\max_{x\in S, i=1,\dots, d} h_{i}(x). \] It remains to show that $h\leq k$. Suppose not. Then for some $x\in S$ and some $i$ there exist $x_{1},\dots,x_{k+1}\in S$ such that $x >_{i} x_{1} >_{i} \dots >_{i} x_{k+1}$. Since $S$ is a $k$-distance set, $\norm{x-x_{m}}_{\infty}=\norm{x-x_{n}}_{\infty}$ for some $1\leq m < n \leq k+1$. Also, $x-x_{m}, x-x_{n}\in P_{i}$. Now note that if $\norm{a}_{\infty}=\norm{b}_{\infty}$ with $a,b\in P_{i}, a\neq b$, then $a$ and $b$ are $\leq_{i}$-incomparable; in particular, $b-a\not\in P_{i}$. Therefore, $x_{m}-x_{n}\not\in P_{i}$, a contradiction. The set $\{0,1,\dots,k\}^d$ is a $k$-distance set of cardinality $(k+1)^d$. Note that it is not difficult to see that in fact the only $k$-distance sets of cardinality $(k+1)^d$ are of the form $S=a+\lambda\{0,1,\dots,k\}^d$ for some $a\in\mathbb{R}^d$ and $\lambda > 0$. \qed \end{proof} \begin{proof}[Proof of Theorem~\ref{Cor1}] Consider the mapping $x\mapsto (h_1(x),\dots,h_d(x))$ in the proof of Theorem~\ref{thA}. If $h$ is the length of the longest $\leq_i$-chain over all $i$, then $n\leq (h+1)^d$. Thus there is a $\leq_i$-chain $x_0>_i x_1>_i \dots >_i x_h$ of length $h\geq \lceil n^{1/d}\rceil-1$. 
By the last paragraph of the proof of Theorem~\ref{thA}, the distances $\rho(x_0,x_j), j=1,\dots,h$ are all distinct. Any $S\subseteq\mathbb{R}^d$ such that \[ \{0,1,\dots,\lceil n^{1/d}\rceil-2\}^d\subsetneq S \subseteq \{0,1,\dots,\lceil n^{1/d}\rceil-1\}^d, \] has exactly $\lceil n^{1/d}\rceil-1$ distinct distances in the norm $\norm{\cdot}_\infty$. \qed \end{proof} The following corollary is easily gleaned from the proof of Theorem~\ref{thA}. \begin{cor} \label{Cor} Suppose that $\{P_{i}: i\in I\}$ is a family of cones in a Minkowski space $(\mathbb{R}^{d},\norm{\cdot})$ satisfying \begin{equation} \label{one} \bigcup_{i\in I} (P_{i}\cup - P_{i}) = \mathbb{R}^{d}, \end{equation} and \begin{equation} \label{two} \forall\, i\in I \: \forall\, \text{distinct } x,y\in P_{i}, \text{ if } \norm{x}=\norm{y} \text{ then } \pm(x-y)\not\in P_{i}. \end{equation} Then a $k$-distance set in $(\mathbb{R}^{d},\norm{\cdot})$ has cardinality at most $(k+1)^{\#I}$. \qed \end{cor} \begin{lem}\label{metriclemma} Let $S$ be a $k$-distance set in a metric space $(X,\rho)$ with distances $\rho_{1} < \rho_{2} < \dots < \rho_{k}$. If $\rho_{k}/\rho_{1} > 2^{k-1}$, then for some $i=1, \dots, k-1$, the relation \[ x\sim_{i} y \iff \rho(x,y) \leq \rho_{i} \] is an equivalence relation. \end{lem} \begin{proof} The relation $\sim_{i}$ is reflexive and symmetric. If it is not transitive, there exist $x,y,z\in S$ such that $\rho(x,y),\rho(y,z) \leq \rho_{i}$ and $\rho(x,z) > \rho_{i}$. Thus $\rho_{i+1}\leq \rho(x,z) \leq \rho(x,y)+\rho(y,z) \leq 2\rho_{i}$. If this holds for all $i=1, \dots, k-1$, we obtain $\rho_{k}\leq 2^{k-1}\rho_{1}$. \qed \end{proof} \begin{lem} \label{Up1} The cardinality of a $k$-distance set in a $d$-dimensional Minkowski space is at most $2^{kd}$. \end{lem} \begin{proof} Let $\{x_{1}, \dots, x_{m}\}$ be a $k$-distance set with distances $\rho_{1} < \rho_{2} < \dots < \rho_{k}$. Set $V := \bigcup_{i=1}^{m} B(x_{i}, \rho_{1}/2)$. Then we have \begin{equation} \label{voleq} \vol{V} = m(\rho_{1}/2)^{d}\vol{B}. \end{equation} Also, $V-V\subseteq B(0, \rho_{k}+\rho_{1})$, since if $x,y\in V$, there exist $i$ and $j$ such that $\norm{x-x_{i}}\leq \rho_{1}/2$, $\norm{y-x_{j}}\leq \rho_{1}/2$. Thus \[ \norm{x-y} \leq \norm{x-x_{i}} + \norm{x_{i} - x_{j}} + \norm{x_{j}-y} \leq \rho_{1} + \rho_{k}. \] Therefore, \begin{equation} \label{voleq2} \vol{V-V}\leq (\rho_{1}+\rho_{k})^{d}\vol{B}. \end{equation} Substituting \eqref{voleq} and \eqref{voleq2} into the Brunn-Minkowski inequality \begin{equation} \label{bmineq} \vol{V-V}^{1/d}\geq\vol{V}^{1/d}+\vol{-V}^{1/d}, \end{equation} we obtain $\rho_{1}+\rho_{k}\geq m^{1/d}\rho_{1}$, and $m\leq (1+\rho_{k}/\rho_{1})^{d}$. If $1+\rho_{k}/\rho_{1}\leq 2^k$, there is nothing to prove. Otherwise, $\rho_{k}/\rho_{1} > 2^k-1 \geq 2^{k-1}$, and by Lemma~\ref{metriclemma}, $x\sim_{i} y \iff \rho(x,y) \leq \rho_{i}$ is an equivalence relation for some $i=1,\dots,k-1$. By induction on $k$ we obtain that each equivalence class, being an $i$-distance set, has at most $2^{id}$ points. By choosing a representative from each equivalence class, we obtain a $(k-i)$-distance set with at most $2^{(k-i)d}$ points. Therefore, $m\leq 2^{id}2^{(k-i)d} = 2^{kd}$. \qed \end{proof} In the proof of Theorem~\ref{thB}, we need the following geometric lemma, which is a modification of \cite[corollary 3.2.6]{Thompson} in $2$ dimensions. \begin{lem} \label{Auerbach} Let $B_1$ be the convex hull of $\{(\pm 1,0), (0,\pm 1)\}$ and $B_\infty$ the square $[-1,1]^2$.
For any symmetric convex disc $C$ in $\mathbb{R}^{2}$ there exists an invertible linear transformation taking $C$ to $C'$ such that $B_1\subseteq C'\subseteq B_\infty$ and such that any straight-line segment contained in the boundary of $C'$ lies completely in one of the four coordinate quadrants. \end{lem} \begin{proof} We consider all triangles with vertices $0,x,y$, where $x$ and $y$ are on the boundary of $C$. By compactness there exist $x_{0}$ and $y_{0}$ such that the area of the triangle is a maximum. Then $\{x_{0}+\lambda y_{0}: \lambda\in\mathbb{R}\}$ is a support line of $C$ at $x_{0}$, since otherwise we can replace $x_{0}$ by a point on the side of the line opposite $0$ to enlarge the area of the triangle. Similarly, $\{y_{0}+\lambda x_{0}: \lambda\in\mathbb{R}\}$ is a support line of $C$ at $y_{0}$. Since $C$ is symmetric, it follows that $C$ is contained in the parallelogram $\{\lambda x_{0}+\mu y_{0}:-1\leq \lambda,\mu\leq 1\}$. See Figure~\ref{auerfig}. \begin{figure} \begin{center} \includegraphics{fig1} \end{center} \caption{}\label{auerfig} \end{figure} If $x_{0}$ is an interior point of a straight-line segment contained in the boundary of $C$, we may shift $x_{0}$ to a boundary point of such a segment, without changing the area of the triangle. Thus $C$ is still contained in a parallelogram as above. A similar remark holds for $y_{0}$. We now apply the linear transformation sending $x_{0}$ and $y_{0}$ to the standard unit vectors $e_1$ and $e_2$, respectively (see Figure~\ref{auerfig2}). \qed \end{proof} \begin{proof}[Proof of Theorem~\ref{thB}] We have to find two cones $P_{1}$ and $P_{2}$ satisfying \eqref{one} and \eqref{two} of Corollary~\ref{Cor}. By Lemma~\ref{Auerbach} we may replace the space by an isometric space $(\mathbb{R}^{2},\norm{\cdot})$ such that the unit ball $B$ of $\norm{\cdot}$ lies between $B_1$ and $B_\infty$, and such that any straight-line segment contained in the boundary of the unit ball lies completely in a quadrant of the plane. We provisionally let $P_{1}$ be the closed first quadrant, and $P_{2}$ the closed second quadrant. See Figure~\ref{auerfig2}. \begin{figure} \begin{center} \includegraphics{fig2} \end{center} \caption{}\label{auerfig2} \end{figure} Then \eqref{one} is satisfied. The only way that \eqref{two} could fail is if there is a straight-line segment contained in the boundary of the unit ball parallel to either the x-axis or the y-axis, lying in $P_{1}$ or $P_{2}$. If there is a segment in the boundary of the unit ball in $P_{1}$ parallel to the x-axis, say, we remove the positive x-axis $\{(\lambda,0): \lambda > 0\}$ from $P_{1}$. If in this case there were another straight-line segment in the boundary parallel to the x-axis in $P_{2}$, then there would be a straight-line segment in the boundary lying in the first and second quadrants, giving a contradiction. Thus we do not have to remove the negative x-axis from $P_{2}$, and \eqref{one} is still satisfied. We do the same thing for segments parallel to the y-axis, and for $P_{2}$. In the end, the modified $P_{1}$ and $P_{2}$ satisfy \eqref{one} and \eqref{two}, and we deduce $\#S\leq (k+1)^{2}$ from Corollary~\ref{Cor}. If equality holds, then the mapping $x\mapsto (h_{1}(x),h_{2}(x))$ in the proof of Theorem~\ref{thA} is a bijection from $S$ to $\{0,\dots,k\}^{2}$. We now denote a point $x\in S$ by $p_{i,j}$, where $(i,j) = (h_{1}(x),h_{2}(x))$. 
Suppose that two of the distances $\norm{p_{0,i}-p_{0,0}}$ ($i=1,\dots,k$) are equal, say $\norm{p_{0,i}-p_{0,0}} = \norm{p_{0,j}-p_{0,0}}$ with $0<i<j$. Then, since $p_{0,j} >_2 p_{0,i} >_2 p_{0,0}$, we have $p_{0,i}-p_{0,0}, p_{0,j}-p_{0,0} \in P_2$, contradicting \eqref{two}. It follows that the distances $\norm{p_{0,i}-p_{0,0}}$, $i=1,\dots,k$ are distinct, and thus are exactly the $k$ different distances in increasing order. Similarly, the distances $\norm{p_{0,i}-p_{0,1}}$, $i=2,\dots,k$ are in increasing order. If $\norm{p_{0,k}-p_{0,1}}=\rho_{k}$, the three points $p_{0,0},p_{0,1},p_{0,k}$ again contradict \eqref{two}. Thus these distances are $\rho_{1},\dots, \rho_{k-1}$ in increasing order, etc. In the end we find that $\norm{p_{0,i+1}-p_{0,i}}=\rho_{1}$ for all $i$. Thus $\rho_{k}\leq k\rho_{1}$, by the triangle inequality. Using the Brunn-Minkowski inequality as in the proof of Lemma~\ref{Up1}, we find that equality holds in \eqref{bmineq} and \eqref{voleq2}, implying that for $V := \bigcup_{i=1}^{\#S} B(x_{i}, \rho_{1}/2)$ we have $V-V = B(0,\rho_{k}+\rho_{1})$, and $V-V$ and $V$ are homothetic. Thus $V$ is a ball that is perfectly packed by smaller balls. By a result of \cite{Groemer}, this implies that the unit ball is a parallelogram.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Cor2}]
Follows from the proof of Theorem~\ref{thB} in the same way that Theorem~\ref{Cor1} follows from Theorem~\ref{thA}.
\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Up}]
Lemma~\ref{Up1} already gives part of the theorem. For the remaining part we apply Corollary~\ref{Cor}. In order for a cone $P$ to satisfy \eqref{two}, it is sufficient that
\begin{equation}
\label{condition}
\forall\, a,b\in P: \text{ if } \norm{a}=\norm{b}=1, \text{ then } \norm{a-b}<1.
\end{equation}
To see this, suppose that $P$ does not satisfy the condition in \eqref{two}, i.e.\ there exist distinct $x,y\in P$ such that $\norm{x}=\norm{y}$ and $y-x\in P$. Let $a:= \norm{x}^{-1}x$, $b:= \norm{y}^{-1}y$, $c:= \norm{y-x}^{-1}(y-x)$, and $0<\lambda := \norm{x}/(\norm{y-x}+\norm{x}) < 1$. Then $a=(1-\lambda)(a-c)+\lambda b$, and
\[
1=\norm{a}\leq(1-\lambda)\norm{a-c}+\lambda\norm{b} =(1-\lambda)\norm{a-c}+\lambda,
\]
implying $\norm{a-c}\geq 1$.

In order for \eqref{one} to be satisfied too, we need a cover of the unit sphere by sets which, when extended to positive cones, are convex. We do this with the following construction: Let $C=\{c_{1},c_{2},\dots,c_{m}\}$ be a maximal set of unit vectors satisfying $\norm{c_{i} - c_{j}}, \norm{c_{i} + c_{j}} \geq\tfrac{1}{5}$ for all $1\leq i<j\leq m$. Then for any unit vector $x$ there exists $i$ such that $\norm{c_{i} - x} < \tfrac{1}{5}$ or $\norm{c_{i} + x} < \tfrac{1}{5}$. For $i=1,\dots,m$, let $P_{i}$ be the cone generated by
\[
Q_{i}:=\bigl\{x\in\mathbb{R}^{d}:\norm{x}=1,\norm{c_{i}-x}<\tfrac{1}{5}\bigr\},
\]
i.e.\ $P_{i}:=\{\sum_{j}\lambda_{j}x_{j}:\lambda_{j}\geq 0, x_{j}\in Q_{i}\}$. Then the $P_{i}$'s satisfy \eqref{one} by the maximality of $C$. Each $P_{i}$ satisfies \eqref{condition}: Let $\sum_{j}\lambda_{j}x_{j}\in P_{i}$, where $\lambda_{j}\geq 0, \norm{x_{j}}=1, \norm{c_{i}-x_{j}}<\tfrac{1}{5}$ and $\norm{\sum\lambda_{j}x_{j}}=1$. Then
\begin{align*}
\bignorm{c_{i}-\sum_{j}\lambda_{j}x_{j}} & = \bignorm{\sum_{j}\lambda_{j}(c_{i}-x_{j}) + (1-\sum_{j}\lambda_{j})c_{i}} \\
& < \sum_{j}\lambda_{j}/5 -1 + \sum_{j}\lambda_{j} \quad\text{(since } \sum_{j}\lambda_{j}\geq 1) \\
& = \tfrac{6}{5}\sum_{j}\lambda_{j} - 1.
\end{align*}
Also, since
\[
1+\sum_{j}\lambda_{j}/5 > \bignorm{\sum_{j}\lambda_{j}x_{j}} + \sum_{j}\norm{\lambda_{j}x_{j} -\lambda_{j}c_{i}} \geq \sum_{j}\lambda_{j}\norm{c_{i}}=\sum_{j}\lambda_{j},
\]
we obtain $\sum_{j}\lambda_{j} < \tfrac{5}{4}$, and $\bignorm{c_{i}-\sum_{j}\lambda_{j}x_{j}} < \tfrac{6}{5}\cdot\tfrac{5}{4} - 1 = \tfrac{1}{2}$. Hence, for any two unit vectors $a,b\in P_{i}$, $\norm{a-b}\leq\norm{a-c_{i}}+\norm{c_{i}-b}<1$, so \eqref{condition} holds.

A volume argument gives the upper bound for $\# C$: The balls
\[
B(0,\tfrac{9}{10}),\quad B(\pm c_{i},\tfrac{1}{10}),\ i=1,\dots, m,
\]
have disjoint interiors and are contained in the ball $B(0,\tfrac{11}{10})$. Therefore,
\[
(\tfrac{9}{10})^{d}\vol{B} + 2m(\tfrac{1}{10})^{d}\vol{B} \leq (\tfrac{11}{10})^{d}\vol{B},
\]
giving $m\leq \tfrac{1}{2} (11^{d}-9^{d})$.
\qed
\end{proof}
\section*{Acknowledgement}
This paper is part of the author's PhD thesis, written under the supervision of Prof.\ W. L. Fouch\'e at the University of Pretoria. I thank the referees as well as Graham Brightwell for their suggestions on the layout of the paper.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
\label{section:introduction}
The problem of computing shortest paths in planar graphs arises in application fields such as intelligent transportation systems (ITS) and geographic information systems (GIS)~\cite{jing-huang,ziliaskopoulos}, route planning~\cite{bauer-delling,goldberg,raney-nagel}, logistics~\cite{masucci-stanilov}, traffic simulations~\cite{baker-gokhale} and robotics~\cite{kim-maxemchuk}. In particular, non-crossing paths in a planar graph are studied to optimize VLSI layout~\cite{bhatt-leighton}, where two \emph{non-crossing} paths may share edges and vertices, but they do not cross each other in the plane.

We are given a planar graph $G=(V,E)$, where $V$ is a set of $n$ vertices and $E$ is a set of edges, with $|E|=O(n)$. The graph has a fixed embedding, and we are also given a set of $k$ terminal pairs $(s_{1},t_{1}), (s_{2},t_{2}), \ldots, (s_{k},t_{k})$ lying on the external face of $G$. The non-crossing shortest paths problem (NCSP problem) consists of computing the union of $k$ non-crossing shortest paths in $G$, each joining a terminal pair $(s_{i},t_{i})$, provided that such non-crossing paths exist.

\paragraph{State of the art}
Takahashi \emph{et al.}~\cite{giappo2} solved the NCSP problem in a non-negative edge-weighted planar graph in $O(n\log k)$ worst-case time (actually, in their paper the worst-case time is $O(n\log n)$, which can easily be reduced to $O(n\log k)$ by applying the planar single source shortest path algorithm by Henzinger \emph{et al.}~\cite{henzinger}). Their result was improved by Steiger to $O(n\log\log k)$ worst-case time~\cite{steiger}, exploiting the algorithm by Italiano \emph{et al.}~\cite{italiano}. Both algorithms maintain the same time complexity in the unweighted case.

\paragraph{Our results}
In this paper, we solve the NCSP problem on unweighted planar graphs in $O(n)$ worst-case time. We improve, in the unweighted case, the results in~\cite{steiger,giappo2}. By applying the technique in~\cite{err_giappo} we can compute distances between all terminal pairs in linear time. Our algorithm relies on two main results:
\begin{itemize}
\item an algorithm due to Eisenstat and Klein~\cite{klein1}, which gives in $O(n)$ worst-case time an implicit representation of a sequence of shortest-path trees in an undirected unweighted planar graph $G$, where each tree is rooted in a vertex of the external face of $G$. Note that, if we want to compute shortest paths from the implicit representation of shortest path trees given in~\cite{klein1}, then we spend $\Theta(kn)$ worst-case time; this happens when all $k$ shortest paths share a subpath of $\Theta(n)$ edges.
\item the novel concept of \emph{incremental shortest paths (ISP) subgraph} of a graph $G$, introduced in Section~\ref{sec:ISP}, which is a subgraph built incrementally by adding a sequence of shortest paths in $G$ starting from the infinite face of $G$. We show that an ISP subgraph of $G$ partitions the embedding of $G$ into \emph{distance preserving} regions, i.e., for any two vertices $a,b$ in $G$ lying in the same region $R$ it is always possible to find a shortest path in $G$ joining $a$ and $b$ that is contained in $R$.
\end{itemize}

\paragraph{Related work}
Our article fits into a wider context of computing many distances in planar graphs.
In the positive weighted case, the all pairs shortest paths (APSP) problem is solved by Frederickson in $O(n^2)$ worst-case time~\cite{frederickson}, while the single source shortest paths (SSSP) problem is solved in linear time by Henzinger \emph{et al.}~\cite{henzinger}. The best known algorithm for computing many distances in planar graphs is due to Gawrychowski \emph{et al.}~\cite{gawrychowski-mozes}; it allows us to compute the distance between any two vertices in $O(\log n)$ worst-case time after a preprocessing requiring $O(n^{3/2})$ worst-case time. In the planar unweighted case, SSSP trees rooted at vertices in the external face can be computed in linear time as in~\cite{klein1}. More results on the many distances problem can be found in~\cite{cabello,chen-xu,djidjev,fakcharoenphol-rao,mozes-sommer,nussbaum}.

If we are interested in distances from any vertex in the external face to any other vertex, then we can use Klein's algorithm~\cite{klein2005} which, with a preprocessing of $O(n\log n)$ worst-case time, answers each distance query in $O(\log n)$ worst-case time. Kowalik and Kurowski~\cite{kowalik-kurowski} deal with the problem of deciding whether any two query vertices of an unweighted planar graph are closer than a fixed constant $k$. After a preprocessing of $O(n)$ worst-case time, their algorithm answers each query in $O(1)$ worst-case time and, if the answer is affirmative, returns a shortest path between the two vertices.

Non-crossing shortest paths are also used to compute max-flow in undirected planar graphs~\cite{hassin,hassin-johnson,reif}. In particular, they are used to compute the vitality of edges and vertices with respect to the max-flow~\cite{ausiello-franciosa_2,ausiello-franciosa_1}.

Balzotti and Franciosa~\cite{err_giappo} show that, given the union of a set of shortest non-crossing paths in a planar graph, the length of each shortest path can be computed in linear time. This improves the result of~\cite{giappo2}, which can only be applied when the union of the shortest paths is a forest. Wagner and Weihe~\cite{wagner-weihe} present an $O(n)$ worst-case time algorithm for finding edge-disjoint (not necessarily shortest) paths in an undirected planar graph such that each path connects two specified vertices on the infinite face of the graph.

\paragraph{Improved results}
We specialize the problem of finding $k$ shortest non-crossing paths in~\cite{giappo2} to the unweighted case, decreasing the worst-case time complexity from $O(n\log k)$ to $O(n)$ (for every $k$). Therefore, in unweighted graphs we improve the results in~\cite{erickson-nayyeri,kusakari-masubuchi,giappo_rectilinear}.

Erickson and Nayyeri~\cite{erickson-nayyeri} generalized the work in~\cite{giappo2} to the case in which the $k$ terminal pairs lie on $h$ face boundaries. They prove that $k$ non-crossing paths, if they exist, can be found in $2^{O(h^2)}n\log k$ time. Applying our results, if the graph is unweighted, then the time complexity decreases to $2^{O(h^2)}n$ in the worst case.

The authors of~\cite{giappo2} also used their algorithm to compute $k$ non-crossing rectilinear paths with minimum total length in a plane with $r$ obstacles~\cite{giappo_rectilinear}. They found such paths in $O(n\log n)$ worst-case time, where $n=r+k$, which, by our results, reduces to $O(n)$ worst-case time if the graph is unweighted.
Kusakari \emph{et al.}~\cite{kusakari-masubuchi} showed that a set of non-crossing forests in a planar graph can be found in $O(n\log n)$ worst-case time, where two forests $F_1$ and $F_2$ are \emph{non-crossing} if for any pair of paths $p_1\subseteq F_1$ and $p_2\subseteq F_2$, $p_1$ and $p_2$ are non-crossing. With our results, if the graph is unweighted, then the time complexity becomes linear.

\paragraph{Our approach}
We represent the structure of terminal pairs by a partial order called \emph{genealogy tree}. We introduce a new class of graphs, ISP subgraphs, which partition a planar graph into regions that preserve distances. Our algorithm is split into two parts.

In the first part we use Eisenstat and Klein's algorithm, which gives a sequence of shortest path trees rooted in the vertices of the external face. We choose some specific shortest paths from each tree to obtain a sequence of ISP subgraphs $X_1,\ldots,X_k$. By using the distance preserving property of regions generated by ISP subgraphs, we prove that $X_i$ contains a shortest $s_i$-$t_i$ path, for all $i\in\{1,\ldots,k\}$.

In the second part of our algorithm, we extract from each $X_i$ a shortest $s_i$-$t_i$ path and we obtain a set of shortest non-crossing paths, which is our goal. In this part we rely heavily on the partial order given by the genealogy tree.

\paragraph{Structure of the paper}
After giving some definitions in Section~\ref{sec:definitions}, in Section~\ref{sec:ISP} we explain the main theoretical novelty. In Section~\ref{sec:our_algorithm} we first summarize Eisenstat and Klein's algorithm in Subsection~\ref{section:klein's_algorithm}, then in Subsections~\ref{sec:TOPOLINO} and \ref{sec:computational_complexity} we present the two parts of our algorithm and prove the overall computational complexity. Conclusions are given in Section~\ref{sec:conclusions}.

\section{Definitions}\label{sec:definitions}
Let $G$ be a plane graph, i.e., a planar graph with a fixed planar embedding. We denote by $f^\infty_{G}$ (or simply $f^{\infty}$) its unique infinite face, which will also be referred to as the \emph{external} face of $G$. Given a face $f$ of $G$ we denote by $\partial f$ its boundary cycle. Topological and combinatorial definitions of planar graph, embedding and face can be found in~\cite{gross-tucker}.

We recall standard union and intersection operators on graphs; for convenience, we define the empty graph as a graph without edges.

\begin{definition}
Given two undirected (or directed) graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$, we define the following operations and relations:
\begin{itemize}
\item $G\cup H=(V(G)\cup V(H),E(G)\cup E(H))$,
\item $G\cap H=(V(G)\cap V(H),E(G)\cap E(H))$,
\item $G\subseteq H\Longleftrightarrow V(G)\subseteq V(H)$ and $E(G)\subseteq E(H)$,
\item $G\setminus H=(V(G),E(G)\setminus E(H))$,
\item $G=\emptyset\Longleftrightarrow$ $E(G)=\emptyset$ ($V(G)$ can be nonempty).
\end{itemize}
\end{definition}

\noindent Given an undirected (resp., directed) graph $G=(V(G),E(G))$, given an edge (resp., dart) $e$ and a vertex $v$ we write, for short, $e\in G$ in place of $e\in E(G)$ and $v\in G$ in place of $v\in V(G)$. We denote by $uv$ the edge whose endpoints are $u$ and $v$ and we denote by $\overrightarrow{uv}$ the dart from $u$ to $v$. For every dart $\overrightarrow{uv}$ we define $\rev[\overrightarrow{uv}]=\overrightarrow{vu}$ and $\head[\overrightarrow{uv}]=v$. For every vertex $v\in V(G)$ we define the \emph{degree of $v$} as $deg(v)=|\{e\in E(G) \,|\, \text{$v$ is an endpoint of $e$}\}|$.
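These conventions translate directly into code. The following is a minimal sketch (our own illustration, not part of the paper's algorithms), under the assumption that a graph is modeled as a pair of Python sets; it mirrors the definition above verbatim:

\begin{verbatim}
# A graph is a pair (V, E); an undirected edge between u and v
# is stored as frozenset((u, v)).
def g_union(G, H):          # union of vertex and edge sets
    return (G[0] | H[0], G[1] | H[1])

def g_intersection(G, H):   # intersection of vertex and edge sets
    return (G[0] & H[0], G[1] & H[1])

def g_minus(G, H):          # vertices kept, edges of H removed
    return (G[0], G[1] - H[1])

def g_is_empty(G):          # the empty graph has no edges
    return not G[1]

def deg(G, v):              # number of edges having v as an endpoint
    return sum(1 for e in G[1] if v in e)
\end{verbatim}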
For each $\ell\in\mathbb{N}$ we denote by $[\ell]$ the set $\{1,\ldots,\ell\}$. Given a (possibly not simple) cycle $C$, we define the \emph{region bounded by $C$}, denoted by $R_{C}$, as the maximal subgraph of $G$ whose external face has $C$ as boundary.

\subsection{Paths and non-crossing paths}
\label{section:paths_and_non-crossing_paths}
Given a directed path $p$ we denote by $\overline{p}$ its undirected version, in which each dart $\overrightarrow{ab}$ is replaced by edge $ab$; moreover, we denote by $\rev[p]$ its reverse version, in which each dart $\overrightarrow{ab}$ is replaced by dart $\overrightarrow{ba}$.

We say that a path $p$ is an \emph{$a$-$b$ path} if its extremal vertices are $a$ and $b$; clearly, if $p$ is a directed path, then $p$ starts in $a$ and ends in $b$. Moreover, given $i\in[k]$, we denote by \emph{$i$-path} an $s_i$-$t_i$ path, where $s_{i}, t_{i}$ is one of the terminal pairs on the external face. Given an $a$-$b$ path $p$ and a $b$-$c$ path $q$, we define $p\circ q$ as the (possibly not simple) $a$-$c$ path obtained by the union of $p$ and $q$. Let $p$ be a simple path and let $a,b\in V(p)$. We denote by $p[a,b]$ the subpath of $p$ with extremal vertices $a$ and $b$.

We denote by $w(p)$ the length of a path $p$ of a general positive weighted graph $G$. If $G$ is unweighted, then we denote the length of $p$ as $|p|$, that is, the number of its edges.

We say that two paths in a plane graph $G$ are \emph{non-crossing} if the (undirected) curves they describe in the graph embedding do not cross each other; non-crossing paths may share vertices, edges, or darts. This property obviously depends on the embedding of the graph; a combinatorial definition of non-crossing paths can be based on the \emph{Heffter-Edmonds-Ringel rotation principle}~\cite{gross-tucker}. Crossing and non-crossing paths are shown in Figure~\ref{fig:non_crossing_and_single-touch}.

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{2.6cm}
\begin{overpic}[width=2.6cm,percent]{images/crossing_paths3_a.eps}
\end{overpic}
\caption{}\label{fig:non-crossing_a}
\end{subfigure}
\quad
\begin{subfigure}{2.6cm}
\begin{overpic}[width=2.6cm,percent]{images/crossing_paths3_b.eps}
\end{overpic}
\caption{}\label{fig:non-crossing_b}
\end{subfigure}
\quad
\begin{subfigure}{2.6cm}
\begin{overpic}[width=2.6cm,percent]{images/crossing_paths3_c.eps}
\end{overpic}
\caption{}\label{fig:non-crossing_c}
\end{subfigure}
\quad
\begin{subfigure}{2.6cm}
\begin{overpic}[width=2.6cm,percent]{images/crossing_paths3_d.eps}
\end{overpic}
\caption{}\label{fig:non-crossing_d}
\end{subfigure}
\quad
\caption{paths in (\subref{fig:non-crossing_a}) and (\subref{fig:non-crossing_b}) are crossing, while paths in (\subref{fig:non-crossing_c}) and (\subref{fig:non-crossing_d}) are non-crossing.}
\label{fig:non_crossing_and_single-touch}
\end{figure}

\subsection{Genealogy tree}
\label{section:genealogy_tree}
W.l.o.g., we assume that terminal pairs are distinct, i.e., there is no pair $i,j\in[k]$ such that $\{s_i,t_i\}=\{s_j,t_j\}$. Let $\gamma_i$ be the path in $f^\infty_G$ that goes clockwise from $s_i$ to $t_i$, for $i\in[k]$. We also assume that pairs $\{(s_i,t_i)\}_{i\in[k]}$ are \emph{well-formed}, i.e., for all $j,\ell\in[k]$ either ${\gamma_j}\subset{\gamma_\ell}$ or ${\gamma_j}\supset{\gamma_\ell}$ or ${\gamma_j}\cap{\gamma_\ell}=\emptyset$; otherwise it can easily be seen that it is not possible to find a set of $k$ non-crossing paths joining the terminal pairs.
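Well-formedness can be verified in linear time, since it corresponds to checking that a string of parentheses is balanced. The following minimal sketch (our own illustration; it assumes the $2k$ terminals are listed in clockwise order around $f^\infty_G$, each labeled by the index of its pair) makes this concrete:

\begin{verbatim}
def well_formed(boundary_labels):
    # boundary_labels: the pair indices of the 2k terminals, in
    # clockwise order around the external face; each index occurs
    # exactly twice.
    stack, seen = [], set()
    for i in boundary_labels:
        if i not in seen:                # first endpoint: open "("
            seen.add(i)
            stack.append(i)
        elif stack and stack[-1] == i:   # second endpoint: close ")"
            stack.pop()
        else:                            # mismatch: two pairs cross
            return False
    return not stack
\end{verbatim}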
This check is a single sequential scan of the terminal labels, as sketched above.

We define here a partial ordering as in~\cite{err_giappo,giappo2} that represents the inclusion relation between the $\gamma_i$'s. This relation intuitively corresponds to an \emph{adjacency} relation between non-crossing shortest paths joining each pair. Choose an arbitrary $i^*$ such that no $s_{j}$ or $t_{j}$, with $j\neq i^*$, is met walking on $f^{\infty}$ from $s_{i^*}$ to $t_{i^*}$ (either clockwise or counterclockwise), and let $e^*$ be an arbitrary edge on that walk. For each $j\in[k]$, we can assume that $e^*\not\in\gamma_j$; indeed, if this is not the case, it suffices to swap $s_j$ and $t_j$.

We say that $i \prec j$ if $\gamma_i\subset\gamma_j$. We define the \emph{genealogy tree} $T_G$ of a set of well-formed terminal pairs as the transitive reduction of the poset $([k],\prec)$. W.l.o.g., we assume that $i^*=1$, hence the root of $T_G$ is 1. If $i\prec j$, then we say that $i$ is a \emph{descendant} of $j$ and $j$ is an \emph{ancestor} of $i$. Moreover, we say that $j$ is the \emph{parent of $i$}, and we write $p(i)=j$, if $i\prec j$ and there is no $r$ such that $i\prec r$ and $r \prec j$. Figure~\ref{fig:genealogy_tree} shows a set of well-formed terminal pairs, and the corresponding genealogy tree for $i^*=1$. From now on, in all figures we draw $f^\infty_G$ as a solid light grey line.

W.l.o.g., we assume that the external face is a simple cycle, hence $G$ is a biconnected graph. Indeed, if not, it suffices to solve the NCSP problem in each biconnected component.

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/genealogy_tree_piccolo.eps}
\put(65,66){$s_1$} \put(23,65){$t_1$} \put(98,45){$s_2$} \put(34.5,-4.5){$t_2$} \put(100,37.2){$s_3$} \put(94,12.5){$t_3$} \put(87.5,7.5){$s_4$} \put(50,-6){$t_4$} \put(78.6,2.5){$s_5$} \put(61,-5.5){$t_5$} \put(15,3){$s_6$} \put(4,55){$t_6$} \put(-2,16.5){$s_7$} \put(-3,45){$t_7$} \put(41,67.5){$e^*$}
\end{overpic}
\end{subfigure}
\qquad\qquad
\begin{subfigure}{1.9cm}
\begin{overpic}[width=1.9cm,percent]{images/tree_piccolo.eps}
\put(23.15,89.25){1} \put(43.5,60.8){2} \put(4.2,60.8){6} \put(54.5,31.9){3} \put(32,31.9){4} \put(4,31.9){7} \put(32.5,3){5}
\end{overpic}
\end{subfigure}
\par\smallskip
\caption{on the left a set of well-formed terminal pairs. Any value in $\{1,3,5,7\}$ can be chosen as $i^*$. If we choose $i^*=1$, then we obtain the genealogy tree on the right.}
\label{fig:genealogy_tree}
\end{figure}

\section{ISP subgraphs}\label{sec:ISP}
In this section we introduce the concept of \emph{incremental shortest paths (ISP) subgraph} of a graph $G$, that is, a subgraph incrementally built by adding a sequence of shortest paths in $G$ starting from $f^\infty_G$ (see Definition~\ref{def:ISP}). The interest in ISP subgraphs is due to the fact that for any two vertices $a,b$ of $G$ lying in the same face $f$ of the ISP subgraph there is always a shortest path in $G$ joining $a$ and $b$ contained in $f$ (boundary included). All the results of this section hold for positive edge weighted graphs, where the length of a path is the sum of its edge weights instead of the number of its edges.
This is the main novel result of this paper, which allows us to prove that, in order to build the union of shortest paths joining terminal pairs, we can start from the union of some of the shortest paths computed by the algorithm in~\cite{klein1}.

\begin{definition}\label{def:ISP}
A graph $X$ is an \emph{incremental shortest paths (ISP) subgraph} of a positive weighted graph $G$ if $X=X_r$, where $X_{0}$, $X_{1}$, \ldots, $X_{r}$ is a sequence of subgraphs of $G$ built in the following way: $X_0=f^\infty_G$ and $X_i=X_{i-1}\cup p_i$, where $p_i$ is a shortest $x_i$-$y_i$ path in $G$ with $x_i,y_i\in X_{i-1}$.
\end{definition}

\begin{remark}\label{remark:degree_1}
All degree one vertices of an ISP subgraph of $G$ are in $f^\infty_G$.
\end{remark}

We now define the operator $\downarrow$ which, given a path $\pi$ and a cycle $C$, in case $\pi$ crosses $C$, replaces some subpaths of $\pi$ by portions of $C$, as depicted in Figure~\ref{fig:example_ISP+downarrow}(\subref{fig:downarrow}). We observe that $\pi\downarrow C$ may fail to be a simple path even if $\pi$ is.

\begin{definition}\label{def:downarrow}
Let $C$ be a cycle in $G$. Let $a,b$ be two vertices in $R_{C}$ and let $\pi$ be a simple $a$-$b$ path. In case $\pi\subseteq R_C$ we define $\pi\downarrow C=\pi$. Otherwise, let $(v_1,v_2,\ldots,v_{2r})$ be the ordered subset of vertices of $\pi$ that satisfies the following: $\pi[a,v_1]\subseteq R_C$, $\pi[v_{2r},b]\subseteq R_C$, $\pi[v_{2i-1},v_{2i}]\cap R_C=\emptyset$ for all $i\in[r]$, and $\pi[v_{2i},v_{2i+1}]\subseteq R_C$ for all $i\in[r-1]$. For every $i\in[r]$, let $\mu_i$ be the $v_{2i-1}$-$v_{2i}$ path on $C$ such that the region bounded by $\mu_i\circ\pi[v_{2i-1},v_{2i}]$ does not contain $R_C$. We define $\pi\downarrow C=\pi[a,v_1]\circ\mu_1\circ\pi[v_2,v_3]\circ\mu_2\ldots\circ\pi[v_{2r-2},v_{2r-1}]\circ\mu_{r}\circ\pi[v_{2r},b]$.
\end{definition}

Definition~\ref{def:ISP} and Definition~\ref{def:downarrow} are depicted in Figure~\ref{fig:example_ISP+downarrow}.

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/external_regions_grigi_vec_4.eps}
\put(75,.5){$x_1$} \put(95,15.2){$y_1$} \put(-3,17.5){$x_2$} \put(-4.5,44.4){$y_2$} \put(31.6,58.5){$x_4$} \put(90.5,36.5){$y_4$} \put(85,58){$x_3$} \put(17.5,63.5){$y_3$} \put(61,38){$x_5$} \put(15.2,29.5){$y_5$}
\end{overpic}
\caption{}\label{fig:ISP}
\end{subfigure}
\qquad\qquad
\begin{subfigure}[t]{5.5cm}
\begin{overpic}[width=5.5cm,percent]{images/downarrow2.eps}
\put(31,15){$C$} \put(3.5,35.5){$\pi$} \put(61.5,31){$\pi\downarrow C$} \put(19,8){$a$} \put(26,36){$b$} \put(12.5,-2.5){$v_1$} \put(-1.5,11.5){$v_2$} \put(1,23){$v_3$} \put(14.5,30.5){$v_4$} \put(43,32){$v_5$} \put(30.5,51.2){$v_6$}
\end{overpic}
\caption{}\label{fig:downarrow}
\end{subfigure}
\caption{ (\subref{fig:ISP}) an ISP subgraph $X$ of $G$; extremal vertices $x_{i}, y_{i}$ of $p_i$ are drawn, for $i \in [5]$. Different faces of $X$ have different colors. An example of Definition~\ref{def:downarrow} is given in (\subref{fig:downarrow}).}
\label{fig:example_ISP+downarrow}
\end{figure}

In the following theorem we show that, given any face $f$ of an ISP subgraph $X$ of $G$, every path $\pi$ in $G$ whose extremal vertices are in $R_{\partial f}$ is not shorter than $\pi\downarrow \partial f$.

\begin{theorem}\label{prop:main}
Let $X$ be an ISP subgraph of $G$. Let $f$ be any face of $X$, and let $a,b$ be two distinct vertices in $R_{\partial f}$.
For any $a$-$b$ path $\pi$ we have $w(\pi\downarrow \partial f) \leq w(\pi)$.
\end{theorem}
\begin{proof}
Let $\{X_i\}_{i\in[r]}$ be the sequence of ISP subgraphs such that $X=X_r$, and let $p_i$ be the path that builds $X_i$ from $X_{i-1}$. We assume that $p_{i}$ has no vertices in $X_{i-1}$ other than its endpoints $x_i$ and $y_i$; otherwise we can split $p_{i}$ at its intersections with $X_{i-1}$ and repeatedly apply the same proof to each portion of $p_{i}$.

We prove the thesis by induction on $j$ for every choice of a face $f$ of $X_j$, $a,b\in R_{\partial f}$ and $a$-$b$ path $\pi$. In the base case, where $j=1$, there are exactly two faces $A$ and $B$ in $X_1$ other than $f^\infty_G$. Let $a,b\in V(R_{\partial A})$ (the same argument holds for $B$) and let $\pi$ be any $a$-$b$ path. In case $\pi \subseteq R_{\partial A}$ we have $\pi\downarrow \partial A = \pi$, hence the thesis trivially holds. In case $\pi \not\subseteq R_{\partial A}$, then $\pi\downarrow \partial A$ is not longer than $\pi$ because some subpaths of $\pi$ have been replaced by subpaths of $p_1$ with the same extremal vertices and $p_1$ is a shortest path.

We assume that the thesis holds for all $i<j$ and we prove it for $j$. Let $f$ be a face of $X_j$ and let $f'$ be the unique face of $X_{j-1}$ such that $f\subset f'$ (Figures~\ref{fig:f_f'}(\subref{fig:f_f'_1}) and~\ref{fig:f_f'}(\subref{fig:f_f'_2}) show faces $f$ and $f'$, respectively). Let $a,b\in V(R_{\partial f})$ and let $\pi$ be an $a$-$b$ path. Three cases may occur:
\begin{itemize}
\item \textbf{case $\pi\subseteq R_{\partial f}$:} the thesis trivially holds, since $\pi\downarrow \partial f=\pi$;
\item \textbf{case $\pi\subseteq R_{\partial f'}$ and $\pi\not\subseteq R_{\partial f}$:} in this case $\pi$ crosses $p_j$ an even number of times, thus $\pi\downarrow \partial f$ is not longer than $\pi$, since some subpaths of $\pi$ have been replaced by subpaths of $p_j$ with the same extremal vertices and $p_j$ is a shortest path (see Figure~\ref{fig:f_f'}(\subref{fig:f_f'_3}), where $\pi$ is the dashed red path);
\item \textbf{case $\pi\not\subseteq R_{\partial f'}$:} since $f \subseteq f'$, it is easy to see that $\pi \downarrow \partial f = (\pi \downarrow \partial f') \downarrow \partial f$. Let us consider $\pi' = \pi\downarrow \partial f'$. By induction, it holds that $w(\pi') \leq w(\pi)$. We observe now that $\pi'\subseteq R_{\partial f'}$ and $\pi'\not\subseteq R_{\partial f}$, hence the previous case applies, showing that $w(\pi' \downarrow \partial f) \leq w(\pi')$.
Finally, the two previous inequalities imply $w(\pi \downarrow \partial f) \leq w(\pi \downarrow \partial f') \leq w(\pi)$ (see Figure~\ref{fig:f_f'}(\subref{fig:f_f'_3}), where $\pi$ is the solid green path).\qed
\end{itemize}
\end{proof}

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{4.2cm}
\begin{overpic}[width=4.2cm,percent]{images/f_f_primo_2.eps}
\put(40,43){$f$} \put(40,27){$p_j$} \put(5.5,61){$X_j$}
\end{overpic}
\caption{a face $f$ in $X_j$ \\$\mathbf{}$ \\$\mathbf{}$}\label{fig:f_f'_1}
\end{subfigure}
\quad
\begin{subfigure}{4.2cm}
\begin{overpic}[width=4.2cm,percent]{images/f_f_primo_1.eps}
\put(42,35){$f'$} \put(0,61){$X_{j-1}$}
\end{overpic}
\caption{a face $f'$ in $X_{j-1}$ \\$\mathbf{}$ \\$\mathbf{}$}\label{fig:f_f'_2}
\end{subfigure}
\quad
\begin{subfigure}{4.2cm}
\begin{overpic}[width=4.2cm,percent]{images/f_f_primo_3.eps}
\put(5.5,61){$X_j$} \put(27,41){$a$} \put(41,42){$b$}
\end{overpic}
\caption{two examples of $\pi$ in the second case (dashed red) and third case (solid green)}\label{fig:f_f'_3}
\end{subfigure}
\caption{in (\subref{fig:f_f'_1}) and (\subref{fig:f_f'_2}) faces $f$ and $f'$ built on the ISP subgraph in Figure~\ref{fig:example_ISP+downarrow}(\subref{fig:ISP}). In (\subref{fig:f_f'_3}) we depict the second and third cases of the proof of Theorem~\ref{prop:main}.}
\label{fig:f_f'}
\end{figure}

We can now state the main property of ISP subgraphs.

\begin{corollary}
Let $X$ be an ISP subgraph of $G$ and let $f$ be any face of $X$. For every $a,b\in R_{\partial f}$ there exists a shortest $a$-$b$ path of $G$ contained in $R_{\partial f}$.
\end{corollary}

\section{Our algorithm}\label{sec:our_algorithm}
We summarize in Subsection~\ref{section:klein's_algorithm} the result of Eisenstat and Klein's paper~\cite{klein1}, which deals with the multiple-source shortest paths problem. For the sake of clarity, we split our algorithm in two parts:
\begin{itemize}
\item in Subsection~\ref{sec:TOPOLINO} we introduce algorithm \TOPOLINO, which builds a sequence $\{X_i\}_{i\in[k]}$ of subgraphs of $G$ such that $X_k$ contains a shortest path for each terminal pair, and possibly some extra edges. We note in advance that $X_i\cup f^\infty_G$ is an ISP subgraph of $G$, for all $i\in[k]$.
\item in Subsection~\ref{sec:computational_complexity} we present algorithm \MINNIE, which, by using algorithm \TOPOLINO, builds a directed graph that is exactly the union of shortest directed paths, one joining each terminal pair, contained in the output of algorithm \TOPOLINO.
\end{itemize}

\subsection{Eisenstat and Klein's result}
\label{section:klein's_algorithm}
The algorithm in~\cite{klein1} takes as input an undirected unweighted planar graph $G$, where $v_1, v_{2},\ldots,v_r$ is the sequence of vertices in the external face of $G$ in clockwise order, and returns an implicit representation of a sequence of shortest path trees $\mathcal{T}_i$, for $i\in[r]$, where each $\mathcal{T}_i$ is rooted in $v_{i}$. The sequence of trees $\mathcal{T}_i$, for $i\in[r]$, is represented by explicitly listing the darts in $\mathcal{T}_1$, and listing the darts that are added to transform $\mathcal{T}_i$ into $\mathcal{T}_{i+1}$, for $1 \le i < r$ (for each added dart from $x$ to $y$, the unique dart entering $y$ in $\mathcal{T}_i$ is deleted, with two exceptions: the dart entering the old root $v_{i}$ is added without deleting any dart, and the dart entering the new root $v_{i+1}$ is deleted without adding any dart). Hence, the output of their algorithm is $\mathcal{T}_1$ and a sequence of sets of darts.
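To fix ideas, the following minimal sketch (our own illustration; the names and data layout are hypothetical and not taken from~\cite{klein1}) shows how a consumer of this implicit representation can maintain the current tree as a map from each non-root vertex to the dart entering it:

\begin{verbatim}
# parent maps each non-root vertex v to the dart (u, v) entering v
# in the current tree; the root has no entry.
def advance(parent, added_darts, old_root, new_root):
    # Transform the tree rooted at old_root into the tree rooted at
    # new_root by applying one set of added darts.
    for (u, v) in added_darts:
        # overwrites the deleted dart entering v; for v == old_root
        # there was nothing to overwrite
        parent[v] = (u, v)
    # the dart entering new_root is deleted without replacement
    del parent[new_root]
    return parent
\end{verbatim}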
A key result in~\cite{klein1} shows that if a dart $d$ appears in $\mathcal{T}_{i+1}\setminus\mathcal{T}_i$, then $d$ cannot appear in any $\mathcal{T}_{j+1}\setminus\mathcal{T}_j$, for $j>i$. Thus the implicit representation of the sequence of shortest path trees has size $O(n)$. This representation can be computed in $O(n)$ worst-case time.

\subsection{Algorithm \TOPOLINO}
\label{sec:TOPOLINO}
Algorithm \TOPOLINO builds a sequence $\{X_i\}_{i\in[k]}$ of subgraphs of $G$ by using the sequence of shortest path trees given by Eisenstat and Klein's algorithm. We point out that we are not interested in the shortest path trees rooted at every vertex of $f^\infty_G$; we only need the shortest path trees rooted at the $s_i$'s. So, we define $T_i$ as the shortest path tree rooted in $s_i$, for $i\in[k]$. We denote by $T_i[v]$ the path in $T_i$ from $s_i$ to $v$.

The algorithm starts by computing the first subgraph $X_1$, which is just the undirected $1$-path in $T_1$, i.e., $\overline{T_{1}[t_{1}]}$ (we recall that all trees $T_i$ given by the algorithm in~\cite{klein1} are rooted directed trees). Then the sequence of subgraphs $X_i$, for $i=2,\ldots,k$, is computed by adding some undirected paths extracted from the shortest path trees $T_i$ given by Eisenstat and Klein's algorithm.

We define $H_{i}$ as the set of vertices $h$ such that at least one dart $d$ with $\head[d]=h$ is added in passing from $T_{i-1}$ to $T_{i}$. Hence, $H_i$ is the set of vertices whose parent in $T_{i}$ differs from the parent in $T_{i-1}$. At iteration $i$, we add path $\overline{T_i[h]}$ to $X_i$, for each $h$ in $H_i$.

\begin{figure}[h]
\begin{algorithm}[H]
\SetAlgorithmName{Algorithm \texttt{NCSPsupergraph}}{}{}
\renewcommand{\thealgocf}{}
\caption{}
\KwIn{an undirected unweighted planar embedded graph $G$ and $k$ well-formed terminal pairs of vertices $(s_i,t_i)$, for $i\in[k]$, on the external face of $G$}
\KwOut{an undirected graph $X_k$ that contains a set of non-crossing paths $P=\{\pi_1,\ldots,\pi_k\}$, where $\pi_i$ is a shortest $s_i$-$t_i$ path, for $i\in[k]$}
{$X_1=\overline {T_1[t_1]}$\label{line:1_compute_pi_1}\;
\For{$i=2,\ldots,k$}{
$X_i=X_{i-1}$\label{line:X_i=X_i-1}\;
For all $h\in H_i$, $X_i=X_i\cup \overline{T_i[h]}$\label{line:d2}\;
Let $\eta_i$ be the undirected path on $T_i$ that starts in $t_i$ and walks backwards until a vertex in $X_i$ is reached\label{line:mu_i}\;
$X_i=X_i\cup \eta_i$\label{line:X+mu_i}\;
}
}
\end{algorithm}
\end{figure}

\begin{lemma}\label{lemma:TOPOLINO_O(n)}
Algorithm \TOPOLINO has $O(n)$ worst-case time complexity.
\end{lemma}
\begin{proof}
Eisenstat and Klein's algorithm requires $O(n)$ worst-case time, implying that the $H_i$'s and the $T_i$'s can be found in $O(n)$ worst-case time. Algorithm \TOPOLINO visits each edge of $G$ at most $O(1)$ times (in Line~\ref{line:d2}, $\overline{T_i[h]}$ can be found by starting in $h$ and walking backwards on $T_i$ until a vertex of $X_i$ is found). The thesis follows.\qed
\end{proof}

Figure~\ref{fig:TOPO} shows how algorithm \TOPOLINO builds $X_4$ starting from $X_3$. Starting from $X_{3}$ in Figure~\ref{fig:TOPO}(\subref{fig:TOPO_1}), Figure~\ref{fig:TOPO}(\subref{fig:TOPO_2}) shows the darts whose head is in $H_4$. Consider the unique dart $d$ whose head is the vertex $x$: we observe that $\overline{d}$ is already in $X_3$; this happens because $\rev[d]\in T_3[t_3]$. Indeed, it is possible that at iteration $i$ some portions of the undirected paths that we add in Line~\ref{line:d2} are already in $X_{i-1}$.
Figure~\ref{fig:TOPO}(\subref{fig:TOPO_3}) highlights $\bigcup_{h\in H_4}T_4[h]$ and $\eta_4$, while in Figure~\ref{fig:TOPO}(\subref{fig:TOPO_4}) $X_4$ is drawn.

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/Topo_grigio.eps}
\put(44,69){$s_1$} \put(25,66){$t_1$} \put(62,68){$s_2$} \put(76,63){$t_2$} \put(86,58){$s_3$} \put(5,56){$t_3$}
\end{overpic}
\caption{$X_{3}$ in black\\$\mathbf{}$}\label{fig:TOPO_1}
\end{subfigure}
\qquad
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/Topo_darts_rosse_ter.eps}
\put(44,69){$s_1$} \put(25,66){$t_1$} \put(62,68){$s_2$} \put(76,63){$t_2$} \put(86,58){$s_3$} \put(5,56){$t_3$} \put(31,31.5){$x$}
\end{overpic}
\caption{$X_{3}$ in grey and the darts whose head is in $H_4$ in red}\label{fig:TOPO_2}
\end{subfigure}
\qquad
\par\bigskip
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/Topo_tutto_quater.eps}
\put(44,69){$s_1$} \put(25,66){$t_1$} \put(62,68){$s_2$} \put(76,63){$t_2$} \put(86,58){$s_3$} \put(5,56){$t_3$} \put(53,-4.5){$s_4$} \put(3,12){$t_4$}
\end{overpic}
\par\medskip
\caption{$\bigcup_{h\in H_4}T_4[h]$ in red and $\eta_4$ in green}\label{fig:TOPO_3}
\end{subfigure}
\qquad
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/Topo_nero.eps}
\put(44,69){$s_1$} \put(25,66){$t_1$} \put(62,68){$s_2$} \put(76,63){$t_2$} \put(86,58){$s_3$} \put(5,56){$t_3$} \put(53,-4.5){$s_4$} \put(3,12){$t_4$}
\end{overpic}
\par\medskip
\caption{$X_4$ in black\\$\mathbf{}$}\label{fig:TOPO_4}
\end{subfigure}
\caption{algorithm \TOPOLINO: graph $X_4$ is built starting from $X_3$.}
\label{fig:TOPO}
\end{figure}

The subgraphs $\{X_i\}_{i\in[k]}$ built by algorithm \TOPOLINO, together with $f^\infty_G$, satisfy all the hypotheses of Theorem~\ref{prop:main}. Indeed, the paths added in Line~\ref{line:d2} and Line~\ref{line:X+mu_i} are shortest paths in $G$ joining vertices in $X_{i-1}$, thus fulfilling Definition~\ref{def:ISP}. So, we exploit Theorem~\ref{prop:main} to prove that $X_i$ contains an $i$-path, for $i\in[k]$, and, in particular, that $X_k$ contains a set of non-crossing paths $P=\{\pi_1,\ldots,\pi_k\}$, where $\pi_i$ is a shortest $i$-path, for $i\in[k]$. The main idea is to show that $X_i$ contains an undirected path that has the same length as the shortest $i$-path found by the algorithm by Eisenstat and Klein. This is proved in Theorem~\ref{th:TOPOLINO}.

Given a subgraph $X$ of $G$, we say that an $i$-path $p$ is the \emph{leftmost $i$-path in $X$} if for every $i$-path $q\subseteq X$ it holds that $R_{p\circ\gamma_i}\subseteq R_{q\circ\gamma_i}$. We say that an undirected $a$-$b$ path $p$ \emph{always turns left} if $p$ chooses the leftmost edge, w.r.t.\ the fixed embedding, at each vertex going from $a$ to $b$.

\begin{theorem}\label{th:TOPOLINO}
Let $\pi_i$ be the leftmost $i$-path in $X_i$, for $i\in[k]$. The following hold:
\begin{enumerate}[label=\thetheorem.(\arabic*), ref=\thetheorem.(\arabic*),leftmargin=\widthof{1.(1)}+\labelsep]
\item\label{item:leftmost} $\pi_i$ is the $s_i$-$t_i$ path in $X_i$ that always turns left, for $i\in[k]$,
\item\label{item:shortest} $\pi_i$ is a shortest $i$-path, for $i\in[k]$,
\item\label{item:pi_i_non-croossing} for all $i,j\in[k]$, $\pi_i$ and $\pi_j$ are non-crossing.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{itemize}
\item\textbf{\ref{item:leftmost}} For convenience, for every $i\in[k]$, let $\lambda_i$ be the undirected path on $X_i$ that starts in $s_i$ and always turns left until it reaches either $t_i$ or a vertex $x$ of degree one in $X_i$; we observe that $\lambda_i$ is well defined and, by Remark~\ref{remark:degree_1}, $x\in f^\infty_G$. We have to prove that $\lambda_i=\pi_i$.

Let $i\in[k]$. First, we observe that $s_{i-1}\in H_i$, thus, by Line~\ref{line:d2}, $\overline{T_i[s_{i-1}]}\subseteq X_i$; since $T_i$ is rooted in $s_i$, this implies $s_i\in X_i$. Let $x$ be the extremal vertex of $\lambda_i$ other than $s_i$. Assume by contradiction that $x\neq t_i$. Two cases are possible: either $x\in V(f^\infty_G)\setminus V(\gamma_i)$ or $x\in V(\gamma_i)\setminus\{t_i\}$. The first case cannot occur because Line~\ref{line:d2} and Line~\ref{line:X+mu_i} imply $\overline{T_i[t_i]}\subseteq X_i$, thus $\lambda_i$ would cross $\eta_i$, a contradiction. In the second case, $x\in V(\gamma_i)\setminus\{t_i\}$. Let $d\in\lambda_i$ be the dart such that $\head[d]=x$. By definition of $\lambda_i$, vertex $x$ has degree one in $X_i$. By Line~\ref{line:1_compute_pi_1}, Line~\ref{line:d2} and Line~\ref{line:X+mu_i}, all vertices with degree one are equal to either $s_\ell$ or $t_\ell$, for some $\ell\in[k]$, and this implies that there exists $j<i$ such that $x\in\{s_j,t_j\}$. This is a contradiction because there is no $s_j$ or $t_j$ in $V(\gamma_i)\setminus\{s_i,t_i\}$ with $j<i$. Hence $\lambda_i$ is an $i$-path, and, by its definition, $\lambda_i$ is the leftmost $i$-path in $X_i$. Therefore $\lambda_i=\pi_i$.
\item\textbf{\ref{item:shortest}} We prove that $\pi_i$ is a shortest $i$-path by using Theorem~\ref{prop:main}; indeed, $X_i\cup f^\infty_G$ is an ISP subgraph of $G$ by construction. Let $G'$ be the graph obtained from $G$ by adding a dummy path $q$ from $s_i$ to $t_i$ in $f^\infty_G$ with high length (for example, $|q|=|E(G)|$). Let $C$ be the cycle $\pi_i\circ q$. We observe that $\overline{T_i[t_i]}\downarrow C=\pi_i$ and $C$ is the boundary of a face of $G'$. Thus, by Theorem~\ref{prop:main}, $|\pi_i|\leq|\overline{T_i[t_i]}|$. Since $\overline{T_i[t_i]}$ is a shortest path, $\pi_i$ is a shortest path in $G'$, hence it is also a shortest path in $G$.
\item\textbf{\ref{item:pi_i_non-croossing}} Let us assume by contradiction that there exist $i,j\in[k]$, with $i<j$, such that $\pi_i$ and $\pi_j$ are crossing. Since $\pi_i\subseteq X_i\subseteq X_j$, path $\pi_j$ could not have always turned left in $X_j$, a contradiction.\qed
\end{itemize}
\end{proof}

\subsection{Algorithm \MINNIE}\label{sec:computational_complexity}
The graph $X_k$ given by algorithm \TOPOLINO contains a shortest path for each terminal pair, but $X_k$ may also contain edges that do not belong to any shortest path. To overcome this problem we apply algorithm \MINNIE, which builds a directed graph $Y_k=\bigcup_{i\in[k]}\rho_i$, where $\rho_i$ is a directed shortest $i$-path, for $i\in[k]$. Moreover, we prove that $Y_k$ can be built in linear time. This implies that, by using the results in~\cite{err_giappo}, we can compute the lengths of all shortest $i$-paths, for $i\in[k]$, in $O(n)$ worst-case time (see Theorem~\ref{th:main}).

We use the sequence of subgraphs $\{X_i\}_{i\in[k]}$. By Theorem~\ref{th:TOPOLINO}, we know that $X_i$ contains a shortest undirected $i$-path $\pi_i$ and we can list its edges in $O(|\pi_i|)$ time.
But if an edge $e$ is shared by many $\pi_i$'s, then $e$ is visited many times. Thus obtaining $\bigcup_{i\in[k]}\pi_i$ by this easy procedure requires $O(kn)$ worst-case time. To overcome this problem, we must visit every edge in $\bigcup_{i\in[k]} \pi_i$ only a constant number of times.

Now we introduce two useful lemmata that will be used later. The first lemma shows that two incomparable directed paths $\pi_i$ and $\pi_j$ (i.e., such that $i \not\prec j$ and $j \not\prec i$ in the genealogy tree $T_G$) cannot share a dart, although it is possible that $\overrightarrow{ab}\in\pi_i$ and $\overrightarrow{ba}\in\pi_j$. The second lemma deals with the intersection of non-crossing paths joining comparable pairs.

\begin{lemma}\label{lemma:preparatore_fratelli}(\cite{err_giappo})
Let $\pi_i$ be a shortest directed $i$-path and let $\pi_j$ be a shortest directed $j$-path, for some $i,j\in[k]$. If $j$ is neither an ancestor nor a descendant of $i$ in $T_G$, then $\pi_i$ and $\pi_j$ have no common darts.
\end{lemma}
\begin{proof}
Let us assume by contradiction that $\pi_i$ and $\pi_j$ have some common darts, and let $d$ be the dart in $\pi_i\cap\pi_j$ that appears first in $\pi_i$. Let $R$ be the region bounded by $\overline{\pi_j[s_j,\tail(d)]}$, $\overline{\pi_i[s_i,\tail(d)]}$ and the clockwise undirected $s_i$-$s_j$ path in $f^\infty$ (Figure~\ref{fig:directed_non-crossing}(\subref{fig:directed_non-crossing_1}) shows $\pi_i$, $\pi_j$ and $R$). Since $\pi_j$ is a simple path, $\pi_j$ crosses $\pi_i$ in at least one vertex of $\pi_i[s_i,\tail(d)]$. Let $x$ be the first vertex in $\pi_i[s_i,\tail(d)]$ after $\head(d)$ in $\pi_j$. Now, by looking at the cycle $\pi_i[x,\head(d)]\circ\pi_j[\head(d),x]$, it follows that $\pi_i$ and $\pi_j$ cannot both be shortest paths, a contradiction (Figure~\ref{fig:directed_non-crossing}(\subref{fig:directed_non-crossing_2}) shows this cycle).\qed
\end{proof}

\begin{lemma}\label{lemma:preparatore_2}(\cite{err_giappo})
Let $\{\pi_i\}_{i\in[k]}$ be a set of non-crossing directed paths. Let $i,j\in[k]$; if $i$ is a descendant of $j$, then $\pi_i\cap\pi_j\subseteq\pi_\ell$, for all $\ell\in[k]$ such that $i\prec \ell\prec j$.
\end{lemma}
\begin{proof}
Let us assume $\pi_i\cap\pi_j\neq\emptyset$ and choose $\ell\in[k]$ such that $i \prec\ell \prec j$. Let $e$ be the dart in $\pi_i\cap\pi_j$ that appears first in $\pi_i$ and let $Q$ be the region bounded by $\overline{\pi_j[s_j,\tail(e)]}$, $\overline{\pi_i[s_i,\tail(e)]}$ and the clockwise undirected $s_j$-$s_i$ path in $f^\infty$ (a region $Q$ and dart $e$ are shown in Figure~\ref{fig:directed_non-crossing}(\subref{fig:directed_non-crossing_3})).
It is clear that if $e\not\in\pi_\ell$, then $\{\pi_i,\pi_j,\pi_\ell\}$ is not a set of non-crossing paths, a contradiction.\qed
\end{proof}

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{4cm}
\begin{overpic}[width=4cm,percent]{images/directed_non-crossing_1.eps}
\put(55,26){$d$} \put(43,12){$R$} \put(91,10){$s_i$} \put(61,-5.5){$t_i$} \put(31,-4){$s_j$} \put(12,3.5){$t_j$}
\end{overpic}
\caption{}\label{fig:directed_non-crossing_1}
\end{subfigure}
\qquad
\begin{subfigure}{4cm}
\begin{overpic}[width=4cm,percent]{images/directed_non-crossing_2.eps}
\put(87,24){$x$} \put(91,10){$s_i$} \put(61,-5.5){$t_i$} \put(31,-4){$s_j$} \put(12,3.5){$t_j$}
\end{overpic}
\caption{}\label{fig:directed_non-crossing_2}
\end{subfigure}
\qquad
\begin{subfigure}{4cm}
\begin{overpic}[width=4cm,percent]{images/directed_non-crossing_3.eps}
\put(73,24){$Q$} \put(48,39.5){$e$} \put(61.5,-4){$s_i$} \put(13.5,2){$t_i$} \put(90,9.5){$s_\ell$} \put(5,9){$t_\ell$} \put(97.9,21){$s_j$} \put(-4.5,22){$t_j$}
\end{overpic}
\caption{}\label{fig:directed_non-crossing_3}
\end{subfigure}
\caption{in (\subref{fig:directed_non-crossing_1}) and (\subref{fig:directed_non-crossing_2}) the paths $\pi_j$ and $\pi_i$, the dart $d$, the region $R$ and the vertex $x$ used in the proof of Lemma~\ref{lemma:preparatore_fratelli}. In (\subref{fig:directed_non-crossing_3}) the region $Q$ and the dart $e$ in the proof of Lemma~\ref{lemma:preparatore_2}.}
\label{fig:directed_non-crossing}
\end{figure}

Now we show how to use these two lemmata for our goals. Let $\rho_i$ be a shortest directed $i$-path and let $\rho_j$ be a shortest directed $j$-path, for some $i,j\in[k]$, $i\neq j$. By Lemma~\ref{lemma:preparatore_fratelli}, if $i$ and $j$ are not comparable in $T_G$, then $\rho_i$ and $\rho_j$ have no common darts. Moreover, by Lemma~\ref{lemma:preparatore_2}, if $i$ is an ancestor of $j$ in $T_G$, then $\rho_i\cap\rho_j\subseteq\rho_{p(j)}$. By using these two facts, in order to list the darts in $\rho_{i}$ it suffices to find the darts in $\rho_i\setminus\rho_{p(i)}$, for all $i \in [k] \setminus \{1\}$ (we recall that 1 is the root of $T_G$). To this end we use algorithm \MINNIE, which builds a sequence of directed graphs $\{Y_i\}_{i\in[k]}$ such that $Y_k$ is equal to $\bigcup_{i\in[k]}\rho_i$, where $\rho_i$ is a shortest directed $i$-path, for $i\in[k]$. We prove the correctness of algorithm \MINNIE in Theorem~\ref{th:MINNIE}. At iteration $i$ we compute $\rho_i\setminus\rho_{p(i)}$, showing that $\rho_i\setminus\rho_{p(i)}=\sigma_i\cup\rev[\tau_i]$, where $\sigma_i$ and $\tau_i$ are computed in Line~\ref{line:sigma_i} and Line~\ref{line:tau_i}, respectively. We observe that if $\rho_i\cap\rho_{p(i)}=\emptyset$, then $\sigma_i=\rev[\tau_i]=\rho_i$.
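The left- and right-turning walks used in Lines~\ref{line:sigma_i} and~\ref{line:tau_i} below operate directly on the rotation system of the embedding. The following minimal sketch (our own illustration; it assumes a simple graph, that \texttt{rot[v]} lists the neighbors of $v$ in clockwise order, and that a left turn corresponds to scanning counterclockwise from the arrival edge) extracts a walk that always turns left:

\begin{verbatim}
def next_left(rot, X_edges, u, v):
    # Leftmost dart leaving v after arriving from u: scan rot[v]
    # counterclockwise starting at u. The reverse dart (v, u) is
    # found last, so the walk backtracks at a degree-one vertex.
    nbrs = rot[v]
    i = nbrs.index(u)
    for step in range(1, len(nbrs) + 1):
        w = nbrs[(i - step) % len(nbrs)]
        if frozenset((v, w)) in X_edges:
            return (v, w)

def always_turn_left(rot, X_edges, first, target):
    # Walk starting with the dart `first`, always turning left in X,
    # until the head of the current dart is `target`.
    walk = [first]
    while walk[-1][1] != target:
        u, v = walk[-1]
        walk.append(next_left(rot, X_edges, u, v))
    return walk
\end{verbatim}

In algorithm \MINNIE, the stopping rule is slightly different: the walk also halts as soon as the next dart (or, for $\tau_i$, its reverse) already lies in $Y_{i-1}$.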
\begin{figure}
\begin{algorithm}[H]
\SetAlgorithmName{Algorithm \texttt{NCSPunion}}{}{}
\renewcommand{\thealgocf}{}
\caption{}
\KwIn{an undirected unweighted planar embedded graph $G$ and $k$ well-formed terminal pairs of vertices $(s_i,t_i)$, for $i\in[k]$, on the external face of $G$}
\KwOut{a directed graph $Y_k$ formed by the union of directed shortest non-crossing paths from $s_i$ to $t_i$, for $i\in[k]$}
{Compute $X_1$ as in algorithm \TOPOLINO\;
$Y_1$ is the directed version of $X_1$ oriented from $s_1$ to $t_1$\label{line:Y_1}\;
\For{$i=2,\ldots,k$}{
Compute $X_i$ as in algorithm \TOPOLINO\;
$\sigma_i$ is the directed path that starts in $s_i$ and always turns left in $X_i$ until either $\sigma_i$ reaches $t_i$ or the next dart $d_i$ of $\sigma_i$ satisfies $d_i\in Y_{i-1}$\label{line:sigma_i}\;
$\tau_i$ is the directed path that starts in $t_i$ and always turns right in $X_i$ until either $\tau_i$ reaches $s_i$ or the next dart $d_i'$ of $\tau_i$ satisfies $\rev[d_i']\in Y_{i-1}$\label{line:tau_i}\;
$Y_i=Y_{i-1}\cup\sigma_i\cup \rev[\tau_i]$\label{line:Y}\;
}
}
\end{algorithm}
\end{figure}

\begin{theorem}\label{th:MINNIE_complexity}
Algorithm \MINNIE has $O(n)$ worst-case time complexity.
\end{theorem}
\begin{proof}
Algorithm \MINNIE uses algorithm \TOPOLINO, which requires $O(n)$ worst-case time by Lemma~\ref{lemma:TOPOLINO_O(n)}. Moreover, algorithm \MINNIE visits each dart of the ``directed version'' of $X_k$ at most $O(1)$ times, where the \emph{directed version of} $X_k$ is the directed graph built from $X_k$ by replacing each edge $ab$ by the pair of darts $\overrightarrow{ab}$ and $\overrightarrow{ba}$. Thus, algorithm \MINNIE requires $O(n)$ worst-case time, since $X_k$ is a subgraph of $G$.\qed
\end{proof}

\begin{theorem}\label{th:MINNIE}
Graph $Y_k$ computed by algorithm \MINNIE is the union of $k$ shortest non-crossing $i$-paths, for $i\in[k]$.
\end{theorem}
\begin{proof}
Let $\{\pi_i\}_{i\in[k]}$ be the set of paths defined in Theorem~\ref{th:TOPOLINO}. For all $i\in[k]$, we denote by $\overrightarrow{\pi_i}$ the directed version of $\pi_i$, oriented from $s_i$ to $t_i$. First we define $\rho_1=\overrightarrow{\pi_1}$ and for all $i\in[k]\setminus\{1\}$ we define
\begin{equation}
\label{eq:rho_i}
\rho_i=
\begin{cases}
\overrightarrow{\pi_i}[s_i,u_i]\circ \rho_{p(i)}[u_i,v_i]\circ \overrightarrow{\pi_i}[v_i,t_i], &\text{ if }\,\, \overrightarrow{\pi_i}\cap\rho_{p(i)}\neq\emptyset,\\
\overrightarrow{\pi_i}, &\text{ otherwise},
\end{cases}
\end{equation}
where we assume that if $V(\overrightarrow{\pi_i}\cap\rho_{p(i)})\neq\emptyset$, then $u_i$ and $v_i$ are the vertices in $V(\overrightarrow{\pi_i}\cap\rho_{p(i)})$ that appear first and last in $\overrightarrow{\pi_i}$, respectively; the definition of $\rho_i$ as in \eqref{eq:rho_i} is shown in Figure~\ref{fig:eq_rho_i}. Now we split the proof into three parts: first we prove that $\{\rho_i\}_{i\in[k]}$ is a set of shortest paths (we need it to apply Lemma~\ref{lemma:preparatore_fratelli}); second we prove that $\{\rho_i\}_{i\in[k]}$ is a set of non-crossing paths (we need it to apply Lemma~\ref{lemma:preparatore_2}); third we prove that $Y_k=\bigcup_{i\in[k]}\rho_i$ (we prove it by Lemma~\ref{lemma:preparatore_fratelli} and Lemma~\ref{lemma:preparatore_2}).
\begin{itemize}
\item \textbf{$\{\rho_i\}_{i\in[k]}$ is a set of shortest paths:} we proceed by induction on $i$. The base case is trivial because $\pi_1$ is a shortest path by definition.
Let us assume that $\rho_j$ is a shortest $j$-path, for $j<i$; we have to prove that $\rho_i$ is a shortest $i$-path. If $\overrightarrow{\pi_i}\cap\rho_{p(i)}=\emptyset$, then $\rho_i=\overrightarrow{\pi_i}$ by \eqref{eq:rho_i}, thus the thesis holds because $\{\pi_i\}_{i\in[k]}$ is a set of shortest paths. Hence let us assume that $\overrightarrow{\pi_i}\cap\rho_{p(i)}\neq\emptyset$; then it suffices, by definition of $\rho_i$, to show that $|\pi_i[u_i,v_i]|=|\rho_{p(i)}[u_i,v_i]|$, which is true by induction.
\item \textbf{$\{\rho_i\}_{i\in[k]}$ is a set of non-crossing paths:} we proceed by induction on $i$. The base case is trivial because there is only one path. Let us assume that $\{\rho_j\}_{j\in[i-1]}$ is a set of non-crossing paths; we have to prove that $\rho_i$ does not cross $\rho_j$, for any $j<i$. If $\rho_i$ and $\rho_j$ are crossing and $j$ is not an ancestor of $i$, then, by construction of $\rho_i$, either $\rho_{p(i)}$ and $\rho_j$ are crossing or $\pi_i$ and $\pi_j$ are crossing; both cases are impossible, by induction and Theorem~\ref{th:TOPOLINO}. Moreover, by definition, $\rho_i$ does not cross $\rho_{p(i)}$, and, by induction, if $\ell$ is an ancestor of $i$ such that $\ell\neq p(i)$, then $\rho_i$ does not cross $\rho_\ell$; indeed, if not, then $\rho_\ell$ would cross $\rho_{p(i)}$, a contradiction. Hence $\{\rho_i\}_{i\in[k]}$ is a set of non-crossing paths.
\item \textbf{$Y_k$ is the union of the $\rho_i$'s:} now we prove that $Y_k=\bigcup_{i\in[k]}\rho_i$. In particular we show that $\rho_1=\overrightarrow{\pi_1}$ and for all $i\in[k]\setminus\{1\}$
\begin{equation}
\label{eq:rho_i_2}
\rho_i=
\begin{cases}
\sigma_i\circ \rho_{p(i)}[u_i,v_i]\circ\rev[\tau_i], &\text{ if }\,\, \overrightarrow{\pi_{i}}\cap\rho_{p(i)}\neq\emptyset,\\
\overrightarrow{\pi_i}, &\text{ otherwise}.
\end{cases}
\end{equation}
\noindent Again, we proceed by induction on $i$. The base case is trivial, thus we assume that (\ref{eq:rho_i}) is equivalent to (\ref{eq:rho_i_2}) for all indices smaller than $i$, and we prove the equivalence for $i$. If $\overrightarrow{\pi_i}$ does not intersect any dart of $\rho_{p(i)}$, then \eqref{eq:rho_i} is equivalent to \eqref{eq:rho_i_2}. Thus we assume that $\overrightarrow{\pi_i}\cap\rho_{p(i)}\neq\emptyset$. By (\ref{eq:rho_i}) and (\ref{eq:rho_i_2}) and by the definition of $\sigma_i$ and $\tau_i$ in Line~\ref{line:sigma_i} and Line~\ref{line:tau_i}, respectively, it suffices to prove that $d_i\in \rho_{p(i)}$ and $\rev[d_i']\in\rho_{p(i)}$. Now, by induction we know that $d_i\in\rho_\ell$ for some $\ell<i$; we have to show that $d_i\in \rho_{p(i)}$. By Lemma~\ref{lemma:preparatore_fratelli}, since $\{\rho_j\}_{j\in[k]}$ is a set of shortest paths, $\ell$ is an ancestor or a descendant of $i$. Since the $s_j$'s are visited clockwise starting from $s_1$, $\ell$ is an ancestor of $i$. Finally, by Lemma~\ref{lemma:preparatore_2}, since $\{\rho_j\}_{j\in[k]}$ is a set of non-crossing paths, it holds that $\rho_i\cap\rho_\ell\subseteq\rho_{p(i)}$. Since $p(i)<i$, $d_i\in\rho_{p(i)}$, as claimed.
By a similar argument, it holds that $\rev[d_i']\in\rho_{p(i)}$.\qed
\end{itemize}
\end{proof}

\begin{figure}[h]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/eq1.eps}
\put(76,1){$s_i$} \put(13,4){$t_i$} \put(95,15.2){$s_{p(i)}$} \put(-8,17.5){$t_{p(i)}$} \put(51,21.5){$\overrightarrow{\pi_i}$} \put(51,43){$\overrightarrow{\pi}_{\!\!p(i)}$} \put(74.5,37){$u_i$} \put(30.5,40.5){$v_i$}
\end{overpic}
\caption{paths $\overrightarrow{\pi_i}$ and $\overrightarrow{\pi}_{\!\!p(i)}$}\label{fig:eq_rho_i_a}
\end{subfigure}
\qquad\quad
\begin{subfigure}{4.5cm}
\begin{overpic}[width=4.5cm,percent]{images/eq1_b.eps}
\put(76,1){$s_i$} \put(13,4){$t_i$} \put(95,15.2){$s_{p(i)}$} \put(-8,17.5){$t_{p(i)}$} \put(29,22){$\rho_i$}
\end{overpic}
\caption{path $\rho_i$}\label{fig:eq_rho_i_b}
\end{subfigure}
\caption{proof of Theorem~\ref{th:MINNIE}, explanation of \eqref{eq:rho_i}.}
\label{fig:eq_rho_i}
\end{figure}

It is proved in~\cite{err_giappo} that, starting from the union of a set of shortest (not necessarily non-crossing) paths between well-formed terminal pairs, the distances between terminal pairs can be computed in linear time. Thus we can give the following main theorem.

\begin{theorem}\label{th:main}
Given an undirected unweighted plane graph $G$ and a set of well-formed terminal pairs $\{(s_i,t_i)\}_{i\in[k]}$ on the external face $f^\infty$ of $G$, we can compute $U=\bigcup_{i\in[k]}p_i$ and the lengths of all the $p_i$'s, for $i\in[k]$, where $p_i$ is a shortest $i$-path and $\{p_i\}_{i\in[k]}$ is a set of non-crossing paths, in $O(n)$ worst-case time.
\end{theorem}
\begin{proof}
By Theorem~\ref{th:MINNIE}, the required graph $U$ is the undirected version $\overline{Y_k}$ of the graph computed by algorithm \MINNIE, which has $O(n)$ worst-case time complexity by Theorem~\ref{th:MINNIE_complexity}. Moreover, we can compute the length of $p_i$, for all $i\in[k]$, in $O(n)$ worst-case time by using the results in~\cite{err_giappo}.\qed
\end{proof}

\begin{remark}
For graphs with small integer weights, we can obtain all the previous results in $O(n+L)$ worst-case time, where $L$ is the sum of all edge weights, by splitting an edge of weight $r$ into $r$ unweighted edges.
\end{remark}

\section{Conclusions}\label{sec:conclusions}
In this paper we have shown a linear time algorithm to compute the union of non-crossing shortest paths whose extremal vertices are on the external face of an undirected unweighted planar graph. The algorithm relies on the algorithm by Eisenstat and Klein for computing SSSP trees rooted at the vertices of the external face and on the novel concept of ISP subgraph of a planar graph, which may be of independent interest. The same approach cannot be extended to weighted graphs, because the algorithm of Eisenstat and Klein works only in the unweighted case.

As stated in~\cite{erickson-nayyeri}, our results may be applied in the case of terminal pairs lying on $h$ face boundaries. We wish to investigate the non-crossing shortest paths problem when each terminal pair contains only one vertex on the external face.

\bibliographystyle{siam}
\section{Introduction and results}\label{sect1} Let $B_i(t)$, $t\ge 0$, $i\ge 1$, be independent standard Brownian motions. We consider the zero-temperature Brownian semi-discrete directed polymer, \cite{Bar}, \cite{GraTrWi}, \cite{BoJeu}, \cite{OCon}. The last-passage time in this model is defined by \begin{equation}\label{1.1} H(\mu,n)=\sup_{0=\tau_0<\tau_1<\dots<\tau_n=\mu}\sum_{i=1}^nB_i(\tau_i)-B_i(\tau_{i-1}). \end{equation} We are interested in the asymptotics of the joint distribution function \begin{equation}\label{1.2} \mathbb{P}[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2] \end{equation} when $(\mu_1,n_1)$ and $(\mu_2,n_2)$ are ordered in the time-like direction, $\mu_1<\mu_2$, $n_1<n_2$. The random variable (\ref{1.1}) is distributed as the largest eigenvalue of a GUE random matrix, \cite{Bar}. More precisely, \begin{equation}\label{1.3} \mathbb{P}[H(\mu,n)\le\xi]=\frac 1{Z_{\mu,n}}\int_{(-\infty,\xi]^n}\prod_{1\le j<k\le n}(x_k-x_j)^2\prod_{j=1}^ne^{-\frac{x_j^2}{2\mu}}\,d^nx. \end{equation} By standard results this leads to the following limit law for $H(\mu,n)$. Let $t,\nu$ and $\eta$ be fixed. Then \begin{equation}\label{1.4} \lim_{M\to\infty}\mathbb{P}\left[H(tM-\nu(tM)^{2/3},[tM+\nu(tM)^{2/3}])\le 2tM+(\eta-\nu^2)(tM)^{1/3}\right]=F_2(\eta), \end{equation} where $F_2$ is the GUE Tracy-Widom distribution, \begin{equation}\label{1.5} F_2(\eta)=\det (I-K_{\text{Ai\,}})_{L^2(\eta,\infty)}. \end{equation} Here $K_{\text{Ai\,}}$ is the Airy kernel, \begin{equation}\label{1.6} K_{\text{Ai\,}}(x,y)=\int_0^\infty\text{Ai\,}(x+\tau)\text{Ai\,}(y+\tau)\,d\tau. \end{equation} When $(\mu_1,n_1)$ and $(\mu_2,n_2)$ have a space-like ordering, $\mu_1<\mu_2$, $n_1>n_2$, the asymptotics for (\ref{1.2}) analogous to (\ref{1.4}) can be computed and expressed in terms of a Fredholm determinant with the extended Airy kernel. This leads to the possibility of proving convergence to the Airy process along space-like paths, \cite{BorOl}, \cite{Fer}. However, the case when $(\mu_1,n_1)$ and $(\mu_2,n_2)$ are ordered in the time-like direction (more precisely along a characteristic, see e.g. \cite{Fer}) has not been considered previously except non-rigorously in a related model by Dotsenko, \cite{Dots}, using the replica method. The main result of this paper is given in the next theorem. \begin{theorem}\label{thm1.1} Let $0<t_1<t_2$, $\eta_1,\eta_2,\nu_1,\nu_2\in\mathbb{R}$ be given. Set \begin{equation}\label{1.7} \alpha=(t_1/\Delta t)^{1/3}, \end{equation} where $\Delta t=t_2-t_1$, and let $F_{\text{tt}}(\eta_1, \eta_2;\alpha, \nu_1, \nu_2)$ be given by (\ref{1.18}) below. Introduce the scaling \begin{equation}\label{scaling} \mu_i=t_iM-\nu_i(t_iM)^{2/3},\,n_i=t_iM+\nu_i(t_iM)^{2/3},\,\xi_i=2t_iM+(\eta_i-\nu_i^2)(t_iM)^{1/3}, \end{equation} $i=1,2$. With this scaling, define \begin{equation}\label{1.5'} F_M(\eta_1, \eta_2;t_1,t_2, \nu_1, \nu_2)=\mathbb{P}[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2]. \end{equation} Then, \begin{equation}\label{1.6'} \lim_{M\to\infty} F_M(\eta_1, \eta_2;t_1,t_2, \nu_1, \nu_2)=F_{\text{tt}}(\eta_1, \eta_2;\alpha, \nu_1, \nu_2). \end{equation} \end{theorem} The theorem will be proved in section \ref{sect4}. In order to give the formula for the limiting distribution function we first need to define some functions.
Set \begin{align}\label{1.8} \Delta\nu&=\nu_2\left(\frac{t_2}{\Delta t}\right)^{2/3}-\nu_1\left(\frac{t_1}{\Delta t}\right)^{2/3}, \\ \Delta\eta&=(\eta_2-\nu_2^2)\left(\frac{t_2}{\Delta t}\right)^{1/3}-(\eta_1-\nu_1^2)\left(\frac{t_1}{\Delta t}\right)^{1/3} +\Delta\nu^2.\notag \end{align} Let \begin{equation}\label{1.9} \phi_1(x,y)=-\alpha e^{\alpha\Delta\nu x-\nu_1y}\int_0^\infty e^{(\nu_1-\alpha\Delta\nu)\tau}K_{\text{Ai\,}}(\eta_1-\tau,\eta_1-y) K_{\text{Ai\,}}(\Delta\eta+\alpha\tau,\Delta\eta+\alpha x)\,d\tau, \end{equation} \begin{equation}\label{1.10} \psi_1(x,y)=\alpha e^{\alpha\Delta\nu x-\nu_1y}\int_0^\infty e^{-(\nu_1-\alpha\Delta\nu)\tau}K_{\text{Ai\,}}(\eta_1+\tau,\eta_1-y) K_{\text{Ai\,}}(\Delta\eta-\alpha\tau,\Delta\eta+\alpha x)\,d\tau, \end{equation} \begin{equation}\label{1.11} \phi_2(x,y)=\alpha e^{\alpha\Delta\nu(x-y)}K_{\text{Ai\,}}(\Delta\eta+\alpha x,\Delta\eta+\alpha y), \end{equation} and \begin{equation}\label{1.12} \phi_3(x,y)=e^{\nu_1(x-y)}K_{\text{Ai\,}}(\eta_1-x,\eta_1-y). \end{equation} Define \begin{equation}\label{1.13} \phi(x,y)=\phi_1(x,y)+1(y\ge 0)\phi_2(x,y)-1(x<0)\phi_3(x,y), \end{equation} and \begin{equation}\label{1.14} \psi(x,y)=-\psi_1(x,y)-1(y>0)\phi_2(x,y)+1(x\le0)\phi_3(x,y), \end{equation} where $1(\cdot)$ is the indicator function. We will use the following notation in block matrices. If $f$ is a function of two real variables, $\mathbf{x}\in\mathbb{R}^s$ and $\mathbf{y}\in\mathbb{R}^t$ we write \begin{equation}\label{1.15} f(\mathbf{x},\mathbf{y})=(f(x_i,y_j))_{\substack{ 1\le i\le s\\1\le j\le t}}\,\,\,, \end{equation} for a matrix block. When $s$ or $t$ is equal to zero this block is just empty and left out of the block matrix. Let $r_1,r_2,s,t\ge 0$, $\mathbf{x}\in\mathbb{R}^{r_1}$, $\mathbf{x'}\in\mathbb{R}^{s}$, $\mathbf{y}\in\mathbb{R}^{r_2}$, $\mathbf{y'}\in\mathbb{R}^{t}$ and $0\in\mathbb{R}$. 
Define the determinants \begin{equation}\label{1.16} W_{r_1,s,r_2,t}^{(1)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'})= \left|\begin{matrix} \psi(\mathbf{x},\mathbf{x}) &\psi(\mathbf{x},\mathbf{x'}) &\psi(\mathbf{x},0) &\psi(\mathbf{x},\mathbf{y}) &\psi(\mathbf{x},\mathbf{y'})\\ \phi(\mathbf{x'},\mathbf{x}) &\phi(\mathbf{x'},\mathbf{x'}) &\phi(\mathbf{x'},0) &\phi(\mathbf{x'},\mathbf{y}) &\phi(\mathbf{x'},\mathbf{y'})\\ \psi(0,\mathbf{x}) &\psi(0,\mathbf{x'}) &\psi(0,0) &\psi(0,\mathbf{y}) &\psi(0,\mathbf{y'})\\ \phi(\mathbf{y},\mathbf{x}) &\phi(\mathbf{y},\mathbf{x'}) &\phi(\mathbf{y},0) &\phi(\mathbf{y},\mathbf{y}) &\phi(\mathbf{y},\mathbf{y'})\\ \psi(\mathbf{y'},\mathbf{x}) &\psi(\mathbf{y'},\mathbf{x'}) &\psi(\mathbf{y'},0) &\psi(\mathbf{y'},\mathbf{y}) &\psi(\mathbf{y'},\mathbf{y'}) \end{matrix}\right| \end{equation} (the determinant is of size $r_1+s+r_2+t+1$) and \begin{equation}\label{1.17} W_{r_1,s,r_2,t}^{(2)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'})= \left|\begin{matrix} \psi(\mathbf{x},\mathbf{x}) &\psi(\mathbf{x},\mathbf{x'}) &\psi(\mathbf{x},0) &\psi(\mathbf{x},\mathbf{y}) &\psi(\mathbf{x},\mathbf{y'})\\ \phi(\mathbf{x'},\mathbf{x}) &\phi(\mathbf{x'},\mathbf{x'}) &\phi(\mathbf{x'},0) &\phi(\mathbf{x'},\mathbf{y}) &\phi(\mathbf{x'},\mathbf{y'})\\ \phi(0,\mathbf{x}) &\phi(0,\mathbf{x'}) &\phi(0,0) &\phi(0,\mathbf{y}) &\phi(0,\mathbf{y'})\\ \phi(\mathbf{y},\mathbf{x}) &\phi(\mathbf{y},\mathbf{x'}) &\phi(\mathbf{y},0) &\phi(\mathbf{y},\mathbf{y}) &\phi(\mathbf{y},\mathbf{y'})\\ \psi(\mathbf{y'},\mathbf{x}) &\psi(\mathbf{y'},\mathbf{x'}) &\psi(\mathbf{y'},0) &\psi(\mathbf{y'},\mathbf{y}) &\psi(\mathbf{y'},\mathbf{y'}) \end{matrix}\right|. \end{equation} We can now give the expression for the distribution function $F_{\text{tt}}(\eta_1, \eta_2;\alpha, \nu_1, \nu_2)$ in theorem \ref{thm1.1}. Define \begin{align}\label{1.18} &F_{\text{tt}}(\eta_1^\ast, \eta_2;\alpha, \nu_1, \nu_2) \\&=F_2(\eta_2)- \sum\limits_{r,s,t=0}^\infty\frac 1{(r!)^2s!t!}\int\limits_{\eta_1^\ast}^\infty d\eta_1\int\limits_{(-\infty,0]^r}d^rx\int\limits_{(-\infty,0]^s}d^sx' \int\limits_{[0,\infty)^r}d^ry\int\limits_{[0,\infty)^t}d^ty' W_{r,s,r,t}^{(1)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'})\notag\\ &-\sum\limits_{r=1}^\infty\sum\limits_{s,t=0}^\infty\frac 1{r!(r-1)!s!t!}\int\limits_{\eta_1^\ast}^\infty\,d\eta_1\int\limits_{(-\infty,0]^r}d^rx \int\limits_{(-\infty,0]^s}d^sx' \int\limits_{[0,\infty)^{r-1}}d^{r-1}y\int_{[0,\infty)^t}d^ty' W_{r,s,r-1,t}^{(2)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'})\notag, \end{align} where $F_2$ is the Tracy-Widom distribution given by (\ref{1.5}). Recall that the Tracy-Widom distribution $F_2$ in (\ref{1.5}) can be written as a Fredholm expansion. The two-time distribution function $F_{\text{tt}}$ is not given by a Fredholm expansion although the expansion in (\ref{1.18}) has some similarities with a block Fredholm expansion. We will derive the formulas that we will use to prove (\ref{1.6'}) by thinking of $H(\mu,n)$ as a limit of a last-passage time in a discrete model. Let $\left(w(i,j)\right)_{i,j\ge 1}$ be independent geometric random variables with parameter $q$, $$ \mathbb{P}[w(i,j)=k]=(1-q)q^k,\quad k\ge 0. $$ Consider the last-passage times \begin{equation}\label{1.19} G(m,n)=\max_{\pi:(1,1)\nearrow (m,n)} \sum_{(i,j)\in\pi} w(i,j), \end{equation} where the maximum is over all up/right paths from $(1,1)$ to $(m,n)$, see \cite{JoSh}. 
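Although it will not be needed below, it may be helpful for orientation to note the standard dynamic programming recursion satisfied by these last-passage times: with the convention that $G(m,n)=0$ if $m=0$ or $n=0$,
\begin{equation*}
G(m,n)=w(m,n)+\max\left(G(m-1,n),G(m,n-1)\right),\qquad m,n\ge 1,
\end{equation*}
since any maximal up/right path to $(m,n)$ must pass through $(m-1,n)$ or $(m,n-1)$.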
We have the following limit law \begin{equation}\label{1.20} \frac{G([\mu T],n)-\frac q{1-q}[\mu T]}{\frac{\sqrt{q}}{1-q}\sqrt{T}} \to H(\mu,n) \end{equation} in distribution as $T\to\infty$, see \cite{Bar}. The distribution function $\mathbb{P}[G(m_1,n_1)\le v_1, G(m_2, n_2)\le v_2]$ will be analyzed using a formula from \cite{JoMar}, see section \ref{sect2} below. \begin{remark}\label{rem1} {\rm As mentioned above, Dotsenko has given a non-rigorous derivation of the limiting distribution function $F_{\text{tt}}$. The formula in \cite{Dots} has similarities with (\ref{1.18}) but we have not attempted to prove that they are the same. Dotsenko also used a similar derivation in the space-like direction, \cite{Dots2}, see also \cite{ImSaSp}. } \end{remark} \begin{remark}\label{rem2} {\rm This paper is a contribution to the understanding of models in the so-called KPZ universality class, which have been of great interest in the last 15 years. We will not survey this development here, see for example the papers \cite{BorCorw}, \cite{BorPet}, \cite{Corw}, \cite{JoHouch}, \cite{Quas} and references therein. In particular the results of this paper could be of interest in understanding the so-called Airy sheet, a conjectural limiting object for many models, see \cite{CorwQua} and \cite{CorLiuWa}. One aspect of last-passage percolation models in the time direction has been studied previously, namely the so-called slow de-correlation phenomenon, see \cite{CorFerPec}, \cite{Fer}. This means that the scaling exponent in the time direction (characteristic direction) is 1; we need $\mu_1$ and $\Delta\mu$ to be of order $M$ above to get a non-trivial limit. } \end{remark} \begin{remark}\label{rem3} {\rm It is not so hard to check, disregarding technical details, that $F_{\text{tt}}(\eta_1,\eta_2;\alpha,\nu_1,\nu_2)\to F_2(\eta_1)$ as $\eta_2\to\infty$ and $F_{\text{tt}}(\eta_1,\eta_2;\alpha,\nu_1,\nu_2)\to F_2(\eta_2)$ as $\eta_1\to\infty$. We also expect that $F_{\text{tt}}(\eta_1,\eta_2;\alpha,\nu_1,\nu_2)\to F_2(\eta_1)F_2(\eta_2)$ as $\alpha\to 0+$. This limit can be checked heuristically but appears to be rather subtle and we will not discuss it further.} \end{remark} \begin{remark}\label{rem4} {\rm Below we will derive formulas for the corresponding problem for the last-passage times $G(m,n)$ before taking the limit to $H(\mu,n)$. It should be possible to carry out the whole proof below with $G(m,n)$ instead, but some of the computations in section \ref{sect3} appear to be harder. The role played by the Hermite polynomials would then be taken by the Meixner polynomials.} \end{remark} The outline of the paper is as follows. In section \ref{sect2} we will prove a formula for the joint distribution function $\mathbb{P}[G(m_1,n_1)\le v_1, G(m_2, n_2)\le v_2]$ based on results from \cite{JoMar}. By taking a limit this leads to a formula for (\ref{1.2}). This computation involves certain symmetrization identities that will be proved in section \ref{sect5}. In section \ref{sect3} the formula from section \ref{sect2} will be rewritten and expanded in terms of determinants. Section \ref{sect4} gives the proof of theorem \ref{thm1.1} based on the expansion, certain asymptotic limits and estimates. These limits and estimates are finally proved in section \ref{sect6}. Throughout this paper $\gamma_r$ will denote a positively oriented circle around the origin with radius $r$, and $\Gamma_d$ will denote a straight line through $d$ parallel to the imaginary axis and oriented upwards.
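Explicitly, $\gamma_r$ is parametrized by $\theta\to re^{i\theta}$, $0\le\theta<2\pi$, and $\Gamma_d$ by $t\to d+it$, $t\in\mathbb{R}$.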
\bigskip {\bf Acknowledgement.} This work was inspired by a talk by Victor Dotsenko at the Simons Symposium {\it The Kardar-Parisi-Zhang Equation and Universality Class}, which appeared as \cite{Dots}. I thank the Simons Foundation for the invitation. The work was started while visiting the Institute for Advanced Study. I thank the Institute for inviting me to the special year on {\it Non-equilibrium Dynamics and Random Matrices}. I thank an anonymous referee for many valuable comments and suggestions. \section{A formula for the joint distribution function}\label{sect2} Let $G(m,n)$, $m,n\ge 1$, be the last-passage times defined by (\ref{1.19}), and write $$ \mathbf{G}(m)=(G(m,1),\dots,G(m,n)). $$ We put $\mathbf{G}(0)=\mathbf{0}$. Introduce the difference operators $\Delta f(x)=f(x+1)-f(x)$, $\Delta^{-1}f(x) =\sum_{y=-\infty}^{x-1} f(y)$, where $f:\mathbb{Z}\to\mathbb{R}$ is a given function. Set, for $m\ge 1$, $x\in\mathbb{Z}$, \begin{equation}\label{2.1} w_m(x)=(1-q)^m\binom{x+m-1}{x} q^x1(x\ge 0). \end{equation} Also, we let $W_n=\{x\in \mathbb{Z}^n\,;\,x_1\le x_2\le\dots\le x_n\}$. In \cite{JoMar} the following result was proved, inspired by \cite{Warr}. \begin{proposition}\label{prop2.1} For $x,y\in W_n$ and $m>\ell\ge 0$, \begin{equation}\label{2.2} \mathbb{P}[\mathbf{G}(m)=y\,|\,\mathbf{G}(\ell)=x]=\det \left(\Delta^{j-i} w_{m-\ell}(y_j-x_i)\right)_{1\le i,j\le n}. \end{equation} In particular \begin{equation}\label{2.3} \mathbb{P}[\mathbf{G}(m)=x]= \det\left(\Delta^{j-i} w_{m}(x_j)\right)_{1\le i,j\le n}. \end{equation} \end{proposition} It follows from (\ref{2.2}) and (\ref{2.3}) that \begin{align} &\mathbb{P}[\mathbf{G}(m_1)=x,\mathbf{G}(m_2)=y]=\mathbb{P}[\mathbf{G}(m_2)=y\,|\,\mathbf{G}(m_1)=x]\mathbb{P}[\mathbf{G}(m_1)=x] \notag\\ &=\det \left(\Delta^{j-i} w_{m_2-m_1}(y_j-x_i)\right)_{1\le i,j\le n}\det\left(\Delta^{j-i} w_{m_1}(x_j)\right)_{1\le i,j\le n}. \notag \end{align} Thus, \begin{align}\label{2.4} P&:=\mathbb{P}[G(m_1,n_1)\le v_1, G(m_2,n_2)\le v_2] \\ &=\sum\limits_{u=-\infty}^{v_1}\sum\limits_{\substack{x\in W_{n_2}\\x_{n_1}=u}}\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}} \det\left(\Delta^{j-i} w_{m_1}(x_j)\right)_{1\le i,j\le n_2}\det \left(\Delta^{j-i} w_{m_2-m_1}(y_j-x_i)\right)_{1\le i,j\le n_2}, \notag \end{align} where $1\le m_1<m_2$ and $1\le n_1\le n_2$. This formula is the starting point of our analysis. In order to get a more useful formula we rewrite it in terms of multiple contour integrals. We can write $w_m$ in (\ref{2.1}) as \begin{equation}\label{2.5} w_m(x)=\frac{(1-q)^m}{2\pi i}\int_{\gamma_r}\frac{dz}{(1-qz)^mz^{x+1}}, \end{equation} where $\gamma_r$ is a positively oriented circle around the origin with radius $r$ and $0<r<1/q$. This gives \begin{equation}\label{2.6} \Delta^kw_m(x)=\frac{(1-q)^m}{2\pi i}\int_{\gamma_r}\frac{(1-z)^kdz}{(1-qz)^mz^{x+k+1}}, \end{equation} for all $k\in\mathbb{Z}$ if $0<r<1$. Inserting (\ref{2.6}) into (\ref{2.4}) will, after some rather lengthy and non-trivial manipulations, lead to the following formula for $P$. \begin{proposition}\label{prop2.2} Let $P$ be defined by (\ref{2.4}), and let $0<s_1<r_1<1$, $0<r_2<s_2<1$. Assume that $m_1<m_2$ and $n_1<n_2$ and let $\Delta n=n_2-n_1$, $\Delta m=m_2-m_1$.
Then \begin{align}\label{2.7} &P=\sum\limits_{u=-\infty}^{v_1}\frac{(1-q)^{m_2n_2}(-1)^{n_2(n_2-1)/2}}{(2\pi i)^{2n_2}n_1!^2(\Delta n)!^2} \int\limits_{\gamma_{s_1}^{n_1}}d^{n_1}z\int\limits_{\gamma_{s_2}^{\Delta n}}d^{\Delta n}z \int\limits_{\gamma_{r_1}^{n_1}}d^{n_1}w\int\limits_{\gamma_{r_2}^{\Delta n}}d^{\Delta n}w \\ &\times\det \left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det \left(w_j^{i-1}\right)_{1\le i,j\le n_2} \det \left(\frac 1{w_j-z_i}\right)_{1\le i,j\le n_1}\det \left(\frac 1{z_j-w_i}\right)_{n_1<i,j\le n_2}\notag\\ &\times\prod\limits_{j=n_1+1}^{n_2}\frac{1-z_j}{1-w_j}\left(1-\prod\limits_{j=1}^{n_1}\frac{z_j}{w_j}\right) \prod\limits_{j=1}^{n_2}\frac{w_j^{u-v_2-\Delta n}} {z_j^{u+n_1}(1-z_j)^{\Delta n}(1-qz_j)^{m_1}(1-w_j)^{n_1}(1-qw_j)^{\Delta m}}. \notag \end{align} \end{proposition} Here we have used the notation \begin{equation}\label{2.8} \int\limits_{\gamma_{s_1}^{n_1}}d^{n_1}z\int\limits_{\gamma_{s_2}^{\Delta n}}d^{\Delta n}z= \int\limits_{\gamma_{s_1}}dz_1\dots\int\limits_{\gamma_{s_1}}dz_{n_1}\int\limits_{\gamma_{s_2}}dz_{n_1+1}\dots\int\limits_{\gamma_{s_2}}dz_{n_2}. \end{equation} Before we can prove (\ref{2.7}) we need some preliminary results. \begin{lemma}\label{lem2.3} We have the following two algebraic symmetrization identities, \begin{align}\label{2.9} &\sum\limits_{\sigma\in S_n}\text{sgn\,}(\sigma)\prod\limits_{j=1}^n\left(\frac{1-w_{\sigma(j)}}{w_{\sigma(j)}}\right)^j \frac 1{(1-w_{\sigma(1)})(1-w_{\sigma(1)}w_{\sigma(2)})\cdots (1-w_{\sigma(1)}\cdots w_{\sigma(n)})}\\ &=(-1)^{\frac{n(n-1)}2}\prod\limits_{j=1}^n \frac 1{w_j^n}\det\left(w_j^{i-1}\right)_{1\le i,j\le n},\notag \end{align} and \begin{align}\label{2.10} &\sum\limits_{\sigma_1,\sigma_2\in S_n}\text{sgn\,}(\sigma_1\sigma_2)\prod\limits_{j=1}^n \left(\frac{w_{\sigma_2(j)}(1-z_{\sigma_1(j)})}{z_{\sigma_1(j)}(1-w_{\sigma_2(j)})}\right)^j \frac 1{\left(1-\frac{z_{\sigma_1(1)}}{w_{\sigma_2(1)}}\right)\left(1-\frac{z_{\sigma_1(1)}z_{\sigma_1(2)}}{w_{\sigma_2(1)}w_{\sigma_2(2)}}\right)\cdots \left(1-\frac{z_{\sigma_1(1)}\cdots z_{\sigma_1(n)}}{w_{\sigma_2(1)}\cdots w_{\sigma_2(n)}}\right)}\\ &=\prod\limits_{j=1}^n \frac {w_j^{n+1}(1-z_j)^n}{z_j^n(1-w_j)^n}\det\left(\frac 1{w_j-z_i}\right)_{1\le i,j\le n}.\notag \end{align} \end{lemma} The first identity is a direct consequence of one of the Tracy-Widom ASEP identities, \cite{TrWi}. The other identity is new as far as we know. The lemma will be proved in sect. \ref{sect5}. We can now give the proof of proposition \ref{prop2.2}. \begin{proof}({\it Proposition \ref{prop2.2}}) Inserting (\ref{2.6}) into (\ref{2.4}) gives \begin{align}\label{2.11} P&=\sum\limits_{u=-\infty}^{v_1}\sum\limits_{\substack{x\in W_{n_2}\\x_{n_1}=u}}\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}} \det\left(\frac{(1-q)^{m_1}}{2\pi i}\int_{\gamma_{r_1}}\left(\frac{1-z_j}{z_j}\right)^{j-i}\frac{dz_j}{(1-qz_j)^{m_1}z_j^{x_j+1}}\right)_{1\le i,j\le n_2} \\ &\det\left(\frac{(1-q)^{\Delta m}}{2\pi i}\int_{\gamma_{r_2}} \left(\frac{1-w_i}{w_i}\right)^{j-i}\frac{dw_i}{(1-qw_i)^{\Delta m}w_i^{y_j-x_i+1}}\right)_{1\le i,j\le n_2}, \notag \end{align} where $0<r_1,r_2<1$. 
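As a quick sanity check of the identities in lemma \ref{lem2.3} (an aside that is not needed in the sequel), consider the case $n=1$: the left side of (\ref{2.9}) is $\frac{1-w_1}{w_1}\cdot\frac 1{1-w_1}=\frac 1{w_1}$, which is the right side, and the left side of (\ref{2.10}) is
\begin{equation*}
\frac{w_1(1-z_1)}{z_1(1-w_1)}\cdot\frac 1{1-z_1/w_1}=\frac{w_1^{2}(1-z_1)}{z_1(1-w_1)}\cdot\frac 1{w_1-z_1},
\end{equation*}
which is the right side with $n=1$.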
Now, the first determinant in (\ref{2.11}) can be rewritten as \begin{align}\label{2.12} &\det\left(\frac{(1-q)^{m_1}}{2\pi i}\int_{\gamma_{r_1}}\left(\frac{1-z_j}{z_j}\right)^{j-i}\frac{dz_j}{(1-qz_j)^{m_1}z_j^{x_j+1}}\right)_{1\le i,j\le n_2} \\ &=\frac{(1-q)^{m_1n_2}}{(2\pi i)^{n_2}}\int_{\gamma_{r_1}^{n_2}}d^{n_2}z\prod\limits_{j=1}^{n_2}\left(\frac{1-z_j}{z_j}\right)^{j} \prod\limits_{j=1}^{n_2}\frac 1{(1-qz_j)^{m_1}z_j^{x_j+1}}\det\left(\left(\frac{z_j}{1-z_j}\right)^{i}\right)_{1\le i,j\le n_2} \notag\\ &=\frac{(1-q)^{m_1n_2}}{(2\pi i)^{n_2}}\int_{\gamma_{r_1}^{n_2}}d^{n_2}z\prod\limits_{j=1}^{n_2}\left(\frac{1-z_j}{z_j}\right)^{j} \prod\limits_{j=1}^{n_2}\frac 1{(1-qz_j)^{m_1}(1-z_j)^{n_2}z_j^{x_j}}\det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}. \notag \end{align} Here we used the identity \begin{equation} \det\left(\left(\frac{z_j}{1-z_j}\right)^{i}\right)_{1\le i,j\le n_2}=\left(\prod\limits_{j=1}^{n_2}\frac{z_j}{(1-z_j)^{n_2}}\right) \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}. \notag \end{equation} This follows from the following computation, \begin{align} &\det\left(\left(\frac{z_j}{1-z_j}\right)^{i}\right)_{1\le i,j\le n_2}= \det\left(\left(\frac{z_j}{1-z_j}\right)^{i-1}\right)_{1\le i,j\le n_2}\prod\limits_{j=1}^{n_2}\frac{z_j}{1-z_j} \notag\\ &=\prod\limits_{j=1}^{n_2}\frac{z_j}{1-z_j}\prod_{1\le i<j\le n_2}\left(\frac{z_j}{1-z_j}-\frac{z_i}{1-z_i}\right) \notag\\ &=\prod\limits_{j=1}^{n_2}\frac{z_j}{1-z_j}\prod_{1\le i<j\le n_2}\frac 1{(1-z_j)(1-z_i)}\prod_{1\le i<j\le n_2} \left(z_j(1-z_i)-z_i(1-z_j)\right) \notag\\ &=\prod\limits_{j=1}^{n_2}\frac{z_j}{1-z_j}\prod\limits_{j=2}^{n_2}\frac 1{(1-z_j)^{j-1}} \prod\limits_{i=1}^{n_2-1}\frac 1{(1-z_i)^{n_2-i}}\prod_{1\le i<j\le n_2}(z_j-z_i) \notag\\ &=\prod\limits_{j=1}^{n_2}\frac{z_j}{1-z_j}\prod\limits_{j=1}^{n_2}\frac 1{(1-z_j)^{j-1}}\frac 1{(1-z_j)^{n_2-j}} \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2} = \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\prod\limits_{j=1}^{n_2}\frac{z_j}{(1-z_j)^{n_2}}. \notag \end{align} Consider now the second determinant in (\ref{2.11}) together with the $y$-summation. We get \begin{align}\label{2.13} &\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}}\det\left(\frac{(1-q)^{\Delta m}}{2\pi i}\int_{\gamma_{r_2}} \left(\frac{1-w_i}{w_i}\right)^{j-i}\frac{dw_i}{(1-qw_i)^{\Delta m}w_i^{y_j-x_i+1}}\right)_{1\le i,j\le n_2} \\ &=\frac{(1-q)^{\Delta mn_2}}{(2\pi i)^{n_2}}\int_{\gamma_{r_2}^{n_2}}d^{n_2}w\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}} \sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma)\prod\limits_{j=1}^{n_2}\left(\frac{1-w_{\sigma(j)}}{w_{\sigma(j)}}\right)^{j-\sigma(j)} \frac{1}{(1-qw_{\sigma(j)})^{\Delta m}w_{\sigma(j)}^{y_j-x_{\sigma(j)}+1}} \notag \\ &=\frac{(1-q)^{\Delta mn_2}}{(2\pi i)^{n_2}}\int_{\gamma_{r_2}^{n_2}}d^{n_2}w\prod\limits_{j=1}^{n_2}\left(\frac{w_j}{1-w_j}\right)^{j} \frac{w_j^{x_j-1}}{(1-qw_j)^{\Delta m}} \notag\\ &\times\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma)\prod\limits_{j=1}^{n_2}\left(\frac{1-w_{\sigma(j)}}{w_{\sigma(j)}}\right)^{j} \left(\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}}\prod\limits_{j=1}^{n_2}\frac 1{w_{\sigma(j)}^{y_j}}\right). 
\notag \end{align} Since $0<r_2<1$ we see, by summing the geometric series, that \begin{align}\label{2.14} \sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}}\prod\limits_{j=1}^{n_2}\frac 1{w_{\sigma(j)}^{y_j}} &=\sum\limits_{y_{n_2}=-\infty}^{v_2}(w_{\sigma(1)}\dots w_{\sigma(n_2)})^{-y_{n_2}} \sum\limits_{y_{n_2-1}=-\infty}^{y_{n_2}}(w_{\sigma(1)}\dots w_{\sigma(n_2-1)})^{y_{n_2}-y_{n_2-1}}\dots \sum\limits_{y_{1}=-\infty}^{y_2}w_{\sigma(1)}^{y_2-y_1}\notag\\ &=\prod\limits_{j=1}^{n_2}\frac 1{w_j^{v_2}}\frac 1{(1-w_{\sigma(1)})(1-w_{\sigma(1)}w_{\sigma(2)})\cdots (1-w_{\sigma(1)}\cdots w_{\sigma(n_2)})}. \end{align} Combining (\ref{2.14}) with the identity (\ref{2.9}) we get \begin{equation} \sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma)\prod\limits_{j=1}^{n_2}\left(\frac{1-w_{\sigma(j)}}{w_{\sigma(j)}}\right)^{j} \left(\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}}\prod\limits_{j=1}^{n_2}\frac 1{w_{\sigma(j)}^{y_j}}\right) =(-1)^{\frac{n_2(n_2-1)}2}\prod\limits_{j=1}^{n_2}\frac 1{w_j^{n_2+v_2}}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2}.\notag \end{equation} We can now use this identity in (\ref{2.13}) and obtain \begin{align}\label{2.15} &\sum\limits_{\substack{y\in W_{n_2}\\y_{n_2}\le v_2}}\det\left(\frac{(1-q)^{\Delta m}}{2\pi i}\int_{\gamma_{r_2}} \left(\frac{1-w_i}{w_i}\right)^{j-i}\frac{dw_i}{(1-qw_i)^{\Delta m}w_i^{y_j-x_i+1}}\right)_{1\le i,j\le n_2} \\ &=\frac{(1-q)^{\Delta mn_2}(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{n_2}}\int_{\gamma_{r_2}^{n_2}}d^{n_2}w \prod\limits_{j=1}^{n_2}\left(\frac{w_j}{1-w_j}\right)^{j} \frac{w_j^{x_j-v_2-n_2-1}}{(1-qw_j)^{\Delta m}}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2}. \notag \end{align} Next, insert (\ref{2.12}) and (\ref{2.15}) into (\ref{2.11}) to get \begin{align}\label{2.16} P&=\sum\limits_{u=-\infty}^{v_1}\sum\limits_{\substack{x\in W_{n_2}\\x_{n_1}=u}} \frac{(1-q)^{m_2n_2}(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}} \int_{\gamma_{r_1}^{n_2}}d^{n_2}z\int_{\gamma_{r_2}^{n_2}}d^{n_2}w \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2} \\ &\times \prod\limits_{j=1}^{n_2}\left(\frac{1-z_j}{z_j}\right)^{j}\left(\frac{w_j}{1-w_j}\right)^{j} \frac 1{(1-qz_j)^{m_1}(1-z_j)^{n_2}z_j^{x_j}}\frac{w_j^{x_j-v_2-n_2-1}}{(1-qw_j)^{\Delta m}}. \notag \end{align} In this expression we symmetrize in $\{z_j\}$ and $\{w_j\}$. We find \begin{align}\label{2.17} P&=\sum\limits_{u=-\infty}^{v_1}\sum\limits_{\substack{x\in W_{n_2}\\x_{n_1}=u}} \frac{(1-q)^{m_2n_2}(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}(n_2!)^2} \int_{\gamma_{r_1}^{n_2}}d^{n_2}z\int_{\gamma_{r_2}^{n_2}}d^{n_2}w \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2} \\ &\times \prod\limits_{j=1}^{n_2} \frac 1{(1-qz_j)^{m_1}(1-z_j)^{n_2}w_j^{v_2+n_2+1}(1-qw_j)^{\Delta m}} \notag\\ &\times \left(\sum\limits_{\sigma_1,\sigma_2\in S_n}\text{sgn\,}(\sigma_1\sigma_2)\prod\limits_{j=1}^{n_2}\left(\frac{1-z_{\sigma_1(j)}}{z_{\sigma_1(j)}}\right)^j \left(\frac{w_{\sigma_2(j)}}{1-w_{\sigma_2(j)}}\right)^j\left(\frac{w_{\sigma_2(j)}}{z_{\sigma_1(j)}}\right)^{x_j}\right). \notag \end{align} Let $(S_j^-,S_j^+)$, $j=1,2$, be two partitions of $[1,n_2]=\{1,\dots n_2\}$, such that $|S_j^-|=n_1$ and $|S_j^+|=\Delta n$. For $\sigma_1,\sigma_2\in S_{n_2}$, we say that $\sigma_j\in S_{n_2}(S_j^-,S_j^+)$ if $\sigma_j([1,n_1])=S_j^-$ and consequently $\sigma_j([n_1+1,n_2])=S_j^+$, $j=1,2$. 
Write $$ \sigma_j^-=\sigma_j\left|_{[1,n_1]}\right.,\,\,\sigma_j^+=\sigma_j\left|_{[n_1+1,n_2]}\right.,\,\,j=1,2, $$ for the restricted permutations. Given $S_j^-\,(S_j^+)$ we can identify $\sigma_j^-\,(\sigma_j^+)$ with a permutation in $S_{n_1}\,(S_{\Delta n})$; we do this by composing with the order-preserving bijection from $S_j^-\,(S_j^+)$ to $[1,n_1]\,([1,\Delta n])$. The signs are related by \begin{equation}\label{2.18} \text{sgn\,}(\sigma_j)=(-1)^{\kappa(S_j^-,S_j^+)}\text{sgn\,}(\sigma_j^-)\text{sgn\,}(\sigma_j^+), \end{equation} where $$ \kappa(U,V)=|\{(i,j)\,;\,i\in U,j\in V,i>j\}|. $$ We will now choose the radii of the circles in the contour integrals depending on $S_j^\pm$. Recall that $$ 0<s_1<r_1<1,\,\,0<r_2<s_2<1, $$ which we assumed in the proposition. Given $S_j^-$, $j=1,2$, we take \begin{align}\label{2.19} &|z_k|=s_1,\,k\in S_1^-,\,\,|w_k|=r_1,\,k\in S_2^-, \\ &|z_k|=s_2,\,k\in S_1^+,\,\,|w_k|=r_2,\,k\in S_2^+. \notag \end{align} We can write \begin{align}\label{2.20} &\prod\limits_{j=1}^{n_2}\left(\frac{1-z_{\sigma_1(j)}}{z_{\sigma_1(j)}}\right)^j \left(\frac{w_{\sigma_2(j)}}{1-w_{\sigma_2(j)}}\right)^j\left(\frac{w_{\sigma_2(j)}}{z_{\sigma_1(j)}}\right)^{x_j} \\ &=\prod\limits_{j=1}^{n_1}\left(\frac{1-z_{\sigma_1^-(j)}}{z_{\sigma_1^-(j)}}\right)^j \left(\frac{w_{\sigma_2^-(j)}}{1-w_{\sigma_2^-(j)}}\right)^j\left(\frac{w_{\sigma_2^-(j)}}{z_{\sigma_1^-(j)}}\right)^{x_j} \prod\limits_{j=n_1+1}^{n_2}\left(\frac{1-z_{\sigma_1^+(j)}}{z_{\sigma_1^+(j)}}\right)^j \left(\frac{w_{\sigma_2^+(j)}}{1-w_{\sigma_2^+(j)}}\right)^j\left(\frac{w_{\sigma_2^+(j)}}{z_{\sigma_1^+(j)}}\right)^{x_j}. \notag \end{align} From (\ref{2.17}), (\ref{2.18}) and (\ref{2.20}) we find \begin{align}\label{2.21} P&=\sum\limits_{S_1^-,S_2^-}(-1)^{\kappa(S_1^-,S_1^+)+\kappa(S_2^-,S_2^+)}\sum\limits_{u=-\infty}^{v_1} \frac{(1-q)^{m_2n_2}(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}(n_2!)^2} \\ &\times\int_{\gamma_{s_1}^{n_1}}\prod\limits_{j\in S_1^-}dz_j\int_{\gamma_{s_2}^{\Delta n}}\prod\limits_{j\in S_1^+}dz_j \int_{\gamma_{r_1}^{n_1}}\prod\limits_{j\in S_2^-}dw_j\int_{\gamma_{r_2}^{\Delta n}}\prod\limits_{j\in S_2^+}dw_j \notag\\ &\times\det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2}\prod\limits_{j=1}^{n_2} \frac 1{(1-qz_j)^{m_1}(1-z_j)^{n_2}w_j^{v_2+n_2+1}(1-qw_j)^{\Delta m}} \notag\\ &\times\sum\limits_{\substack{x\in W_{n_2}\\x_{n_1}=u}}\left(\sum_{\sigma_1^-,\sigma_2^-}\text{sgn\,}(\sigma_1^-)\text{sgn\,}(\sigma_2^-) \prod\limits_{j=1}^{n_1}\left(\frac{1-z_{\sigma_1^-(j)}}{z_{\sigma_1^-(j)}}\right)^j \left(\frac{w_{\sigma_2^-(j)}}{1-w_{\sigma_2^-(j)}}\right)^j\left(\frac{w_{\sigma_2^-(j)}}{z_{\sigma_1^-(j)}}\right)^{x_j}\right) \notag\\ &\times\left(\sum_{\sigma_1^+,\sigma_2^+}\text{sgn\,}(\sigma_1^+)\text{sgn\,}(\sigma_2^+)\prod\limits_{j=n_1+1}^{n_2}\left(\frac{1-z_{\sigma_1^+(j)}}{z_{\sigma_1^+(j)}}\right)^j \left(\frac{w_{\sigma_2^+(j)}}{1-w_{\sigma_2^+(j)}}\right)^j\left(\frac{w_{\sigma_2^+(j)}}{z_{\sigma_1^+(j)}}\right)^{x_j}\right).
\notag \end{align} The next step is to do the $x$-summations, \begin{align}\label{2.22} &\sum\limits_{x_1\le\dots\le x_{n_1-1}\le x_{n_1}=u}\prod\limits_{j=1}^{n_1}\left(\frac{w_{\sigma_2^-(j)}}{z_{\sigma_1^-(j)}}\right)^{x_j}\\ &=\prod\limits_{j=1}^{n_1}\left(\frac{w_{\sigma_2^-(j)}}{z_{\sigma_1^-(j)}}\right)^{u} \sum\limits_{y_1\le\dots\le y_{n_1-1}\le u}\,\prod\limits_{j=1}^{n_1-1}\left(\frac{z_{\sigma_1^-(j)}}{w_{\sigma_2^-(j)}}\right)^{u-y_j}\notag\\ &=\prod\limits_{j=1}^{n_1}\left(\frac{w_{\sigma_2^-(j)}}{z_{\sigma_1^-(j)}}\right)^u \frac 1{\left(1-\frac{z_{\sigma_1^-(1)}}{w_{\sigma_2^-(1)}}\right)\left(1-\frac{z_{\sigma_1^-(1)}z_{\sigma_1^-(2)}}{w_{\sigma_2^-(1)}w_{\sigma_2^-(2)}}\right)\cdots \left(1-\frac{z_{\sigma_1^-(1)}\cdots z_{\sigma_1^-(n_1-1)}}{w_{\sigma_2^-(1)}\cdots w_{\sigma_2^-(n_1-1)}}\right)}, \notag \end{align} by the same computation as in (\ref{2.14}). Here we used the fact that $|z_{\sigma_1^-(j)}/w_{\sigma_2^-(j)}|=s_1/r_1<1$. Similarly, \begin{align}\label{2.23} &\sum\limits_{u\le x_{n_1+1}\le\dots\le x_{n_2}} \prod\limits_{j=n_1+1}^{n_2}\left(\frac{w_{\sigma_2^+(j)}}{z_{\sigma_1^+(j)}}\right)^{x_j}\\ &=\prod\limits_{j=n_1+1}^{n_2}\left(\frac{w_{\sigma_2^+(j)}}{z_{\sigma_1^+(j)}}\right)^u \frac 1{\left(1-\frac{w_{\sigma_2^+(n_2)}}{z_{\sigma_1^+(n_2)}}\right)\left(1-\frac{w_{\sigma_2^+(n_2)}w_{\sigma_2^+(n_2-1)}}{z_{\sigma_1^+(n_2)}z_{\sigma_1^+(n_2-1)}}\right)\cdots \left(1-\frac{w_{\sigma_2^+(n_2)}\cdots w_{\sigma_2^+(n_1+1)}}{z_{\sigma_1^+(n_2)}\cdots z_{\sigma_1^+(n_1+1)}}\right)}, \notag \end{align} since $|w_{\sigma_2^+(j)}/z_{\sigma_1^+(j)}|=r_2/s_2<1$. We can now apply (\ref{2.10}) in lemma \ref{lem2.3} to see that \begin{align}\label{2.24} &\sum_{\sigma_1^-,\sigma_2^-}\text{sgn\,}(\sigma_1^-)\text{sgn\,}(\sigma_2^-) \prod\limits_{j=1}^{n_1}\left(\frac{w_{\sigma_2^-(j)}(1-z_{\sigma_1^-(j)})}{z_{\sigma_1^-(j)}(1-w_{\sigma_2^-(j)})}\right)^j\\ &\times \frac 1{\left(1-\frac{z_{\sigma_1^-(1)}}{w_{\sigma_2^-(1)}}\right)\left(1-\frac{z_{\sigma_1^-(1)}z_{\sigma_1^-(2)}}{w_{\sigma_2^-(1)}w_{\sigma_2^-(2)}}\right)\cdots \left(1-\frac{z_{\sigma_1^-(1)}\cdots z_{\sigma_1^-(n_1-1)}}{w_{\sigma_2^-(1)}\cdots w_{\sigma_2^-(n_1-1)}}\right)} \notag\\ &=\left(1-\prod\limits_{j\in S_1^-}z_j\prod\limits_{j\in S_2^-}\frac 1{w_j}\right) \prod\limits_{j\in S_1^-}\frac{(1-z_j)^{n_1}}{z_j^{n_1}}\prod\limits_{j\in S_2^-}\frac{w_j^{n_1+1}}{(1-w_j)^{n_1}} \det\left(\frac 1{w_j-z_i}\right)_{\substack{i\in S_1^-\\j\in S_2^-}}. \notag \end{align} From (\ref{2.23}) we see that we also want to compute \begin{align}\label{2.25} &\sum_{\sigma_1^+,\sigma_2^+}\text{sgn\,}(\sigma_1^+)\text{sgn\,}(\sigma_2^+) \prod\limits_{j=n_1+1}^{n_2}\left(\frac{w_{\sigma_2^+(j)}(1-z_{\sigma_1^+(j)})}{z_{\sigma_1^+(j)}(1-w_{\sigma_2^+(j)})}\right)^j\\ &\times \frac 1{\left(1-\frac{w_{\sigma_2^+(n_2)}}{z_{\sigma_1^+(n_2)}}\right)\left(1-\frac{w_{\sigma_2^+(n_2)}w_{\sigma_2^+(n_2-1)}}{z_{\sigma_1^+(n_2)}z_{\sigma_1^+(n_2-1)}}\right)\cdots \left(1-\frac{w_{\sigma_2^+(n_2)}\cdots w_{\sigma_2^+(n_1+1)}}{z_{\sigma_1^+(n_2)}\cdots z_{\sigma_1^+(n_1+1)}}\right)}. \notag \end{align} Let $\tau(j)=n_2+1-j$, $1\le j\le\Delta n$, and $\tilde{\sigma}^+_i=\sigma^+_i\circ\tau$, $i=1,2$. Then, $\tilde{\sigma}^+_i\,:\,[1,\Delta n]\to S_i^+$ and \begin{equation*} \text{sgn\,}(\tilde{\sigma}^+_1)\text{sgn\,}(\tilde{\sigma}^+_2)=\text{sgn\,}(\sigma^+_1)\text{sgn\,}(\sigma^+_2).
\end{equation*} Also, \begin{align*} &\prod\limits_{j=n_1+1}^{n_2}\left(\frac{w_{\sigma_2^+(j)}(1-z_{\sigma_1^+(j)})}{z_{\sigma_1^+(j)}(1-w_{\sigma_2^+(j)})}\right)^j =\prod\limits_{j=1}^{\Delta n}\left(\frac{w_{\tilde{\sigma}_2^+(j)}(1-z_{\tilde{\sigma}_1^+(j)})}{z_{\tilde{\sigma}_1^+(j)}(1-w_{\tilde{\sigma}_2^+(j)})}\right)^{n_2+1-j}\\ &=\prod\limits_{j\in S_1^+}\left(\frac{1-z_j}{z_j}\right)^{n_2+1}\prod\limits_{j\in S_2^+}\left(\frac{w_j}{1-w_j}\right)^{n_2+1} \prod\limits_{j=1}^{\Delta n}\left(\frac{z_{\tilde{\sigma}_1^+(j)}(1-w_{\tilde{\sigma}_2^+(j)})}{w_{\tilde{\sigma}_2^+(j)}(1-z_{\tilde{\sigma}_1^+(j)})}\right)^j. \end{align*} Thus (\ref{2.25}) can be written \begin{align*} &\prod\limits_{j\in S_1^+}\left(\frac{1-z_j}{z_j}\right)^{n_2+1}\prod\limits_{j\in S_2^+}\left(\frac{w_j}{1-w_j}\right)^{n_2+1} \sum_{\tilde{\sigma}_1^+,\tilde{\sigma}_2^+}\text{sgn\,}(\tilde{\sigma}_1^+)\text{sgn\,}(\tilde{\sigma}_2^+) \prod\limits_{j=1}^{\Delta n}\left(\frac{z_{\tilde{\sigma}_1^+(j)}(1-w_{\tilde{\sigma}_2^+(j)})}{w_{\tilde{\sigma}_2^+(j)}(1-z_{\tilde{\sigma}_1^+(j)})}\right)^j\\ &\times \frac 1{\left(1-\frac{w_{\tilde{\sigma}_2^+(n_2)}}{z_{\tilde{\sigma}_1^+(n_2)}}\right) \left(1-\frac{w_{\tilde{\sigma}_2^+(n_2)}w_{\tilde{\sigma}_2^+(n_2-1)}}{z_{\tilde{\sigma}_1^+(n_2)}z_{\tilde{\sigma}_1^+(n_2-1)}}\right)\cdots \left(1-\frac{w_{\tilde{\sigma}_2^+(n_2)}\cdots w_{\tilde{\sigma}_2^+(n_1+1)}}{z_{\tilde{\sigma}_1^+(n_2)}\cdots z_{\tilde{\sigma}_1^+(n_1+1)}}\right)} \end{align*} and by (\ref{2.10}) in lemma \ref{lem2.3} this equals \begin{align*} &\prod\limits_{j\in S_1^+}\left(\frac{1-z_j}{z_j}\right)^{n_2+1}\prod\limits_{j\in S_2^+}\left(\frac{w_j}{1-w_j}\right)^{n_2+1} \prod\limits_{j\in S_1^+}\frac{z_j^{\Delta n+1}}{(1-z_j)^{\Delta n}}\prod\limits_{j\in S_2^+}\left(\frac{1-w_j}{w_j}\right)^{\Delta n} \det\left(\frac 1{z_i-w_j}\right)_{\substack{i\in S_1^+\\j\in S_2^+}}\\ &=\prod\limits_{j\in S_1^+}\frac{(1-z_j)^{n_1+1}}{z_j^{n_1}}\prod\limits_{j\in S_2^+}\frac{w_j^{n_1+1}}{(1-w_j)^{n_1+1}} \det\left(\frac 1{z_i-w_j}\right)_{\substack{i\in S_1^+\\j\in S_2^+}}. \end{align*} Using this, (\ref{2.22}), (\ref{2.23}) and (\ref{2.24}) in (\ref{2.21}) we find \begin{align}\label{2.26} P&=\sum\limits_{S_1^-,S_2^-}(-1)^{\kappa(S_1^-,S_1^+)+\kappa(S_2^-,S_2^+)}\sum\limits_{u=-\infty}^{v_1} \frac{(1-q)^{m_2n_2}(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}(n_2!)^2} \\ &\times\int_{\gamma_{s_1}^{n_1}}\prod\limits_{j\in S_1^-}dz_j\int_{\gamma_{s_2}^{\Delta n}}\prod\limits_{j\in S_1^+}dz_j \int_{\gamma_{r_1}^{n_1}}\prod\limits_{j\in S_2^-}dw_j\int_{\gamma_{r_2}^{\Delta n}}\prod\limits_{j\in S_2^+}dw_j \notag\\ &\times\det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2} \det\left(\frac 1{w_j-z_i}\right)_{\substack{i\in S_1^-\\j\in S_2^-}}\det\left(\frac 1{z_i-w_j}\right)_{\substack{i\in S_1^+\\j\in S_2^+}} \notag\\ &\times \left(1-\prod\limits_{j\in S_1^-}z_j\prod\limits_{j\in S_2^-}\frac 1{w_j}\right) \prod\limits_{j\in S_1^-}\frac{(1-z_j)^{n_1}}{z_j^{n_1}}\prod\limits_{j\in S_2^-}\frac{w_j^{n_1+1}}{(1-w_j)^{n_1}} \prod\limits_{j\in S_1^+}\frac{(1-z_j)^{n_1+1}}{z_j^{n_1}}\prod\limits_{j\in S_2^+}\frac{w_j^{n_1+1}}{(1-w_j)^{n_1+1}} \notag\\ &\times \prod\limits_{j=1}^{n_2}\frac{w_j^u}{z_j^u}\prod\limits_{j=1}^{n_2} \frac 1{(1-qz_j)^{m_1}(1-z_j)^{n_2}w_j^{v_2+n_2+1}(1-qw_j)^{\Delta m}}. 
\notag \end{align} To see that the summation over $S_1^-,S_2^-$ in (\ref{2.26}) is actually trivial, in the sense that the summand does not depend on the choice of $S_1^-,S_2^-$, we use the following observation. Write $$ \Delta_S(z)=\prod\limits_{j<k,j,k\in S}(z_k-z_j) $$ for $S\subseteq [1,n_2]$. Then, by the standard formula for a Vandermonde determinant, \begin{equation*} \det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}=\prod\limits_{1\le j<k\le n_2}(z_k-z_j) =\Delta_{S_1^-}(z)\Delta_{S_1^+}(z)\prod\limits_{j\in S_1^-, k\in S_1^+}(z_k-z_j) (-1)^{\kappa(S_1^-,S_1^+)}. \end{equation*} If we insert this into (\ref{2.26}) for both $z$ and $w$ we see that we can relabel the indices \begin{align} &(z_j)_{j\in S_1^-}\to (z_j)_{j=1}^{n_1}\,\,,\,\,(z_j)_{j\in S_1^+}\to (z_j)_{j=n_1+1}^{n_2} \notag\\ &(w_j)_{j\in S_2^-}\to (w_j)_{j=1}^{n_1}\,\,,\,\,(w_j)_{j\in S_2^+}\to (w_j)_{j=n_1+1}^{n_2} \notag \end{align} and then the sums over $S_1^-,S_2^-$ become trivial. Note that $$ \sum\limits_{S_i^-}1=\binom{n_2}{n_1}, $$ $i=1,2$. Formula (\ref{2.26}) then reduces to (\ref{2.7}) and we have proved the proposition. \end{proof} From proposition \ref{prop2.2} we can, by a limiting procedure, obtain a corresponding formula in the Brownian directed polymer model. Let $\Gamma_d$ denote the vertical straight line contour through $d\in\mathbb{R}$ oriented upwards, $\Gamma_d:t\to d+it$, $t\in\mathbb{R}$. Define \begin{align}\label{2.27} &Q(h)=\frac{(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}n_1!\Delta n!}\int_{\Gamma_{d_1}^{n_1}}d^{n_1}z\int_{\Gamma_{d_2}^{\Delta n}}d^{\Delta n}z \int_{\Gamma_{d_3}^{n_1}}d^{n_1}w\int_{\Gamma_{d_4}^{\Delta n}}d^{\Delta n}w\det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2} \\ &\times\prod\limits_{j=1}^{n_1}\frac{e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}}{z_j^{\Delta n}w_j^{n_1}}\left(\frac 1{z_j-w_j}-h\right) \prod\limits_{j=n_1+1}^{n_2}\frac{e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}}{z_j^{\Delta n-1}w_j^{n_1+1}(w_j-z_j)} \notag \end{align} where \begin{equation}\label{2.28} d_1<d_3<0\,\,,\,\,d_4<d_2<0. \end{equation} Here, we have written \begin{equation}\label{2.29} \Delta\xi=\xi_2-\xi_1\,\,,\,\,\Delta\mu=\mu_2-\mu_1. \end{equation} We can now state a proposition concerning the joint distribution function of interest in theorem \ref{thm1.1}. \begin{proposition}\label{prop2.4} Let $H(\mu,n)$ be defined by (\ref{1.1}). Then \begin{equation}\label{2.30} \frac{\partial}{\partial\xi_1}\mathbb{P}\left[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2\right]=\left.\frac{\partial}{\partial h}\right|_{h=0}Q(h). \end{equation} \end{proposition} \begin{proof} Just as in (\ref{1.20}) we have the formula \begin{align}\label{2.31} &\mathbb{P}\left[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2\right] \\ &=\lim\limits_{T\to\infty}\mathbb{P}\left[G([\mu_1T],n_1)\le \frac{q}{1-q}[\mu_1T]+\xi_1\frac{\sqrt{q}}{1-q}\sqrt{T}, G([\mu_2T],n_2)\le \frac{q}{1-q}[\mu_2T]+\xi_2\frac{\sqrt{q}}{1-q}\sqrt{T}\right]. \notag \end{align} In the formula (\ref{2.7}) we assume that we have chosen $r_1,r_2, s_1, s_2$ so that \begin{equation}\label{2.32} (r_1/s_1)^{n_1}>(s_2/r_2)^{\Delta n}, \end{equation} which can always be done for fixed $n_1, n_2$. We can then do the $u$-summation in (\ref{2.7}) to get \begin{equation}\label{2.33} \sum\limits_{u=-\infty}^{v_1}\left(\prod\limits_{j=1}^{n_2}\frac{w_j}{z_j}\right)^u=\frac{\prod\limits_{j=1}^{n_2}w_j^{v_1}/z_j^{v_1}} {1-\prod\limits_{j=1}^{n_2}z_j/w_j}.
\end{equation} Insert this into (\ref{2.7}), expand the Cauchy determinants and symmetrize. This gives \begin{align}\label{2.34} &P=\frac{(1-q)^{m_2n_2}(-1)^{n_2(n_2-1)/2}}{(2\pi i)^{2n_2}n_1!(\Delta n)!} \int\limits_{\gamma_{s_1}^{n_1}}d^{n_1}z\int\limits_{\gamma_{s_2}^{\Delta n}}d^{\Delta n}z \int\limits_{\gamma_{r_1}^{n_1}}d^{n_1}w\int\limits_{\gamma_{r_2}^{\Delta n}}d^{\Delta n}w \frac{1-\prod\limits_{j=1}^{n_1}z_j/w_j} {1-\prod\limits_{j=1}^{n_2}z_j/w_j}\\ &\times\det \left((z_j-1)^{i-1}\right)_{1\le i,j\le n_2}\det \left((w_j-1)^{i-1}\right)_{1\le i,j\le n_2} \prod\limits_{j=1}^{n_1}\frac 1{w_j-z_j}\prod\limits_{j=n_1+1}^{n_2}\frac 1{z_j-w_j}\notag\\ &\times\prod\limits_{j=n_1+1}^{n_2}\frac{1-z_j}{1-w_j} \prod\limits_{j=1}^{n_2}\frac{1} {z_j^{v_1+n_1}(1-z_j)^{\Delta n}(1-qz_j)^{m_1}w_j^{v_2-v_1+\Delta n}(1-w_j)^{n_1}(1-qw_j)^{\Delta m}}. \notag \end{align} Here, we have also used the fact that $ \det \left(z_j^{i-1}\right)=\det \left((z_j-1)^{i-1}\right), $ by the standard product formula for a Vandermonde determinant. We now want to take the limit in (\ref{2.31}) using the formula (\ref{2.34}), i.e. we let \begin{equation}\label{2.36} m_i=[\mu_iT]\,\,,\,\,v_i=\frac{q}{1-q}[\mu_iT]+\xi_i\frac{\sqrt{q}}{1-q}\sqrt{T}, \end{equation} $i=1,2$. Let $\Gamma_d^{(T)}$ be given by $t\to d+it$, $|t|\le\pi(1-q)^{-1}\sqrt{qT}$. In (\ref{2.34}) we make the change of variables \begin{align}\label{2.37} z_j&=e^{(1-q)z_j'/\sqrt{qT}}\,\,,\,\,z_j'\in\Gamma_{d_1}^{(T)}\,\,,\,\,1\le j\le n_1, \\ z_j&=e^{(1-q)z_j'/\sqrt{qT}}\,\,,\,\,z_j'\in\Gamma_{d_2}^{(T)}\,\,,\,\,n_1+1\le j\le n_2, \notag\\ w_j&=e^{(1-q)w_j'/\sqrt{qT}}\,\,,\,\,w_j'\in\Gamma_{d_3}^{(T)}\,\,,\,\,1\le j\le n_1, \notag\\ w_j&=e^{(1-q)w_j'/\sqrt{qT}}\,\,,\,\,w_j'\in\Gamma_{d_4}^{(T)}\,\,,\,\,n_1+1\le j\le n_2, \notag \end{align} where the $d_i$ satisfy (\ref{2.28}). The condition (\ref{2.32}) becomes \begin{equation}\label{2.38} n_1d_3+\Delta n d_4>n_1d_1+\Delta n d_2. \end{equation} From (\ref{2.37}) it follows using Taylor expansions and (\ref{2.36}) that \begin{equation*} \lim\limits_{T\to\infty}\frac{(1-qz_j)^{m_1}z_j^{v_1+n_1}}{(1-q)^{m_1}}=e^{-\frac 12\mu_1 z_j'^2+\xi_1z_j'}, \end{equation*} \begin{equation*} \lim\limits_{T\to\infty}\frac{\sqrt{qT}}{1-q}(z_j-1)=z_j', \end{equation*} and similar limits involving $w_j$ instead, and \begin{equation*} \lim\limits_{T\to\infty}\frac{1-\prod_{j=1}^{n_1}z_j/w_j}{1-\prod_{j=1}^{n_2}z_j/w_j}=\frac{\sum_{j=1}^{n_1}(w_j'-z_j')}{\sum_{j=1}^{n_2}(w_j'-z_j')}. \end{equation*} If we insert (\ref{2.37}) into (\ref{2.34}) and use these limits we can take the limit $T\to\infty$. To make the argument complete we also need some estimates, but we omit the details. 
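For the reader's convenience we indicate the routine computation behind the first of these limits. Write $\epsilon=(1-q)z_j'/\sqrt{qT}$, so that $z_j=e^{\epsilon}$. Then
\begin{equation*}
\log\frac{(1-qz_j)^{m_1}}{(1-q)^{m_1}}=m_1\log\left(1-\frac{q(e^{\epsilon}-1)}{1-q}\right)
=-\mu_1z_j'\sqrt{qT}-\frac 12\mu_1z_j'^2+O(T^{-1/2}),
\end{equation*}
where we used $m_1=[\mu_1T]$ and expanded to second order in $\epsilon$, while
\begin{equation*}
\log z_j^{v_1+n_1}=(v_1+n_1)\epsilon=\mu_1z_j'\sqrt{qT}+\xi_1z_j'+O(T^{-1/2})
\end{equation*}
by (\ref{2.36}). The terms of order $\sqrt{T}$ cancel and the first limit follows; the remaining limits are obtained in the same way.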
After some computation we find, using (\ref{2.31}), that (we have dropped the primes on the $z$- and $w$-variables) \begin{align}\label{2.39} &\mathbb{P}\left[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2\right]\notag\\ &=\frac{(-1)^{n_2(n_2-1)/2}}{(2\pi i)^{2n_2}n_1!(\Delta n)!}\int_{\Gamma_{d_1}^{n_1}}d^{n_1}z\int_{\Gamma_{d_2}^{\Delta n}}d^{\Delta n}z \int_{\Gamma_{d_3}^{n_1}}d^{n_1}w\int_{\Gamma_{d_4}^{\Delta n}}d^{\Delta n}w\det\left(z_j^{i-1}\right)_{1\le i,j\le n_2}\det\left(w_j^{i-1}\right)_{1\le i,j\le n_2} \\ &\times \frac{\sum\limits_{j=1}^{n_1}(w_j-z_j)}{\sum\limits_{j=1}^{n_2}(w_j-z_j)} \prod\limits_{j=1}^{n_1}\frac{e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}}{z_j^{\Delta n}w_j^{n_1}(z_j-w_j)} \prod\limits_{j=n_1+1}^{n_2}\frac{e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}}{z_j^{\Delta n-1}w_j^{n_1+1}(w_j-z_j)}. \notag \end{align} From (\ref{2.27}) and (\ref{2.39}) we see that (\ref{2.30}) follows (recall that $\Delta\xi=\xi_2-\xi_1$). Note that in $Q$ the condition (\ref{2.38}) is no longer important. This completes the proof. \end{proof} \section{Expansion}\label{sect3} In order to use the formula (\ref{2.30}) to prove theorem \ref{thm1.1} we must rewrite $Q(h)$ given in (\ref{2.27}) further so that we can expand it in a way appropriate for the asymptotic analysis. This expansion is similar in some ways to writing a distribution function like (\ref{1.3}) as a Fredholm expansion. Behind this expansion there is a certain orthogonality related to the orthogonality of the Hermite polynomials. However, this orthogonality is seen at the level of the generating function for the Hermite polynomials. We will prove a lemma which is the first step towards the expansion and which uses an integral formula for the Hermite polynomials. \begin{lemma}\label{lem3.1} The function $Q(h)$ defined by (\ref{2.27}) is also given by \begin{align}\label{3.1} &Q(h)=\frac{1}{(2\pi i)^{4n_2}n_1!(\Delta n)!}\int_{\Gamma_{d_1}^{n_1}}d^{n_1}z\int_{\Gamma_{d_2}^{\Delta n}}d^{\Delta n}z \int_{\Gamma_{d_3}^{n_1}}d^{n_1}w\int_{\Gamma_{d_4}^{\Delta n}}d^{\Delta n}w\int_{\gamma_{\tau_1}^{n_2}}d^{n_2}\zeta \int_{\gamma_{\tau_2}^{n_2}}d^{n_2}\omega \\ &\times\det\left(\frac 1{\zeta_j^i}\right)_{1\le i,j\le n_2}\det\left(\frac 1{\omega_j^{n_2+1-i}}\right)_{1\le i,j\le n_2} \notag\\ &\times \prod\limits_{j=1}^{n_1}\frac{z_j^{n_1}w_j^{\Delta n}e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}} {e^{\frac 12\mu_1\zeta_j^2-\xi_1\zeta_j+\frac 12\Delta\mu \omega_j^2-\Delta\xi \omega_j}(\zeta_j-z_j)(\omega_j-w_j)}\left( \frac 1{z_j-w_j}-h\right) \notag\\ & \times\prod\limits_{j=n_1+1}^{n_2}\frac{z_j^{n_1+1}w_j^{\Delta n-1}e^{\frac 12\mu_1z_j^2-\xi_1z_j+\frac 12\Delta\mu w_j^2-\Delta\xi w_j}} {e^{\frac 12\mu_1\zeta_j^2-\xi_1\zeta_j+\frac 12\Delta\mu \omega_j^2-\Delta\xi \omega_j}(w_j-z_j)(\zeta_j-z_j)(\omega_j-w_j)}, \notag \end{align} where \begin{equation}\label{3.1'} d_1<d_3<-\max(\tau_1,\tau_2)<0\,\,,\,\,d_4<d_2<-\max(\tau_1,\tau_2)<0. \end{equation} \end{lemma} \begin{proof} Reversing the order of the rows in a determinant of size $n$ gives a sign factor $(-1)^{n(n-1)/2}$. If we do this in a Vandermonde determinant we get the identity \begin{equation}\label{vandermondeidentity} \det \left(z_j^{i-1}\right)_{1\le i,j\le n_2}=(-1)^{\frac{n_2(n_2-1)}2}\det \left(z_j^{n_2-i}\right)_{1\le i,j\le n_2} =(-1)^{\frac{n_2(n_2-1)}2}\left(\prod\limits_{j=1}^{n_2}z_j^{n_2}\right) \det \left(\frac 1{z_j^i}\right)_{1\le i,j\le n_2}.
\end{equation} Provided $\text{Re\,} z_j<0$, we see that for $i\ge 1$ $$ \int_0^\infty \frac{u_j^{i-1} e^{u_jz_j}}{(i-1)!}\,du_j=\frac {(-1)^i}{z_j^i}. $$ It follows from these two identities that \begin{align} &\det \left(z_j^{i-1}\right)_{1\le i,j\le n_2}=(-1)^{\frac{n_2(n_2-1)}2}\prod\limits_{j=1}^{n_2}z_j^{n_2} \det\left(\frac{(-1)^i}{(i-1)!}\int_0^\infty u_j^{i-1} e^{u_jz_j}\,du_j\right)_{1\le i,j\le n_2} \notag\\ &=(-1)^{\frac{n_2(n_2-1)}2}\prod\limits_{j=1}^{n_2}z_j^{n_2}\det\left(\frac{(-1)^i}{(i-1)!}\int_{-\infty}^a (a-x_j)^{i-1} e^{(a-x_j)z_j}\,dx_j\right)_{1\le i,j\le n_2} \notag\\ &=(-1)^{\frac{n_2(n_2+1)}2}\prod\limits_{j=1}^{n_2}z_j^{n_2}\int\limits_{(-\infty,a]^{n_2}}\prod\limits_{j=1}^{n_2}e^{(a-x_j)z_j} \det\left(\frac{(x_j-a)^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}\,d^{n_2}x \notag \end{align} for any $a\in\mathbb{R}$. From this we see that \begin{equation}\label{3.2} \det \left(z_j^{i-1}\right)_{1\le i,j\le n_2}=(-1)^{\frac{n_2(n_2+1)}2}\prod\limits_{j=1}^{n_2}z_j^{n_2}\int\limits_{(-\infty,a]^{n_2}}\prod\limits_{j=1}^{n_2}e^{(a-x_j)z_j} \det\left(\frac{(x_j-a)^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}\,d^{n_2}x. \end{equation} Similarly, we get \begin{equation}\label{3.3} \det \left(w_j^{i-1}\right)_{1\le i,j\le n_2}=(-1)^{\frac{n_2(n_2+1)}2}\prod\limits_{j=1}^{n_2}w_j^{n_2}\int\limits_{(-\infty,b]^{n_2}}\prod\limits_{j=1}^{n_2}e^{(b-y_j)w_j} \det\left(\frac{(y_j-b)^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}\,d^{n_2}y, \end{equation} for any $b\in\mathbb{R}$. Choose $a=\xi_1$ and $b=\Delta\xi$. Using the identities (\ref{3.2}) and (\ref{3.3}) in (\ref{2.27}) we obtain \begin{align}\label{3.4} &Q(h)=\frac{(-1)^{\frac{n_2(n_2-1)}2}}{(2\pi i)^{2n_2}n_1!(\Delta n)!}\int_{\Gamma_{d_1}^{n_1}}d^{n_1}z\int_{\Gamma_{d_2}^{\Delta n}}d^{\Delta n}z \int_{\Gamma_{d_3}^{n_1}}d^{n_1}w\int_{\Gamma_{d_4}^{\Delta n}}d^{\Delta n}w \\ &\int\limits_{(-\infty,\xi_1]^{n_2}}d^{n_2}x\int\limits_{(-\infty,\Delta\xi]^{n_2}}d^{n_2}y \det\left(\frac{x_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}\det\left(\frac{y_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2} \notag\\ &\times \prod\limits_{j=1}^{n_1}z_j^{n_1}w_j^{\Delta n}e^{\frac 12\mu_1z_j^2-x_jz_j+\frac 12\Delta\mu w_j^2-y_jw_j} \left(\frac 1{z_j-w_j}-h\right) \notag\\ & \times\prod\limits_{j=n_1+1}^{n_2}\frac{z_j^{n_1+1}w_j^{\Delta n-1}e^{\frac 12\mu_1z_j^2-x_jz_j+\frac 12\Delta\mu w_j^2-y_jw_j}} {w_j-z_j}. \notag \end{align} Here we also used the fact that \begin{equation*} \det\left(\frac{(x_j-a)^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}= \det\left(\frac{x_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2} \end{equation*} by the standard formula for the Vandermonde determinant, and similarly for the other determinant. Let $H_k(x)=2^kx^k+\dots$, $k\ge 0$, be the standard Hermite polynomials so that, for any $a>0$, \begin{align} \det\left(\frac{x_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}&=\frac 1{a^{\frac{n_2(n_2-1)}2}}\det\left(\frac{(ax_j)^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2} =\frac 1{a^{\frac{n_2(n_2-1)}2}}\det\left(\frac{H_{i-1}(ax_j)}{2^{i-1}(i-1)!}\right)_{1\le i,j\le n_2} \notag\\ &=\frac 1{(a\sqrt{2\mu_1})^{\frac{n_2(n_2-1)}2}}\det\left(\frac 1{2\pi i}\int_{\gamma_{\tau_1}}\frac{e^{a\sqrt{2\mu_1}x_j\zeta_j-\frac 12\mu_1\zeta_j^2}} {\zeta_j^i}d\zeta_j\right)_{1\le i,j\le n_2}, \notag \end{align} where we have chosen $\tau_1$ so that (\ref{3.1'}) holds. Take $a=1/\sqrt{2\mu_1}$. 
We have shown that \begin{equation}\label{3.5} \det\left(\frac{x_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}=\frac 1{(2\pi i)^{n_2}}\int_{\gamma_{\tau_1}^{n_2}}d^{n_2}\zeta \prod\limits_{j=1}^{n_2}e^{x_j\zeta_j-\frac 12\mu_1\zeta_j^2}\det\left(\frac 1{\zeta_j^i}\right)_{1\le i,j\le n_2}. \end{equation} Similarly, \begin{equation}\label{3.6} \det\left(\frac{y_j^{i-1}}{(i-1)!}\right)_{1\le i,j\le n_2}=\frac{(-1)^{\frac{n_2(n_2-1)}2}} {(2\pi i)^{n_2}}\int_{\gamma_{\tau_2}^{n_2}}d^{n_2}\omega \prod\limits_{j=1}^{n_2}e^{y_j\omega_j-\frac 12\Delta\mu\,\omega_j^2}\det\left(\frac 1{\omega_j^{n_2+1-i}}\right)_{1\le i,j\le n_2}, \end{equation} where $\tau_2$ satisfies (\ref{3.1'}) (here the Hermite polynomial computation above is repeated with $\mu_1$ replaced by $\Delta\mu$, i.e. with $a=1/\sqrt{2\Delta\mu}$). If we insert (\ref{3.5}) and (\ref{3.6}) into (\ref{3.4}) the $x_j$- and $y_j$-integrations become \begin{equation} \int_{-\infty}^{\xi_1}e^{x_j(\zeta_j-z_j)}dx_j=\frac{e^{\xi_1(\zeta_j-z_j)}}{\zeta_j-z_j}\,\,,\,\, \int_{-\infty}^{\Delta\xi}e^{y_j(\omega_j-w_j)}dy_j=\frac{e^{\Delta\xi(\omega_j-w_j)}}{\omega_j-w_j}, \notag \end{equation} where the integrals converge because of (\ref{3.1'}). The resulting formula is (\ref{3.1}). \end{proof} Write \begin{equation}\label{3.7} G_{n,\mu,\xi}(z)=z^ne^{\frac 12\mu z^2-\xi z}. \end{equation} Recall (\ref{3.1'}). For $1\le k,\ell\le n_2$, we define $A_h(\ell,k)$ by \begin{align}\label{3.8} &\delta_{\ell k}1(\ell\le n_1)+A_h(\ell,k) \\ &=\frac 1{(2\pi i)^4}\int_{\Gamma_{d_1}}dz\int_{\Gamma_{d_3}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-\zeta)(w-\omega)}\left(\frac 1{z-w}-h\right), \notag \end{align} and $B(\ell,k)$ by \begin{align}\label{3.9} &\delta_{\ell k}1(\ell>n_1)+B(\ell,k) \\ &=-\frac 1{(2\pi i)^4}\int_{\Gamma_{d_2}}dz\int_{\Gamma_{d_4}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1+1,\mu_1,\xi_1}(z)G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-w)(z-\zeta)(w-\omega)}. \notag \end{align} Expand the determinants in (\ref{3.1}), \begin{align} \det\left(\frac 1{\zeta_j^i}\right)&=\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma)\prod\limits_{j=1}^{n_2}\frac 1{\zeta_j^{\sigma(j)}}, \notag\\ \det\left(\frac 1{\omega_j^{n_2+1-i}}\right)&=\sum\limits_{\tau\in S_{n_2}}\text{sgn\,}(\tau)\prod\limits_{j=1}^{n_2}\frac 1{\omega_j^{n_2+1-\tau(j)}}. \notag \end{align} From (\ref{3.8}) and (\ref{3.9}) it then follows that we can write \begin{align}\label{3.10} Q(h)=\frac 1{n_1!\Delta n!}\sum\limits_{\sigma,\tau\in S_{n_2}}\text{sgn\,}(\sigma\tau)&\prod\limits_{j=1}^{n_1}\left(\delta_{\tau(j),\sigma(j)}1(\tau(j)\le n_1)+A_h(\tau(j),\sigma(j))\right)\\ \times&\prod\limits_{j=n_1+1}^{n_2}\left(\delta_{\tau(j),\sigma(j)}1(\tau(j)>n_1)+B(\tau(j),\sigma(j))\right). \notag \end{align} This way of writing $Q(h)$ is useful because (\ref{3.10}) leads to a determinant expansion of $Q(h)$, and $A_h$ and $B$ can be rewritten in a way that is useful for taking limits, see lemma \ref{lem4.1}. Write $$ [a,b]_<^n=\{(x_1,\dots,x_n)\in[a,b]^n\,;\,x_1<\dots<x_n\}, $$ which is empty if $n=0$, and recall the notation (\ref{1.15}).
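The expansion of (\ref{3.10}) that we will use is of the same nature as the classical identity
\begin{equation*}
\det(I+M)=\sum\limits_{S\subseteq[1,n]}\det\left(M(i,j)\right)_{i,j\in S}
\end{equation*}
for an $n\times n$ matrix $M$, where the term $S=\emptyset$ equals $1$; in (\ref{3.10}) the identity matrix is only partial, because of the indicator functions, and this is what produces the block structure below.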
By expanding (\ref{3.10}) we can prove \begin{proposition}\label{prop3.2} We have the formula \begin{equation}\label{3.11} Q(h)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n}\sum\limits_{\substack{\mathbf{c}\in[1,n_1]_<^r\\ \mathbf{c'}\in[1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in[n_1+1,n_2]_<^r\\ \mathbf{d'}\in[n_1+1,n_2]_<^t}} \det\left(\begin{matrix} B(\mathbf{c},\mathbf{c}) &B(\mathbf{c},\mathbf{c'}) &B(\mathbf{c},\mathbf{d}) &B(\mathbf{c},\mathbf{d'}) \\ A_h(\mathbf{c'},\mathbf{c}) &A_h(\mathbf{c'},\mathbf{c'}) &A_h(\mathbf{c'},\mathbf{d}) &A_h(\mathbf{c'},\mathbf{d'})\\ A_h(\mathbf{d},\mathbf{c}) &A_h(\mathbf{d},\mathbf{c'}) &A_h(\mathbf{d},\mathbf{d}) &A_h(\mathbf{d},\mathbf{d'})\\ B(\mathbf{d'},\mathbf{c}) &B(\mathbf{d'},\mathbf{c'}) &B(\mathbf{d'},\mathbf{d}) &B(\mathbf{d'},\mathbf{d'}) \end{matrix}\right). \end{equation} \end{proposition} \begin{proof} Set \begin{equation}\label{3.12} E_h(j;\ell,k)=\begin{cases} \delta_{\ell k}1(\ell\le n_1)+A_h(\ell,k) & \text{if } 1\le j\le n_1 \\ \delta_{\ell k}1(\ell >n_1)+B(\ell,k) & \text{if } n_1<j\le n_2 \end{cases}. \end{equation} Then, by (\ref{3.10}), \begin{equation}\label{3.13} Q(h)=\frac 1{n_1!\Delta n!}\sum\limits_{\sigma,\tau\in S_{n_2}}\text{sgn\,}(\sigma\tau)\prod\limits_{j=1}^{n_2}E_h(j;\tau(j),\sigma(j)). \end{equation} By reordering the product we get \begin{align*} Q(h)&=\frac 1{n_1!\Delta n!}\sum\limits_{\sigma,\tau\in S_{n_2}}\text{sgn\,}(\sigma\tau)\prod\limits_{j=1}^{n_2}E_h(\tau^{-1}(j);j,\sigma(\tau^{-1}(j)))\\ &=\frac 1{n_1!\Delta n!}\sum\limits_{\sigma,\tau\in S_{n_2}}\text{sgn\,}(\sigma\tau^{-1})\prod\limits_{j=1}^{n_2}E_h(\tau^{-1}(j);j,\sigma(\tau^{-1}(j))), \end{align*} since $\text{sgn\,}(\sigma\tau^{-1})=\text{sgn\,}(\sigma\tau)$. If we replace $\sigma\tau^{-1}$ by $\sigma$ and then $\tau^{-1}$ by $\tau$, we see that \begin{equation}\label{3.14} Q(h)=\frac 1{n_1!\Delta n!}\sum\limits_{\sigma,\tau\in S_{n_2}}\text{sgn\,}(\sigma)\prod\limits_{j=1}^{n_2}E_h(\tau(j);j,\sigma(j)). \end{equation} Let $J_-\subseteq[1,n_2]$, $|J_-|=n_1$ and $J_+=[1,n_2]\setminus J_-$. Then, by (\ref{3.12}) and (\ref{3.14}), \begin{align}\label{3.15} Q(h)&=\frac 1{n_1!\Delta n!}\sum\limits_{J_-}\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma)\sum\limits_{\tau\in S_{n_2};\tau(J_-)=[1,n_1]} \prod\limits_{j\in J_-}E_h(\tau(j);j,\sigma(j))\prod\limits_{j\in J_+}E_h(\tau(j);j,\sigma(j)) \\ &=\sum\limits_{J_-}\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma) \prod\limits_{j\in J_-}(\delta_{j,\sigma(j)}1(j\le n_1)+A_h(j,\sigma(j))) \prod\limits_{j\in J_+}(\delta_{j,\sigma(j)}1(j>n_1)+B(j,\sigma(j))), \notag \end{align} since $$ \sum\limits_{\tau\in S_{n_2}:\tau(J_-)=[1,n_1]}1=n_1!\Delta n!. $$ We can rewrite (\ref{3.15}) as \begin{align}\label{3.16} Q(h) =\sum\limits_{J_-}\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma) &\prod\limits_{j\in J_-\cap[1,n_1]}(\delta_{j,\sigma(j)}+A_h(j,\sigma(j))) \prod\limits_{j\in J_-\cap[n_1+1,n_2]}A_h(j,\sigma(j)) \\ \times&\prod\limits_{j\in J_+\cap[1,n_1]}B(j,\sigma(j)) \prod\limits_{j\in J_+\cap[n_1+1,n_2]}(\delta_{j,\sigma(j)}+B(j,\sigma(j))). \notag \end{align} We want to expand the products involving the Kronecker deltas. Let $$ \gamma=J_+\cap[1,n_1]\,\,,\,\,\delta=J_-\cap[n_1+1,n_2]. $$ Set $r=|J_+\cap[1,n_1]|$. Then, $|J_-\cap[1,n_1]|=n_1-r$ and we see that $0\le r\le n_1$. Since $|J_-|=n_1$, we get $$ |J_-\cap[n_1+1,n_2]|=n_1-|J_-\cap[1,n_1]|=r. $$ Thus, $|\gamma|=|\delta|=r$. 
Given $\gamma,\delta$ we see that $J_-=\delta\cup([1,n_1]\setminus \gamma)$, so $J_-$ is uniquely determined by $\gamma,\delta$. Hence, (\ref{3.16}) can be written as \begin{align}\label{3.17} Q(h) =\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\gamma,\delta}\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma) &\prod\limits_{j\in [1,n_1]\setminus\gamma} \left(\delta_{j,\sigma(j)}+A_h(j,\sigma(j))\right) \prod\limits_{j\in\delta}A_h(j,\sigma(j)) \\ \times&\prod\limits_{j\in\gamma}B(j,\sigma(j)) \prod\limits_{j\in [n_1+1,n_2]\setminus\delta}\left(\delta_{j,\sigma(j)}+B(j,\sigma(j))\right), \notag \end{align} where we sum over all $\gamma,\delta$ such that $\gamma\subseteq [1,n_1]$, $\delta\subseteq [n_1+1,n_2]$, $|\gamma|=|\delta|=r$. Now, \begin{equation}\label{3.18} \prod\limits_{j\in [1,n_1]\setminus\gamma}(\delta_{j,\sigma(j)}+A_h(j,\sigma(j))) =\sum\limits_{\gamma'\subseteq [1,n_1]\setminus\gamma}\prod\limits_{j\in[1,n_1]\setminus(\gamma\cup\gamma')} \delta_{j,\sigma(j)}\prod\limits_{j\in\gamma'}A_h(j,\sigma(j)) \end{equation} and \begin{equation}\label{3.19} \prod\limits_{j\in [n_1+1,n_2]\setminus\delta}(\delta_{j,\sigma(j)}+B(j,\sigma(j))) =\sum\limits_{\delta'\subseteq [n_1+1,n_2]\setminus\delta}\prod\limits_{j\in[n_1+1,n_2]\setminus(\delta\cup\delta')} \delta_{j,\sigma(j)}\prod\limits_{j\in\delta'}B(j,\sigma(j)). \end{equation} Inserting (\ref{3.18}) and (\ref{3.19}) into (\ref{3.17}) yields \begin{align}\label{3.20} &Q(h) =\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-r}\sum\limits_{t=0}^{\Delta n-r} \sum\limits_{\gamma,\gamma',\delta,\delta'}\sum\limits_{\sigma\in S_{n_2}}\text{sgn\,}(\sigma) \\ &\times\prod\limits_{j\in[1,n_1]\setminus(\gamma\cup\gamma')}\delta_{j,\sigma(j)}\prod\limits_{j\in[n_1+1,n_2]\setminus(\delta\cup\delta')}\delta_{j,\sigma(j)} \prod\limits_{j\in\gamma}B(j,\sigma(j))\prod\limits_{j\in\gamma'}A_h(j,\sigma(j))\prod\limits_{j\in\delta}A_h(j,\sigma(j)) \prod\limits_{j\in\delta'}B(j,\sigma(j)) \notag \end{align} where we sum over all $\gamma,\gamma',\delta,\delta'$ such that \begin{align}\label{3.21} &\gamma,\gamma'\subseteq[1,n_1]\,,\,\delta,\delta'\subseteq[n_1+1,n_2]\,,\,\gamma\cap\gamma'=\emptyset\,,\,\delta\cap\delta'=\emptyset, \\ &|\gamma|=|\delta|=r\,,\,|\gamma'|=s\,,\,|\delta'|=t. \notag \end{align} Let $\Lambda=\gamma\cup\gamma'\cup\delta\cup\delta'$ and $L=|\Lambda|=2r+s+t$. Terms in (\ref{3.20}) are $\neq 0$ only if $\sigma(j)=j$ for $j\in[1,n_2]\setminus\Lambda$. The permutation $\sigma$ is then reduced to a bijection $\tilde{\sigma}:\Lambda\to\Lambda$. Let $\Lambda=\{\lambda_1,\dots,\lambda_L\}$, where $\lambda_1<\dots<\lambda_L$. Then $\tilde{\sigma}$ is a permutation of $\Lambda$ and we have $\text{sgn\,}(\tilde{\sigma})=\text{sgn\,}(\sigma)$. Thus, \begin{align}\label{3.22} &Q(h) =\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-r}\sum\limits_{t=0}^{\Delta n-r} \sum\limits_{\gamma,\gamma',\delta,\delta'}\sum\limits_{\tilde{\sigma}:\Lambda\to\Lambda}\text{sgn\,}(\tilde{\sigma}) \\ &\times \prod\limits_{j\in\gamma}B(j,\tilde{\sigma}(j))\prod\limits_{j\in\gamma'} A_h(j,\tilde{\sigma}(j))\prod\limits_{j\in\delta} A_h(j,\tilde{\sigma}(j)) \prod\limits_{j\in\delta'}B(j,\tilde{\sigma}(j)). \notag \end{align} Define \begin{equation} T_h(j;\ell,k)=\begin{cases} B(\ell,k) & \text{if } j\in\gamma \\ A_h(\ell,k) & \text{if } j\in\gamma' \\ A_h(\ell,k) & \text{if } j\in\delta \\ B(\ell,k) & \text{if } j\in\delta'.
\end{cases} \end{equation} Then, by (\ref{3.22}), \begin{equation} Q(h) =\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-r}\sum\limits_{t=0}^{\Delta n-r} \sum\limits_{\gamma,\gamma',\delta,\delta'}\sum\limits_{\tilde{\sigma}:\Lambda\to\Lambda}\text{sgn\,}(\tilde{\sigma}) \prod\limits_{j\in\Lambda} T_h(j;j,\tilde{\sigma}(j)). \end{equation} Define $\tau\in S_L$ by $\tilde{\sigma}(\lambda_j)=\lambda_{\tau(j)}$, $1\le j\le L$. Then $\text{sgn\,}(\tilde{\sigma})=\text{sgn\,}(\tau)$ and we find \begin{align}\label{3.23} Q(h) &=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-r}\sum\limits_{t=0}^{\Delta n-r} \sum\limits_{\gamma,\gamma',\delta,\delta'}\sum\limits_{\tau\in S_L}\text{sgn\,}(\tau)\prod\limits_{i=1}^L T_h(\lambda_i;\lambda_i,\lambda_{\tau(i)}) \\ &=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-r}\sum\limits_{t=0}^{\Delta n-r} \sum\limits_{\gamma,\gamma',\delta,\delta'}\det\left(T_h(\lambda_i;\lambda_i,\lambda_{j})\right). \notag \end{align} Let \begin{align} \gamma&=\{c_1,\dots,c_r\}\,,\, \mathbf{c}=(c_1,\dots,c_r)\in[1,n_1]_<^r, \notag\\ \gamma'&=\{c'_1,\dots,c'_s\}\,,\, \mathbf{c'}=(c'_1,\dots,c'_s)\in[1,n_1]_<^s, \notag\\ \delta&=\{d_1,\dots,d_r\}\,,\, \mathbf{d}=(d_1,\dots,d_r)\in[n_1+1,n_2]_<^r, \notag\\ \delta'&=\{d'_1,\dots,d'_t\}\,,\, \mathbf{d'}=(d'_1,\dots,d'_t)\in[n_1+1,n_2]_<^t. \notag \end{align} Notice that the determinant in (\ref{3.23}) is unchanged if we reorder the $\lambda_i$'s, since rows and columns are then permuted simultaneously. Thus we can reorder the $\lambda_i$'s in (\ref{3.23}) so that we get the order $c_1,\dots,c_r,c'_1,\dots,c'_s,d_1,\dots,d_r,d'_1,\dots,d'_t$. Also, notice that if $c_i=c_j'$ or $d_i=d_j'$ for some $i,j$, then two columns of the determinant coincide, so the determinant is $=0$. Hence, we can remove the restrictions $\gamma\cap\gamma'=\emptyset$ and $\delta\cap\delta'=\emptyset$ in (\ref{3.21}). Note that if e.g. $s>n_1-r$, then we must have $c_i=c_j'$ for some $i,j$. Thus, the right side in (\ref{3.23}) equals the right side in (\ref{3.11}). \end{proof} We now want to give expressions for $A_h$ and $B$ that will be useful in the asymptotic analysis. First, we need some definitions. Recall the notation (\ref{3.7}). Let $0<\tau_1,\tau_2<D_1<D_2$ and define \begin{equation}\label{3.24} a_{0,1}(\ell,k)=\frac 1{(2\pi i)^4}\int_{\Gamma_{D_1}}dz\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(z-w)(z-\zeta)(w-\omega)}, \end{equation} \begin{equation}\label{3.25} b_1(\ell,k)=\frac 1{(2\pi i)^4}\int_{\Gamma_{D_2}}dz\int_{\Gamma_{D_1}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1+1,\mu_1,\xi_1}(z)G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(z-w)(z-\zeta)(w-\omega)}. \end{equation} Let $0<\tau<D$ and define \begin{equation}\label{3.26} c_2(\ell,k)=\frac 1{(2\pi i)^2}\int_{\Gamma_{D}}dw\int_{\gamma_{\tau}}d\omega\frac{G_{n_2-k,\Delta\mu,\Delta\xi}(w)} {G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(w-\omega)}, \end{equation} \begin{equation}\label{3.27} c_3(\ell,k)=\frac 1{(2\pi i)^2}\int_{\Gamma_{D}}dz\int_{\gamma_{\tau}}d\zeta\frac{G_{\ell-1,\mu_1,\xi_1}(z)} {G_{k,\mu_1,\xi_1}(\zeta)(z-\zeta)}.
\end{equation} We now set \begin{subequations}\label{3.28} \begin{align} a_{0,2}(\ell,k)&=-1(k>n_1)c_2(\ell,k)\\ a_{0,3}(\ell,k)&=1(\ell\le n_1)c_3(\ell,k)\\ b_2(\ell,k)&=-1(k>n_1+1)c_2(\ell,k)\\ b_3(\ell,k)&=1(\ell\le n_1+1)c_3(\ell,k)\\ a_2^\ast(\ell)&=c_2(\ell,n_1)\\ a_3^\ast(k)&=c_3(n_1+1,k), \end{align} \end{subequations} and finally, we define \begin{subequations}\label{3.29} \begin{align} a_0(\ell,k)&=a_{0,1}(\ell,k)-a_{0,2}(\ell,k)-a_{0,3}(\ell,k)\\ b(\ell,k)&=-b_1(\ell,k)+ b_2(\ell,k)+b_3(\ell,k)\\ A_0^\ast(\ell,k)&=-(\delta_{k,n_1+1}-a_3^\ast(k))(\delta_{\ell,n_1}-a_2^\ast(\ell))\label{3.29c}. \end{align} \end{subequations} With this notation we can formulate our next lemma. \begin{lemma}\label{lem3.3} If $A_h(\ell,k)$ and $B(\ell,k)$, $1\le\ell,k\le n_2$, are defined by (\ref{3.8}) and (\ref{3.9}) then \begin{equation}\label{3.30} A_0(\ell,k)=a_0(\ell,k), \end{equation} \begin{equation}\label{3.31} B(\ell,k)=-\delta_{k,n_1+1}\delta_{\ell,n_1+1}+b(\ell,k) \end{equation} and \begin{equation}\label{3.32} \left.\frac{\partial}{\partial h}\right|_{h=0}A_h(\ell,k)=A_0^\ast(\ell,k). \end{equation} \end{lemma} \begin{proof} Recall the condition (\ref{3.1'}), \begin{equation}\label{3.33} d_1<d_3<-\max(\tau_1,\tau_2)<0\,\,,\,\,d_4<d_2<-\max(\tau_1,\tau_2)<0. \end{equation} Choose $D_1,D_2,r_1,r_2,\tau_1,\tau_2$ so that \begin{equation}\label{3.34} 0<\tau_2<\tau_1<r_1<r_2<D_1<D_2. \end{equation} In the integral on the right side of (\ref{3.8}) we can deform $\Gamma_{d_3}$ to $\Gamma_{D_2}$ and $-\gamma_{r_1}$, and then $\Gamma_{d_1}$ to $\Gamma_{D_1}$ and $-\gamma_{r_2}$ without passing any poles. This gives \begin{align}\label{3.35} &\delta_{\ell k}1(\ell\le n_1)+A_h(\ell,k) =\frac 1{(2\pi i)^4}\left(\int_{\Gamma_{D_1}}dz\int_{\Gamma_{D_2}}dw-\int_{\Gamma_{D_1}}dz\int_{\gamma_{r_1}}dw -\int_{\gamma_{r_2}}dz\int_{\Gamma_{D_2}}dw\right) \\&\times \int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-\zeta)(w-\omega)}\left(\frac 1{z-w}-h\right) \notag\\ &+\frac 1{(2\pi i)^4}\int_{\gamma_{r_2}}dz\int_{\gamma_{r_1}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-\zeta)(w-\omega)}\left(\frac 1{z-w}-h\right). \notag \end{align} Consider the last integral in (\ref{3.35}). The $w$-integral has its only pole at $w=\omega$ and hence it equals \begin{equation}\label{3.36} \frac 1{(2\pi i)^3}\int_{\gamma_{r_2}}dz\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)}{G_{k,\mu_1,\xi_1}(\zeta)\omega^{n_1+1-\ell}(z-\zeta)}\left(\frac 1{z-\omega}-h\right). \end{equation} In this integral the $z$-integral has poles at $z=\zeta$ and at $z=\omega$, which gives \begin{equation}\label{3.37} \frac 1{(2\pi i)^2}\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{\zeta^{n_1-k}}{\omega^{n_1+1-\ell}}\left(\frac 1{\zeta-\omega}-h\right)+ \frac 1{(2\pi i)^2}\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{\ell-1,\mu_1,\xi_1}(\omega)}{G_{k,\mu_1,\xi_1}(\zeta)(\omega-\zeta)}. \end{equation} Since $\tau_2<\tau_1$, the $\zeta$-integral in the second integral in (\ref{3.37}) is $=0$. The first integral in (\ref{3.37}) equals $\delta_{\ell, k}1(\ell\le n_1)-h\delta_{k,n_1+1}\delta_{\ell,n_1}$.
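Here, and repeatedly below, the Kronecker deltas arise from the elementary residue identity
\begin{equation*}
\frac 1{2\pi i}\int_{\gamma_\tau}\zeta^m\,d\zeta=\delta_{m,-1},\qquad m\in\mathbb{Z},
\end{equation*}
where $\gamma_\tau$ denotes the circle of radius $\tau$ around the origin. For instance, in the $h$-part of the first integral in (\ref{3.37}), the $\zeta$-integral of $\zeta^{n_1-k}$ is non-zero precisely when $k=n_1+1$, and the $\omega$-integral of $\omega^{\ell-n_1-1}$ precisely when $\ell=n_1$.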
Combined with (\ref{3.35}) this gives \begin{equation}\label{3.38} A_h(\ell,k)=-h\delta_{k,n_1+1}\delta_{\ell,n_1}+a_{h,1}(\ell,k)-a_{h,2}(\ell,k)-a_{h,3}(\ell,k), \end{equation} where \begin{equation}\label{3.39} a_{h,1}(\ell,k)=\frac 1{(2\pi i)^4}\int_{\Gamma_{D_1}}dz\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)\left(\frac 1{z-w}-h\right)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (z-\zeta)(w-\omega)} \end{equation} \begin{equation}\label{3.40} a_{h,2}(\ell,k)=\frac 1{(2\pi i)^4}\int_{\gamma_{r_2}}dz\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)\left(\frac 1{z-w}-h\right)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (z-\zeta)(w-\omega)} \end{equation} and \begin{equation}\label{3.41} a_{h,3}(\ell,k)=\frac 1{(2\pi i)^4}\int_{\Gamma_{D_1}}dz\int_{\gamma_{r_1}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)\left(\frac 1{z-w}-h\right)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (z-\zeta)(w-\omega)}. \end{equation} We see that $a_{0,1}(\ell,k)$ in (\ref{3.39}) agrees with (\ref{3.24}). Also \begin{equation}\label{3.43} a_{0,2}(\ell,k)=\frac 1{(2\pi i)^4}\int_{\gamma_{r_2}}dz\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1,\mu_1,\xi_1}(z)G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (z-w)(z-\zeta)(w-\omega)}. \end{equation} The $z$-integral in (\ref{3.43}) has its only pole at $z=\zeta$ and hence \begin{equation} a_{0,2}(\ell,k)=\frac 1{(2\pi i)^3}\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{\zeta^{k-n_1}G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (\zeta-w)(w-\omega)}. \notag \end{equation} The $\zeta$-integral is $=0$ unless $k>n_1$, and if $k>n_1$ the $\zeta$-integral has $\zeta=w$ as its only pole outside $\gamma_{\tau_1}$. Thus, \begin{equation} a_{0,2}(\ell,k)=-\frac {1(k>n_1)}{(2\pi i)^2}\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_2-k,\Delta\mu,\Delta\xi}(w)}{G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(w-\omega)} =-1(k>n_1)c_2(\ell,k). \notag \end{equation} Similarly, we can show that \begin{equation} a_{0,3}(\ell,k)=\frac {1(\ell\le n_1)}{(2\pi i)^2}\int_{\Gamma_{D_1}}dz\int_{\gamma_{\tau_1}}d\zeta \frac{G_{\ell-1,\mu_1,\xi_1}(z)}{G_{k,\mu_1,\xi_1}(\zeta)(z-\zeta)}=1(\ell\le n_1)c_3(\ell,k). \notag \end{equation} This proves (\ref{3.30}). Differentiating with respect to $h$ simply replaces the factor $\frac 1{z-w}-h$ by $-1$, which decouples the $(z,\zeta)$- and $(w,\omega)$-integrations. Thus, \begin{align}\label{3.44} &\left.\frac{\partial}{\partial h}\right|_{h=0}a_{h,1}(\ell,k) \\ &=-\left(\frac 1{(2\pi i)^2}\int_{\Gamma_{D_1}}dz\int_{\gamma_{\tau_1}}d\zeta\frac{G_{n_1,\mu_1,\xi_1}(z)}{G_{k,\mu_1,\xi_1}(\zeta)(z-\zeta)}\right) \left(\frac 1{(2\pi i)^2}\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_2}}d\omega\frac{G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (w-\omega)}\right) \notag\\ &=-c_3(n_1+1,k)c_2(\ell,n_1)=-a_3^\ast(k)a_2^\ast(\ell).
\notag \end{align} Similarly \begin{align}\label{3.45} &\left.\frac{\partial}{\partial h}\right|_{h=0}a_{h,2}(\ell,k) \\ &=-\left(\frac 1{(2\pi i)^2}\int_{\gamma_{r_2}}dz\int_{\gamma_{\tau_1}}d\zeta\frac{G_{n_1,\mu_1,\xi_1}(z)}{G_{k,\mu_1,\xi_1}(\zeta)(z-\zeta)}\right) \left(\frac 1{(2\pi i)^2}\int_{\Gamma_{D_2}}dw\int_{\gamma_{\tau_2}}d\omega\frac{G_{\Delta n,\Delta\mu,\Delta\xi}(w)}{G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega) (w-\omega)}\right) \notag\\ &=-\left(\frac 1{2\pi i}\int_{\gamma_{\tau_1}}\zeta^{n_1-k}d\zeta\right)c_2(\ell,n_1)=-\delta_{k,n_1+1}a_2^\ast(\ell), \notag \end{align} and \begin{equation}\label{3.46} \left.\frac{\partial}{\partial h}\right|_{h=0}a_{h,3}(\ell,k)=-\delta_{\ell,n_1}a_3^\ast(k). \end{equation} If we use (\ref{3.44})--(\ref{3.46}) in (\ref{3.38}) we see that we have proved (\ref{3.32}). Consider next $B(\ell,k)$. In the integral on the right side of (\ref{3.9}) we deform $\Gamma_{d_2}$ to $\Gamma_{D_2}$ and $-\gamma_{r_1}$, and then $\Gamma_{d_4}$ to $\Gamma_{D_1}$ and $-\gamma_{r_2}$, which can be done without passing any poles. We obtain \begin{align}\label{3.47} &\delta_{\ell k}1(\ell> n_1)+B(\ell,k) =\frac 1{(2\pi i)^4}\left(-\int_{\Gamma_{D_2}}dz\int_{\Gamma_{D_1}}dw+\int_{\Gamma_{D_2}}dz\int_{\gamma_{r_2}}dw +\int_{\gamma_{r_1}}dz\int_{\Gamma_{D_1}}dw\right) \\&\times \int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1+1,\mu_1,\xi_1}(z)G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-w)(z-\zeta)(w-\omega)} \notag\\ &-\frac 1{(2\pi i)^4}\int_{\gamma_{r_1}}dz\int_{\gamma_{r_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1+1,\mu_1,\xi_1}(z)G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-w)(z-\zeta)(w-\omega)}. \notag \end{align} Consider the last integral in (\ref{3.47}). The $z$-integral has its only pole at $z=\zeta$ and hence it equals \begin{equation}\label{3.48} \frac 1{(2\pi i)^3}\int_{\gamma_{r_2}}dw\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{\zeta^{k-(n_1+1)}G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(w-\omega)(w-\zeta)}. \end{equation} The $w$-integral has poles at $w=\omega$ and $w=\zeta$ and consequently (\ref{3.48}) equals \begin{equation}\label{3.49} \frac 1{(2\pi i)^2}\int_{\gamma_{\tau_1}}d\zeta \int_{\gamma_{\tau_2}}d\omega \frac{\zeta^{n_1+1-k}\omega^{\ell-(n_1+2)}}{\omega-\zeta}+ \frac 1{(2\pi i)^2}\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_2-k,\Delta\mu,\Delta\xi}(\zeta)}{G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)(\zeta-\omega)}. \end{equation} The first integral in (\ref{3.49}) equals $-\delta_{\ell,k}1(\ell\le n_1+1)$, and in the second one the $\zeta$-integral has its only pole at $\zeta=\omega$, so that the second integral equals $\delta_{\ell,k}$. Thus, the expression in (\ref{3.49}) equals $\delta_{\ell,k}1(\ell>n_1+1)$ and we see from (\ref{3.47}) that \begin{align} B(\ell,k) &=-\delta_{k,n_1+1}\delta_{\ell,n_1+1}+ \frac 1{(2\pi i)^4}\left(-\int_{\Gamma_{D_2}}dz\int_{\Gamma_{D_1}}dw+\int_{\Gamma_{D_2}}dz\int_{\gamma_{r_2}}dw +\int_{\gamma_{r_1}}dz\int_{\Gamma_{D_1}}dw\right) \notag\\ &\times\int_{\gamma_{\tau_1}}d\zeta\int_{\gamma_{\tau_2}}d\omega \frac{G_{n_1+1,\mu_1,\xi_1}(z)G_{\Delta n-1,\Delta\mu,\Delta\xi}(w)}{G_{k,\mu_1,\xi_1}(\zeta)G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)} \frac 1{(z-w)(z-\zeta)(w-\omega)}.
\notag \end{align} This leads to the formula (\ref{3.31}) by an argument analogous to the proof of (\ref{3.30}). \end{proof} Before we can carry out the asymptotic analysis of the expression for $Q(h)$ in (\ref{3.11}) we have to rewrite it further. Define \begin{equation}\label{3.492} \tilde{a}_0(\ell,n_1)=a_0(\ell,n_1)+a_2^\ast(\ell)= a_{0,1}(\ell,n_1)+c_2(\ell,n_1)-1(\ell\le n_1)c_3(\ell,n_1). \end{equation} Set, using block notation where e.g.\ $b(\mathbf{c},\mathbf{d'})$ denotes the $r\times t$ block with entries $b(c_i,d'_j)$ and a single index such as $n_1$ gives a single row or column, \begin{equation}\label{3.493} V(\mathbf{c},\mathbf{c'},\mathbf{d},\mathbf{d'})= \left( \begin{matrix} b(\mathbf{c},\mathbf{c}) &b(\mathbf{c},\mathbf{c'}) &b(\mathbf{c},n_1) &b(\mathbf{c},\mathbf{d}) &b(\mathbf{c},\mathbf{d'}) \\ a_0(\mathbf{c'},\mathbf{c}) &a_0(\mathbf{c'},\mathbf{c'}) &\tilde{a}_0(\mathbf{c'},n_1) &a_0(\mathbf{c'},\mathbf{d}) &a_0(\mathbf{c'},\mathbf{d'}) \\ b(n_1+1,\mathbf{c}) &b(n_1+1,\mathbf{c'}) &b(n_1+1,n_1) &b(n_1+1,\mathbf{d}) &b(n_1+1,\mathbf{d'}) \\ a_0(\mathbf{d},\mathbf{c}) &a_0(\mathbf{d},\mathbf{c'}) &\tilde{a}_0(\mathbf{d},n_1) &a_0(\mathbf{d},\mathbf{d}) &a_0(\mathbf{d},\mathbf{d'}) \\ b(\mathbf{d'},\mathbf{c}) &b(\mathbf{d'},\mathbf{c'}) &b(\mathbf{d'},n_1) &b(\mathbf{d'},\mathbf{d}) &b(\mathbf{d'},\mathbf{d'}) \end{matrix}\right), \end{equation} \begin{equation}\label{3.494} U(\mathbf{c},\mathbf{c'},\mathbf{d},\mathbf{d'})= \left( \begin{matrix} b(\mathbf{c},\mathbf{c}) &b(\mathbf{c},\mathbf{c'}) &b(\mathbf{c},n_1) &b(\mathbf{c},\mathbf{d}) &b(\mathbf{c},\mathbf{d'}) \\ a_0(\mathbf{c'},\mathbf{c}) &a_0(\mathbf{c'},\mathbf{c'}) &\tilde{a}_0(\mathbf{c'},n_1) &a_0(\mathbf{c'},\mathbf{d}) &a_0(\mathbf{c'},\mathbf{d'}) \\ a_0(n_1+1,\mathbf{c}) &a_0(n_1+1,\mathbf{c'}) &\tilde{a}_0(n_1+1,n_1) &a_0(n_1+1,\mathbf{d}) &a_0(n_1+1,\mathbf{d'}) \\ a_0(\mathbf{d},\mathbf{c}) &a_0(\mathbf{d},\mathbf{c'}) &\tilde{a}_0(\mathbf{d},n_1) &a_0(\mathbf{d},\mathbf{d}) &a_0(\mathbf{d},\mathbf{d'}) \\ b(\mathbf{d'},\mathbf{c}) &b(\mathbf{d'},\mathbf{c'}) &b(\mathbf{d'},n_1) &b(\mathbf{d'},\mathbf{d}) &b(\mathbf{d'},\mathbf{d'}) \end{matrix}\right), \end{equation} and \begin{equation}\label{3.50} M_h(\mathbf{c},\mathbf{c'},\mathbf{d},\mathbf{d'})= \left( \begin{matrix} b(\mathbf{c},\mathbf{c}) &b(\mathbf{c},\mathbf{c'}) &b(\mathbf{c},\mathbf{d}) &b(\mathbf{c},\mathbf{d'}) \\ A_h(\mathbf{c'},\mathbf{c}) &A_h(\mathbf{c'},\mathbf{c'}) &A_h(\mathbf{c'},\mathbf{d}) &A_h(\mathbf{c'},\mathbf{d'}) \\ A_h(\mathbf{d},\mathbf{c}) &A_h(\mathbf{d},\mathbf{c'}) &A_h(\mathbf{d},\mathbf{d}) &A_h(\mathbf{d},\mathbf{d'}) \\ b(\mathbf{d'},\mathbf{c}) &b(\mathbf{d'},\mathbf{c'}) &b(\mathbf{d'},\mathbf{d}) &b(\mathbf{d'},\mathbf{d'}) \end{matrix}\right). \end{equation} Define \begin{equation}\label{3.501} Q'_1(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n-1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t}} \det V(\mathbf{c},\mathbf{c'},\mathbf{d},\mathbf{d'}) \end{equation} and \begin{equation}\label{3.502} Q'_2(0)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n-1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r-1}\\\mathbf{d'}\in [n_1+2,n_2]_<^t}} \det U(\mathbf{c},\mathbf{c'},\mathbf{d},\mathbf{d'}). \end{equation} If $A$ is an $n\times n$-matrix and $1\le i,j\le n$, we let $A(\{i\}',\{j\}')$ denote the matrix $A$ with row $i$ and column $j$ removed.
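For instance, if $A=(a_{ij})$ is a $3\times 3$ matrix, then
\begin{equation*}
A(\{2\}',\{1\}')=\begin{pmatrix} a_{12} & a_{13}\\ a_{32} & a_{33}\end{pmatrix}.
\end{equation*}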
Set (recall $L=2r+s+t$) \begin{equation}\label{3.503} Q'_3(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r}\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{j=1}^L(-1)^{r+s+j}a_3^\ast(f_j)\det M_0(\{r+s\}',\{j\}'), \end{equation} where we use the notation \begin{equation}\label{3.504} f_j=\begin{cases} c_j & \text{if }1\le j\le r \\ c'_{j-r} & \text{if } r<j\le r+s \\ d_{j-r-s} & \text{if } r+s<j\le 2r+s \\ d'_{j-2r-s} & \text{if } 2r+s<j\le L \end{cases}. \end{equation} (For example, if $r=s=t=1$, then $L=4$ and $(f_1,f_2,f_3,f_4)=(c_1,c'_1,d_1,d'_1)$.) Also, set \begin{equation}\label{3.505} Q'_4(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r}\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\sum\limits_{j=1}^L(-1)^{i+j+1}a_2^\ast(f_i)a_3^\ast(f_j)\det M_0(\{i\}',\{j\}'). \end{equation} Similarly, we define \begin{equation}\label{3.506} Q'_5(0)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}}\sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^{r}\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}} \sum\limits_{j=1}^L(-1)^{r+s+j}a_3^\ast(f_j)\det M_0(\{r+s\}',\{j\}'), \end{equation} and \begin{equation}\label{3.506'} Q'_6(0)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^{r}\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\sum\limits_{j=1}^L(-1)^{i+j}a_2^\ast(f_i)a_3^\ast(f_j)\det M_0(\{i\}',\{j\}'). \end{equation} Note that the expressions $Q_1'(0)$ and $Q_2'(0)$ have a very similar structure, and this is true also for $Q_3'(0)$ and $Q_5'(0)$, as well as for $Q_4'(0)$ and $Q_6'(0)$. In section \ref{sect4} we will compute the asymptotics of $Q'_k(0)$, $1\le k\le 6$, which is all we need because of the next lemma and proposition \ref{prop2.4}. \begin{lemma}\label{lem3.4} We have the formula \begin{equation}\label{3.507} \left.\frac{\partial}{\partial h}\right|_{h=0} Q(h)=\sum\limits_{k=1}^6 Q'_k(0), \end{equation} with $Q'_k(0)$ as defined above. \end{lemma} \begin{proof} From (\ref{3.31}) we see that $B(\ell,k)=b(\ell,k)$ unless $k=\ell=n_1+1$ in which case $B(n_1+1,n_1+1)=-1+b(n_1+1,n_1+1)$. This case can occur in the formula (\ref{3.11}) for $Q(h)$ if and only if $d'_1=n_1+1$, which requires $t\ge 1$. Let $E_{2r+s+1}$ be the matrix which is zero everywhere except at position $(2r+s+1,2r+s+1)$ where it is $=1$. In the sum in (\ref{3.11}) we can assume that $d_1\neq d'_1$ since otherwise the determinant is $=0$.
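We will also use the following consequence of the linearity of the determinant in a single row: for any square matrix $M$ and any diagonal position $p$,
\begin{equation*}
\det\left(-E_p+M\right)=\det M-\det M(\{p\}',\{p\}'),
\end{equation*}
where $E_p$ is zero everywhere except for a $1$ at position $(p,p)$; this is what will produce the splitting of $q_1(h)$ in (\ref{3.55}) below.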
Hence, by (\ref{3.11}) we can write \begin{equation}\label{3.51} Q(h)=q_0(h)+q_1(h)+q_2(h), \end{equation} where \begin{equation}\label{3.52} q_0(h)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1\neq d'_1}} \det M_h, \end{equation} where $M_h$ is given by (\ref{3.50}), \begin{equation}\label{3.53} q_1(h)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \det\left(-E_{2r+s+1}+M_h\right), \end{equation} and \begin{equation}\label{3.54} q_2(h)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}} \det M_h. \end{equation} We see from (\ref{3.53}) that \begin{align}\label{3.55} q_1(h)&=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \det M_h \\ &-\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n-1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t}} \det M_h:=q_3(h)-q_4(h). \notag \end{align} Note that \begin{equation} q_0(h)-q_4(h)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^{\Delta n}}}\det M_h=0 \notag \end{equation} since $[n_1+2,n_2]_<^{\Delta n}=\emptyset$ (the interval $[n_1+2,n_2]$ contains only $\Delta n-1$ integers). Thus, by (\ref{3.51}), \begin{equation}\label{3.56} Q(h)=q_2(h)+q_3(h). \end{equation} If $A$ is a matrix and $\mathbf{v}$ a row vector, $(A|\mathbf{v})_{\text{row\,}(i)}$ will denote the matrix obtained by replacing row $i$ in $A$ with $\mathbf{v}$. Similarly, if $\mathbf{v}$ is a column vector, $(A|\mathbf{v})_{\text{col\,}(j)}$ will denote the matrix obtained by replacing column $j$ in $A$ with $\mathbf{v}$. Let \begin{equation}\label{3.57} \mathbf{v}_i=\left(\begin{matrix} A_0^\ast(f_i,\mathbf{c}) &A_0^\ast(f_i,\mathbf{c'}) &A_0^\ast(f_i,\mathbf{d}) &A_0^\ast(f_i,\mathbf{d'}) \end{matrix}\right), \end{equation} where $A_0^\ast$ is given by (\ref{3.29}); recall (\ref{3.32}). We see then that \begin{align}\label{3.58} q_3'(0)&=\left.\frac{\partial}{\partial h}\right|_{h=0} q_3(h) \\ &=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\det (M_0|\mathbf{v}_i)_{\text{row\,}(i)}. \notag \end{align} We need $r+s\ge 1$ to get a non-zero contribution when taking the $h$-derivative.
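Behind (\ref{3.58}), and (\ref{3.60}) below, lies the row-by-row differentiation of a determinant,
\begin{equation*}
\frac{\partial}{\partial h}\det M_h=\sum_{i=1}^{L}\det\left(M_h\Big|\frac{\partial}{\partial h}[M_h]_i\right)_{\text{row\,}(i)},
\end{equation*}
where $[M_h]_i$ denotes row $i$ of $M_h$. Only the $A_h$-rows, $r+1\le i\le 2r+s$, depend on $h$, and by (\ref{3.32}) the $h$-derivative of such a row at $h=0$ is precisely $\mathbf{v}_i$; if $r+s=0$ there is no such row and the derivative vanishes.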
Similarly, \begin{align}\label{3.60} q_2'(0)&=\left.\frac{\partial}{\partial h}\right|_{h=0} q_2(h) \\ &=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\det (M_0|\mathbf{v}_i)_{\text{row\,}(i)}. \notag \end{align} Expand the determinant in (\ref{3.58}) along row $i$. This gives \begin{align}\label{3.61} q_3'(0)=&\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\sum\limits_{j=1}^{L}(-1)^{i+j+1} \\ &\times\left(\delta_{f_i,n_1}-a_2^\ast(f_i)\right)\left(\delta_{f_j,n_1+1}-a_3^\ast(f_j)\right)\det M_0(\{i\}',\{j\}'), \notag \end{align} where we have used (\ref{3.29c}) and (\ref{3.57}). Now, $$ \left(\delta_{f_i,n_1}-a_2^\ast(f_i)\right)\left(\delta_{f_j,n_1+1}-a_3^\ast(f_j)\right)= \delta_{f_i,n_1}\delta_{f_j,n_1+1}-\delta_{f_i,n_1}a_3^\ast(f_j)-\delta_{f_j,n_1+1}a_2^\ast(f_i)+a_2^\ast(f_i)a_3^\ast(f_j) $$ leads to a corresponding decomposition \begin{equation}\label{3.62} q_3'(0)=q_{3,1}'(0)+q_{3,2}'(0)+q_{3,3}'(0)+q_{3,4}'(0). \end{equation} We will now show that $Q_1'(0)=q_{3,1}'(0)+q_{3,3}'(0)$, $Q_3'(0)=q_{3,2}'(0)$ and $Q_4'(0)=q_{3,4}'(0)$. A similar argument applied to (\ref{3.60}) will give $q_2'(0)=Q_2'(0)+Q_5'(0)+Q_6'(0)$. The term $\delta_{f_i,n_1}\delta_{f_j,n_1+1}$ requires $j=2r+s+1$ and $f_{2r+s+1}=d'_1=n_1+1$, and $i=r+s$ and $f_{r+s}=c'_{s}=n_1$. Hence, $s\ge 1$, and since $(-1)^{i+j+1}=(-1)^{(r+s)+(2r+s+1)+1}=(-1)^r$, we obtain \begin{equation}\label{3.63} q_{3,1}'(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}}(-1)^r\det M_0(\{r+s\}',\{2r+s+1\}'). \end{equation} The term $-\delta_{f_i,n_1}a_3^\ast(f_j)$ requires $i=r+s$ and $c'_s=n_1$ (so $s\ge 1$), and gives \begin{equation}\label{3.64} q_{3,2}'(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{j=1}^{L}(-1)^{r+s+j}a_3^\ast(f_j)\det M_0(\{r+s\}',\{j\}'), \end{equation} which is equal to $Q_3'(0)$ as defined by (\ref{3.503}). The term $-\delta_{f_j,n_1+1}a_2^\ast(f_i)$ requires $j=2r+s+1$, which gives \begin{equation} q_{3,3}'(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}(-1)^{i+2r+s+1}a_2^\ast(f_i)\det M_0(\{i\}',\{2r+s+1\}').
\notag\end{equation} If we write \begin{equation}\label{3.65} \mathbf{a}_2^\ast=\left(\begin{matrix} 0 \\ a_2^\ast(\mathbf{c'}) \\ a_2^\ast(\mathbf{d}) \\ 0 \end{matrix}\right), \end{equation} where the blocks have length $r,s,r$ and $t$ respectively, we see that \begin{equation}\label{3.67} q_{3,3}'(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \det (M_0|\mathbf{a}_2^\ast)_{\text{col\,}(2r+s+1)}. \end{equation} Finally, we get \begin{equation} q_{3,4}'(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}} \sum\limits_{i=r+1}^{2r+s}\sum\limits_{j=1}^{L}(-1)^{i+j+1}a_2^\ast(f_i)a_3^\ast(f_j)\det M_0(\{i\}',\{j\}'), \notag\end{equation} which is $Q_4'(0)$. We can now split (\ref{3.60}) in the same way, \begin{equation}\label{3.69} q_2'(0)=q_{2,1}'(0)+q_{2,2}'(0)+q_{2,3}'(0)+q_{2,4}'(0), \end{equation} where \begin{equation}\label{3.70} q_{2,1}'(0)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}} \sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}}\det M_0(\{r+s\}',\{r+s+1\}'), \end{equation} $q_{2,2}'(0)=Q_5'(0)$, with $Q_5'(0)$ given by (\ref{3.506}), \begin{equation}\label{3.72} q_{2,3}'(0)=\sum\limits_{r=1}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+1,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t\\d_1=n_1+1}} \det (M_0|\mathbf{a}_2^\ast)_{\text{col\,}(r+s+1)}, \end{equation} and, with $Q_6'(0)$ given by (\ref{3.506'}), $q_{2,4}'(0)=Q_6'(0)$. From (\ref{3.56}), (\ref{3.62}) and (\ref{3.69}) we see that \begin{equation}\label{3.74} \left.\frac{\partial}{\partial h}\right|_{h=0}Q(h) = q_{3,1}'(0)+q_{3,3}'(0)+q_{2,1}'(0)+q_{2,3}'(0)+\sum\limits_{k=3}^6Q_k'(0). \end{equation} In order to prove the lemma it remains to show that \begin{equation}\label{3.75} Q_1'(0)=q_{3,1}'(0)+q_{3,3}'(0)\,\,,\,\,Q_2'(0)=q_{2,1}'(0)+q_{2,3}'(0). \end{equation} In the expression (\ref{3.63}) for $q_{3,1}'(0)$ we move row $2r+s+1$ to row $r+s+1$. This gives a sign change $(-1)^r$. We then shift the $s$- and $t$-summations by 1, using the fact that $d'_1=n_1+1$ and $c'_s=n_1$ are fixed.
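(For the $s$-shift, note that $c'_s=n_1$ is fixed, so $(c'_1,\dots,c'_{s-1})$ runs over $[1,n_1-1]_<^{s-1}$ and we may rename $s-1$ as $s$; similarly, $d'_1=n_1+1$ is fixed, so $(d'_2,\dots,d'_t)$ runs over $[n_1+2,n_2]_<^{t-1}$ and we rename $t-1$ as $t$.)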
This gives \begin{align}\label{3.81} q_{3,1}'(0)=&\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1-1}\sum\limits_{t=0}^{\Delta n-1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1-1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t}} \\ &\det\left( \begin{matrix} b(\mathbf{c},\mathbf{c}) &b(\mathbf{c},\mathbf{c'}) &b(\mathbf{c},n_1) &b(\mathbf{c},\mathbf{d}) &b(\mathbf{c},\mathbf{d'}) \\ a_0(\mathbf{c'},\mathbf{c}) &a_0(\mathbf{c'},\mathbf{c'}) &a_0(\mathbf{c'},n_1) &a_0(\mathbf{c'},\mathbf{d}) &a_0(\mathbf{c'},\mathbf{d'}) \\ b(n_1+1,\mathbf{c}) &b(n_1+1,\mathbf{c'}) &b(n_1+1,n_1) &b(n_1+1,\mathbf{d}) &b(n_1+1,\mathbf{d'}) \\ a_0(\mathbf{d},\mathbf{c}) &a_0(\mathbf{d},\mathbf{c'}) &a_0(\mathbf{d},n_1) &a_0(\mathbf{d},\mathbf{d}) &a_0(\mathbf{d},\mathbf{d'}) \\ b(\mathbf{d'},\mathbf{c}) &b(\mathbf{d'},\mathbf{c'}) &b(\mathbf{d'},n_1) &b(\mathbf{d'},\mathbf{d}) &b(\mathbf{d'},\mathbf{d'}) \end{matrix}\right). \notag \end{align} In the expression (\ref{3.67}) for $q_{3,3}'(0)$ we move row $2r+s+1$ to row $r+s+1$ and column $2r+s+1$ to column $r+s+1$. This gives no net sign change. Note that if $r+s=0$ then $\mathbf{a}_2^\ast=0$ so we can remove the condition $r+s\ge 1$ in the summation in (\ref{3.67}). Also, we shift the $t$-summation by 1. We obtain \begin{align}\label{3.82} q_{3,3}'(0)=&\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n-1} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^r\\\mathbf{d'}\in [n_1+2,n_2]_<^t}} \\ &\det\left( \begin{matrix} b(\mathbf{c},\mathbf{c}) &b(\mathbf{c},\mathbf{c'}) &0 &b(\mathbf{c},\mathbf{d}) &b(\mathbf{c},\mathbf{d'}) \\ a_0(\mathbf{c'},\mathbf{c}) &a_0(\mathbf{c'},\mathbf{c'}) &a_2^\ast(\mathbf{c'}) &a_0(\mathbf{c'},\mathbf{d}) &a_0(\mathbf{c'},\mathbf{d'}) \\ b(n_1+1,\mathbf{c}) &b(n_1+1,\mathbf{c'}) &0 &b(n_1+1,\mathbf{d}) &b(n_1+1,\mathbf{d'}) \\ a_0(\mathbf{d},\mathbf{c}) &a_0(\mathbf{d},\mathbf{c'}) &a_2^\ast(\mathbf{d}) &a_0(\mathbf{d},\mathbf{d}) &a_0(\mathbf{d},\mathbf{d'}) \\ b(\mathbf{d'},\mathbf{c}) &b(\mathbf{d'},\mathbf{c'}) &0 &b(\mathbf{d'},\mathbf{d}) &b(\mathbf{d'},\mathbf{d'}) \end{matrix}\right). \notag \end{align} Note that the $\mathbf{c'}$-summation in (\ref{3.81}) can be extended to $[1,n_1]_<^s$, since if $c'_s=n_1$, then two columns in the determinant in (\ref{3.81}) are equal. In fact, if $s\ge 1$ so that the sum is non-trivial, then extending the summation to $\mathbf{c'}\in[1,n_1]_<^s$ means that we also have the case $c_s'=n_1$, and in this case the columns $r+s$ and $r+s+1$ are equal. Also, we can extend the $s$-summation to $s=n_1$ since in that case we must have $c'_s=n_1$. We can thus add the two formulas (\ref{3.81}) and (\ref{3.82}) and this gives the first formula in (\ref{3.75}) with $\tilde{a}_0(\ell,n_1)=a_0(\ell,n_1)+a_2^\ast(\ell)$, which agrees with (\ref{3.492}). The proof of the second formula in (\ref{3.75}) is analogous. \end{proof} \section{Asymptotics and proof of the main theorem}\label{sect4} We begin by recalling some notation from section \ref{sect1}, (\ref{1.8}). Let $\lambda_i=\eta_i-\nu_i^2$, $i=1,2$ and write $$ \Delta\lambda=\lambda_2\left(\frac{t_2}{\Delta t}\right)^{1/3}-\lambda_1\left(\frac{t_1}{\Delta t}\right)^{1/3}. $$ Then, \begin{equation}\label{4.1} \Delta\eta=\Delta\lambda+\Delta\nu^2, \end{equation} where $\Delta\nu$ is given by (\ref{1.8}).
We will write \begin{equation}\label{4.2} N_1=t_1M\,\,,\,\, N_2=\Delta t M, \end{equation} where we will let $M\to\infty$ as in theorem \ref{thm1.1}. The scalings in (\ref{scaling}) and in the arguments $\ell, k$ can then be written \begin{align}\label{4.3} n_1&=N_1+\nu_1N_1^{2/3}\,,\,n_2=t_2M+\nu_2(t_2M)^{2/3}\,,\,\Delta n=n_2-n_1=N_2+\Delta\nu N_2^{2/3} \\ \mu_1&=N_1-\nu_1N_1^{2/3}\,,\,\mu_2=t_2M-\nu_2(t_2M)^{2/3}\,,\,\Delta \mu=\mu_2-\mu_1=N_2-\Delta\nu N_2^{2/3} \notag\\ \xi_1&=2N_1+\lambda_1N_1^{1/3}\,,\,\xi_2=2t_2M+\lambda_2(t_2M)^{1/3}\,,\,\Delta \xi=\xi_2-\xi_1=2N_2+\Delta\lambda N_2^{1/3} \notag\\ \ell&=n_1+1+xN_1^{1/3}\,,\,k=n_1+yN_1^{1/3}, \notag \end{align} where we have ignored integer parts; note that, by the definition of $\Delta\lambda$, $\Delta\lambda N_2^{1/3}=\lambda_2(t_2M)^{1/3}-\lambda_1N_1^{1/3}$, which gives the expression for $\Delta\xi$. We will now state two lemmas that we will need in order to prove theorem \ref{thm1.1} from proposition \ref{prop3.2} and lemma \ref{lem3.4}. The proofs of the lemmas are postponed to section \ref{sect6}. \begin{lemma}\label{lem4.1} Recall (\ref{3.24}) to (\ref{3.27}). Under the scalings (\ref{4.3}) with $N_1,N_2$ given by (\ref{4.2}) we have the following limits, uniformly for $\nu_i, \eta_i,x,y$ in compact sets, \begin{equation}\label{4.4} \lim_{M\to\infty} N_1^{1/3}a_{0,1}(\ell,k)=\phi_1(x,y), \end{equation} \begin{equation}\label{4.5} \lim_{M\to\infty} N_1^{1/3}b_1(\ell,k)=\psi_1(x,y), \end{equation} \begin{equation}\label{4.6} \lim_{M\to\infty} N_1^{1/3}c_2(\ell,k)=\phi_2(x,y), \end{equation} \begin{equation}\label{4.7} \lim_{M\to\infty} N_1^{1/3}c_3(\ell,k)=\phi_3(x,y), \end{equation} where $\phi_i,\psi_1$ are given by (\ref{1.10}) to (\ref{1.13}). \end{lemma} We will also need some estimates in order to control the convergence of the whole expansion. \begin{lemma}\label{lem4.2} Assume that we have the scalings (\ref{4.3}) with $N_1,N_2$ given by (\ref{4.2}). There are constants $c,C>0$, which depend on $t_i, \nu_i, \eta_i$, such that for all $M\ge 1$, \begin{equation}\label{4.8} \left|N_1^{1/3}a_{0,1}(\ell,k)\right|\le Ce^{-c(x_+^{3/2}+(-y)_+^{3/2})+C(y_++(-x)_+)}, \end{equation} \begin{equation}\label{4.9} \left|N_1^{1/3}b_1(\ell,k)\right|\le Ce^{-c(x_+^{3/2}+(-y)_+^{3/2})+C(y_++(-x)_+)}, \end{equation} \begin{equation}\label{4.10} \left|N_1^{1/3}c_2(\ell,k)\right|\le Ce^{-c(x_+^{3/2}+y_+^{3/2})+C((-y)_+ +(-x)_+)}, \end{equation} \begin{equation}\label{4.11} \left|N_1^{1/3}c_3(\ell,k)\right|\le Ce^{-c((-x)_+^{3/2}+(-y)_+^{3/2})+C(y_++x_+)}, \end{equation} for all $1\le\ell,k\le n_2$, where $a_+=\max(0,a)$. \end{lemma} As an immediate corollary of this lemma and the definitions (\ref{3.28}), (\ref{3.29}) and (\ref{3.492}), we obtain \begin{corollary}\label{cor4.3} Assume that we have the scalings (\ref{4.3}) with $N_1,N_2$ given by (\ref{4.2}). There are constants $c,C>0$, which depend on $t_i, \nu_i, \eta_i$, such that for all $M\ge 1$, and all $1\le\ell,k\le n_2$, \begin{subequations}\label{4.12} \begin{align} \left|N_1^{1/3}a_{0}(\ell,k)\right|&\le Ce^{-c(x_+^{3/2}+(-y)_+^{3/2})+C(y_++(-x)_+)}, \\ \left|N_1^{1/3}b(\ell,k)\right|&\le Ce^{-c(x_+^{3/2}+(-y)_+^{3/2})+C(y_++(-x)_+)}, \\ \left|N_1^{1/3}\tilde{a}_{0}(\ell,n_1)\right|&\le Ce^{-cx_+^{3/2}+C(-x)_+}, \\ \left|N_1^{1/3}a_2^\ast(\ell)\right|&\le Ce^{-cx_+^{3/2}+C(-x)_+}, \\ \left|N_1^{1/3}a_3^\ast(k)\right|&\le Ce^{-c(-y)_+^{3/2}+Cy_+}. \end{align} \end{subequations} \end{corollary} Recall the formula (\ref{3.507}) in lemma \ref{lem3.4}. We want to control the terms $Q_k'(0)$ asymptotically as $M\to\infty$. \begin{lemma}\label{lem4.4} We have the following limits.
\begin{equation}\label{4.14} \lim_{M\to\infty} N_1^{1/3}Q_k'(0)=0 \end{equation} for $3\le k\le 6$, \begin{align}\label{4.16} &\Psi^{(1)}(\eta_1,\eta_2):=\lim_{M\to\infty} N_1^{1/3}Q_1'(0) \\ &=\sum\limits_{r,s,t=0}^\infty\frac 1{(r!)^2s!t!}\int\limits_{(-\infty,0]^r}d^rx\int\limits_{(-\infty,0]^s}d^sx' \int\limits_{[0,\infty)^r}d^ry\int\limits_{[0,\infty)^t}d^ty' W_{r,s,r,t}^{(1)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'}), \notag \end{align} where $W_{r,s,r,t}^{(1)}$ is given by (\ref{1.16}), and \begin{align}\label{4.17} &\Psi^{(2)}(\eta_1,\eta_2):=\lim_{M\to\infty} N_1^{1/3}Q_2'(0) \\ &=\sum\limits_{r=1,s,t=0}^\infty\frac 1{r!s!(r-1)!t!}\int\limits_{(-\infty,0]^r}d^rx\int\limits_{(-\infty,0]^s}d^sx' \int\limits_{[0,\infty)^{r-1}}d^{r-1}y\int\limits_{[0,\infty)^t}d^ty' W_{r,s,r-1,t}^{(2)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'}), \notag \end{align} where $W_{r,s,r-1,t}^{(2)}$ is given by (\ref{1.17}). \end{lemma} \begin{proof} Consider $M_0$ given by (\ref{3.50}) with $h=0$. Recall (\ref{3.30}). Let $[M_0]_i$ denote the $i$th row in $M_0$, and $[M_0]^j$ the $j$th column. We will use the following scalings \begin{align}\label{4.18} c_i&=n_1+x_iN_1^{1/3}\,\,,\,\, c'_i=n_1+x'_iN_1^{1/3} \\ d_i&=n_1+1+y_iN_1^{1/3}\,\,,\,\, d'_i=n_1+1+y'_iN_1^{1/3} \notag \end{align} so that $x_i\le 0$, $x'_i\le 0$, $y_i\ge 0$, and $y'_i\ge 0$. Set $$ Y_{\max}=\max\limits_{1\le j\le r} y_j+\max\limits_{1\le j\le t} y'_j\,,\, X_{\max}=\max\limits_{1\le j\le r} (-x_j)+\max\limits_{1\le j\le s} (-x'_j). $$ It follows from corollary \ref{cor4.3} that under the scaling (\ref{4.18}) there exist constants $c,C>0$ such that \begin{equation}\label{4.19} \begin{cases} ||N_1^{1/3}[M_0]_i||_2\le CL^{1/2}e^{C(Y_{\max}-x_i)} & \text{if } 1\le i\le r \\ ||N_1^{1/3}[M_0]_i||_2\le CL^{1/2}e^{C(Y_{\max}-x'_{i-r})} & \text{if } r< i\le r+s\\ ||N_1^{1/3}[M_0]_i||_2\le CL^{1/2}e^{-c y_{i-(r+s)}^{3/2}+CY_{\max}} & \text{if } r+s< i\le 2r+s \\ ||N_1^{1/3}[M_0]_i||_2\le CL^{1/2}e^{-c {y'}_{i-(2r+s)}^{3/2}+CY_{\max}} & \text{if } 2r+s< i\le L \end{cases}, \end{equation} where $L=2r+s+t$, and \begin{equation}\label{4.20} \begin{cases} ||N_1^{1/3}[M_0]^j||_2\le CL^{1/2}e^{-c(-x_j)^{3/2}+CX_{\max}} & \text{if } 1\le j\le r \\ ||N_1^{1/3}[M_0]^j||_2\le CL^{1/2}e^{-c(-x'_{j-r})^{3/2}+CX_{\max}} & \text{if } r< j\le r+s\\ ||N_1^{1/3}[M_0]^j||_2\le CL^{1/2}e^{C(X_{\max}-y_{j-(r+s)})} & \text{if } r+s< j\le 2r+s \\ ||N_1^{1/3}[M_0]^j||_2\le CL^{1/2}e^{C(X_{\max}-{y'}_{j-(2r+s)})} & \text{if } 2r+s< j\le L \end{cases}. \end{equation} From Hadamard's inequality we get the estimates $$ \left|\det \left(N_1^{1/3}M_0\right)\right|\le\prod\limits_{i=1}^L||N_1^{1/3}[M_0]_i||_2, $$ and $$ \left|\det \left(N_1^{1/3}M_0\right)\right|\le\prod\limits_{j=1}^L||N_1^{1/3}[M_0]^j||_2. $$ Multiplying these two bounds and taking the square root yields \begin{equation}\label{4.21} \left|\det \left(N_1^{1/3}M_0\right)\right|\le\prod\limits_{i=1}^L||N_1^{1/3}[M_0]_i||_2^{1/2}\prod\limits_{j=1}^L||N_1^{1/3}[M_0]^j||_2^{1/2}. \end{equation} If we use the estimates (\ref{4.19}) and (\ref{4.20}) in (\ref{4.21}) (the row bounds decay in the $y$-variables, the column bounds in the $x$-variables, and the geometric mean thus decays in all of them) we see that there are constants $c,C>0$ such that \begin{equation}\label{4.22} \left|\det \left(N_1^{1/3}M_0\right)\right|\le C^LL^{L/2}\prod\limits_{j=1}^re^{-c(-x_j)^{3/2}}\prod\limits_{j=1}^se^{-c(-x'_j)^{3/2}} \prod\limits_{j=1}^re^{-cy_j^{3/2}}\prod\limits_{j=1}^te^{-c{y'}_j^{3/2}}.
\end{equation} Here we have also used the fact that given a constant $c>0$, there is a constant $C$ so that \begin{equation*} Y_{\max}\le C+\frac c2 \sum\limits_{j=1}^r y_j^{3/2}+\frac c2 \sum\limits_{j=1}^ty_j'^{3/2}, \end{equation*} and an analogous estimate for $X_{\max}$. Consider now the expression for $Q_3'(0)$ in (\ref{3.503}). If we use the estimate $$ \left|N_1^{1/3}a_3^\ast(n_1+yN_1^{1/3})\right|\le Ce^{-c(-y)_+^{3/2}+Cy_+} $$ from (\ref{4.12}) and the same estimates and arguments as above we see that \begin{align}\label{4.23} &\left|N_1^{L/3}\sum\limits_{j=1}^L(-1)^{r+s+j}a_3^\ast(f_j)\det M_0(\{r+s\}',\{j\}')\right| \\ &\le C^LL^{L/2}\prod\limits_{j=1}^re^{-c(-x_j)^{3/2}}\prod\limits_{j=1}^se^{-c(-x'_j)^{3/2}} \prod\limits_{j=1}^re^{-cy_j^{3/2}}\prod\limits_{j=1}^te^{-c{y'}_j^{3/2}}. \notag \end{align} Note that in (\ref{3.503}), $y_1'=0$ and $x'_s=0$, so if we write \begin{align} N_1^{1/3}Q'_3(0)&=\frac 1{N_1^{1/3}}\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=1}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s\\c'_s=n_1}}\frac 1{N_1^{(r+s-1)/3}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r}\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}}\frac 1{N_1^{(r+t-1)/3}} \notag\\ &\times N_1^{L/3}\sum\limits_{j=1}^L(-1)^{r+s+j}a_3^\ast(f_j)\det M_0(\{r+s\}',\{j\}'), \notag \end{align} we see that we can control the convergence of the Riemann sum using (\ref{4.23}) (the sums run over ordered tuples, which is why no factorials appear at this stage), but since we have the factor $1/N_1^{1/3}$ in front of the whole expression we see that it $\to 0$ as $M\to\infty$. From (\ref{3.505}) we can write \begin{align} N_1^{1/3}Q'_4(0)&=\frac 1{N_1^{1/3}}\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{\substack{s=0\\r+s\ge 1}}^{n_1}\sum\limits_{t=1}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\frac 1{N_1^{(r+s)/3}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r}\\\mathbf{d'}\in [n_1+1,n_2]_<^t\\d'_1=n_1+1}}\frac 1{N_1^{(r+t-1)/3}} \notag\\ &\times N_1^{(L+1)/3}\sum\limits_{i=r+1}^{2r+s}\sum\limits_{j=1}^L(-1)^{i+j+1}a_2^\ast(f_i)a_3^\ast(f_j)\det M_0(\{i\}',\{j\}'). \notag \end{align} Using the estimates of $a_2^\ast$ and $a_3^\ast$ from corollary \ref{cor4.3} it follows that we can prove an estimate analogous to (\ref{4.23}) and again we see that $N_1^{1/3}Q'_4(0)\to 0$ as $M\to\infty$. This proves (\ref{4.14}) for $k=3,4$. The proof for $k=5,6$ is analogous. From the estimates in corollary \ref{cor4.3} we see that in analogy with the proof of (\ref{4.22}) we can prove \begin{equation}\label{4.24} \left|\det \left(N_1^{1/3}V\right)\right|\le C^{L+1}(L+1)^{(L+1)/2}\prod\limits_{j=1}^re^{-c(-x_j)^{3/2}}\prod\limits_{j=1}^se^{-c(-x'_j)^{3/2}} \prod\limits_{j=1}^re^{-cy_j^{3/2}}\prod\limits_{j=1}^te^{-c{y'}_j^{3/2}}, \end{equation} where $V$ is given by (\ref{3.493}). From (\ref{3.501}) we can write \begin{equation}\label{4.25} N_1^{1/3}Q'_1(0)=\sum\limits_{r=0}^{\min(n_1,\Delta n)}\sum\limits_{s=0}^{n_1}\sum\limits_{t=0}^{\Delta n} \sum\limits_{\substack{\mathbf{c}\in [1,n_1]_<^r\\\mathbf{c'}\in [1,n_1]_<^s}}\frac 1{N_1^{(r+s)/3}} \sum\limits_{\substack{\mathbf{d}\in [n_1+2,n_2]_<^{r}\\\mathbf{d'}\in [n_1+2,n_2]_<^t}}\frac 1{N_1^{(r+t)/3}} \det \left(N_1^{1/3}V\right). \end{equation} It follows from lemma \ref{lem4.1}, (\ref{3.28}), (\ref{3.29}) and (\ref{3.492}) that $$ \lim_{M\to\infty} \det \left(N_1^{1/3}V\right) =W_{r,s,r,t}^{(1)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'}).
$$ From the estimate (\ref{4.24}) we see that we can take the limit in (\ref{4.25}) and obtain (\ref{4.16}); passing from sums over ordered tuples to integrals over products of half-lines produces the factors $1/(r!)^2s!t!$ in (\ref{4.16}). The proof of (\ref{4.17}) is completely analogous. \end{proof} We now have all the results that we need to prove theorem \ref{thm1.1}. \begin{proof} ({\it Proof of theorem \ref{thm1.1}}) Recall Proposition \ref{prop2.4}. In the scaling (\ref{4.3}) we see that \begin{equation}\label{4.26} \frac{\partial}{\partial\eta_1}\mathbb{P}\left[H(\mu_1,n_1)\le\xi_1,H(\mu_2,n_2)\le\xi_2\right]=\left.\frac{\partial}{\partial h}\right|_{h=0} N_1^{1/3}Q(h). \end{equation} From lemma \ref{lem3.4} and lemma \ref{lem4.4} we see that \begin{equation}\label{4.27} \lim_{M\to\infty}\left.\frac{\partial}{\partial h}\right|_{h=0}N_1^{1/3}Q(h)=\Psi^{(1)}(\eta_1,\eta_2)+\Psi^{(2)}(\eta_1,\eta_2):=\Psi(\eta_1,\eta_2) \end{equation} uniformly for $\eta_1,\eta_2$ in a compact set. Let $$ X_M=\frac{H(\mu_1,n_1)-2t_1M}{(t_1M)^{1/3}}+\nu_1^2\,,\, Y_M=\frac{H(\mu_2,n_2)-2t_2M}{(t_2M)^{1/3}}+\nu_2^2. $$ Then (\ref{4.26}) can be written \begin{equation} \frac{\partial}{\partial\eta_1}\mathbb{P}\left[X_M\le\eta_1,Y_M\le\eta_2\right]=\left.\frac{\partial}{\partial h}\right|_{h=0} N_1^{1/3}Q(h) \notag \end{equation} and for fixed $\eta_1^\ast$ and $\tilde{\eta}_1$ we see that $$ \mathbb{P}\left[\eta_1^\ast<X_M\le\tilde{\eta}_1,Y_M\le\eta_2\right]=\int_{\eta_1^\ast}^{\tilde{\eta}_1}\left.\frac{\partial}{\partial h}\right|_{h=0} N_1^{1/3}Q(h)\,d\eta_1. $$ From (\ref{4.27}) it follows that \begin{equation}\label{4.28} \lim_{M\to\infty}\mathbb{P}\left[\eta_1^\ast<X_M\le\tilde{\eta}_1,Y_M\le\eta_2\right]=\int_{\eta_1^\ast}^{\tilde{\eta}_1}\Psi(\eta_1,\eta_2)\,d\eta_1. \end{equation} Now, \begin{align}\label{4.29} \mathbb{P}\left[\eta_1^\ast<X_M\le\tilde{\eta}_1,Y_M\le\eta_2\right]& \le\mathbb{P}\left[\eta_1^\ast<X_M,Y_M\le\eta_2\right] \\ &\le\mathbb{P}\left[\eta_1^\ast<X_M\le\tilde{\eta}_1,Y_M\le\eta_2\right]+\mathbb{P}\left[X_M>\tilde{\eta}_1\right]. \notag \end{align} From (\ref{1.4}), (\ref{4.28}) and (\ref{4.29}) we see that \begin{align}\label{4.30} &\int_{\eta_1^\ast}^{\tilde{\eta}_1}\Psi(\eta_1,\eta_2)\,d\eta_1\le \liminf\limits_{M\to\infty} \mathbb{P}\left[\eta_1^\ast<X_M,Y_M\le\eta_2\right] \\ &\le \limsup\limits_{M\to\infty}\mathbb{P}\left[\eta_1^\ast<X_M,Y_M\le\eta_2\right]\le \int_{\eta_1^\ast}^{\tilde{\eta}_1}\Psi(\eta_1,\eta_2)\,d\eta_1 +1-F_2(\tilde{\eta}_1). \notag \end{align} If we let $\tilde{\eta}_1\to\infty$ in (\ref{4.30}) we see that $$ \lim_{M\to\infty} \mathbb{P}\left[\eta_1^\ast<X_M,Y_M\le\eta_2\right]=\int_{\eta_1^\ast}^{\infty}\Psi(\eta_1,\eta_2)\,d\eta_1, $$ which is what we wanted to prove. Note that in order for this last argument to work we need an estimate of $\Psi(\eta_1,\eta_2)$ in terms of $\eta_1$. In fact, there are constants $c,C>0$ such that \begin{equation}\label{4.31} \left|\Psi(\eta_1,\eta_2)\right|\le Ce^{-c(\eta_1)_+^{3/2}}. \end{equation} We will only sketch the argument for (\ref{4.31}). Note that $\phi_1$, $\psi_1$ and $\phi_3$ all have a decay of the form $e^{-c(\eta_1)_+^{3/2}}$ in $\eta_1$ by known estimates for the Airy function. Hence, the difficulty lies in the presence of $\phi_2$. If $r\ge 1$, the first column in $W_{r,s,r,t}^{(1)}$ does not depend on $\phi_2$ (we can assume $x_1<0$) and hence the first column (in a Hadamard estimate) will give the right $\eta_1$-decay. If $r=0$, but $s\ge 1$, we can again consider the first column ($x'_1<0$), and get the right $\eta_1$-decay.
If $r=s=0$, $$ W_{0,0,0,t}^{(1)}(\mathbf{x},\mathbf{x'},\mathbf{y},\mathbf{y'})=\left| \begin{matrix} \psi(0,0) &\psi(0,\mathbf{y'})\\ \psi(\mathbf{y'},0) &\psi(\mathbf{y'},\mathbf{y'}) \end{matrix}\right| $$ and again the first column does not depend on $\phi_2$. The argument for $W_{r,s,r-1,t}^{(2)}$ is easier since we now always have $r\ge 1$. \end{proof} \section{Proof of the combinatorial identities}\label{sect5} In this section we will prove lemma 2.3. \begin{proof} Consider first the identity (\ref{2.9}). We can write \begin{equation*} \prod_{i<j}\left(\frac 1{w_{\sigma(i)}w_{\sigma(j)}}-\frac 1{w_{\sigma(i)}}\right)=\prod_{i<j}\frac 1{w_{\sigma(i)}w_{\sigma(j)}}(1-w_{\sigma(j)}). \end{equation*} Now, \begin{equation*} \prod_{i<j}\frac 1{w_{\sigma(i)}w_{\sigma(j)}}=\prod\limits_{i=1}^{n-1}\frac 1{w_{\sigma(i)}^{n-i}}\prod\limits_{j=2}^n\frac 1{w_{\sigma(j)}^{j-1}} =\prod\limits_{j=1}^{n}\frac 1{w_{\sigma(j)}^{n-1-j}}\prod\limits_{j=1}^n\frac 1{w_{\sigma(j)}^{j}} \end{equation*} and \begin{equation*} \prod_{i<j}(1-w_{\sigma(j)})=\prod\limits_{j=2}^n(1-w_{\sigma(j)})^{j-1}=\prod\limits_{j=1}^n\frac 1{1-w_{\sigma(j)}}\prod\limits_{j=1}^n(1-w_{\sigma(j)})^{j}. \end{equation*} Thus, we have the identity $$ \prod_{i<j}\left(\frac 1{w_{\sigma(i)}w_{\sigma(j)}}-\frac 1{w_{\sigma(i)}}\right)=\prod\limits_{j=1}^n \frac 1{(1-w_j)w_{\sigma(j)}^{n-1-j}}\left(\frac{1-w_{\sigma(j)}}{w_{\sigma(j)}}\right)^j. $$ Hence, the left side of (\ref{2.9}) can be written \begin{align} &\sum\limits_{\sigma\in S_n}\text{sgn\,}(\sigma)\prod_{i<j}\left(\frac 1{w_{\sigma(i)}w_{\sigma(j)}}-\frac 1{w_{\sigma(i)}}\right) \frac{\prod\limits_{j=1}^n(1-w_j)w_{\sigma(j)}^{n-1-j}} {(1-w_{\sigma(1)})\cdots(1-w_{\sigma(1)}\cdots w_{\sigma(n)})} \notag\\ &=\sum\limits_{\sigma\in S_n}\text{sgn\,}(\sigma)\prod_{i<j}\left(\frac 1{w_{\sigma(i)}w_{\sigma(j)}}-\frac 1{w_{\sigma(i)}}\right) \frac{\prod\limits_{j=1}^nw_j^{-2}(1-w_j)} {(\frac 1{w_{\sigma(1)}}-1)\cdots(\frac 1{w_{\sigma(1)}\cdots w_{\sigma(n)}}-1)}. \notag \end{align} By the identity (1.7) in \cite{TrWi} with $p=0,q=1$ the last expression equals $$ \prod\limits_{j=1}^nw_j^{-1}\left(\frac 1{w_j}-1\right)\frac 1{\frac 1{w_j}-1}\det\left(\frac 1{w_j^{i-1}}\right)=(-1)^{n(n-1)/2} \prod\limits_{j=1}^n\frac 1{w_j^n}\det\left(w_j^{i-1}\right), $$ where we also used (\ref{vandermondeidentity}). This proves (\ref{2.9}). We now turn to the proof of (\ref{2.10}). Denote the left side of (\ref{2.10}) by $\omega_n(z,w)$. We will use induction on $n$. It is easy to see that the identity is true for $n=1$. Fix $\sigma_1(n)=k$ and $\sigma_2(n)=\ell$. 
Then \begin{align}\label{5.1} &\sum\limits_{k,\ell=1}^n\sum\limits_{\substack{\sigma_1,\sigma_2\in S_n\\\sigma_1(n)=k,\sigma_2(n)=\ell}}\text{sgn\,}(\sigma_1)\text{sgn\,}(\sigma_2) \left(\frac{1-z_k}{z_k}\right)^n\left(\frac{1-w_\ell}{w_\ell}\right)^n \prod\limits_{j=1}^{n-1}\left(\frac{1-z_{\sigma_1(j)}}{z_{\sigma_1(j)}}\right)^j\left(\frac{w_{\sigma_2(j)}}{1-w_{\sigma_2(j)}}\right)^j \\ &\times\frac 1{1-\frac{z_1\cdots z_n}{w_1\cdots w_n}}\frac 1{\left(1-\frac{z_{\sigma_1(1)}}{w_{\sigma_2(1)}}\right) \left(1-\frac{z_{\sigma_1(1)}z_{\sigma_1(2)}}{w_{\sigma_2(1)}w_{\sigma_2(2)}}\right) \cdots\left(1-\frac{z_{\sigma_1(1)}\cdots z_{\sigma_1(n-1)}}{w_{\sigma_2(1)}\cdots w_{\sigma_2(n-1)}}\right)} \notag\\ &=\sum\limits_{k,\ell=1}^n\frac {(-1)^{n-k+n-\ell}}{1-\frac{z_1\cdots z_n}{w_1\cdots w_n}}\left(\frac{1-z_k}{z_k}\right)^n\left(\frac{w_\ell}{1-w_\ell}\right)^n \omega_{n-1}(z_1,\dots,\hat{z}_k,\dots, z_n,w_1,\dots,\hat{w}_\ell,\dots, w_n), \notag \end{align} where $\hat{z}_k$ ($\hat{w}_\ell$) means that $z_k$ ($w_\ell$) is left out. By the induction hypothesis the last expression in (\ref{5.1}) equals \begin{align}\label{5.2} &\sum\limits_{k,\ell=1}^n\frac {(-1)^{k+\ell}}{1-\frac{z_1\cdots z_n}{w_1\cdots w_n}}\left(\frac{1-z_k}{z_k}\right)^n\left(\frac{w_\ell}{1-w_\ell}\right)^n \prod\limits_{j\neq k}\frac{(1-z_j)^{n-1}}{z_j^{n-1}}\prod\limits_{j\neq \ell}\frac{w_j^{n}}{(1-w_j)^{n-1}} \\ &\times\det\left(\frac 1{w_k-z_j}\right)_{1\le j,k\le n}\frac{\prod\limits_{j=1}^n(w_\ell-z_j)(w_j-z_k)}{(w_\ell-z_k)\prod\limits_{j=1}^{\ell-1} (w_j-w_\ell)\prod\limits_{j=\ell+1}^{n}(w_\ell-w_j)\prod\limits_{j=1}^{k-1} (z_k-z_j)\prod\limits_{j=k+1}^{n}(z_j-z_k)}, \notag \end{align} where we also used the Cauchy determinant formula. The expression in (\ref{5.2}) can be written \begin{equation}\label{5.3} \frac{\det\left(\frac 1{w_k-z_j}\right)}{1-\frac{z_1\cdots z_n}{w_1\cdots w_n}}\prod\limits_{j=1}^{n}\frac{w_j^n(1-z_j)^{n-1}}{z_j^{n-1}(1-w_j)^{n-1}} \sum\limits_{k,\ell=1}^n\frac{(-1)^{n-1}(1-z_k)\prod\limits_{j=1}^n(w_\ell-z_j)(w_j-z_k)}{z_k(1-w_\ell)(w_\ell-z_k)\prod\limits_{j\neq \ell} (w_\ell-w_j)\prod\limits_{j\neq k}(z_k-z_j)}. \end{equation} We see from (\ref{5.3}) and the final formula (\ref{2.10}) that in order to complete the proof we have to show the identity \begin{align}\label{5.4} &(-1)^{n-1}\sum\limits_{k,\ell=1}^n\frac{(1-z_k)}{z_k(1-w_\ell)(w_\ell-z_k)}\frac{\prod\limits_{j=1}^n(w_\ell-z_j)(w_j-z_k)}{\prod\limits_{j\neq \ell} (w_\ell-w_j)\prod\limits_{j\neq k}(z_k-z_j)} \\ &=\prod\limits_{j=1}^n\frac{w_j(1-z_j)}{z_j(1-w_j)}\left(1-\frac{z_1\dots z_n}{w_1\dots w_n}\right)= \prod\limits_{j=1}^n\frac{w_j(1-z_j)}{z_j(1-w_j)}-\prod\limits_{j=1}^n\frac{1-z_j}{1-w_j}. \notag \end{align} We can assume that $|z_j|, |w_j|<1$, $1\le j\le n$, since both sides of (\ref{5.4}) are rational functions. Take $0<r_1<r_2<1$ such that $|z_j|<r_1$, $|w_j|<r_2$ for $1\le j\le n$. Consider the contour integral \begin{align}\label{5.5} &-\frac{1}{(2\pi i)^2}\int_{\gamma_{r_1}}dz\int_{\gamma_{r_2}}dw\frac{1-z}{z(1-w)(w-z)}\prod\limits_{j=1}^n \frac{(w-z_j)(z-w_j)}{(w-w_j)(z-z_j)}=-\frac{1}{2\pi i}\int_{\gamma_{r_2}}dw\frac 1{(1-w)w}\prod\limits_{j=1}^n\frac{(w-z_j)w_j}{(w-w_j)z_j} \\ &-\sum\limits_{k=1}^n\frac{1}{2\pi i}\int_{\gamma_{r_2}}dw\frac{1-z_k}{z_k(1-w)(w-z_k)}\prod\limits_{j=1}^n \frac{(w-z_j)(z_k-w_j)}{w-w_j}\prod\limits_{j\neq k}\frac 1{z_k-z_j}, \notag \end{align} where we have computed the $z$-integral; the poles inside $\gamma_{r_1}$ are at $z=0$, which gives the first term on the right side, and at $z=z_k$, $1\le k\le n$, which gives the second.
The first expression in the right side of (\ref{5.5}) can be computed by noticing that the only pole outside $\gamma_{r_2}$ (including $\infty$; the integrand is $O(|w|^{-2})$ as $w\to\infty$) is at $w=1$, and this gives $$ -\prod\limits_{j=1}^n\frac{w_j(1-z_j)}{z_j(1-w_j)}. $$ The second expression in the right side of (\ref{5.5}) equals $$ (-1)^{n-1}\sum\limits_{k,\ell=1}^n\frac{(1-z_k)}{z_k(1-w_\ell)(w_\ell-z_k)}\frac{\prod\limits_{j=1}^n(w_\ell-z_j)(w_j-z_k)}{\prod\limits_{j\neq \ell} (w_\ell-w_j)\prod\limits_{j\neq k}(z_k-z_j)} $$ and thus by comparing (\ref{5.4}) and (\ref{5.5}) we see that it remains to show \begin{equation}\label{5.6} \frac{1}{(2\pi i)^2}\int_{\gamma_{r_1}}dz\int_{\gamma_{r_2}}dw\frac{1-z}{z(1-w)(w-z)}\prod\limits_{j=1}^n \frac{(w-z_j)(w_j-z)}{(w-w_j)(z-z_j)}=\prod\limits_{j=1}^n\frac{1-z_j}{1-w_j}. \end{equation} The $w$-integral in (\ref{5.6}) has its only pole outside $\gamma_{r_2}$ at $w=1$ which gives $$ \frac{1}{2\pi i}\int_{\gamma_{r_1}}\frac{dz}{z}\prod\limits_{j=1}^n\frac{w_j-z}{z_j-z}\prod\limits_{j=1}^n\frac{1-z_j}{1-w_j} =\prod\limits_{j=1}^n\frac{1-z_j}{1-w_j}\frac{1}{2\pi i}\int_{\gamma_{1/r_1}}\frac{dz}{z}\prod\limits_{j=1}^n\frac{zw_j-1}{zz_j-1} =\prod\limits_{j=1}^n\frac{1-z_j}{1-w_j}, $$ since the only pole in the last $z$-integral is at $z=0$. \end{proof} \section{Asymptotic analysis}\label{sect6} In this section we will prove lemma \ref{lem4.1} and lemma \ref{lem4.2}. Recall the notations and scalings (\ref{4.1}) to (\ref{4.3}). Define, with $k$ and $\ell$ as in (\ref{4.3}), \begin{align} f_1(z;x)&=(\ell-1)\log z+\frac 12\mu_1z^2- \xi_1 z \notag\\ f_2(z;y)&=(n_2-k)\log z+\frac 12\Delta\mu z^2- \Delta\xi z \notag \end{align} and note that $n_2-k=\Delta n-yN_1^{1/3}$. Note also that $z=1$ is, to leading order, a double critical point: $f_1'(1)=\ell-1+\mu_1-\xi_1=(x-\lambda_1)N_1^{1/3}$ and $f_1''(1)=\mu_1-(\ell-1)=-2\nu_1N_1^{2/3}-xN_1^{1/3}$ are of lower order than $f_1'''(1)=2(\ell-1)\approx 2N_1$, and similarly for $f_2$; this cubic degeneracy dictates the $N^{-1/3}$-scale of the contours below and the Airy-type limits in lemma \ref{lem4.1}. Recall the notation (\ref{3.7}) and the definitions (\ref{3.24})--(\ref{3.27}). We have that \begin{align}\label{6.1} G_{n_1,\mu_1,\xi_1}(z)&=e^{f_1(z;0)}\,\,,\,\,G_{\Delta n,\Delta\mu,\Delta\xi}(w)=e^{f_2(w;0)} \\ G_{k,\mu_1,\xi_1}(\zeta)&=e^{f_1(\zeta;y)}\,\,,\,\,G_{n_2+1-\ell,\Delta\mu,\Delta\xi}(\omega)=e^{f_2(\omega;x)} \notag\\ G_{n_2-k,\Delta\mu,\Delta\xi}(w)&=e^{f_2(w;y)}\,\,,\,\,G_{\ell-1,\mu_1,\xi_1}(z)=e^{f_1(z;x)}. \notag \end{align} Let $d_i$, $1\le i\le 4$, be some positive parameters that will be chosen later. Introduce the following contour parametrizations \begin{align}\label{6.2} z(t_1)&=1+(d_1+it_1)N_1^{-1/3}\,\,,\,\,t_1\in\mathbb{R}, \\ \zeta(s_1)&=(1-d_2N_1^{-1/3})e^{is_1N_1^{-1/3}}\,\,,\,\,s_1\in I_1=[-\pi N_1^{1/3},\pi N_1^{1/3}], \notag\\ w(t_2)&=1+(d_3+it_2)N_2^{-1/3}\,\,,\,\,t_2\in\mathbb{R}, \notag\\ \omega(s_2)&=(1-d_4N_2^{-1/3})e^{is_2N_2^{-1/3}}\,\,,\,\,s_2\in I_2=[-\pi N_2^{1/3},\pi N_2^{1/3}]. \notag \end{align} Define \begin{align}\label{6.3} g_1(t_1;x)&=\text{Re\,} f_1(z(t_1);x)\,\,,\,\,h_1(s_1;x)=\text{Re\,} f_1(\zeta(s_1);x) \\ g_2(t_2;y)&=\text{Re\,} f_2(w(t_2);y)\,\,,\,\,h_2(s_2;y)=\text{Re\,} f_2(\omega(s_2);y). \notag \end{align} Let \begin{align}\label{6.3'} \Delta_1&=d_1-\nu_1+\frac12(d_1^2-2\nu_1 d_1-x)N_1^{-1/3}-\frac 12\nu_1d_1^2N_1^{-2/3}\\ \Delta_2&=2(d_2+\nu_1)+(\eta_1-\nu_1^2-2\nu_1d_2)N_1^{-1/3},\notag \\ \Delta_3&=d_3-\Delta\nu+\frac12(d_3^2-2\Delta\nu d_3+y)N_2^{-1/3}-\frac 12\Delta\nu d_3^2N_2^{-2/3}\notag\\ \Delta_4&=2(d_4+\Delta\nu)+(\Delta\eta-\Delta\nu^2-2\Delta\nu d_4)N_2^{-1/3}.
\notag \end{align} \begin{lemma}\label{lem6.1} Assume that, for $M$ large, \begin{equation}\label{6.4} 1\le d_1\le N_1^{1/3}\,\,,\,\,1\le\Delta_1\le N_1^{1/3}, \end{equation} \begin{equation}\label{6.5} 1\le d_2\le \frac 12N_1^{1/3}\,\,,\,\,\Delta_2\ge 1, \end{equation} \begin{equation}\label{6.14} 1\le d_3\le N_2^{1/3}\,\,,\,\,1\le\Delta_3\le N_2^{1/3}, \end{equation} \begin{equation}\label{6.15} 1\le d_4\le \frac 12N_2^{1/3}\,\,,\,\,\Delta_4\ge 1. \end{equation} Then, \begin{equation}\label{6.6} g_1(t_1;x)-g_1(0;x)\le -\frac{\Delta_1}{20} t_1^2 \end{equation} for all $t_1\in\mathbb{R}$, and \begin{equation}\label{6.7} h_1(s_1;x)-h_1(0;x)\ge \frac{\Delta_2}{20} s_1^2 \end{equation} for all $s_1\in I_1$. Furthermore \begin{equation}\label{6.16} g_2(t_2;y)-g_2(0;y)\le -\frac{\Delta_3}{20} t_2^2 \end{equation} for all $t_2\in\mathbb{R}$, and \begin{equation}\label{6.17} h_2(s_2;y)-h_2(0;y)\ge \frac{\Delta_4}{20} s_2^2 \end{equation} for all $s_2\in I_2$. \end{lemma} \begin{proof} By (\ref{6.2}) and (\ref{6.3}), \begin{equation}\label{6.8} g_1(t_1;x)=\frac{\ell-1}2\log\left((1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2\right)+\frac 12\mu_1\left((1+d_1N_1^{-1/3})^2-N_1^{-2/3}t_1^2\right) -\xi_1(1+d_1N_1^{-1/3}). \end{equation} Thus, $$ g_1'(t_1;x)=N_1^{-2/3}t_1\left(\frac{\ell-1-\mu_1\left((1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2\right)} {(1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2}\right), $$ and by introducing the scalings (\ref{4.3}) we obtain \begin{equation}\label{6.9} g_1'(t_1;x)=-t_1\left(\frac{2\Delta_1+\left(N_1^{-1/3}-\nu_1N_1^{-2/3}\right)t_1^2} {(1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2}\right). \end{equation} If $0\le t_1\le N_1^{1/3}$, then $(1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2\le 5$ by (\ref{6.4}), and if $M$ is large enough $N_1^{-1/3}-\nu_1N_1^{-2/3}\ge 0$, so (\ref{6.9}) gives \begin{equation}\label{6.10} g_1'(t_1;x)\le -\frac{\Delta_1}5 t_1. \end{equation} If $t_1\ge N_1^{1/3}$ and $M$ is sufficiently large, then (\ref{6.9}) gives \begin{equation} g_1'(t_1;x)\le -t_1\frac{\frac12N_1^{-1/3}t_1^2}{(1+d_1N_1^{-1/3})^2+N_1^{-2/3}t_1^2}\le-t_1\frac{\frac12N_1^{-1/3}t_1^2}{5N_1^{-2/3}t_1^2} \le-\frac 1{10}N_1^{1/3}t_1\le-\frac{\Delta_1}{10}t_1, \notag \end{equation} by (\ref{6.4}). Hence, (\ref{6.10}) holds for all $t_1\ge 0$, and integrating from $0$ to $t_1$ proves (\ref{6.6}) for $t_1\ge 0$. The case $t_1\le 0$ follows by symmetry. Consider now $h_1$. We have that \begin{equation}\label{6.11} h_1(s_1;x)=(\ell-1)\log(1-d_2N_1^{-1/3})+\frac 12\mu_1(1-d_2N_1^{-1/3})^2\cos 2N_1^{-1/3}s_1 - \xi_1(1-d_2N_1^{-1/3})\cos N_1^{-1/3}s_1 \end{equation} and hence \begin{equation}\label{6.12} h_1'(s_1;x)=N_1^{-1/3}(1-d_2N_1^{-1/3})\sin N_1^{-1/3}s_1 \left(\xi_1-2\mu_1(1-d_2N_1^{-1/3})\cos N_1^{-1/3}s_1\right). \end{equation} From the scaling (\ref{4.3}) we see that if $M$ is sufficiently large then $\xi_1-2\mu_1(1-d_2N_1^{-1/3})\cos N_1^{-1/3}s_1\ge N_1^{2/3}\Delta_2$ and hence, $$ h_1(s_1;x)-h_1(0;x)\ge N_1^{1/3}(1-d_2N_1^{-1/3})\Delta_2\int_0^{s_1}\sin N_1^{-1/3}t\,dt\ge\frac {\Delta_2}2N_1^{2/3} (1-\cos N_1^{-1/3}s_1) $$ by (\ref{6.5}) for all $s_1\in I_1$. If $|N_1^{-1/3}s_1|\in[0,\pi /2]$, then $$ \frac 12(1-\cos N_1^{-1/3}s_1)=\sin^2\left(\frac12 N_1^{-1/3}s_1\right)\ge\frac 14N_1^{-2/3}s_1^2 $$ and hence $h_1(s_1;x)-h_1(0;x)\ge\frac 14\Delta_2s_1^2$. If $|N_1^{-1/3}s_1|\in[\pi /2,\pi]$, then $1-\cos N_1^{-1/3}s_1\ge 1$, and $$ h_1(s_1;x)-h_1(0;x)\ge \frac12\Delta_2N_1^{2/3}\ge\frac 1{2\pi^2}\Delta_2s_1^2\ge\frac {\Delta_2}{20}s_1^2. $$ Exactly the same argument gives (\ref{6.16}) and (\ref{6.17}).
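(For $g_2$ and $h_2$ one simply replaces $N_1,\nu_1,d_1,d_2,\Delta_1,\Delta_2$ by $N_2,\Delta\nu,d_3,d_4,\Delta_3,\Delta_4$, and uses (\ref{6.14}) and (\ref{6.15}) instead of (\ref{6.4}) and (\ref{6.5}).)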
\end{proof} We will now prove lemma \ref{lem4.1}. \begin{proof} ({\it Proof of lemma \ref{lem4.1}}) All the limits below will be uniform for $\nu_i,\eta_i,x,y$ in compact sets. Write $$ u_1(t_1)=d_1+it_1\,,\,u_2(t_2)=d_3+it_2\,,\,v_1(s_1)=-d_2+is_1\,,\,v_2(s_2)=-d_4+is_2. $$ Since $\nu_i,\eta_i,x,y$ belong to a compact set, it is clear that we can choose $d_i$, $1\le i\le 4$, constant but so large that (\ref{6.4}), (\ref{6.5}), (\ref{6.14}) and (\ref{6.15}) hold for all sufficiently large $M$. Recall the definition (\ref{1.7}) of $\alpha$. In (\ref{3.24}) we will use the parametrizations (\ref{6.2}) and we choose $d_1$ and $d_3$ so that \begin{equation}\label{6.18} \alpha d_3-d_1\ge 1 \end{equation} which ensures that the $z$- and $w$-contours have the right ordering. If we let $$ J(t_1,s_1,t_2,s_2)=\frac{(1-d_2N_1^{-1/3})(1-d_4N_2^{-1/3})e^{is_1N_1^{-1/3}+is_2N_2^{-1/3}}} {N_1^{2/3}N_2^{1/3}(z(t_1)-w(t_2))(z(t_1)-\zeta(s_1))(w(t_2)-\omega(s_2))}, $$ then \begin{equation}\label{6.19} \frac{N_1^{1/3}dzdwd\zeta d\omega}{(z-w)(z-\zeta)(w-\omega)}=\alpha J(t_1,s_1,t_2,s_2)dt_1ds_1dt_2ds_2 \end{equation} and \begin{equation}\label{6.20} J(t_1,s_1,t_2,s_2)\to\frac 1{(u_1(t_1)-\alpha u_2(t_2))(u_1(t_1)-v_1(s_1))(u_2(t_2)-v_2(s_2))} \end{equation} as $M\to\infty$; also $J$ is bounded. Furthermore, \begin{align}\label{6.21} &f_1(z(t_1);x)-f_1(1;x)\to\frac 13u_1(t_1)^3-\nu_1u_1(t_1)^2-(\lambda_1-x)u_1(t_1), \\ &f_1(\zeta(s_1);x)-f_1(1;x)\to\frac 13v_1(s_1)^3-\nu_1v_1(s_1)^2-(\lambda_1-x)v_1(s_1), \\ &f_2(w(t_2);y)-f_2(1;y)\to\frac 13u_2(t_2)^3-\Delta\nu u_2(t_2)^2-(\Delta\lambda+\alpha y)u_2(t_2), \\ &f_2(\omega(s_2);y)-f_2(1;y)\to\frac 13v_2(s_2)^3-\Delta\nu v_2(s_2)^2-(\Delta\lambda+\alpha y)v_2(s_2) \notag \end{align} as $M\to\infty$. It follows from (\ref{3.24}) and (\ref{6.1}) that \begin{equation}\label{6.22} N_1^{1/3}a_{0,1}(\ell,k)=\frac{\alpha}{(2\pi)^4}\int_{\mathbb{R}}dt_1\int_{I_1}ds_1\int_{\mathbb{R}}dt_2\int_{I_2}ds_2J(t_1,s_1,t_2,s_2) \frac{e^{f_1(z(t_1);0)+f_2(w(t_2);0)}}{e^{f_1(\zeta(s_1);y)+f_2(\omega(s_2);x)}}. \end{equation} The integrand in (\ref{6.22}) is bounded by \begin{align} &Ce^{g_1(t_1;0)+g_2(t_2;0)-h_1(s_1;y)-h_2(s_2;x)} \notag\\ &\le Ce^{g_1(0;0)+g_2(0;0)-h_1(0;y)-h_2(0;x)-\frac 1{20}(t_1^2+s_1^2+t_2^2+s_2^2)}\le Ce^{-\frac 1{20}(t_1^2+s_1^2+t_2^2+s_2^2)}, \notag \end{align} where the first inequality follows from lemma \ref{lem6.1} since $\Delta_i\ge 1$, and the second inequality follows from (\ref{6.21}) by letting $t_1=s_1=t_2=s_2=0$ and taking real parts.
Thus, by the dominated convergence theorem we can take the limit $M\to\infty$ in (\ref{6.22}) and get \begin{align}\label{6.23} &\lim_{M\to\infty}N_1^{1/3}a_{0,1}(\ell,k) \\ &=\frac{\alpha}{(2\pi)^4}\int_{\mathbb{R}}dt_1\int_{\mathbb{R}}ds_1\int_{\mathbb{R}}dt_2\int_{\mathbb{R}}ds_2 \frac 1{(u_1(t_1)-\alpha u_2(t_2))(u_1(t_1)-v_1(s_1))(u_2(t_2)-v_2(s_2))} \notag\\ &\times\frac{e^{\frac 13u_1(t_1)^3-\nu_1u_1(t_1)^2-\lambda_1u_1(t_1)+\frac 13u_2(t_2)^3-\Delta\nu u_2(t_2)^2-\Delta\lambda u_2(t_2)}} {e^{\frac 13v_1(s_1)^3-\nu_1v_1(s_1)^2-(\lambda_1-y)v_1(s_1)+\frac 13v_2(s_2)^3-\Delta\nu v_2(s_2)^2-(\Delta\lambda+\alpha x)v_2(s_2)}} \notag\\ &=\frac{\alpha}{(2\pi i)^4}\int_{\Gamma_{d_1}}dz\int_{\Gamma_{d_3}}dw\int_{\Gamma_{-d_2}}d\zeta\int_{\Gamma_{-d_4}}d\omega \frac{e^{\frac 13z^3- \nu_1 z^2- \lambda_1 z+\frac 13w^3-\Delta\nu w^2-\Delta\lambda w}} {(z-\alpha w)(z-\zeta)(w-\omega)e^{\frac 13\zeta^3- \nu_1 \zeta^2- (\lambda_1-y)\zeta+\frac 13\omega^3-\Delta\nu \omega^2-(\Delta\lambda+\alpha x)\omega}} \notag\\ &=\phi_1(x,y), \notag \end{align} where $\phi_1$ is given by (\ref{1.9}). Recall that the contours in (\ref{6.23}) obey the ordering condition (\ref{6.18}). The last equality is a straightforward rewriting of the contour integral in terms of Airy functions, see the end of this section. This proves (\ref{4.4}). The limit of $N_1^{1/3}b_1(\ell,k)$ is the same as the right-hand side of (\ref{6.23}), but we have the condition $d_1>\alpha d_3$ instead. For $c_2$ we get \begin{align}\label{6.232} \lim_{M\to\infty}N_1^{1/3}c_2(\ell,k)&=\frac{\alpha}{(2\pi i)^2}\int_{\Gamma_{d_3}}dw\int_{\Gamma_{-d_4}}d\omega \frac{e^{\frac 13w^3-\Delta\nu w^2-(\Delta\lambda+\alpha y)w}} {(w-\omega)e^{\frac 13\omega^3-\Delta\nu \omega^2-(\Delta\lambda+\alpha x)\omega}} \\ &=\phi_2(x,y), \notag \end{align} and for $c_3$, \begin{align}\label{6.233} \lim_{M\to\infty}N_1^{1/3}c_3(\ell,k)&=\frac{\alpha}{(2\pi i)^2}\int_{\Gamma_{d_1}}dz\int_{\Gamma_{-d_2}}d\zeta \frac{e^{\frac 13z^3- \nu_1 z^2- (\lambda_1-x)z}} {(z-\zeta)e^{\frac 13\zeta^3- \nu_1 \zeta^2- (\lambda_1-y)\zeta}} \\ &=\phi_3(x,y). \notag \end{align} \end{proof} We turn now to the proof of lemma \ref{lem4.2}. \begin{proof} ({\it Proof of lemma \ref{lem4.2}}) To prove the estimate (\ref{4.8}) we will use (\ref{6.22}), making appropriate choices of the $d_i$'s. From (\ref{6.22}) we find \begin{equation}\label{6.24} \left|N_1^{1/3}a_{0,1}(\ell,k)\right|\le \frac{C}{|d_1-\alpha d_3|(d_1+d_2)(d_3+d_4)} \int_{\mathbb{R}}dt_1\int_{I_1}ds_1\int_{\mathbb{R}}dt_2\int_{I_2}ds_2 e^{g_1(t_1;0)-h_1(s_1;y)+g_2(t_2;0)-h_2(s_2;x)}. \end{equation} We will choose $d_i$ so that the conditions (\ref{6.4}), (\ref{6.5}), (\ref{6.14}), (\ref{6.15}) and (\ref{6.18}) are satisfied. Hence, it follows from (\ref{6.24}) and lemma \ref{lem6.1} that \begin{equation}\label{6.25} \left|N_1^{1/3}a_{0,1}(\ell,k)\right|\le Ce^{g_1(0;0)-h_1(0;y)+g_2(0;0)-h_2(0;x)}. \end{equation} From (\ref{6.8}), (\ref{6.11}) and the scalings (\ref{4.3}) we see that \begin{align}\label{6.26} g_1(0;x)&=(N_1+\nu_1N_1^{2/3}+xN_1^{1/3})\log(1+d_1N_1^{-1/3})\\&+\frac 12(N_1-\nu_1N_1^{2/3})(1+d_1N_1^{-1/3})^2 -(2N_1+\lambda_1N_1^{1/3})(1+d_1N_1^{-1/3})\notag \end{align} and \begin{align}\label{6.27} h_1(0;y)&=(N_1+\nu_1N_1^{2/3}+yN_1^{1/3})\log(1-d_2N_1^{-1/3})\\&+\frac 12(N_1-\nu_1N_1^{2/3})(1-d_2N_1^{-1/3})^2 -(2N_1+\lambda_1N_1^{1/3})(1-d_2N_1^{-1/3})\notag.
\end{align} It is straightforward to show that \begin{equation}\label{6.28} \log(1+x)\le x-\frac{x^2}2+\frac{x^3}3 \end{equation} for all $x\ge 0$, and \begin{equation}\label{6.29} \log(1-x)\ge -x-\frac{x^2}2-\frac{x^3}{3(1-x)^3} \end{equation} if $0\le x<1$. If we use the estimate (\ref{6.28}) in (\ref{6.26}) we get \begin{equation}\label{6.30} g_1(0;x)\le -\frac 32 N_1-\frac 12\nu_1N_1^{2/3}-\lambda_1N_1^{1/3}+\frac 13d_1^3\left(1+\nu_1N_1^{-1/3}+xN_1^{-2/3}\right) -\nu_1d_1^2-\lambda_1d_1+x\left(d_1-\frac 12d_1^2N_1^{-1/3}\right). \end{equation} Similarly, using (\ref{6.29}) in (\ref{6.27}) we find \begin{equation}\label{6.31} h_1(0;y)\ge -\frac 32 N_1-\frac 12\nu_1N_1^{2/3}-\lambda_1N_1^{1/3}-\frac 13d_2^3\left(\frac{1+\nu_1N_1^{-1/3}+yN_1^{-2/3}} {(1-d_2N_1^{-1/3})^3}\right) -\nu_1d_2^2+\lambda_1d_2-y\left(d_2+\frac 12d_2^2N_1^{-1/3}\right). \end{equation} Combining (\ref{6.30}) and (\ref{6.31}) we obtain \begin{align}\label{6.32} &g_1(0;x)-h_1(0;y)\le\frac 13d_1^3\left(1+\nu_1N_1^{-1/3}+xN_1^{-2/3}\right)-\nu_1d_1^2-\lambda_1d_1+x\left(d_1-\frac 12d_1^2N_1^{-1/3}\right) \\ &+\frac 13d_2^3\left(\frac{1+\nu_1N_1^{-1/3}+yN_1^{-2/3}} {(1-d_2N_1^{-1/3})^3}\right)+\nu_1d_2^2-\lambda_1d_2+y\left(d_2+\frac 12d_2^2N_1^{-1/3}\right). \notag \end{align} In an analogous way, we obtain \begin{align}\label{6.33} &g_2(0;y)-h_2(0;x)\le\frac 13d_3^3\left(1+\Delta\nu N_2^{-1/3}-yN_2^{-2/3}\right)-\Delta\nu d_3^2-\Delta\lambda d_3-y\left(d_3-\frac 12d_3^2N_2^{-1/3}\right) \\ &+\frac 13d_4^3\left(\frac{1+\Delta\nu N_2^{-1/3}-xN_2^{-2/3}} {(1-d_4N_2^{-1/3})^3}\right)+\Delta\nu d_4^2-\Delta\lambda d_4-x\left(d_4+\frac 12d_4^2N_2^{-1/3}\right). \notag \end{align} We will use the estimates (\ref{6.32}) and (\ref{6.33}) in (\ref{6.25}). Take \begin{equation}\label{6.34} d_1=k_1\,,\,d_2=k_2+\delta_2(-y)_+^{1/2}\,,\, d_3=k_3\,,\,d_4=k_4+\delta_4x_+^{1/2}, \end{equation} where $k_i$ and $\delta_i$ are to be specified. Note that since $1\le \ell,k\le n_2$, there is a constant $k_0$ so that $|x|\le k_0N_1^{2/3}$ and $|y|\le k_0N_1^{2/3}$. First choose $k_1$ large enough so that $\Delta_1\ge 1$ holds. Then (\ref{6.4}) will hold if $M$ is large enough. We can choose $k_2$ so that $\Delta_2\ge 1$ and $d_2\ge 1$ hold provided that $d_2\le \frac 12N_1^{1/3}$. Now, $$ d_2=k_2+\delta_2(-y)_+^{1/2}\le k_2+k_0^{1/2}\delta_2 N_1^{1/3}\le \frac 12 N_1^{1/3} $$ for large $M$ if we choose $\delta_2$ small enough. With these choices (\ref{6.4}) and (\ref{6.5}) are satisfied for large $M$. In a similar way we can choose $k_3,k_4$ and $\delta_4$ so that (\ref{6.14}) and (\ref{6.15}) hold, and we can also choose $k_3$ so large that (\ref{6.18}) holds. Note that there is a constant $C$ so that $$ \frac{1+\nu_1 N_1^{-1/3}+yN_1^{-2/3}}{(1-d_2N_1^{-1/3})^3}\le C\,,\, \frac{1+\Delta\nu N_2^{-1/3}-xN_2^{-2/3}}{(1-d_4N_2^{-1/3})^3}\le C $$ and consequently we see from (\ref{6.32}) and (\ref{6.33}) that \begin{align}\label{6.35} &g_1(0;0)-h_1(0;y)+g_2(0;0)-h_2(0;x)\le \frac 13d_1^3(1+\nu_1N_1^{-1/3})-\nu_1d_1^2-\lambda_1d_1+Cd_2^3+\nu_1d_2^2-\lambda_1d_2 \\ &+y(d_2+\frac 12d_2^2N_1^{-1/3})+\frac 13d_3^3(1+\Delta\nu N_2^{-1/3})-\Delta\nu d_3^2-\Delta\lambda d_3 +Cd_4^3+\Delta\nu d_4^2-\Delta\lambda d_4-x(d_4+\frac 12d_4^2N_2^{-1/3}) \notag\\ &\le C(1+d_2^3+d_4^3+d_2^2+d_4^2+d_2+d_4)+y(d_2+\frac 12d_2^2N_1^{-1/3})-x(d_4+\frac 12d_4^2N_2^{-1/3}), \notag \end{align} since $d_1$ and $d_3$ are constants. From (\ref{6.34}) we see that $d_2^3\le 4(k_2^3+\delta_2^3(-y)_+^{3/2})$, $d_2^2\le 2(k_2^2+\delta_2^2(-y)_+)$ and similarly for $d_4$.
If $y\ge 0$, then $$ y(d_2+\frac 12d_2^2N_1^{-1/3})\le Cy=Cy_+, $$ since $d_2=k_2$, and if $y<0$, then $$ y(d_2+\frac 12d_2^2N_1^{-1/3})\le d_2y=k_2y-\delta_2(-y)_+^{3/2}\le -\delta_2(-y)_+^{3/2}. $$ Thus $$ y(d_2+\frac 12d_2^2N_1^{-1/3})\le -\delta_2(-y)_+^{3/2}+Cy_+ $$ for all $y$. Similarly, $$ -x(d_4+\frac 12d_4^2N_2^{-1/3})\le -\delta_4x_+^{3/2}+C(-x)_+. $$ We can pick $\delta_2$ so small that $$ C(\delta_2^3(-y)_+^{3/2}+\delta_2^2(-y)_+)-\delta_2(-y)_+^{3/2}\le -c(-y)_+^{3/2}, $$ where $c>0$ is a small constant. A similar argument applies to $\delta_4$. Using these estimates in (\ref{6.35}) it follows from (\ref{6.25}) that $$ \left|N_1^{1/3}a_{0,1}(\ell,k)\right|\le Ce^{-c(x_+^{3/2}+(-y)_+^{3/2})+C(y_++(-x)_+)}, $$ which is what we wanted to prove. The estimates (\ref{4.9}), (\ref{4.10}) and (\ref{4.11}) can be proved in a similar way using (\ref{6.32}) and (\ref{6.33}). We will not go into the details. \end{proof} Let us briefly indicate how we can go from the contour integral form of $\phi_1(x,y)$ in (\ref{6.23}) to the Airy form in (\ref{1.9}). We use the fact that if $D>0$, then \begin{equation}\label{6.36} \frac 1{2\pi i}\int_{\Gamma_D}e^{\frac 13z^3+Az^2+Bz}dz=\text{Ai\,}(-B+A^2)e^{-AB+\frac 23 A^3} \end{equation} and \begin{equation}\label{6.37} \frac 1{2\pi i}\int_{\Gamma_{-D}}e^{-\frac 13\zeta^3+A\zeta^2+B\zeta}d\zeta=\text{Ai\,}(B+A^2)e^{AB+\frac 23 A^3}. \end{equation} Also, we write \begin{equation}\label{6.38} \frac 1{z-\alpha w}=-\int_0^\infty e^{\tau_1(z-\alpha w)}d\tau_1\,,\, \frac 1{z-\zeta}=\int_0^\infty e^{-\tau_2(z-\zeta)}d\tau_2\,,\, \frac 1{w-\omega}=\int_0^\infty e^{-\tau_3(w-\omega)}d\tau_3. \end{equation} If we insert (\ref{6.38}) into (\ref{6.23}) and use (\ref{1.6}), (\ref{6.36}) and (\ref{6.37}) we get (\ref{1.9}) after some manipulations.
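
The identity (\ref{6.36}) is also easy to confirm numerically. The following Python sketch (an illustration only, assuming \texttt{numpy} and \texttt{scipy} are available; the helper names are ours) parametrizes $\Gamma_D$ as the upward vertical line $z=D+{\rm i}t$ and compares both sides for a sample choice of real $A$ and $B$:
\begin{verbatim}
# Numerical sanity check of the Airy identity (6.36).
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

def lhs(A, B, D=1.0, T=40.0):
    # (2*pi*i)^{-1} * integral of exp(z^3/3 + A z^2 + B z) dz
    # along z = D + i t; the integrand decays like exp(-D t^2).
    f = lambda t: 1j * np.exp((D + 1j*t)**3/3 + A*(D + 1j*t)**2 + B*(D + 1j*t))
    re = quad(lambda t: f(t).real, -T, T, limit=500)[0]
    im = quad(lambda t: f(t).imag, -T, T, limit=500)[0]
    return (re + 1j*im) / (2j*np.pi)

def rhs(A, B):
    # Ai(-B + A^2) * exp(-A B + 2 A^3/3); airy() returns (Ai, Ai', Bi, Bi')
    return airy(-B + A**2)[0] * np.exp(-A*B + 2*A**3/3)

print(lhs(0.3, -0.5))   # agrees with rhs(0.3, -0.5) to quadrature accuracy
print(rhs(0.3, -0.5))
\end{verbatim}
The companion identity (\ref{6.37}) follows from the same check after the substitution $\zeta\mapsto -z$.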
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{document} \begin{titlepage} \vskip 1.5 cm \begin{center} {\LARGE \bf On logarithmic extensions of local scale-invariance} \end{center} \vskip 2.0 cm \centerline{{\bf Malte Henkel}$^{a,b}$} \vskip 0.5 cm \centerline{$^a$ Departamento de F\'{\i}sica da Universidade de Aveiro,} \centerline{Campus Universit\'ario de S\~ao Tiago, P -- 3810-193 Aveiro, Portugal} \vskip 0.5 cm \centerline{$^b$ Groupe de Physique Statistique, D\'epartement de Physique de la Mati\`ere et des Mat\'eriaux,\footnote{permanent address}} \centerline{Institut Jean Lamour,\footnote{Laboratoire associ\'e au CNRS UMR 7198} CNRS - Nancy Universit\'e - UPVM,} \centerline{B.P. 70239, F -- 54506 Vand{\oe}uvre l\`es Nancy Cedex, France} \begin{abstract} The known logarithmic extensions of conformal and Schr\"odinger-invariance assume translation-invariance in their spatial and temporal coordinates. Therefore, they cannot be applied directly to slow far-from-equilibrium relaxations, where time-translation-invariance no longer holds. Here, the logarithmic extension of ageing-invariance, that is local dynamical scaling without the assumption of time-translation-invariance, is presented. Co-variant two-point functions are derived. Their form is compared to transfer-matrix renormalisation group data for the two-time autoresponse function of the $1D$ critical contact process, which is in the directed percolation universality class. \end{abstract} \end{titlepage} \setcounter{footnote}{0} \section{Motivation and background} Dynamical scaling naturally arises in various many-body systems far from equilibrium. A paradigmatic example is given by ageing phenomena, which may arise in systems quenched, from some initial state, either (i) into a coexistence phase with more than one stable equilibrium state or else (ii) onto a critical point of the stationary state \cite{Bray94a,Cugliandolo02,Henkel10}. From a phenomenological point of view, ageing can be defined through the properties of (i) slow, non-exponential relaxation, (ii) breaking of time-translation-invariance and (iii) dynamical scaling. Drawing on the analogy with equilibrium critical phenomena, where scale-invariance can under rather weak conditions be extended to conformal invariance \cite{Polyakov70,Belavin84}, in recent years it has been attempted to carry out an analogous extension of simple dynamical scaling, characterised by a dynamical exponent $z$, to a new form of {\em local scale-invariance} ({\sc lsi}). One of the simplest predictions of that theory is the form of the linear two-time autoresponse function \cite{Henkel01b,Henkel03a,Henkel06a} \BEQ \label{R} R(t,s) = \left.\frac{\delta \langle\phi(t,\vec{r})\rangle}{\delta h(s,\vec{r})}\right|_{h=0} = s^{-1-a} f_R\left(\frac{t}{s}\right) \;\; , \;\; f_R(y) = f_0 y^{1+a'-\lambda_R/z} (y-1)^{-1-a'} \Theta(y-1) \EEQ which measures the linear response of the order-parameter $\phi(t,\vec{r})$ with respect to its canonically conjugated external field $h(s,\vec{r})$.
The autoresponse exponent $\lambda_R$ and the ageing exponents $a,a'$ are universal non-equilibrium exponents.\footnote{In magnets, mean-field theory suggests that generically $a=a'$ for quenches to $T<T_c$ and $a\ne a'$ for $T=T_c$ \cite{Henkel10}.} The causality condition $t>s$ is explicitly included. The foundations and extensive tests of (\ref{R}) are reviewed in detail in \cite{Henkel10}. In the case of a degenerate vacuum state, conformal invariance (of equilibrium phase transitions) can be generalised to {\em logarithmic conformal invariance} \cite{Gurarie93,Gaberdiel96,Rahimi97}, with interesting applications to disordered systems \cite{Caux96}, percolation \cite{Flohr05,Mathieu07} or sand-pile models \cite{Poghosyan07}. For reviews, see \cite{Flohr03,Gaberdiel03}. Here, we shall be interested in possible logarithmic extensions of local scale-invariance and in the corresponding generalisations of (\ref{R}). Logarithmic conformal invariance in $2D$ can be heuristically introduced \cite{Gurarie93,Rahimi97} by replacing, in the left-handed chiral conformal generators $\ell_n = - z^{n+1}\partial_z - (n+1) z^n \Delta $, the conformal weight $\Delta$ by a matrix. Non-trivial results are only obtained if that matrix has a Jordan form, so that one writes, in the most simple case \BEQ \label{1.1} \ell_n = - z^{n+1} \partial_z - (n+1) z^n \left(\matz{\Delta}{1}{0}{\Delta}\right) \EEQ Then the quasi-primary scaling operators on which the $\ell_n$ act have two components, which we shall denote as $\Psi :=\left(\vekz{\psi}{\phi}\right)$. The generators (\ref{1.1}) satisfy the commutation relations $[\ell_n, \ell_m] = (n-m) \ell_{n+m}$ with $n,m\in\mathbb{Z}$. Similarly, the right-handed generators $\bar{\ell}_n$ are obtained by replacing $z\mapsto \bar{z}$ and $\Delta\mapsto \bar{\Delta}$. A simple example of an invariant equation can be written as ${\cal S}\Psi=0$, with the Schr\"odinger operator \BEQ {\cal S} := \left( \matz{0}{\partial_z \partial_{\bar{z}}}{0}{0} \right) \EEQ and because of $[{\cal S},\ell_n] = - (n+1) z^n {\cal S} - (n+1)n z^{n+1} \left( \matz{0}{\Delta}{0}{0}\right) \partial_{\bar{z}}$, one has a dynamic symmetry of ${\cal S}\Psi=0$, if the conformal weights $\Delta=\overline{\Delta}=0$ are chosen. Of particular importance are the consequences for the form of the two-point functions of quasi-primary operators, for which only co-variance under the finite-dimensional sub-algebra $\langle \ell_{\pm 1,0}\rangle \cong \mathfrak{sl}(2,\mathbb{R})$ is needed \cite{Gurarie93,Rahimi97} (we suppress the dependence on $\bar{z}_i$, but see \cite{Do08}). Set \BEQ \label{1.3} F := \left\langle \phi_1(z_1) \phi_2(z_2)\right\rangle \;\; , \;\; G := \left\langle \phi_1(z_1) \psi_2(z_2)\right\rangle \;\; , \;\; H := \left\langle \psi_1(z_1) \psi_2(z_2)\right\rangle \EEQ Translation-invariance implies that $F=F(z), G=G(z)$ and $H=H(z)$ with $z=z_1-z_2$. Combination of dilation- and special co-variance applied to $F,G$ leads to $\Delta :=\Delta_1=\Delta_2$ and $F(z)=0$. Finally, consideration of $H(z)$ leads to \BEQ \label{1.4} G(z) = G(-z) = G_0 |z|^{-2\Delta} \;\; , \;\; H(z) = \bigl( H_0 - 2 G_0 \ln |z|\bigr) \, |z|^{-2\Delta} \EEQ where $G_0, H_0$ are normalisation constants. We emphasise here the {\em symmetric} form of the two-point functions, which does follow from the three co-variance conditions. Recently, `non-relativistic' versions of logarithmic conformal invariance have been studied \cite{Hosseiny10}. 
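
Returning to the two-point functions (\ref{1.3},\ref{1.4}): the three co-variance conditions may also be verified symbolically. The following Python (sympy) sketch is an illustration only; the covariance conditions written out below are our reading of (\ref{1.1}), with the mixing $\psi\mapsto\Delta\psi+\phi$, $\phi\mapsto\Delta\phi$, and we work at $z_1>z_2$ so that $|z|=z_1-z_2$:
\begin{verbatim}
# Symbolic check that G and H of eq. (1.4) obey the covariance
# conditions under l_{-1}, l_0, l_1 with the Jordan matrix of (1.1).
import sympy as sp

z1, z2, D, G0, H0 = sp.symbols('z1 z2 Delta G0 H0', positive=True)
z = z1 - z2                               # we take z1 > z2
F = sp.Integer(0)                         # <phi phi> = 0
G = G0 * z**(-2*D)                        # <phi psi> = <psi phi>
H = (H0 - 2*G0*sp.log(z)) * z**(-2*D)     # <psi psi>

for n in (-1, 0, 1):
    c1, c2 = z1**n, z2**n
    # slot 2 of G carries psi, so its mixing produces F:
    ward_G = (z1**(n+1)*sp.diff(G, z1) + z2**(n+1)*sp.diff(G, z2)
              + (n+1)*(c1 + c2)*D*G + (n+1)*c2*F)
    # both slots of H carry psi, each mixing produces a copy of G:
    ward_H = (z1**(n+1)*sp.diff(H, z1) + z2**(n+1)*sp.diff(H, z2)
              + (n+1)*(c1 + c2)*D*H + (n+1)*(c1 + c2)*G)
    print(n, sp.simplify(ward_G), sp.simplify(ward_H))   # -> n 0 0
\end{verbatim}
All six conditions vanish identically, as they must for (\ref{1.4}) to hold.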
Besides the consideration of dynamics in statistical physics referred to above, such studies can also be motivated from the analysis of dynamical symmetries in non-linear hydrodynamical equations \cite{Ovsiannikov80,Niederer78,Ivash97,Hassaine00,ORaif01}, or from studies of non-relativistic versions of the AdS/CFT correspondence \cite{Maldacena98,Bala08,Son08,Minic08,Fuertes09,Leigh09,Hartnoll09}. Two distinct non-semi-simple Lie algebras have been considered: \begin{enumerate} \item the {\em Schr\"odinger algebra} $\mathfrak{sch}(d)$, identified in 1881 by Lie as maximal dynamical symmetry of the free diffusion equation in $d=1$ space dimension. Jacobi had observed already in the 1840s that the elements of $\mathfrak{sch}(d)$ generate dynamical symmetries of free motion. We write the generators compactly as follows \begin{eqnarray} X_n &=& -t^{n+1}\partial_t - \frac{n+1}{2}t^n \vec{r}\cdot\vec{\nabla}_{\vec{r}} - \frac{\cal M}{4}(n+1)n t^{n-1} \vec{r}^2 - \frac{n+1}{2} x t^n \nonumber \\ Y_m^{(j)} &=& - t^{m+1/2} \partial_j - {\cal M}\bigl( m + \demi\bigr) t^{m-1/2} r_j \nonumber \\ M_n &=& - t^n {\cal M} \label{1.5} \\ R_n^{(jk)} &=& - t^n \bigl( r_j \partial_k - r_k \partial_j \bigr) \nonumber \end{eqnarray} where $\cal M$ is a dimensionful constant, $x$ a scaling dimension, $\partial_j = \partial/\partial r_j$ and $j,k=1,\ldots,d$. Then $\mathfrak{sch}(d)=\langle X_{\pm 1,0}, Y_{\pm 1/2}^{(j)}, M_0, R_0^{(j,k)}\rangle_{j,k=1,\ldots,d}$ is a dynamical symmetry of the free Schr\"odinger equation ${\cal S}\phi = \bigl( 2{\cal M}\partial_t - \vec{\nabla}_{\vec{r}}^2\bigr)\phi=0$, provided $x=d/2$, see \cite{Kastrup68,Hagen72,Niederer72,Jackiw72}, and also of Euler's hydrodynamical equations \cite{Ovsiannikov80}. An infinite-dimensional extension is $\langle X_n, Y_m^{(j)}, M_n, R_n^{(jk)}\rangle_{n\in\mathbb{Z},m\in\mathbb{Z}+\demi,j,k=1,\ldots,d}$ \cite{Henkel94}. \item The Schr\"odinger algebra is {\em not} the non-relativistic limit of the conformal algebra. Rather, from the corresponding contraction one obtains the {\em conformal Galilei algebra} $\mbox{\sc cga}(d)$ \cite{Havas78}, which was re-discovered independently several times afterwards \cite{Henkel97,Negro97,Henkel03a,Bagchi09,Martelli09}. The generators may be written as follows \cite{Cherniha10} \begin{eqnarray} X_n &=& - t^{n+1}\partial_t - (n+1) t^n \vec{r}\cdot\vec{\nabla}_{\vec{r}} - n(n+1) t^{n-1} \vec{\gamma}\cdot\vec{r} - x (n+1)t^n \nonumber \\ Y_n^{(j)} &=& - t^{n+1} \partial_{j} - (n+1) t^n \gamma_j \label{1.6} \\ R_n^{(jk)} &=& - t^n \bigl( r_j \partial_{k} - r_k \partial_{j} \bigr) - t^n \bigl( \gamma_j \partial_{\gamma_k}-\gamma_k \partial_{\gamma_j}\bigr) \nonumber \end{eqnarray} where $\vec{\gamma}=(\gamma_1,\ldots,\gamma_d)$ is a vector of dimensionful constants and $x$ is again a scaling dimension. The algebra $\mbox{\sc cga}(d)=\langle X_{\pm 1,0},Y_{\pm 1,0}^{(j)},R_0^{(jk)}\rangle_{j,k=1,\ldots,d}$ does arise as a (conditional) dynamical symmetry in certain non-linear systems, distinct from the equations of non-relativistic incompressible fluid dynamics \cite{Zhang10,Cherniha10}.\footnote{The generator $X_0$ leads to the space-time dilatations $t\mapsto \lambda^z t$, $\vec{r}\mapsto \lambda \vec{r}$, where the dynamical exponent $z$ takes the value $z=2$ for the representation (\ref{1.5}) of $\mathfrak{sch}(d)$ and $z=1$ for the representation (\ref{1.6}) of $\mbox{\sc cga}(d)$. We point out that there exist representations of $\mbox{\sc cga}(d)$ with $z=2$ \cite{Henkel03a}.
From this, one can show that $\mathfrak{age}(1)\subset\mbox{\sc cga}(1)$ as well.} The infinite-dimensional extension $\langle X_{n},Y_{n}^{(j)},R_n^{(jk)}\rangle_{n\in\mathbb{Z},j,k=1,\ldots,d}$ is straightforward. \end{enumerate} For both algebras $\mathfrak{sch}(d)$ and $\mbox{\sc cga}(d)$, the non-vanishing commutators are given by \BEQ \label{1.7} {}[X_n, X_{n'}] = (n-n') X_{n+n'} \;,\; {}[X_n, Y_{m}^{(j)}] = \left(\frac{n}{z}-m\right)Y_{n+m}^{(j)} \;,\; {}[R_0^{(jk)}, Y_m^{(\ell)}] = \delta^{j,\ell} Y_m^{(k)} - \delta^{k,\ell} Y_m^{(j)} \EEQ where the dynamical exponent $z=2$ for the representation (\ref{1.5}) and $z=1$ for the representation (\ref{1.6}). For the Schr\"odinger algebra, one has in addition $[Y_{1/2}^{(j)}, Y_{-1/2}^{(k)}] = \delta^{j,k} M_0$. The algebras $\mathfrak{sch}(d)$ and $\mbox{\sc cga}(d)$ arise, besides the conformal algebra, as the only possible finite-dimensional Lie algebras in two classification schemes of non-relativistic space-time transformations, with a fixed dynamical exponent $z$, namely: (i) either as generalised conformal transformations \cite{Duval09} or (ii) as local scale-transformations which are conformal in time \cite{Henkel02}. Now, using the same heuristic device as for logarithmic conformal invariance and replacing in the generators $X_n$ in (\ref{1.5},\ref{1.6}) the scaling dimension by a Jordan matrix \BEQ x \mapsto \left(\matz{x}{1}{0}{x}\right) \EEQ both {\em logarithmic Schr\"odinger-invariance} and {\em logarithmic conformal galilean invariance} can be defined \cite{Hosseiny10}. Adapting the definition (\ref{1.3}), one now has $F=F(t,\vec{r})$, $G=G(t,\vec{r})$ and $H=H(t,\vec{r})$, with $t:=t_1-t_2$ and $\vec{r}:=\vec{r}_1-\vec{r}_2$ because of temporal and spatial translation-invariance. Since the conformal properties involve the time coordinate only, the practical calculation is analogous to the one of logarithmic conformal invariance outlined above (alternatively, one may use the formalism of nilpotent variables \cite{Moghimi00,Hosseiny10}). In particular, one obtains $x:= x_1 = x_2$ and $F=0$. Generalising the results of Hosseiny and Rouhani \cite{Hosseiny10} to $d$ spatial dimensions, the non-vanishing two-point functions read as follows: for the case of logarithmic Schr\"odinger invariance \BEQ \label{1.8} G = G_0 |t|^{-x}\exp\left[-\frac{{\cal M}}{2} \frac{\vec{r}^2}{t}\right] \;\; ,\;\; H = \bigl( H_0 - G_0 \ln |t|\bigr) \, |t|^{-x} \exp\left[-\frac{{\cal M}}{2} \frac{\vec{r}^2}{t}\right] \EEQ subject to the constraint \cite{Bargman54} ${\cal M}:= {\cal M}_1 = - {\cal M}_2$.\footnote{In order to keep the physical convention of non-negative masses ${\cal M}\geq 0$, one may introduce a `complex conjugate' $\phi^*$ to the scaling field $\phi$, with ${\cal M}^*=-{\cal M}$. In dynamics, co-variant two-point functions are interpreted as response functions, written as $R(t,s)=\left\langle \phi(t) \wit{\phi}(s)\right\rangle$ in the context of Janssen-de Dominicis theory, where the response field $\wit{\phi}$ has a mass $\wit{\cal M}=-{\cal M}$, see e.g. \cite{Cugliandolo02,Henkel10} for details.\\ Furthermore, the physical relevant equations are {\em stochastic} Langevin equations, whose noise terms do break any interesting extended dynamical scale-invariance. However, one may identify a `deterministic part' which may be Schr\"odinger-invariant, such that the predictions (\ref{1.8}) remain valid even in the presence of noise \cite{Picone04}. 
This was rediscovered recently under the name of `time-dependent deformation of Schr\"odinger geometry' \cite{Nakayama10}.} For the case of logarithmic conformal galilean invariance \BEQ \label{1.9} G = G_0 |t|^{-2x}\exp\left[-2\frac{\vec{\gamma}\cdot\vec{r}}{t}\right] \;\;,\;\; H = \bigl( H_0 - 2 G_0 \ln |t|\bigr)\, |t|^{-2x} \exp\left[-2\frac{\vec{\gamma}\cdot\vec{r}}{t}\right] \EEQ together with the constraint $\vec{\gamma} :=\vec{\gamma}_1 = \vec{\gamma}_2$. Here, $G_0,H_0$ are again normalisation constants.\footnote{There is a so-called `exotic' central extension of $\mbox{\sc cga}(2)$ \cite{Lukierski06}, but the extension of the known two-point functions \cite{Bagchi09b,Bagchi09c,Martelli09} to the logarithmic version has not yet been attempted.} {}From the comparison of the results (\ref{1.8},\ref{1.9}) with the form (\ref{1.4}) of logarithmic conformal invariance, we see that logarithmic corrections to scaling are systematically present. As we shall show, this feature is a consequence of the assumption of time-translation-invariance, since the time-translation operator $X_{-1}=-\partial_t$ is contained in both algebras. On the other hand, from the point of view of non-equilibrium statistical physics, neither the Schr\"odinger nor the conformal Galilei algebra is a satisfactory choice for a dynamical symmetry, since time-translation-invariance can only hold true at a stationary state and hence eqs.~(\ref{1.5},\ref{1.6}) can only be valid in situations such as {\em equilibrium} critical dynamics. For non-equilibrium systems, it is more natural to leave out time-translations from the algebra altogether. An enormous variety of physical situations with a natural dynamical scaling is known to exist, although the associated stationary state(s), towards which the system is relaxing, need not be scale-invariant \cite{Henkel10}. We then arrive at the so-called {\em ageing algebra} $\mathfrak{age}(d) := \langle X_{0,1},Y_{\pm 1/2}^{(j)}, M_0, R_0^{(jk)}\rangle_{j,k=1,\ldots,d}\subset \mathfrak{sch}(d)$. We shall study the consequences of a logarithmic extension of ageing invariance. In section~2, we shall write down the generators of logarithmic ageing invariance and shall find the co-variant two-point functions in section~3. In section~4, we discuss some applications. In particular, we shall show that the scaling of the two-time autoresponse function in $1D$ critical directed percolation is well described in terms of logarithmic ageing invariance. We conclude in section~5. \section{Logarithmic extension of the ageing algebra $\mathfrak{age}(d)$} For definiteness, we consider the ageing algebra $\mathfrak{age}(d)\subset\mathfrak{sch}(d)$ of the Schr\"odinger algebra. The generators of the representation (\ref{1.5}) can in general be taken over, but with the important exception \BEQ \label{2.1} X_n = -t^{n+1}\partial_t - \frac{n+1}{2}t^n \vec{r}\cdot\vec{\nabla}_{\vec{r}} - \frac{\cal M}{4}(n+1)n t^{n-1} \vec{r}^2 - \frac{n+1}{2} x t^n - (n+1)n \xi t^n \EEQ where now $n\geq 0$ and (\ref{1.7}) remains valid.
In contrast to the representation (\ref{1.5}), we now have {\em two distinct} scaling dimensions $x$ and $\xi$, with important consequences on the form of the co-variant two-point functions \cite{Picone04,Henkel06a}, see also below.\footnote{If one assumes time-translation-invariance, the commutator $[X_1,X_{-1}]=2X_0$ leads to $\xi=0$ and one is back to (\ref{1.5}).} To simplify the discussion, we shall from now on concentrate on the temporal part $\langle \Psi(t_1,\vec{r})\Psi(t_2,\vec{r})\rangle$, the form of which is described by the two generators $X_{0,1}$, with the commutator $[X_1,X_0]=X_1$. At the end, the spatial part is easily added. We construct the logarithmic extension of $\mathfrak{age}(d)$, analogously to section~1, by considering two scaling operators, with {\em both} scaling dimensions $x$ and $\xi$ identical, and replacing \BEQ \label{2.2} x \mapsto \left(\matz{x}{x'}{0}{x}\right) \;\; , \;\; \xi \mapsto \left(\matz{\xi}{\xi'}{\xi''}{\xi}\right) \EEQ in eq.~(\ref{2.1}), the other generators (\ref{1.5}) being kept unchanged. Without restriction of generality, one can always achieve either a diagonal form (with $x'=0$) or a Jordan form (with $x'=1$) of the first matrix, but for the moment it is not yet clear if the second matrix in (\ref{2.2}) will have any particular structure. Setting $\vec{r}=\vec{0}$, we have from (\ref{2.1}) the two generators \BEQ \label{2.3} X_0 = - t\partial_t - \demi \left(\matz{x}{x'}{0}{x}\right) \;\; , \;\; X_1 = - t^2\partial_t - t \left(\matz{x+\xi}{x'+\xi'}{\xi''}{x+\xi}\right) \EEQ and we find $[X_1,X_0]=X_1 +\demi t \,x'\xi'' \left(\matz{-1}{0}{0}{1}\right) \stackrel{!}{=} X_1$. The condition $x' \xi''\stackrel{!}{=}0$ follows and we must distinguish two cases. \begin{enumerate} \item $x'=0$. The first matrix in (\ref{2.2}) is diagonal. In this situation, there are two distinct possibilities: (i) either the matrix $\left(\matz{\xi}{\xi'}{\xi''}{\xi}\right)\rar\left(\matz{\xi_+}{0}{0}{\xi_-}\right)$ is diagonalisable. We then have a pair of quasi-primary operators, with scaling dimensions $(x,\xi_+)$ and $(x,\xi_-)$. This reduces to the standard form of non-logarithmic ageing invariance \cite{Henkel06a}. Or else, (ii), the matrix $\left(\matz{\xi}{\xi'}{\xi''}{\xi}\right)\rar\left(\matz{\bar{\xi}}{1}{0}{\bar{\xi}}\right)$ reduces to a Jordan form. This is a special case of the situation considered below. \item $\xi''=0$. Both matrices in (\ref{2.2}) reduce simultaneously to a Jordan form. While one can always normalise such that either $x'=1$ or else $x'=0$, there is no obvious normalisation for $\xi'$. This is the main case which we shall study in the remainder of this paper. \end{enumerate} In conclusion: {\em without restriction of generality, we can set $\xi''=0$ in eqs.~(\ref{2.2},\ref{2.3}).} For illustration and completeness, we give an example of a logarithmically invariant Schr\"odinger equation. Consider the Schr\"odinger operator \BEQ {\cal S} := \left( 2{\cal M}\partial_t - \vec{\nabla}_{\vec{r}}^2 +\frac{2{\cal M}}{t} \left( x + \xi - \frac{d}{2}\right) \right) \left( \matz{0}{1}{0}{0}\right) \EEQ Using (\ref{2.3}) with the spatial parts restored, we have $[{\cal S},X_0]=-{\cal S}$ and $[{\cal S},X_1] = -2t {\cal S}$ and furthermore, $\cal S$ commutes with all other generators of $\mathfrak{age}(d)$. Therefore, the elements of $\mathfrak{age}(d)$ map any solution of ${\cal S}\left(\vekz{\psi}{\phi}\right)=\left(\vekz{0}{0}\right)$ to another solution of the same equation.
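
The little computation leading to the condition $x'\xi''=0$ is conveniently re-done by machine. A minimal Python (sympy) sketch -- an illustration, not part of the argument; the variable names are ours -- recomputes $[X_1,X_0]$ from (\ref{2.3}) acting on a two-component test function:
\begin{verbatim}
# Check of [X_1, X_0] = X_1 + (t/2) x' xi'' diag(-1,1) for the
# generators (2.3); a, b are the two matrices appearing in (2.3).
import sympy as sp

t = sp.symbols('t')
x, xp, xi, xip, xipp = sp.symbols("x x' xi xi' xi''")
v = sp.Matrix([sp.Function('f1')(t), sp.Function('f2')(t)])

a = sp.Matrix([[x, xp], [0, x]])
b = sp.Matrix([[x + xi, xp + xip], [xipp, x + xi]])

X0 = lambda u: -t*u.diff(t) - a*u/2
X1 = lambda u: -t**2*u.diff(t) - t*(b*u)

comm = X1(X0(v)) - X0(X1(v))
extra = sp.Rational(1, 2)*t*xp*xipp*sp.Matrix([[-1, 0], [0, 1]])*v
print((comm - X1(v) - extra).applyfunc(sp.simplify))  # -> Matrix([[0],[0]])
\end{verbatim}
The off-diagonal entries of the matrix part of $[X_1,X_0]-X_1$ cancel identically, and only the diagonal term proportional to $x'\xi''$ survives, in agreement with the text.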
\section{Two-point functions} Consider the following two-point functions, built from the components of quasi-primary operators of logarithmic ageing symmetry \begin{eqnarray} F = F(t_1, t_2) &:=& \left\langle \phi_1(t_1)\phi_2(t_2)\right\rangle \nonumber \\ G_{12} = G_{12}(t_1, t_2) &:=& \left\langle \phi_1(t_1)\psi_2(t_2)\right\rangle \nonumber \\ G_{21} = G_{21}(t_1, t_2) &:=& \left\langle \psi_1(t_1)\phi_2(t_2)\right\rangle \\ H = H(t_1, t_2) &:=& \left\langle \psi_1(t_1)\psi_2(t_2)\right\rangle \nonumber \end{eqnarray} Their co-variance under the representation (\ref{2.3}), with $\xi''=0$, is expressed by the conditions $\hat{X}_{0,1}F\stackrel{!}{=}0$,\ldots, where $\hat{X}_{0,1}$ stands for the extension of (\ref{2.3}) to two-body operators. This leads to the following system of eight equations for a set of four functions in two variables. \begin{eqnarray} \left[ t_1 \partial_1 + t_2\partial_2 +\demi(x_1+x_2) \right] F(t_1,t_2) &=& 0 \nonumber \\ \Bigl[ t_1^2 \partial_1 + t_2^2\partial_2 + (x_1+\xi_1) t_1 + (x_2+\xi_2) t_2 \Bigr] F(t_1,t_2) &=& 0 \nonumber \\[0.20cm] \left[ t_1 \partial_1 + t_2\partial_2 +\demi(x_1+x_2) \right] G_{12}(t_1,t_2) +\frac{x_2'}{2} F(t_1,t_2) &=& 0 \nonumber \\ \Bigl[ t_1^2 \partial_1 + t_2^2\partial_2 + (x_1+\xi_1) t_1 + (x_2+\xi_2) t_2 \Bigr] G_{12}(t_1,t_2) + (x_2'+\xi_2') t_2 F(t_1,t_2) &=& 0 \nonumber \\[0.20cm] \left[ t_1 \partial_1 + t_2\partial_2 +\demi(x_1+x_2) \right] G_{21}(t_1,t_2) +\frac{x_1'}{2} F(t_1,t_2) &=& 0 \nonumber \\ \Bigl[ t_1^2 \partial_1 + t_2^2\partial_2 + (x_1+\xi_1) t_1 + (x_2+\xi_2) t_2 \Bigr] G_{21}(t_1,t_2) + (x_1'+\xi_1') t_1 F(t_1,t_2) &=& 0 \label{3.2} \\[0.20cm] \left[ t_1 \partial_1 + t_2\partial_2 +\demi(x_1+x_2) \right] H(t_1,t_2) +\frac{x_1'}{2} G_{12}(t_1,t_2) +\frac{x_2'}{2} G_{21}(t_1,t_2) &=& 0 \nonumber \\ \Bigl[ t_1^2 \partial_1 + t_2^2\partial_2 + (x_1+\xi_1) t_1 + (x_2+\xi_2) t_2 \Bigr] H(t_1,t_2) & & \nonumber \\ + (x_1'+\xi_1') t_1 G_{12}(t_1,t_2)+ (x_2'+\xi_2') t_2 G_{21}(t_1,t_2) &=& 0 \nonumber \end{eqnarray} where $\partial_i=\partial/\partial t_i$. We expect a unique solution, up to normalisations. It is convenient to solve the system (\ref{3.2}) via the ansatz, with $y:=t_1/t_2$ \begin{eqnarray} F(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} f(y) \nonumber \\ G_{12}(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \sum_{j\in\mathbb{Z}} \ln^j t_2 \cdot g_{12,j}(y) \nonumber \\ G_{21}(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \sum_{j\in\mathbb{Z}} \ln^j t_2 \cdot g_{21,j}(y) \label{3.3} \\ H(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \sum_{j\in\mathbb{Z}} \ln^j t_2 \cdot h_{j}(y) \nonumber \end{eqnarray} {\bf 1.} The function $F$ does not contain any logarithmic contributions and its scaling function satisfies the equation $f'(y)=0$, hence \BEQ \label{3.4} f(y)=f_0=\mbox{\rm const.} \EEQ This reproduces the well-known form of non-logarithmic local scaling \cite{Henkel06a}.
Comparing this with the usual form (\ref{R}) of standard {\sc lsi} with $z=2$, the ageing exponents $a,a',\lambda_R$ are related to the scaling dimensions as follows: \BEQ a =\demi(x_1+x_2) -1 \;\; , \;\; a'-a = \xi_1 + \xi_2 \;\; , \;\; \lambda_R = 2 (x_1 +\xi_1) \EEQ For example, the exactly solvable $1D$ kinetic Ising model with Glauber dynamics at zero temperature \cite{Godreche00a} satisfies (\ref{R}) with the values $a=0, a'-a=-\demi, \lambda_R=1, z=2$. Further examples of systems with $a'-a\ne 0$ are given by the non-equilibrium critical dynamics of the kinetic Ising model with Glauber dynamics, both for $d=2$ and $d=3$ \cite{Henkel06a,Henkel10}. \noindent {\bf 2.} Next, we turn to the function $G_{12}$. Co-variance under $X_0$ leads to the condition \BEQ \left( g_{12,1}(y)+\demi x_2' f(y)\right) + \sum_{j\ne 0} (j+1)\ln^j t_2 \cdot g_{12,j+1}(y) = 0 \EEQ which must hold true for all times $t_2$. This implies \BEQ g_{12,1}(y) = - \demi x_2' f(y) \;\; , \;\; g_{12,j}(y) = 0 \mbox{\rm ~~;~ $\forall j\ne 0,1$} \EEQ In order to simplify the notation for later use, we set \BEQ \label{3.9} g_{12}(y) := g_{12,0}(y) \;\; , \;\; \gamma_{12}(y) := g_{12,1}(y) = -\demi x_2' f(y) \EEQ and these two give the only non-vanishing contributions in the ansatz (\ref{3.3}). Furthermore, the last remaining function $g_{12}$ is found from the co-variance under $X_1$, which gives \BEQ \sum_{j\in\mathbb{Z}} \ln^j t_2 \Bigl( y(y-1) g_{12,j}'(y) + (j+1) g_{12,j+1}(y) \Bigr) + (x_2'+\xi_2') f(y) = 0 \EEQ for all times $t_2$. Combining the resulting two equations for $g_{12}$ and $\gamma_{12}$ with (\ref{3.9}) leads to \BEQ \label{3.11} y(y-1) g_{12}'(y) + \left(\frac{x_2'}{2} +\xi_2'\right) f(y) = 0 \EEQ {\bf 3.} The function $G_{21}$ is treated similarly. We find \BEQ \label{3.12} g_{21}(y) := g_{21,0}(y) \;\; , \;\; \gamma_{21}(y) := g_{21,1}(y) = -\demi x_1' f(y) \;\; , \;\; g_{21,j}(y)=0 \mbox{\rm ~~;~ for all $j\ne 0,1$} \EEQ and the differential equation \BEQ \label{3.13} y(y-1) g_{21}'(y) + \left(x_1' +\xi_1'\right) yf(y) -\demi x_1' f(y) = 0 \EEQ {\bf 4.} Finally, dilatation-covariance of the function $H$ leads to $h_j(y)=0$ for all $j\ne 0,1,2$ and \begin{eqnarray} h_1(y) &=& -\demi \bigl( x_1' g_{12}(y) + x_2' g_{21}(y) \bigr) \nonumber \\ h_2(y) &=& \frac{1}{4} x_1' x_2' f(y) \label{3.14} \end{eqnarray} The last remaining function $h_0(y)$ is found from co-variance under $X_1$ which leads to \BEQ y(y-1) h_0'(y) + \left(\left(x_1' +\xi_1'\right)y - \demi x_1'\right)g_{12}(y) + \left( \demi x_2' + \xi_2'\right) g_{21}(y) = 0 \label{3.15} \EEQ Using (\ref{3.4}), the equations (\ref{3.11},\ref{3.13},\ref{3.15}) are readily solved and we find \begin{eqnarray} g_{12}(y) &=& g_{12,0} +\left(\frac{x_2'}{2}+\xi_2'\right) f_0 \ln \left|\frac{y}{y-1}\right| \nonumber \\ g_{21}(y) &=& g_{21,0} -\left(\frac{x_1'}{2}+\xi_1'\right) f_0 \ln |y-1| - \frac{x_1'}{2} f_0 \ln |y| \nonumber \\ h_0(y) &=& h_0 - \left[ \left(\frac{x_1'}{2}+\xi_1'\right)g_{21,0} + \left(\frac{x_2'}{2}+\xi_2'\right)g_{12,0}\right]\ln|y-1| - \left[ \frac{x_1'}{2} g_{21,0} - \left(\frac{x_2'}{2}+\xi_2'\right)g_{12,0}\right]\ln|y| \nonumber \\ & & + \demi f_0 \left[ \left( \left(\frac{x_1'}{2} +\xi_1'\right)\ln |y-1| + \frac{x_1'}{2}\ln |y|\right)^2 - \left(\frac{x_2'}{2} +\xi_2'\right)^2 \ln^2\left|\frac{y}{y-1}\right| \right] \label{3.16} \end{eqnarray} where $f_0, g_{12,0}, g_{21,0}, h_0$ are normalisation constants.
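
As an elementary cross-check, the expressions for $g_{12}$ and $g_{21}$ may be substituted back into (\ref{3.11}) and (\ref{3.13}). The following Python (sympy) sketch -- an illustration only, restricted to $y>1$ so that the absolute values may be dropped, and with $f(y)=f_0$ from (\ref{3.4}) -- confirms that both equations are satisfied identically:
\begin{verbatim}
# Substitute the solutions (3.16) for g_12 and g_21 back into
# the differential equations (3.11) and (3.13), with f(y) = f_0.
import sympy as sp

y = sp.symbols('y', positive=True)
f0, x1p, x2p, xi1p, xi2p, g120, g210 = \
    sp.symbols("f_0 x_1' x_2' xi_1' xi_2' g_{12,0} g_{21,0}")

g12 = g120 + (x2p/2 + xi2p)*f0*sp.log(y/(y - 1))
g21 = g210 - (x1p/2 + xi1p)*f0*sp.log(y - 1) - (x1p/2)*f0*sp.log(y)

eq311 = y*(y - 1)*sp.diff(g12, y) + (x2p/2 + xi2p)*f0
eq313 = y*(y - 1)*sp.diff(g21, y) + (x1p + xi1p)*y*f0 - (x1p/2)*f0

print(sp.simplify(eq311), sp.simplify(eq313))   # -> 0 0
\end{verbatim}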
We summarise our results: \begin{eqnarray} F(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} f_0 \nonumber \\ G_{12}(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \Bigl( g_{12}(y) + \ln t_2 \cdot \gamma_{12}(y) \Bigr) \nonumber \\ G_{21}(t_1,t_2) &=& t_2^{-(x_1+x_2)/2}\: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \Bigl( g_{21}(y) + \ln t_2 \cdot \gamma_{21}(y) \Bigr) \nonumber \\ H(t_1,t_2) &=& t_2^{-(x_1+x_2)/2} \: y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \label{3.17} \\ & & \times \Bigl( h_0(y) + \ln t_2 \cdot h_1(y) +\ln^2 t_2 \cdot h_2(y) \Bigr) \nonumber \end{eqnarray} where the scaling functions, depending only on $y=t_1/t_2$, are given by eqs.~(\ref{3.9},\ref{3.12},\ref{3.14},\ref{3.16}). \\ Although the algebra $\mathfrak{age}(d)$ was written down for a dynamic exponent $z=2$, the space-independent part of the two-point functions is essentially independent of this feature. The change $(x,x',\xi,\xi')\mapsto \bigl((2/z) x, (2/z) x', (2/z) \xi, (2/z)\xi'\bigr)$ in eq.~(\ref{3.17}) produces the form valid for an arbitrary dynamical exponent $z$. Since for $z=2$, the space-dependent part of the generators is not affected by the passage to the logarithmic theory via the substitution (\ref{2.2}), we recover the same space-dependence as for the non-logarithmic theory with $z=2$. For example, \begin{eqnarray} F(t_1,t_2;\vec{r}_1,\vec{r}_2) &=& \delta({\cal M}_1+{\cal M}_2)\,\Theta(t_1-t_2) \, t_2^{-(x_1+x_2)/2} f_0 \nonumber \\ & & \times y^{\xi_2 +(x_2-x_1)/2} (y-1)^{-(x_1+x_2)/2-\xi_1-\xi_2} \exp\left[ -\frac{{\cal M}_1}{2} \frac{(\vec{r}_1-\vec{r}_2)^2}{t_1-t_2}\right] \label{3.18} \end{eqnarray} where we also included the causality condition $t_1>t_2$, expressed by the Heaviside function $\Theta$, which can be derived using the methods of \cite{Henkel03a}. Similar forms hold true for $G_{12}, G_{21}, H$. Comparison with the result (\ref{1.8}) of logarithmic Schr\"odinger-invariance shows: \begin{enumerate} \item logarithmic contributions, either as corrections to the scaling behaviour via additional powers of $\ln t_2$, or else in the scaling functions themselves, may be described independently in terms of the parameter sets $(x_1',x_2')$ and $(\xi_1',\xi_2')$. In particular, one may choose to introduce the logarithmic structure only through a single one of the two generators $X_0$ and $X_1$. \item If one sets $x_1'=x_2'=0$, the scaling functions $g_{12}, g_{21}$ and $h_0$ contain logarithmic terms, although there is no predicted logarithmic breaking of scaling, in contrast to what occurs in logarithmic conformal invariance or logarithmic Schr\"odinger invariance. \item The constraint $F=0$ of both logarithmic conformal invariance and logarithmic Schr\"odinger invariance is no longer required. \item If time-translation-invariance is assumed, one has $\xi_1=\xi_2=\xi_1'=\xi_2'=0$, $x_1=x_2$ and $f_0=0$. The functional form of eqs.~(\ref{3.17},\ref{3.18}) then reduces to the Schr\"odinger-invariant forms of eq.~(\ref{1.8}). \end{enumerate} \section{Applications} \subsection{Directed percolation} It is well-understood that critical $2D$ percolation can be described in terms of conformal invariance \cite{Langlands94}. Notably, Cardy \cite{Cardy92} and Watts \cite{Watts96} used conformal invariance to derive their celebrated formul{\ae} for the crossing probabilities.
More recently, it has been shown that a precise formulation of the conformal invariance methods required in their derivations actually leads to a logarithmic conformal field theory \cite{Mathieu07}. Since {\em directed} percolation is in many respects quite analogous to ordinary percolation, we raise the question: {\em can one describe dynamical scaling properties of critical directed percolation in terms of logarithmic ageing invariance~?} The directed percolation universality class can be realised in many different ways, with often-used examples being either the contact process or else Reggeon field theory, and very precise estimates of the location of the critical point and the critical exponents are known, see \cite{Hinrichsen00,Odor04,Henkel09} and references therein, in agreement with extensive recent experiments \cite{Takeuchi09}. In the contact process, a response function can be defined by considering the response of the time-dependent particle concentration with respect to a time-dependent particle-production rate. The relaxation from an initial state is in many respects quite analogous to what is seen in systems with an equilibrium stationary state \cite{Enss04,Ramasco04,Baumann07}. In figure~\ref{fig1}, we show data of the autoresponse function $R(t,s)=s^{-1-a} f_R(t/s)$ of $1D$ critical directed percolation, realised here by the contact process. The initial state contains uncorrelated particles at a finite density. The data are obtained from the transfer matrix renormalisation group ({\sc tmrg}) which are considerably more precise than data obtained from a Monte Carlo simulation, subject to stochastic uncertainties \cite{Enss04,Enss}. Aspects of local scaling can be emphasised by plotting the function \BEQ h_R(y) := f_R(y) y^{\lambda_R/z} (1-y^{-1})^{1+a} \EEQ against $y=t/s$, with the exponents taken from \cite{Henkel09}. We observe an excellent collapse of the data when $y$ is large enough, but we also see that finite-time corrections to dynamical scaling arise when $y\to 1$, the precise form of which depends on the waiting time $s$. Starting from large values of $y$, and proceeding towards $y\to 1$, a description of the data in terms of local scale-invariance increasingly needs to take finer points into account. First, in its most simple form, one would na\"{\i}vely assume $a=a'$, in which case (\ref{R}) predicts a horizontal line in this plot. Indeed, this describes the data down to $y=t/s\approx 3-4$, but fails when $y$ becomes smaller. We had tried earlier \cite{Henkel06a} to take these deviations into account by admitting that $a$ and $a'$ can be different. This is equivalent to the assumption that $\xi+\wit{\xi}\ne0$ and describes the data well down to $t/s \approx 1.1$. However, further systematic deviations exist when $t/s$ is yet closer to unity. For the values of $s$ used in figure~\ref{fig1}, it is clear that one is still in the dynamical scaling regime and an explanation in terms of a more general form of the scaling function should be sought. \begin{figure}[tb] \centerline{\psfig{figure=vieuxlog_abb1.eps,width=4.75in,clip=}} \caption{\label{fig1} Scaling of the autoresponse $R(t,s)=s^{-1-a}f_R(t/s)$ of the $1D$ critical contact process, as a function of $y=t/s$, for several values of the waiting time $s$, and indicated by the dash-dotted lines.
The dashed line labelled `{\sc lsi}' gives the prediction for the scaling function $h_R(y) = f_R(y) y^{\lambda_R/z} (1-1/y)^{1+a}$ as obtained from standard, non-logarithmic local scale-invariance (\ref{R}), with $a'-a=0.26$. The full curve labelled `{\sc lsi} loga' is the prediction (\ref{4.2}) of logarithmic local scale-invariance with $\xi'=0$, as described in the text.} \end{figure} We now try to explain the {\sc tmrg} data in figure~\ref{fig1} in terms of logarithmic ageing invariance (extended to an arbitrary dynamical exponent $z$ as outlined above). We make the working hypothesis $R(t,s) = \langle \psi(t)\wit{\psi}(s)\rangle$, where the two scaling operators $\psi$ and $\wit{\psi}$ are described by the logarithmically extended scaling dimensions \BEQ \left(\matz{x}{x'}{0}{x}\right) \;\; , \;\; \left(\matz{\xi}{\xi'}{0}{\xi}\right) \;\; \mbox{\rm ~~and~~} \;\; \left(\matz{\wit{x}}{\wit{x}'}{0}{\wit{x}}\right) \;\; , \;\; \left(\matz{\wit{\xi}}{\wit{\xi}'}{0}{\wit{\xi}}\right) \EEQ In principle, one might have logarithmic corrections to scaling, according to eq.~(\ref{3.17}). Because of the excellent scaling behaviour seen in figure~\ref{fig1}, we conclude that logarithmic corrections are absent in the data. Hence the two functions $h_{1,2}(y)$ must vanish. Because of eq.~(\ref{3.14}), this means that $x'=\wit{x}'=0$. Then logarithmic ageing invariance (\ref{3.16}) predicts \begin{eqnarray} h_R(y) &=& \left( 1 - \frac{1}{y}\right)^{a-a'} \left( h_0 - g_{12,0} \wit{\xi}' \ln (1-1/y) - \demi f_0 \wit{\xi}'^2 \ln^2 (1-1/y) \right. \nonumber \\ & & \left. ~~~~~~- g_{21,0} \xi' \ln (y-1) + \demi f_0 \xi'^2 \ln^2 (y-1) \right) \label{4.2} \end{eqnarray} Since for $y$ sufficiently large, the numerically observed scaling function $h_R(y)$ becomes essentially constant, we conclude that there are no leading logarithmic contributions in the $y\to\infty$ asymptotic behaviour, hence $\xi'=0$ and the second line in (\ref{4.2}) vanishes. Hence we arrive at the following phenomenological form $h_R(y) = h_0 (1-1/y)^{a-a'} \left( 1 - A \ln(1-1/y) - B \ln^2 (1-1/y)\right)$, with the normalisation constant $h_0$ and where the universal parameters $A,B$ and the exponent $a-a'$ must be determined from the data. The full curve in figure~\ref{fig1} shows to what extent this describes the available data, with the chosen values \BEQ a'-a \simeq 0.174 \;\; , \;\; A \simeq 0.13 \;\; , \;\; B \simeq 0.0168 \;\; , \;\; h_0 \simeq 0.0888 \EEQ Indeed, the chosen form gives a full account of the {\sc tmrg} data, down to $t/s-1\approx 2\cdot 10^{-3}$, about two orders of magnitude smaller than for non-logarithmic local scale-invariance. This is about the size of the region where dynamical scaling has been confirmed through the collapse of data with different values of $s$. However, we also point out that the functional form of $h_R(y)$ may depend quite sensitively on the values of the several parameters such that error bars in $a'-a$, $A,B$ are of the order of at least $25\%$. In particular, the {\sc tmrg} data suggest that the logarithmic nature of the scaling operator should merely enter via the response field, and only through the consideration of the `special' transformation $X_1$, since $x'=\wit{x}'=\xi'=0$ and $\wit{\xi}'$ is the only quantity which describes the departure from the standard non-logarithmic scaling. We observe that the present estimate $a'-a=0.17(5)$ is considerably smaller than our earlier estimate $a'-a\approx 0.27$.
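
For convenience we note that the fitted phenomenological form is straightforward to evaluate; a minimal Python sketch (assuming \texttt{numpy}; the function name is ours, and the parameter values are those quoted above, which carry the error bars just discussed) reads:
\begin{verbatim}
# Fitted form h_R(y) = h0 (1-1/y)^{a-a'} (1 - A u - B u^2),  u = ln(1-1/y).
import numpy as np

def h_R(y, h0=0.0888, da=0.174, A=0.13, B=0.0168):
    """da = a' - a; valid for y > 1."""
    u = np.log(1.0 - 1.0/y)
    return h0 * (1.0 - 1.0/y)**(-da) * (1.0 - A*u - B*u*u)

for y in (1.002, 1.1, 2.0, 10.0, 100.0):
    print(y, h_R(y))   # approaches the plateau h0 ~ 0.0888 as y -> infinity
\end{verbatim}
As $y\to\infty$ one recovers the plateau $h_R(y)\to h_0$, while for $y\to1^+$ within the fitted range the two logarithmic terms produce the slow rise seen in figure~\ref{fig1}.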
This is the first time that a theory could be formulated which describes the autoresponse in the entire range of the scaling variable, $2\cdot 10^{-3} \lesssim y-1 \leq \infty$. We point out that existing field-theoretical methods based on the $\varepsilon$-expansion \cite{Baumann07} obtain reliable results only in the opposite case $y\gg 1$, notably on non-equilibrium exponents and universal amplitudes. \subsection{Logarithmic scaling forms} In the ageing of several magnetic systems, such as the $2D$ XY model quenched from a fully disordered initial state to a temperature $T<T_{\rm KT}$ below the Kosterlitz-Thouless transition temperature \cite{Bray00,Berthier01,Abriet04a} or fully frustrated spin systems quenched onto their critical point \cite{Walter08,Karsai09}, the following phenomenological scaling behaviour \BEQ \label{4.4} R(t,s) = s^{-1-a} f_R\left( \frac{t}{\ln t} \frac{\ln s}{s}\right) \EEQ has been found to describe the simulational data well. Could this scaling form be explained within the context of logarithmic ageing invariance~? {\it Alas}, this question has to be answered in the negative. If one fixes $y=t/s$ and expands the quotient $\ln s/\ln t = \ln s/(\ln y + \ln s)$ for $s\to\infty$, eq.~(\ref{4.4}) leads to the following generic scaling behaviour \BEQ \label{s} R(t,s) = s^{-1-a} \sum_{k,\ell} f_{k,\ell} \: y^k \left( \frac{\ln y}{\ln s}\right)^{\ell} \EEQ Comparison with the explicit scaling forms derived in section~3 shows that there arise only combinations of the form $\ln^n y \cdot \ln^m s$ or $\ln^n (y-1) \cdot \ln^m s$, where the integers $n,m$ must satisfy $0\leq n+m\leq 2$. This is incompatible with (\ref{s}). In conclusion, the logarithmic scaling form (\ref{4.4}) cannot be understood in terms of logarithmic ageing invariance, as presently formulated. \section{Conclusions} We have discussed the extension of dynamical scaling towards local scale-invariance in the case when the physical scaling operator acquires a partner with the same scaling dimension. Since in far-from-equilibrium relaxation, time-translation-invariance does not hold, one cannot appeal directly to the known cases of logarithmic conformal and Schr\"odinger-invariance. Indeed, analogously to the non-logarithmic case, the doublets of scaling operators are described by {\em pairs} of Jordan matrices of scaling dimensions. When computing the co-variant two-point functions, the absence of time-translation-invariance allows one to include, independently, logarithmic corrections to scaling and also non-trivial modifications of the scaling functions, see eqs.~(\ref{3.16},\ref{3.17}). This generalises the forms found from logarithmic conformal or Schr\"odinger-invariance \cite{Hosseiny10}. Motivated by the fact that important properties of ordinary $2D$ critical percolation can be understood in terms of {\em logarithmic} conformal invariance \cite{Mathieu07}, we have re-analysed the autoresponse $R(t,s)$ of critical $1D$ directed percolation in terms of our logarithmic extension of local scale-invariance. The available data suggest that at least this observable behaves as if directed percolation were described by logarithmic local scale-invariance (with an obvious generalisation to $z\ne 2$). Of course, further independent tests of this possibility are required.
Since logarithmic conformal invariance also arises in disordered systems at equilibrium, it would be of interest to see whether logarithmic local scale-invariance could help in improving the understanding of the relaxation processes of disordered systems far from equilibrium, see e.g. \cite{Paul04,Henkel08,Loureiro10,Park10}. \noindent {\bf Acknowledgement:} I thank T. Enss for the {\sc tmrg} data, M. Pleimling for useful correspondence and the Departamento de F\'{\i}sica da Universidade de Aveiro for warm hospitality.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this work, we continue our study of the category of quantum liquids started in \cite{KZ20b}. Quantum liquids are quantum phases that only ``softly'' depend on the local geometry of spacetime. They include the usual spontaneous symmetry-breaking phases, topological orders, symmetry protected/enriched topological (SPT/SET) orders and CFT-type gapless phases. We have shown in \cite{KZ20b} that the mathematical description of a quantum liquid consists of two parts of data: the local quantum symmetry and the topological skeleton. In \cite{KZ20b}, we focused on the topological skeletons and explicitly computed the category $\mathcal{QL}_\mathrm{sk}^n$ of the topological skeletons of $n$D quantum liquids. In this work, we combine local quantum symmetries with topological skeletons into a single mathematical theory of topological nets and defect nets. This theory provides a rather complete mathematical description of quantum liquids in all dimensions. Throughout this work, $n$D represents the spacetime dimension. \smallskip In 2D CFT's, the local quantum symmetries are given by vertex operator algebras (VOA) or their non-chiral analogues. It is not known how to generalize VOA directly to higher dimensions. However, there is an alternative formulation of 2D CFT's in terms of conformal nets (see for example \cite{BGL93,BMT88,BSM90,GF93,KL04,KLM01,LR95,Reh00a,Reh00b,Was95} and references therein). The idea of a conformal net originated from algebraic quantum field theory (defined in all dimensions), which is based on Haag-Kastler nets, i.e.\ the nets of local observables \cite{HK64} (see \cite{Haa92} for a review). Therefore, it is natural to ask how to generalize the notion of a conformal net for the study of quantum liquids in all dimensions. Moreover, from the lattice model realization of topological orders, it seems that the ideas of the net of local observables and the superselection sectors of particles work better in lattice models than in quantum gauge field theories (see for example \cite{Kit03,KK12,KWZ21}). This motivates us to introduce the notion of a topological net, which includes that of a conformal net as a special case. We also generalize Bartels, Douglas and Henriques' theory of defects\footnote{Certain defects in conformal nets were studied much earlier than \cite{BDH19a} under the name of `solitons' (see for example \cite{BE98,Kaw02,LR95,LX04}).} in conformal nets and their fusion \cite{BDH19a} to the theory of defect nets and their fusion. In this work, we show that all finite topological $n$-nets, together with their higher codimensional finite defects, form a symmetric monoidal $*$-$(n+1)$-category $\mathcal{N}et^n$. Then we can define a subcategory $\mathcal{LQS}^n$ of $\mathcal{N}et^n$. The category of $n$D quantum liquids, denoted by $\mathcal{QL}^n$, can be obtained from $\mathcal{LQS}^n$ by a (co)slice construction. The category $\mathcal{QL}^n$ is equipped with two forgetful functors $\mathcal{QL}^n \to \mathcal{LQS}^n$ and $\mathcal{QL}^n \to \mathcal{QL}_\mathrm{sk}$, whose images are precisely the local quantum symmetry and the topological skeleton, respectively. One of the main results of this work says that $\mathcal{LQS}^n$ is $*$-condensation-complete (see Theorem \ref{thm:lqs-cc}). It further implies $\mathcal{LQS}^n\simeq (n+1)\mathrm{Hilb}$ and $\mathcal{QL}^n\simeq\mathcal{QL}_\mathrm{sk}^n$, both of which are naturally required by physics and thus provide a necessary consistency check of our theory.
\smallskip We provide the layout of this paper. In Section \ref{sec:topological-net}, we introduce the notion of a topological net; in Section \ref{sec:defect-net}, we introduce that of a defect net; in Section \ref{sec:net-oss}, we construct from every finite group a topological $n$-net that describes the local quantum symmetry of $n$D gapped quantum liquids with a finite onsite symmetry; in Section \ref{sec:lw-net}, we provide an explicit construction of topological nets and defect nets to describe Levin-Wen models and their boundaries. In Section \ref{sec:netn}, we introduce the category $\mathcal{N}et^n$ and postpone the complete construction (mainly the composition of higher morphisms) of $\mathcal{N}et^n$ to Section \ref{sec:fus-def}. In Section \ref{sec:construct-QL}, we give the construction of $\mathcal{LQS}^n$ and $\mathcal{QL}^n$, state the main result Theorem \ref{thm:lqs-cc} and discuss its consequences. In Section \ref{sec:transparent-wall}, we show how to extract the information of local quantum symmetries from a tower of subcategories of $\mathcal{LQS}^n$ based on the so-called transparent domain walls. Section \ref{sec:fus-def} is devoted to defining the fusion of defect nets and completing the construction of $\mathcal{N}et^n$. In Section \ref{sec:condense-net}, we sketch the condensation theory of topological nets. This theory provides a proof of Theorem \ref{thm:lqs-cc}. In Appendix \ref{sec:condense-vn}, we briefly review Lurie's formulation of Connes fusion and prove the $*$-condensation completeness of the $*$-2-category of von Neumann algebras and bimodules. \medskip We assume the reader is familiar with the fundamentals of von Neumann algebras, such as standard form and Connes fusion of bimodules. We say that a bimodule (including a left or right module as a special case) over von Neumann algebras is semisimple if it is a finite direct sum of irreducible ones. We work in the $*$-setting. Functors between $*$-$n$-categories are silently assumed to be $*$-functors. Recall that $\mathrm{Hilb}$ denotes the unitary symmetric monoidal 1-category of finite-dimensional Hilbert spaces and linear maps. We use $\widehat\mathrm{Hilb}$ to denote the symmetric monoidal $*$-1-category of all (possibly inseparable) Hilbert spaces and bounded linear maps. Given a unitary 1-category $\mathcal{C}$, we use $\hat\mathcal{C}$ to denote $\mathcal{C}\boxtimes_\mathrm{Hilb}\widehat\mathrm{Hilb}$ and refer to it as the {\em completion} of $\mathcal{C}$. By definition $\hat\mathcal{C}$ is a finite direct sum of $\widehat\mathrm{Hilb}$. \medskip \noindent{\bf Acknowledgments}: HZ would like to thank Zhengwei Liu for helpful discussions on von Neumann algebras. LK is supported by NSFC under Grant No. 11971219, and by Guangdong Provincial Key Laboratory (Grant No. 2019B121203002) and by Guangdong Basic and Applied Basic Research Foundation under Grant No. 2020B1515120100. HZ is supported by NSFC under Grant No. 11871078. \section{Topological nets} The notion of a topological net is a generalization of a conformal net. By dropping the continuity of diffeomorphism covariance, we obtain a theory unifying conformal symmetries of gapless phases and onsite symmetries of gapped phases. \subsection{Topological nets} \label{sec:topological-net} We adopt the standard definition of the $n$-sphere $$S^n=\{ (x_0,x_1,\dots,x_n) \mid \sum x_i^2=1 \}.$$ Let $S^n_\uparrow$ denote the upper half of $S^n$ defined by $x_0\ge0$ and let $S^n_\downarrow$ denote the lower half of $S^n$ defined by $x_0\le0$.
A {\em disk region} of the $n$-sphere $S^n$ is a closed region that is diffeomorphic to the $n$-disk $D^n$. Let $\Disk^n$ denote the 1-category of disk regions of $S^n$ whose morphisms are inclusions of disk regions. Let $\VN$ denote the 1-category of von Neumann algebras whose morphisms are (unital, normal) homomorphisms of von Neumann algebras. \begin{defn} An {\em $n$-dimensional net of von Neumann algebras}, or an {\em $n$-net} for short, is a functor $\mathcal{A}:\Disk^{n-1}\to\VN$. A {\em partial $n$-net} is a functor $\mathcal{A}:\mathcal{D}\to\VN$ where $\mathcal{D}$ is a full subcategory of $\Disk^{n-1}$. \end{defn} \begin{rem} Unwinding the definition, we see that a partial $n$-net $\mathcal{A}:\mathcal{D}\to\VN$ consists of a family of von Neumann algebras $\mathcal{A}(I)$ for $I\in\mathcal{D}$ and a family of homomorphisms $\mathcal{A}(I\subset J):\mathcal{A}(I)\to\mathcal{A}(J)$ such that $\mathcal{A}(I\subset I) = \Id_{\mathcal{A}(I)}$ and $\mathcal{A}(J\subset K)\circ\mathcal{A}(I\subset J) = \mathcal{A}(I\subset K)$. The von Neumann algebras $\mathcal{A}(I)$ are referred to as {\em local observable algebras}. \end{rem} \begin{rem} We do not require the isotony of a partial $n$-net, that is, $\mathcal{A}(I)$ may not be a subalgebra of $\mathcal{A}(J)$ for $I\subset J$. The isotony of a topological $n$-net follows from other axioms (see Remark \ref{rem:tn}(2)). This relaxation is necessary for defining a zero defect $n$-net (see Example \ref{exam:tn2dn}). \end{rem} The {\em direct sum} of two $n$-nets $\mathcal{A},\mathcal{B}$ is the $n$-net $\mathcal{A}\oplus\mathcal{B}$ defined by $$(\mathcal{A}\oplus\mathcal{B})(I) = \mathcal{A}(I)\oplus\mathcal{B}(I).$$ The {\em tensor product} of $\mathcal{A},\mathcal{B}$ is the $n$-net $\mathcal{A}\boxtimes\mathcal{B}$ defined by $$(\mathcal{A}\boxtimes\mathcal{B})(I) = \mathcal{A}(I)\bar\otimes\mathcal{B}(I).$$ The diffeomorphism group $\Diff(S^{n-1})$ acts on the collection of partial $n$-nets by the formula $(h^*\mathcal{A})(I) = \mathcal{A}(h(I))$ if $h$ preserves orientation, and $(h^*\mathcal{A})(I) = \mathcal{A}(h(I))^\mathrm{op}$ otherwise. \begin{defn} \label{defn:sector} A {\em sector} of a partial $n$-net $\mathcal{A}:\mathcal{D}\to\VN$ is a Hilbert space $\mathcal{H}$ equipped with a family of homomorphisms $\rho_I:\mathcal{A}(I)\to\mathcal{L}(\mathcal{H})$ for $I\in\mathcal{D}$ such that $\rho_J\circ\mathcal{A}(I\subset J) = \rho_I$ for all inclusions $I\subset J$. We say that $\mathcal{H}$ is {\em irreducible} if it is neither zero nor the direct sum of two nonzero sectors; {\em semisimple} if it is a finite direct sum of irreducible sectors. A {\em homomorphism} $f:\mathcal{H}\to\mathcal{K}$ between two sectors of $\mathcal{A}$ is a bounded linear map that is a left $\mathcal{A}(I)$-module map for all $I\in\mathcal{D}$. We use $\Sect(\mathcal{A})$ to denote the $*$-1-category of sectors of $\mathcal{A}$. \end{defn} Given a sector $\mathcal{H}$ of a partial $n$-net $\mathcal{A}:\mathcal{D}\to\VN$ and a closed region $R\subset S^{n-1}$ with smooth boundary, we use $\mathcal{O}_\mathcal{A}(\mathcal{H},R)$ or $\mathcal{O}(\mathcal{H},R)$ to denote the von Neumann algebra on $\mathcal{H}$ generated by the images of $\mathcal{A}(I)$ for all $R\supset I\in\mathcal{D}$. In particular, we denote $\mathcal{O}_\mathcal{A}(\mathcal{H},S^{n-1})$ by $\mathcal{O}_\mathcal{A}(\mathcal{H})$ or $\mathcal{O}(\mathcal{H})$.
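Concretely, since each image $\rho_I(\mathcal{A}(I))$ is a $*$-subalgebra of $\mathcal{L}(\mathcal{H})$, ``generated by'' can be unpacked via the double commutant:
$$\mathcal{O}_\mathcal{A}(\mathcal{H},R) = \Bigl(\, \bigcup_{R\supset I\in\mathcal{D}} \rho_I(\mathcal{A}(I)) \Bigr)'' \subset \mathcal{L}(\mathcal{H}),$$
where $(-)'$ denotes the commutant in $\mathcal{L}(\mathcal{H})$.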
\begin{rem} A sector $\mathcal{H}$ of a partial $n$-net is irreducible if and only if $\mathcal{H}$ is an irreducible left $\mathcal{O}(\mathcal{H})$-module, i.e. $\mathcal{O}(\mathcal{H})=\mathcal{L}(\mathcal{H})$. Similarly, $\mathcal{H}$ is semisimple if and only if $\mathcal{H}$ is a semisimple left $\mathcal{O}(\mathcal{H})$-module. If $\mathcal{H}$ is semisimple, $\mathcal{O}(\mathcal{H})$ is a finite direct sum of type I factors. \end{rem} \begin{defn} \label{defn:top-net} A {\em topological $n$-net} consists of the following data: \begin{itemize} \item an $n$-net $\mathcal{A}:\Disk^{n-1}\to\VN$; \item a sector $\mathcal{H}_\mathcal{A}$ of $\mathcal{A}$, called the {\em vacuum sector}; \item a natural isomorphism $\eta_h:\mathcal{A}\to h^*\mathcal{A}$ for every $h\in\Diff(S^{n-1})$ such that $h^*(\eta_g) \circ \eta_h = \eta_{g\circ h}$. \end{itemize} These data are subject to the following axioms where $I,J,K\in\Disk^{n-1}$: \begin{itemize} \item {\em Locality:} If $I,J\subset K$ have disjoint interiors, then the images of $\mathcal{A}(I)$ and $\mathcal{A}(J)$ are commuting subalgebras of $\mathcal{A}(K)$. \item {\em Additivity:} If $I,J\subset K$ and if the interiors of $I$ and $J$ in $K$ cover $K$, then $\mathcal{A}(K)$ is generated by the images of $\mathcal{A}(I)$ and $\mathcal{A}(J)$. \item {\em Covariance:} For every orientation-preserving (resp. orientation-reversing) diffeomorphism $h\in\Diff(S^{n-1})$ there exists an isometry $\alpha_h:\mathcal{H}_\mathcal{A}\to\mathcal{H}_\mathcal{A}$ (resp. $\alpha_h:\mathcal{H}_\mathcal{A}\to\bar\mathcal{H}_\mathcal{A}$) rendering the following diagram commutative: $$\xymatrix{ \mathcal{A}(I) \ar[r]^{\eta_{h,I}} \ar[d]_{\rho_I} & \mathcal{A}(h(I)) \ar[d]^{\rho_{h(I)}} \\ \mathcal{L}(\mathcal{H}_\mathcal{A}) \ar[r]^{\Ad(\alpha_h)} & \mathcal{L}(\mathcal{H}_\mathcal{A}) } \quad\quad \raisebox{-2em}{\text{resp.}} \quad\quad \xymatrix{ \mathcal{A}(I) \ar[r]^{\eta_{h,I}} \ar[d]_{\rho_I} & \mathcal{A}(h(I))^\mathrm{op} \ar[d]^{\rho_{h(I)}} \\ \mathcal{L}(\mathcal{H}_\mathcal{A}) \ar[r]^{\Ad(\alpha_h)} & \mathcal{L}(\bar\mathcal{H}_\mathcal{A}) . } $$ Moreover, $\eta_{h,I}=\Id_{\mathcal{A}(I)}$ if $h|_I=\Id_I$. \item {\em Vacuum property:} Identify $\mathcal{A}(S^{n-1}_\downarrow)$ with $\mathcal{A}(S^{n-1}_\uparrow)^\mathrm{op}$ via $\eta_r$ where $r$ is the reflection across the hyperplane $x_0=0$. Then the $\mathcal{A}(S^{n-1}_\uparrow)$-$\mathcal{A}(S^{n-1}_\downarrow)^\mathrm{op}$-bimodule $\mathcal{H}_\mathcal{A}$ is isometric to $L^2(\mathcal{A}(S^{n-1}_\uparrow))$. \end{itemize} We use the notations $\mathcal{A}$ and $(\mathcal{A},\mathcal{H}_\mathcal{A})$ interchangeably for a topological $n$-net. We say that $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is {\em irreducible} (resp. {\em semisimple}) if the vacuum sector $\mathcal{H}_\mathcal{A}$ is irreducible (resp. semisimple). \end{defn} \begin{exam} (1) The {\em zero topological $n$-net} $(\underline{0},0)$ where $\underline{0}(I)=0$. (2) The {\em trivial topological $n$-net} $(\underline{\mathbb{C}},\mathbb{C})$ where $\underline{\mathbb{C}}(I)=\mathbb{C}$. \end{exam} \begin{exam} If $(\mathcal{A},\mathcal{H}_\mathcal{A})$ and $(\mathcal{B},\mathcal{H}_\mathcal{B})$ are topological $n$-nets then $(\mathcal{A}\oplus\mathcal{B},\mathcal{H}_\mathcal{A}\oplus\mathcal{H}_\mathcal{B})$ and $(\mathcal{A}\boxtimes\mathcal{B},\mathcal{H}_\mathcal{A}\otimes\mathcal{H}_\mathcal{B})$ are also topological $n$-nets. 
\end{exam} \begin{rem} \label{rem:tn} There are several remarks concerning Definition \ref{defn:top-net}: (1) All of the $\mathcal{A}(I)$ are von Neumann algebras on $\mathcal{H}_\mathcal{A}$ and Haag duality holds on $\mathcal{H}_\mathcal{A}$: $$\mathcal{A}(I') = \mathcal{A}(I)'$$ where $I'$ is the closure of the complement of $I$. Indeed, the claim holds for $I=S^{n-1}_\uparrow$ by the vacuum property and holds for general $I$ by the covariance axiom. (2) As an immediate consequence, $\mathcal{A}(I)$ is a subalgebra of $\mathcal{A}(J)$ if $I\subset J$. (3) By the last statement of the covariance axiom, the isomorphism $\eta_{h,I}:\mathcal{A}(I)\to\mathcal{A}(h(I))$ or $\eta_{h,I}:\mathcal{A}(I)\to\mathcal{A}(h(I))^\mathrm{op}$ depends only on $h|_I$. In other words, a diffeomorphism $f:I\to J$ determines an isomorphism $f_*:\mathcal{A}(I)\to\mathcal{A}(J)$ or $f_*:\mathcal{A}(I)\to\mathcal{A}(J)^\mathrm{op}$. Moreover, if $h|_{I'}=\Id_{I'}$ then $\alpha_h\in\mathcal{A}(I')'=\mathcal{A}(I)$ so that $\eta_{h,I}=\Ad(\alpha_h):\mathcal{A}(I)\to\mathcal{A}(I)$ is an inner automorphism. (4) By Haag duality, $\mathcal{O}(\mathcal{H}_\mathcal{A})'\subset\mathcal{O}(\mathcal{H}_\mathcal{A})$. Hence $\mathcal{O}(\mathcal{H}_\mathcal{A})'$ is the center $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))$ and therefore $\mathcal{O}(\mathcal{H}_\mathcal{A})$ is of type I. In particular, $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is irreducible if and only if $\mathcal{O}(\mathcal{H}_\mathcal{A})=\mathcal{L}(\mathcal{H}_\mathcal{A})$, and if and only if $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))=\mathbb{C}$. In general, if $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is semisimple then it is a finite direct sum of irreducible ones, where the decomposition is induced by that of $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))$. (5) The isometry $\alpha_h$ is unique up to a unitary in $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))$. Therefore, if $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is irreducible then $\mathcal{H}_\mathcal{A}$ carries a (possibly discontinuous) projective action of $\Diff(S^{n-1})$. \end{rem} \begin{defn} We say that a topological $n$-net $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is {\em finite} if it is semisimple and satisfies the following conditions for any (equivalently, some) disjoint $I,J\in\Disk^{n-1}$: \begin{itemize} \item {\em Split property:} The homomorphism $\mathcal{A}(I)\otimes_{\mathrm{alg}}\mathcal{A}(J) \to \mathcal{L}(\mathcal{H}_\mathcal{A})$ from the algebraic tensor product extends to a homomorphism $\mathcal{A}(I)\bar\otimes\mathcal{A}(J) \to \mathcal{L}(\mathcal{H}_\mathcal{A})$ from the spatial tensor product. \item {\em Duality:} The $\mathcal{O}(\mathcal{H}_\mathcal{A},I\cup J)$-$\mathcal{O}(\mathcal{H}_\mathcal{A},I'\cap J')^\mathrm{op}$-bimodule $\mathcal{H}_\mathcal{A}$ is dualizable\footnote{We do not require the normalization condition of \cite{BDH14}. See Remark \ref{rem:vnbim-dual}.}. \end{itemize} \end{defn} \begin{exam} \label{exam:1tn} Note that a 1-net $\mathcal{A}$ consists of a pair of von Neumann algebras $\mathcal{A}(1)$ and $\mathcal{A}(-1)$. For a topological 1-net $(\mathcal{A},\mathcal{H}_\mathcal{A})$, $\mathcal{A}(-1)$ is determined by $\mathcal{A}(1)$ and $\mathcal{H}_\mathcal{A}$ is determined up to isometry. Therefore, a sector of $\mathcal{A}$ is simply an $\mathcal{A}(1)$-$\mathcal{A}(1)$-bimodule.
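Explicitly, for $n=1$ we have $S^0_\uparrow=\{1\}$ and $S^0_\downarrow=\{-1\}$, so the vacuum property yields the identifications
$$\mathcal{A}(-1)\cong\mathcal{A}(1)^\mathrm{op}, \qquad \mathcal{H}_\mathcal{A}\cong L^2(\mathcal{A}(1))$$
as an $\mathcal{A}(1)$-$\mathcal{A}(1)$-bimodule.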
Moreover, $(\mathcal{A},\mathcal{H}_\mathcal{A})$ satisfies the split property if and only if $\mathcal{A}(1)$ is of type I; $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is finite if and only if $\mathcal{A}(1)$ is a finite direct sum of type I factors. \end{exam} \begin{exam} \label{exam:conf-net} A (finite semisimple) conformal net in the sense of \cite{BDH15,BDH19b} is equivalent to a (finite) topological 2-net satisfying the additional assumptions of continuity, strong additivity and split property. See the terminology therein. (An irreducible conformal net in the sense of \cite{BDH15} with positive energy spectrum is equivalent to a conformal net in the sense of \cite{GF93} satisfying the additional assumptions of strong additivity and diffeomorphism covariance.) \end{exam} \subsection{Defect nets} \label{sec:defect-net} We identify $S^{n-1}$ with the equator of $S^n$ defined by $x_n=0$ so that $S^n_\uparrow\cap S^{n-1}=S^{n-1}_\uparrow$ and $S^n_\downarrow\cap S^{n-1}=S^{n-1}_\downarrow$. \smallskip Let $M$ be a smooth manifold and $N\subset M$ a closed submanifold of codimension one. A {\em local parametrization} of $M$ around $N$ is an equivalence class of smooth embeddings $f:N\times(-1,1)\to M$ that satisfy $f(x,0)=x$ for $x\in N$; two smooth embeddings are equivalent if they agree on a neighborhood of $N\times\{0\}$. For example, there is a standard local parametrization of $S^n$ around $S^{n-1}$ represented by the smooth embedding $(x,s)\mapsto(x\sqrt{1-s^2},s)$. Let $\Diff(S^n)_k \subset \Diff(S^n)$ be the subgroup of diffeomorphisms $h$ such that $h(S^{n-i})=S^{n-i}$ and $h$ preserves the standard local parametrization of $S^{n-i+1}$ around $S^{n-i}$ for $1\le i\le k$. Let $\Disk^n_k$ denote the full subcategory of $\Disk^n$ consisting of those disk regions $I$ such that $I=h(S^n_\uparrow)$ for some $h\in\Diff(S^n)_{d(I)}$ where $d(I)=\max\{i\mid 0\le i\le k, \, I\cap S^{n-i}\ne\emptyset\}$. \begin{defn} A {\em defect $n$-net} of codimension $k$ where $0\le k<n$ consists of the following data: \begin{itemize} \item a partial $n$-net $\mathcal{A}:\Disk^{n-1}_k\to\VN$; \item a sector $\mathcal{H}_\mathcal{A}$ of $\mathcal{A}$, called the {\em vacuum sector}; \item a natural isomorphism $\eta_h:\mathcal{A}\to h^*\mathcal{A}$ for every $h\in\Diff(S^{n-1})_k$ such that $h^*(\eta_g) \circ \eta_h = \eta_{g\circ h}$. \end{itemize} These data are subject to the following axioms where $I,J,K\in\Disk^{n-1}_k$: \begin{itemize} \item {\em Locality:} If $I,J\subset K$ have disjoint interiors, then the images of $\mathcal{A}(I)$ and $\mathcal{A}(J)$ are commuting subalgebras of $\mathcal{A}(K)$. \item {\em Additivity:} If $I,J\subset K$ and if the interiors of $I$ and $J$ in $K$ cover $K$, then $\mathcal{A}(K)$ is generated by the images of $\mathcal{A}(I)$ and $\mathcal{A}(J)$. \item {\em Covariance:} For every orientation-preserving (resp. orientation-reversing) diffeomorphism $h\in\Diff(S^{n-1})_k$ there exists an isometry $\alpha_h:\mathcal{H}_\mathcal{A}\to\mathcal{H}_\mathcal{A}$ (resp. 
$\alpha_h:\mathcal{H}_\mathcal{A}\to\bar\mathcal{H}_\mathcal{A}$) rendering the following diagram commutative: $$\xymatrix{ \mathcal{A}(I) \ar[r]^{\eta_{h,I}} \ar[d]_{\rho_I} & \mathcal{A}(h(I)) \ar[d]^{\rho_{h(I)}} \\ \mathcal{L}(\mathcal{H}_\mathcal{A}) \ar[r]^{\Ad(\alpha_h)} & \mathcal{L}(\mathcal{H}_\mathcal{A}) } \quad\quad \raisebox{-2em}{\text{resp.}} \quad\quad \xymatrix{ \mathcal{A}(I) \ar[r]^{\eta_{h,I}} \ar[d]_{\rho_I} & \mathcal{A}(h(I))^\mathrm{op} \ar[d]^{\rho_{h(I)}} \\ \mathcal{L}(\mathcal{H}_\mathcal{A}) \ar[r]^{\Ad(\alpha_h)} & \mathcal{L}(\bar\mathcal{H}_\mathcal{A}) . } $$ Moreover, $\eta_{h,I}=\Id_{\mathcal{A}(I)}$ if $h|_I=\Id_I$. \item {\em Vacuum property:} Identify $\mathcal{A}(S^{n-1}_\downarrow)$ with $\mathcal{A}(S^{n-1}_\uparrow)^\mathrm{op}$ via $\eta_r$ where $r$ is the reflection across the hyperplane $x_0=0$. Then the $\mathcal{A}(S^{n-1}_\uparrow)$-$\mathcal{A}(S^{n-1}_\downarrow)^\mathrm{op}$-bimodule $\mathcal{H}_\mathcal{A}$ is isometric to $L^2(\mathcal{A}(S^{n-1}_\uparrow))$. \end{itemize} We use the notations $\mathcal{A}$ and $(\mathcal{A},\mathcal{H}_\mathcal{A})$ interchangeably for a defect $n$-net. We say that $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is {\em irreducible} (resp. {\em semisimple}) if the vacuum sector $\mathcal{H}_\mathcal{A}$ is irreducible (resp. semisimple). \end{defn} \begin{rem} \label{rem:res-k-k+1} A topological $n$-net is exactly a defect $n$-net of codimension zero. A defect $n$-net of codimension $k$ restricts to a defect $n$-net of codimension $k+1$. \end{rem} \begin{rem} (1) By the vacuum property and the covariance axiom, if $I$ intersects $S^{n-1-k}$ then $\mathcal{A}(I)$ is a von Neumann algebra on $\mathcal{H}_\mathcal{A}$ and Haag duality holds on $\mathcal{H}_\mathcal{A}$: $$\mathcal{A}(I') = \mathcal{A}(I)'.$$ (2) As an immediate consequence, $\mathcal{A}(I)$ is a subalgebra of $\mathcal{A}(J)$ if $I\subset J$ and if $I,J$ intersect $S^{n-1-k}$. (3) By the last statement of the covariance axiom, the isomorphism $\eta_{h,I}:\mathcal{A}(I)\to\mathcal{A}(h(I))$ or $\eta_{h,I}:\mathcal{A}(I)\to\mathcal{A}(h(I))^\mathrm{op}$ depends only on $h|_I$. If $I$ intersects $S^{n-1-k}$ and if $h|_{I'}=\Id_{I'}$ then $\eta_{h,I}:\mathcal{A}(I)\to\mathcal{A}(I)$ is an inner automorphism. (4) By Haag duality, $\mathcal{O}(\mathcal{H}_\mathcal{A})'\subset\mathcal{O}(\mathcal{H}_\mathcal{A})$. Hence $\mathcal{O}(\mathcal{H}_\mathcal{A})'$ is the center $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))$ and therefore $\mathcal{O}(\mathcal{H}_\mathcal{A})$ is of type I. In particular, $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is irreducible if and only if $\mathcal{O}(\mathcal{H}_\mathcal{A})=\mathcal{L}(\mathcal{H}_\mathcal{A})$, and if and only if $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))=\mathbb{C}$. (5) The isometry $\alpha_h$ is unique up to a unitary in $Z(\mathcal{O}(\mathcal{H}_\mathcal{A}))$. Therefore, if $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is irreducible then $\mathcal{H}_\mathcal{A}$ carries a (possibly discontinuous) projective action of $\Diff(S^{n-1})_k$.
\end{rem} \begin{defn} We say that a defect $n$-net $(\mathcal{A},\mathcal{H}_\mathcal{A})$ of codimension $k$ is {\em finite} if it is semisimple and satisfies the following conditions for any (equivalently, some) disjoint $I,J\in\Disk^{n-1}_k$ that intersect $S^{n-1-k}$: \begin{itemize} \item {\em Split property:} The homomorphism $\mathcal{A}(I)\otimes_{\mathrm{alg}}\mathcal{A}(J) \to \mathcal{L}(\mathcal{H}_\mathcal{A})$ from the algebraic tensor product extends to a homomorphism $\mathcal{A}(I)\bar\otimes\mathcal{A}(J) \to \mathcal{L}(\mathcal{H}_\mathcal{A})$ from the spatial tensor product. \item {\em Duality:} The $\mathcal{O}(\mathcal{H}_\mathcal{A},I\cup J)$-$\mathcal{O}(\mathcal{H}_\mathcal{A},I'\cap J')^\mathrm{op}$-bimodule $\mathcal{H}_\mathcal{A}$ is dualizable. \end{itemize} \end{defn} \begin{exam} If $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is a defect $n$-net of codimension $k$, then $(\mathcal{A}^\mathrm{op},\bar\mathcal{H}_\mathcal{A})$ is also a defect $n$-net of codimension $k$, where $\mathcal{A}^\mathrm{op}(I)=\mathcal{A}(I)^\mathrm{op}$. \end{exam} \begin{exam} \label{exam:tn2dn} A defect $n$-net $(\mathcal{A},\mathcal{H}_\mathcal{A})$ of codimension $k$ induces a defect $(n+1)$-net $(\hat\mathcal{A},\mathcal{H}_\mathcal{A})$ of codimension $k+1$, where $\hat\mathcal{A}(I) = \mathcal{A}(I\cap S^{n-1})$ if $I$ intersects $S^{n-1}$ and $\hat\mathcal{A}(I)=\mathbb{C}$ otherwise. In particular, the zero topological $n$-net $\underline{0}$ induces a zero defect $(n+1)$-net of codimension one, which does not satisfy the isotony axiom. \end{exam} \begin{exam} \label{exam:conf-defect} A (finite semisimple) defect defined in \cite{BDH19b} is a (finite) defect 2-net of codimension one. Indeed, a finite semisimple defect satisfies the duality condition by \cite[Proposition 3.18]{BDH19a}. \end{exam} \subsection{Nets of onsite symmetries} \label{sec:net-oss} We define for every finite group $G$ a finite irreducible topological $n$-net that describes the local quantum symmetry of an $n$D gapped phase with an onsite symmetry $G$. \smallskip Let $G$ be a finite group and let $V$ be a finite-dimensional $G$-module which contains all the irreducible $G$-modules as direct summands. Fix a $G$-invariant vector $\mu\in V$ of norm one. Consider the orthonormal frame bundle $E$ over $S^{n-1}$, i.e. the fiber $E_x$ over a point $x\in S^{n-1}$ consists of the orthonormal frames at $x$. For example, $E$ is a double cover of $S^1$ for $n=2$. In general, $E$ has two connected components $E^\pm$ corresponding to the two orientations of $S^{n-1}$ (in fact, $E\cong O(n)$). Assign $W_s=V^*$ for $s\in E^+$ and $W_s=V$ for $s\in E^-$. Define $\mathcal{H} = \bigotimes_{s\in E}W_s$ to be the completion of the pre-Hilbert space spanned by the following vectors $$\{ \otimes w_s \mid \text{$w_s=\mu$ or $\mu^*$ for all but finitely many $s\in E$} \}.$$ Define $\mathcal{H}_\mathcal{A}$ to be the subspace of $G$-invariants in $\mathcal{H}$ $$\mathcal{H}_\mathcal{A} = \mathcal{H}^G = \{ w\in\mathcal{H} \mid gw=w, \, \forall g\in G \}.$$ It carries an obvious action of the diffeomorphism group $\Diff(S^{n-1})$. Let $I\subset S^{n-1}$ be a disk region. Note that $I$ and $I'$ divide $E$ into two disjoint parts $E(I)$ and $E(I')$ such that an orthonormal frame $(e_1,\dots,e_{n-1})$ at $x\in\partial I$ belongs to $E(I)$ if the first $e_i$ not tangent to $\partial I$ points towards $I$.
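In other words, $E(I)$ consists of all frames lying over the interior of $I$ together with those frames at points of $\partial I$ whose first vector not tangent to $\partial I$ points into $I$. For instance, when $n=2$ and $I$ is an arc, both frames over an interior point of $I$ belong to $E(I)$, while over each endpoint of $I$ exactly one of the two unit tangent vectors, namely the one pointing into $I$, belongs to $E(I)$.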
Define $\mathcal{A}(I)$ to be the von Neumann algebra on $\mathcal{H}_\mathcal{A}$ generated by $\End_G(\bigotimes_{s\in P}W_s)$ where $P$ runs over all finite subsets of $E(I)$. \begin{prop} The pair $(\mathcal{A},\mathcal{H}_\mathcal{A})$, together with the obvious action of $\Diff(S^{n-1})$ on $\mathcal{A}$, defines a finite irreducible topological $n$-net. \end{prop} \begin{proof} The axioms of locality, additivity and covariance as well as the split property are all clear. To see the vacuum property, we note that $\mathcal{H} = \mathcal{H}_\uparrow \otimes \mathcal{H}_\uparrow^*$ where $\mathcal{H}_\uparrow = \bigotimes_{s\in E(S^{n-1}_\uparrow)} W_s$, hence $\mathcal{H} = L^2(\mathcal{L}(\mathcal{H}_\uparrow))$. Moreover, $\mathcal{L}(\mathcal{H}_\uparrow)$ is generated by $\End_\mathbb{C}(\bigotimes_{s\in P}W_s)$ for finite $P\subset E(S^{n-1}_\uparrow)$, hence $\mathcal{L}_G(\mathcal{H}_\uparrow)=\mathcal{A}(S^{n-1}_\uparrow)$. Passing to the $G$-invariants yields $\mathcal{H}_\mathcal{A} = L^2(\mathcal{A}(S^{n-1}_\uparrow))$. For the duality condition, note that for any disjoint $I,J\in\Disk^{n-1}$, the $\mathcal{O}(\mathcal{H}_\mathcal{A},I\cup J)$-$\mathcal{O}(\mathcal{H}_\mathcal{A},I'\cap J')^\mathrm{op}$-bimodule $\mathcal{H}_\mathcal{A}$ is semisimple hence dualizable. It remains to show that $\mathcal{O}(\mathcal{H}_\mathcal{A}) = \mathcal{L}(\mathcal{H}_\mathcal{A})$. Indeed, the operators from $\mathcal{O}(\mathcal{H}_\mathcal{A})$ supported on the point $(0,\dots,0,1)\in S^{n-1}$ intertwine $\mathcal{H}_\uparrow$ and $\mathcal{H}_\uparrow^*$. Together with these operators, $\mathcal{L}_G(\mathcal{H}_\uparrow)$ and $\mathcal{L}_G(\mathcal{H}_\uparrow^*)$ generate $\mathcal{L}(\mathcal{H}^G)$. \end{proof} \begin{rem} The topological $n$-net $\mathcal{A}$ constructed above (except in the trivial case) differs from a conformal net in several respects. First, from the proof above we see that all of the $\mathcal{A}(I)$ are type I von Neumann algebras on an inseparable Hilbert space, while the local observable algebras of a conformal net are usually type III von Neumann algebras on a separable Hilbert space. Second, $\mathcal{A}$ does not satisfy the strong additivity axiom. Third and most importantly, the whole diffeomorphism group $\Diff(S^{n-1})$ acts genuinely on $\mathcal{H}_\mathcal{A}$; however, the action is not continuous in any reasonable sense because every vector of $\mathcal{H}_\mathcal{A}$ lies over countably many points of $S^{n-1}$. In particular, $\Diff(S^{n-1})$ acts strongly continuously only on the one-dimensional subspace spanned by the vacuum vector $\bigotimes_{s\in E^+}(\mu\otimes\mu^*)$. This suggests that the topological $n$-net $\mathcal{A}$ describes the local quantum symmetry of a gapped phase. \end{rem} The above remark motivates the following definition. Typical examples: a conformal net in the sense of \cite{BDH15} is a conformal 2-net, and the topological $n$-net $\mathcal{A}$ constructed above is gapped. \begin{defn} \label{defn:net-gap} We say that a defect $n$-net $\mathcal{A}$ of codimension $k$ is {\em gapless} if $\Diff(S^{n-1})_k$ acts strongly continuously on an infinite-dimensional subspace of $\mathcal{H}_\mathcal{A}$. Otherwise, we say that $\mathcal{A}$ is {\em gapped}. We say that a topological $n$-net $\mathcal{A}$ is a {\em conformal $n$-net} if $\Diff(S^{n-1})$ acts strongly continuously on $\mathcal{H}_\mathcal{A}$.
\end{defn} \begin{rem} If an $n$D CFT comes from a conformal $n$-net, then it admits not only an action of the conformal transformation group $\Conf(S^n)$ but also an action of the diffeomorphism group $\Diff(S^{n-1})$. This suggests a positive answer to the long-standing question of whether there are infinite-dimensional symmetries for higher-dimensional CFTs as in the two-dimensional case. We will come back to this issue in a subsequent work. \end{rem} Now we focus on dimension $n=2$ so that $E$ is a double cover of $S^1$. We use $x^\pm\in E^\pm$ to denote the point lying over $x\in S^1$. We define a defect 2-net $\mathcal{F}$ of codimension one between the trivial topological 2-net $\underline{\mathbb{C}}$ and $\mathcal{A}$ (i.e. a boundary 2-net of $\mathcal{A}$). Let $A$ be a nonzero $*$-Frobenius algebra in $\Rep G$ (in particular, $A$ is a separable algebra carrying a $G$-action). Assign $U_{(1,0)^+}=\mathbb{C}$ and $U_{(1,0)^-}=A^*$; $U_{(-1,0)^+}=A$ and $U_{(-1,0)^-}=\mathbb{C}$; $U_{x^\pm}=\mathbb{C}$ if $x_1<0$; $U_{x^+}=V^*$ and $U_{x^-}=V$ if $x_1>0$. Let $\mathcal{K}=\bigotimes_{s\in E}U_s$ and define $\mathcal{H}_\mathcal{F}$ to be the subspace of $A$-invariants in $\mathcal{K}^G$ $$\mathcal{H}_\mathcal{F} = (\mathcal{K}^G)^A = \{ w\in\mathcal{K}^G \mid a w=w a, \, \forall a\in A \}$$ where $A$ acts from the left on $A$ and from the right on $A^*$. Let $I\in\Disk^1_1$ be an arc. Define $\mathcal{F}(I)$ to be the von Neumann algebra on $\mathcal{H}_\mathcal{F}$ generated by the algebras $\End_G(\bigotimes_{s\in P}U_s)^A$ or $\End_G(\bigotimes_{s\in P}U_s)$ or $\mathbb{C}$, depending on the types of the $U_s$, where $P$ runs over all finite subsets of $E(I)$. Then $(\mathcal{F},\mathcal{H}_\mathcal{F})$ is a finite defect 2-net of codimension one. It is irreducible if and only if $A$ is a simple $*$-Frobenius algebra in $\Rep G$. \begin{prop} $\Sect(\mathcal{F})$ is equivalent to the $*$-1-category of $A$-$A$-bimodules in $\widehat{\Rep G}$. \end{prop} \begin{proof} Let $W=A\otimes(V\otimes V^*)^{\otimes k}\otimes A^*$ where $k\ge1$. Since $W$ contains all the simple $A$-$A$-bimodules in $\Rep G$, the category of $A$-$A$-bimodules in $\Rep G$ is equivalent to the category of finite-dimensional left $\End_{A|A}(W)^G$-modules. On the other hand, $\Sect(\mathcal{F})$ is equivalent to the category of left modules over the von Neumann algebra $\mathcal{L}_{A|A}(\mathcal{K})^G$, which is Morita equivalent to $\End_{A|A}(W)^G$. \end{proof} \begin{cor} $\Sect(\mathcal{A})\simeq\widehat{\mathfrak{Z}_1(\Rep G)}$. \end{cor} \begin{proof} By folding $\mathcal{A}$ along the line $x_1=0$, we obtain a defect 2-net between $\underline{\mathbb{C}}$ and $\mathcal{A}\boxtimes\mathcal{A}$ which shares the same sectors as $\mathcal{A}$. Let $A=\tau\End_\mathbb{C}(V\otimes V^*)$ where $\tau$ is an adjoint functor to the tensor product $\otimes:\Rep G\boxtimes\Rep G\to\Rep G$. Note that this defect 2-net coincides with the one constructed above for the finite group $G\times G$, the $(G\times G)$-module $V\otimes V^*$ and the $*$-Frobenius algebra $A$ in $\Rep G\boxtimes\Rep G \simeq \Rep(G\times G)$. Since the category of right $A$-modules in $\Rep G\boxtimes\Rep G$ is equivalent to $\Rep G$, the category of $A$-$A$-bimodules in $\Rep G\boxtimes\Rep G$ is equivalent to the Drinfeld center $\mathfrak{Z}_1(\Rep G)$. Our claim follows. \end{proof} \begin{rem} According to Remark \ref{rem:net-monoidal}, $\Sect(\mathcal{A})$ carries a braided monoidal structure.
A direct way to see the braiding is to consider the fusion of sectors as in \cite[Section 3]{BDH17}. \end{rem} \subsection{Levin-Wen nets} \label{sec:lw-net} In this subsection, we provide the construction of a topological 2-net from a unitary fusion 1-category $\mathcal{C}$ and a defect 2-net from a $*$-Frobenius algebra $A$ in $\mathcal{C}$. They describe the Levin-Wen model associated to $\mathcal{C}$ \cite{LW05} and the gapped boundary associated to $A$ \cite{KK12}. \smallskip Let $\mathcal{C}$ be a unitary fusion 1-category. Normalize the unit map $u_X:\one\to X\otimes X^*$ and the counit map $v_X:X^*\otimes X\to\one$ for every object $X\in\mathcal{C}$ in such a way that the composite isomorphism induced by $v_X$ and $v_X^*$ $$\Hom_\mathcal{C}(\one, X\otimes Y) \to \Hom_\mathcal{C}(X^*,Y) \to \Hom_\mathcal{C}(\one,Y\otimes X)$$ is isometric for all $Y\in\mathcal{C}$. (In fact, these normalized duality maps induce a canonical spherical structure on $\mathcal{C}$.) Let $V\in\mathcal{C}$ be an object which contains all the simple objects of $\mathcal{C}$ as direct summands. Fix a morphism $\mu:\one\to V$ of norm one. Fix an orientation of the circle $S^1$ and choose a base point of $S^1$ so that the points of $S^1$ are linearly ordered. Assign $U_x=V\otimes V^*$ for all $x\in S^1$. For every finite subset $P \subset S^1$, we have an object $\bigotimes_{x\in P}U_x = U_{p_1}\otimes\cdots\otimes U_{p_k}$ of $\mathcal{C}$ where $p_1,\dots,p_k$ are the points of $P$ in linear order. For an inclusion $P\subset Q$, the morphisms $\mu\otimes\mu^*:\one\to U_x$ for $x\in Q\setminus P$ induce a unitary embedding $\bigotimes_{x\in P}U_x \hookrightarrow \bigotimes_{x\in Q}U_x$. Define $\bigotimes_{x\in S^1}U_x$ to be the direct limit $\varinjlim_P \bigotimes_{x\in P}U_x$ in $\hat\mathcal{C}$. We obtain a Hilbert space $$\mathcal{H}_\mathcal{A} = \Hom_{\hat\mathcal{C}}(\one,\bigotimes\nolimits_{x\in S^1}U_x) = \varinjlim_P \Hom_\mathcal{C}(\one,\bigotimes\nolimits_{x\in P}U_x)$$ which is independent of the base point of $S^1$ thanks to the normalized duality maps. Let $I\subset S^1$ be an arc from $a$ to $b$. Assign $W_a=V^*$, $W_b=V$ and $W_x=V\otimes V^*$ for $x\in I\setminus\{a,b\}$. Define $\mathcal{A}(I)$ to be the type I von Neumann algebra $$\mathcal{A}(I) = \Hom_{\hat\mathcal{C}}(\bigotimes\nolimits_{x\in I}W_x,\bigotimes\nolimits_{x\in I}W_x)$$ where $\bigotimes_{x\in I}W_x$ is an object of $\hat\mathcal{C}$ defined by a direct limit as above. The action of $\mathcal{A}(I)$ on $\bigotimes_{x\in S^1}U_x$ induces a left action on $\mathcal{H}_\mathcal{A}$. Then $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is a finite irreducible topological 2-net. Indeed, in the special case $\mathcal{C}=\Rep G$ we recover the topological 2-net defined in the previous subsection; in general, the proof is essentially the same. Moreover, by choosing a nonzero $*$-Frobenius algebra $A$ in $\mathcal{C}$, one defines similarly a finite defect 2-net $\mathcal{F}$ of codimension one between the trivial topological 2-net $\underline{\mathbb{C}}$ and $\mathcal{A}$ (i.e. a boundary 2-net of $\mathcal{A}$). We conclude by similar arguments that $\Sect(\mathcal{F})$ is equivalent to the $*$-1-category of $A$-$A$-bimodules in $\hat\mathcal{C}$ and that $\Sect(\mathcal{A}) \simeq \widehat{\mathfrak{Z}_1(\mathcal{C})}$. One can be more precise about the latter equivalence: an object $Z\in\mathfrak{Z}_1(\mathcal{C})$ induces a sector $\Hom_{\hat\mathcal{C}}(\one,Z\otimes\bigotimes_{x\in S^1}U_x)$ of $\mathcal{A}$.
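As a sanity check (a minimal instance of the construction, not used in the sequel), take $\mathcal{C}=\mathrm{Hilb}$ and $V=\one=\mathbb{C}$. Then $U_x=\mathbb{C}$ for all $x\in S^1$, so
$$\mathcal{H}_\mathcal{A} = \varinjlim_P \Hom_\mathrm{Hilb}(\mathbb{C},\mathbb{C}) = \mathbb{C}, \qquad \mathcal{A}(I)=\mathbb{C}$$
for every arc $I$, and we recover the trivial topological 2-net $(\underline{\mathbb{C}},\mathbb{C})$; this is consistent with $\Sect(\mathcal{A})\simeq\widehat{\mathfrak{Z}_1(\mathrm{Hilb})}=\widehat{\mathrm{Hilb}}$.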
\smallskip The construction can be generalized to higher unitary fusion categories. We will address the problem in the more general context of condensation theory in Section \ref{sec:cond-nnet}. \section{Construction of categories of quantum liquids} \subsection{Categories of topological nets} \label{sec:netn} Let $S^n_+$ denote the positive half of $S^n$ defined by $x_n\ge0$ and let $S^n_-$ denote the negative half of $S^n$ defined by $x_n\le0$, so that $S^{n-1} = S^n_+\cap S^n_-$. We claim the following result: \begin{thm} \label{thm:netn} There is a symmetric monoidal $*$-$(n+1)$-category $\widehat\mathcal{N}et^n$ with the following data: \begin{itemize} \item An object is a topological $n$-net $\mathcal{A}$. \item The tensor product of two objects $\mathcal{A},\mathcal{B}$ is $\mathcal{A}\boxtimes\mathcal{B}$ and the direct sum of $\mathcal{A},\mathcal{B}$ is $\mathcal{A}\oplus\mathcal{B}$. The tensor unit is the trivial topological $n$-net $\underline{\mathbb{C}}$ and the zero object is $\underline{0}$. \item A $k$-morphism $\mathcal{F}:\mathcal{A}\to\mathcal{B}$ for $1\le k<n$ is a defect $n$-net of codimension $k$ such that \begin{equation*} \begin{split} & \mathcal{F}(I)=\mathcal{A}(I),\quad \text{if $I\cap S^{n-k}_+=\emptyset$}, \\ & \mathcal{F}(I)=\mathcal{B}(I),\quad \text{if $I\cap S^{n-k}_-=\emptyset$}, \end{split} \end{equation*} and, moreover, the isomorphism $\eta_{h,I}$ associated to $\mathcal{F}$ agrees with that associated to $\mathcal{A}$ or $\mathcal{B}$ if $I\cap S^{n-1-k}=\emptyset$. \item The identity $k$-morphism $\Id_\mathcal{A}:\mathcal{A}\to\mathcal{A}$ is the restriction of $\mathcal{A}$ (see Remark \ref{rem:res-k-k+1}). The direct sum of two $k$-morphisms $\mathcal{F},\mathcal{G}:\mathcal{A}\to\mathcal{B}$ is determined by the formula $$(\mathcal{F}\oplus\mathcal{G})(I) = \mathcal{F}(I)\oplus\mathcal{G}(I), \quad \text{if $I\cap S^{n-1-k}\ne\emptyset$}.$$ \item An $n$-morphism $\mathcal{H}:\mathcal{A}\to\mathcal{B}$ is a sector of the partial $n$-net $\mathcal{S}_{\mathcal{B}|\mathcal{A}}:\Disk^{n-1}_{n-1}\to\VN$ defined by \begin{equation*} \begin{split} & \mathcal{S}_{\mathcal{B}|\mathcal{A}}(I)=\mathcal{A}(I),\quad \text{if $I\cap S^0_+=\emptyset$}, \\ & \mathcal{S}_{\mathcal{B}|\mathcal{A}}(I)=\mathcal{B}(I),\quad \text{if $I\cap S^0_-=\emptyset$}. \end{split} \end{equation*} \item The identity $n$-morphism $\Id_\mathcal{A}:\mathcal{A}\to\mathcal{A}$ is the vacuum sector $\mathcal{H}_\mathcal{A}$. The composition of two $n$-morphisms $\mathcal{H}:\mathcal{A}\to\mathcal{B}$ and $\mathcal{K}:\mathcal{B}\to\mathcal{C}$ is the Connes fusion $\mathcal{K}\boxtimes_{\mathcal{B}(S^{n-1}_\uparrow)}\mathcal{H}$ (recall that $\mathcal{B}(S^{n-1}_\downarrow)\cong\mathcal{B}(S^{n-1}_\uparrow)^\mathrm{op}$ canonically). \item An $(n+1)$-morphism is a homomorphism of sectors. The $*$-involution sends $f:\mathcal{H}\to\mathcal{K}$ to the adjoint $f^*:\mathcal{K}\to\mathcal{H}$. \end{itemize} Moreover, the collection of finite topological $n$-nets, finite defect $n$-nets and semisimple sectors forms a symmetric monoidal $*$-subcategory $\mathcal{N}et^n$ of $\widehat\mathcal{N}et^n$. \end{thm} We refer the reader to \cite{BDH15,BDH17,BDH19a,BDH18} for a detailed construction of a symmetric monoidal $*$-3-category of conformal nets. Many techniques were developed there that can be generalized to higher dimensions.
In particular, the fusion operation of defects defined in \cite{BDH19a}, once generalized to higher dimensions, implements a crucial ingredient of the higher category $\widehat\mathcal{N}et^n$: the composition of $k$-morphisms for $1\le k<n$. According to Example \ref{exam:conf-net} and Example \ref{exam:conf-defect}, the $*$-3-category of conformal nets is a full subcategory of $\widehat\mathcal{N}et^2$. We will sketch a construction of $\widehat\mathcal{N}et^n$ and $\mathcal{N}et^n$ in Section \ref{sec:fus-def}. \begin{exam} Since $\Disk^{-1}$ is the empty 1-category, a 0-net is the empty functor, a sector of which is simply a Hilbert space. Therefore, we may define $\widehat\mathcal{N}et^0=\widehat\mathrm{Hilb}$ and $\mathcal{N}et^0=\mathrm{Hilb}$. \end{exam} \begin{exam} \label{exam:net1} According to Example \ref{exam:1tn}, giving a topological 1-net is equivalent to giving a von Neumann algebra. Therefore, $\widehat\mathcal{N}et^1$ can be identified with the symmetric monoidal $*$-2-category of von Neumann algebras and bimodules; $\mathcal{N}et^1$ can be identified with the symmetric monoidal $*$-2-category of finite direct sums of type I factors and semisimple bimodules. \end{exam} \begin{rem} \label{rem:net-monoidal} Note that $\Sect(\mathcal{A}) = \Omega^n(\widehat\mathcal{N}et^n,\mathcal{A})$ for any topological $n$-net $\mathcal{A}$. Therefore, $\Sect(\mathcal{A})$ carries an $E_n$-monoidal structure. Similarly, for any $k$-morphism $\mathcal{F}$ of $\widehat\mathcal{N}et^n$ where $1\le k<n$, $\Sect(\mathcal{F})$ carries an $E_{n-k}$-monoidal structure. \end{rem} According to Example \ref{exam:tn2dn}, an object of $\widehat\mathcal{N}et^n$ induces a 1-morphism of $\widehat\mathcal{N}et^{n+1}$ from the tensor unit $\underline{\mathbb{C}}$ to itself. Conversely, by the additivity axiom, all the 1-morphisms from $\underline{\mathbb{C}}$ to itself arise in this way. Therefore, we have identifications $$\widehat\mathcal{N}et^n = \Omega\widehat\mathcal{N}et^{n+1}, \quad\quad \mathcal{N}et^n = \Omega\mathcal{N}et^{n+1}.$$ \begin{rem} Bartels, Douglas and Henriques answered positively in their works \cite{BDH15,BDH17,BDH19a,BDH18} the following question proposed by Stolz and Teichner: Does there exist an interesting 3-category that deloops the 2-category of von Neumann algebras? They showed that the $*$-3-category of conformal nets is a delooping of the $*$-2-category of von Neumann algebras. Theorem \ref{thm:netn} improves their result: $\widehat\mathcal{N}et^n$ is an $(n-1)$-fold delooping of the $*$-2-category of von Neumann algebras (as well as an $n$-fold delooping of the $*$-1-category of Hilbert spaces). \end{rem} \subsection{Construction of $\mathcal{QL}^n$} \label{sec:construct-QL} Let us assume the higher categories $\mathcal{N}et^n$ are well-defined. We explicitly construct the higher categories of quantum liquids $\mathcal{QL}^n$ and give in particular a precise mathematical definition of a quantum liquid. \smallskip We first define for each $n$ a symmetric monoidal $*$-$(n+1)$-subcategory $\mathcal{LQS}^n \subset \mathcal{N}et^n$ describing local quantum symmetries, as follows. First, let $$\mathcal{LQS}^0 = \mathcal{N}et^0 = \mathrm{Hilb}.$$ Then, by induction on $n$, define $\mathcal{LQS}^n \subset \mathcal{N}et^n$ to be the maximal subcategory extending the embedding $B\mathcal{LQS}^{n-1} \hookrightarrow \Sigma_*\mathcal{LQS}^{n-1}$. In other words, $\mathcal{LQS}^n$ is obtained from the subcategory $B\mathcal{LQS}^{n-1} \subset \mathcal{N}et^n$ by appending all $*$-condensates.
By construction, there is a forgetful functor $$\mathrm{sk}: \mathcal{LQS}^n \to (n+1)\mathrm{Hilb}.$$ We then define $$\mathcal{QL}^n = \begin{cases} \underline{\mathbb{C}}/\mathcal{LQS}^n, & \text{for even $n$}, \\ \mathcal{LQS}^n/\underline{\mathbb{C}}, & \text{for odd $n$}. \\ \end{cases} $$ It comes equipped with forgetful functors $$\mathrm{sk}: \mathcal{QL}^n \to \mathcal{QL}_\mathrm{sk}^n,$$ $$\mathrm{lqs}: \mathcal{QL}^n \to \mathcal{LQS}^n.$$ Recall that $\mathcal{QL}_\mathrm{sk}^n = \Sigma_*^n(\mathbb{C}/\mathrm{Hilb})$. According to \cite[Proposition 4.12]{KZ20b}, we may identify $$\mathcal{QL}_\mathrm{sk}^n = \begin{cases} \bullet/(n+1)\mathrm{Hilb}, & \text{for even $n$}, \\ (n+1)\mathrm{Hilb}/\bullet, & \text{for odd $n$}. \\ \end{cases} $$ An object of $\mathcal{QL}^n$ is referred to as an $n$D {\em quantum liquid}. A $k$-morphism $\mathcal{F}:\mathcal{A}\to\mathcal{B}$ of $\mathcal{QL}^n$ where $1\le k< n$ is referred to as a {\em domain wall} between $\mathcal{A}$ and $\mathcal{B}$ or a {\em defect} of codimension $k$. An $n$-morphism of $\mathcal{QL}^n$ is referred to as an {\em instanton}. The tensor unit $\Id_{\underline{\mathbb{C}}}$ of $\mathcal{QL}^n$ is denoted by $\one^n$ and referred to as the $n$D {\em trivial quantum liquid}. We say that a domain wall is {\em trivial} if it is an identity morphism; {\em invertible} if it is an invertible morphism of $\mathcal{QL}^n$. For an object or a morphism $\mathcal{X}$ of $\mathcal{QL}^n$, we refer to the image $\mathrm{sk}(\mathcal{X})$ in $\mathcal{QL}_\mathrm{sk}^n$ as the {\em topological skeleton} of $\mathcal{X}$, and to the image $\mathrm{lqs}(\mathcal{X})$ in $\mathcal{LQS}^n$ as the {\em local quantum symmetry} of $\mathcal{X}$. We say that a quantum liquid or defect $\mathcal{X}$ is {\em gapped} (resp. {\em gapless}) if the defect net $\mathrm{lqs}(\mathcal{X})$ is gapped (resp. gapless). \begin{exam} We have $\mathcal{QL}^0 = \mathcal{QL}_\mathrm{sk}^0 = \mathbb{C}/\mathrm{Hilb}$. \end{exam} \begin{exam} \label{exam:to1} By Example \ref{exam:net1}, we have $\mathcal{LQS}^1 = \mathcal{N}et^1$, which can be identified with the symmetric monoidal $*$-2-category of finite direct sums of type I factors and semisimple bimodules. Moreover, the forgetful functor $\mathcal{LQS}^1\to2\mathrm{Hilb}$ maps an object $\mathcal{A}$ to the unitary 1-category $\LMod_\mathcal{A}^s$ of semisimple left $\mathcal{A}$-modules. Therefore, $\mathcal{QL}^1$ has the following structures: \begin{itemize} \item An object $(\mathcal{A},\mathcal{H})$ consists of a finite direct sum of type I factors $\mathcal{A}$ and a semisimple right $\mathcal{A}$-module $\mathcal{H}$. \item A 1-morphism $(\mathcal{F},f):(\mathcal{A},\mathcal{H})\to(\mathcal{B},\mathcal{K})$ consists of a semisimple $\mathcal{B}$-$\mathcal{A}$-bimodule $\mathcal{F}$ and a right $\mathcal{A}$-module map $f:\mathcal{H}\to\mathcal{K}\boxtimes_\mathcal{B}\mathcal{F}$. \item A 2-morphism $\xi:(\mathcal{F},f)\to(\mathcal{G},g)$ is a bimodule map $\xi:\mathcal{F}\to\mathcal{G}$ such that $(\mathcal{K}\boxtimes_\mathcal{B}\xi)\circ f=g$. \end{itemize} Note that $\mathrm{lqs}(\mathcal{A},\mathcal{H}) = \mathcal{A}$ and that $\mathrm{sk}(\mathcal{A},\mathcal{H})$ is the functor $\mathcal{H}\boxtimes_\mathcal{A}-: \LMod_\mathcal{A}^s\to\mathrm{Hilb}$. A 1D quantum liquid $(\mathcal{A},\mathcal{H})$ is gapped if and only if $\mathcal{A}$ is finite-dimensional.
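For instance, unwinding the definitions, the trivial 1D quantum liquid is $\one^1 = (\mathbb{C},\mathbb{C})$, i.e. the trivial topological 1-net together with the right module $\mathbb{C}$ over it; its topological skeleton $\mathrm{sk}(\mathbb{C},\mathbb{C}) = \mathbb{C}\boxtimes_\mathbb{C}-$ is (equivalent to) the identity functor on $\LMod_\mathbb{C}^s = \mathrm{Hilb}$.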
\end{exam} We claim the following result, which is required by physics (\cite[Theorem 5.13]{KZ20b}): \begin{thm} \label{thm:lqs-cc} $\mathcal{LQS}^n$ is $*$-condensation-complete. Consequently, the following forgetful functors $$\mathrm{sk}:\mathcal{LQS}^n \to (n+1)\mathrm{Hilb},$$ $$\mathrm{sk}:\mathcal{QL}^n \to \mathcal{QL}_\mathrm{sk}^n$$ are symmetric monoidal equivalences. \end{thm} The theorem is true for $n=1$, as is clear from Example \ref{exam:to1}. However, the theorem becomes nontrivial for $n=2$. The Levin-Wen 2-nets constructed in Section \ref{sec:lw-net} show that the forgetful functor $\mathrm{sk}:\mathcal{LQS}^2 \to 3\mathrm{Hilb}$ is essentially surjective. But this is not sufficient to establish the theorem for $n=2$: one also needs to show that there are sufficiently many 1-morphisms in $\mathcal{LQS}^2$. The problem is even more complicated for $n=3$: 3D Chern-Simons theory and 3D CFTs are expected to be involved in $\mathcal{LQS}^3$. We will sketch a proof of Theorem \ref{thm:lqs-cc} in Section \ref{sec:condense-net}. \begin{rem} Let $\mathcal{A}\in\mathcal{LQS}^n$. Then $\mathrm{sk}(\mathcal{A})\in(n+1)\mathrm{Hilb}$ is a unitary $n$-category. Let $\mathcal{B}$ denote the unitary braided multi-fusion $(n-1)$-category $\Omega\Fun(\mathrm{sk}(\mathcal{A}),\mathrm{sk}(\mathcal{A}))$. Since $\mathrm{sk}: \mathcal{LQS}^n \to (n+1)\mathrm{Hilb}$ is an equivalence, $\mathcal{B} \simeq \Omega\Hom_{\mathcal{LQS}^n}(\mathcal{A},\mathcal{A}) = \Hom_{\mathcal{LQS}^n}(\Id_\mathcal{A},\Id_\mathcal{A})$. In particular, an object of $\mathcal{B}$ corresponds to a 2-morphism $\Id_\mathcal{A}\to\Id_\mathcal{A}$ of $\mathcal{LQS}^n$, which is a defect $n$-net of codimension two for large $n$. Let $\mathcal{X}:\underline{\mathbb{C}}\to\mathcal{A}$ be an object of $\mathcal{QL}^n$ where $n$ is even (the odd case is similar). Regard $\mathrm{sk}(\mathcal{X})$ as an object of $\mathrm{sk}(\mathcal{A})$ and let $\mathcal{C}$ denote the unitary multi-fusion $(n-1)$-category $\Omega(\mathrm{sk}(\mathcal{A}),\mathrm{sk}(\mathcal{X})) = \Hom_{\mathrm{sk}(\mathcal{A})}(\mathrm{sk}(\mathcal{X}),\mathrm{sk}(\mathcal{X}))$. Since $\mathrm{sk}: \mathcal{LQS}^n \to (n+1)\mathrm{Hilb}$ is an equivalence, $\mathcal{C} \simeq \Hom_{\mathcal{LQS}^n}(\mathcal{X},\mathcal{X})$. In particular, an object of $\mathcal{C}$ corresponds to a 2-morphism $\mathcal{X}\to\mathcal{X}$ of $\mathcal{LQS}^n$, which is a defect $n$-net of codimension two for large $n$. Consider the generic case where $\mathrm{sk}(\mathcal{A})$ is indecomposable and $\mathrm{sk}(\mathcal{X})$ is nonzero. We have $\Sigma_*\mathcal{C} = \mathrm{sk}(\mathcal{A})$ by \cite[Corollary 3.13]{KZ20b}, so that $\mathcal{B}=\mathfrak{Z}_1(\mathcal{C})$ by \cite[Theorem 3.41]{KZ20b}. In particular, one recovers $\mathrm{sk}(\mathcal{X})$ as the object $\mathcal{C}$ of $\Sigma_*\mathcal{C}$, so that we can say that $\mathcal{C}$ is the topological skeleton of the quantum liquid $\mathcal{X}$. Moreover, $\mathcal{C}$ is enriched in $\Hom_{\mathcal{LQS}^n}(\Id_\mathcal{A},\Id_\mathcal{A})$ in the sense of \cite[Definition 3.25]{KZ20b} via the equivalence $\mathcal{B} \simeq \Hom_{\mathcal{LQS}^n}(\Id_\mathcal{A},\Id_\mathcal{A})$.
Note that the topological $n$-net $\mathcal{A}=\mathrm{lqs}(\mathcal{X})$ encodes the local observable algebras living on the spacetime and, moreover, it determines both $\Hom_{\mathcal{LQS}^n}(\Id_\mathcal{A},\Id_\mathcal{A})$ and the braided monoidal equivalence $\mathcal{B} \simeq \Hom_{\mathcal{LQS}^n}(\Id_\mathcal{A},\Id_\mathcal{A})$. Therefore, $\mathrm{lqs}(\mathcal{X})$ is exactly what we expected in \cite[Section 5.2]{KZ20b} for the local quantum symmetry of the quantum liquid $\mathcal{X}$. However, the content of the quantum liquid $\mathcal{X}$ is slightly more than what we expected in \cite[Section 5.2]{KZ20b}: besides the topological skeleton $\mathrm{sk}(\mathcal{X})$ and the local quantum symmetry $\mathrm{lqs}(\mathcal{X})$, there are also the local observable algebras of the boundary $n$-net $\mathcal{X}$. \end{rem} \begin{rem} Recall that a topological $n$-net $\mathcal{A}\in\mathcal{LQS}^n$ can be viewed as a 1-morphism $\mathcal{A}:\underline{\mathbb{C}}\to\underline{\mathbb{C}}$ of $\mathcal{LQS}^{n+1}$, i.e. an $(n+1)$D quantum liquid whose local quantum symmetry is trivial. This provides an explanation of the topological Wick rotation introduced in \cite{KZ20a}. Namely, the local quantum symmetry of an $n$D quantum liquid $\mathcal{X}:\underline{\mathbb{C}}\to\mathcal{A}$ shares the same mathematical content as an $(n+1)$D quantum liquid with trivial local quantum symmetry. \end{rem} Before concluding this subsection, we should point out that the $*$-involution of $\mathcal{QL}^n$ is not the one induced by the time-reversal operator. Let us endow $\mathcal{LQS}^n$ with a new involution $*: \mathcal{LQS}^n \to (\mathcal{LQS}^n)^{\mathrm{op} n}$ which still fixes all the objects and all the $(n-1)$- and lower morphisms but sends an $n$-morphism $\mathcal{H}$ to the complex conjugate $\bar\mathcal{H}$ and an $(n+1)$-morphism $f:\mathcal{H}\to\mathcal{K}$ to $\bar f:\bar\mathcal{H}\to\bar\mathcal{K}$. Then endow $\mathcal{QL}^n$ with the induced $*$-involution. \subsection{Transparent domain walls} \label{sec:transparent-wall} As we have seen in the previous subsection, $\mathcal{QL}^n$ is equivalent to $\mathcal{QL}_\mathrm{sk}^n$. That is, local quantum symmetries cannot be recovered from the equivalence type of $\mathcal{QL}^n$ at all. To detect local quantum symmetries, one needs to distinguish a class of transparent domain walls from invertible ones in certain categorical structures. Roughly speaking, a domain wall $\mathcal{F}$ between two quantum liquids or defects $\mathcal{A}$ and $\mathcal{B}$ is transparent if $\mathcal{A}$ and $\mathcal{B}$ can be identified in such a way that $\mathcal{F}$ is a trivial domain wall. \smallskip We say that two defect $n$-nets $\mathcal{A}$ and $\mathcal{B}$ of codimension $k$ are {\em isomorphic} if there exists a natural isomorphism $\xi:\mathcal{A}\to\mathcal{B}$ intertwining the action of $\Diff(S^{n-1})_k$ such that $\xi_I$ is the identity for $I\cap S^{n-1-k}=\emptyset$. Then we say that a $k$-morphism $\mathcal{F}:\mathcal{A}\to\mathcal{B}$ of $\mathcal{N}et^n$ where $1\le k\le n$ is {\em completely transparent} if the defect $n$-nets $\mathcal{A}$ and $\mathcal{B}$ can be identified by an isomorphism in such a way that the defect $n$-net $\mathcal{F}$ is isomorphic to the identity $k$-morphism. An $(n+1)$-morphism of $\mathcal{N}et^n$ is {\em completely transparent} if it is an isometry of sectors.
Let $\mathcal{T}$ be the minimal class of morphisms of $\mathcal{N}et^n$ that contains all the completely transparent morphisms and is closed under composition, tensor product and adjunction. The members of $\mathcal{T}$ are called {\em transparent}. \begin{exam} A 1-morphism $\mathcal{F}:\mathcal{A}\to\mathcal{B}$ of $\mathcal{N}et^1$ is (completely) transparent if and only if it is induced by an isomorphism of von Neumann algebras $\mathcal{A}\cong\mathcal{B}$. A 2-morphism of $\mathcal{N}et^1$ is (completely) transparent if and only if it is an isometry of bimodules. \end{exam} We say that a morphism of $\mathcal{QL}^n$ is {\em transparent} if it is defined in terms of transparent morphisms of $\mathcal{LQS}^n$. We say that two objects or morphisms of $\mathcal{QL}^n$ are {\em unitarily equivalent} if there is a transparent morphism between them. \begin{exam} In the notation of Example \ref{exam:to1}, two objects $(\mathcal{A},\mathcal{H})$ and $(\mathcal{B},\mathcal{K})$ of $\mathcal{QL}^1$ are unitarily equivalent if and only if there exist an isomorphism $\mathcal{A}\cong\mathcal{B}$ and an isometry of right $\mathcal{A}$-modules $\mathcal{H}\cong\mathcal{K}$. Two 1-morphisms $(\mathcal{F},f)$ and $(\mathcal{G},g)$ are unitarily equivalent if and only if there is an isometry of bimodules $\xi:\mathcal{F}\to\mathcal{G}$ such that $(\mathcal{K}\boxtimes_\mathcal{B}\xi)\circ f=g$. \end{exam} Define $\widetilde\mathcal{LQS}^n_k \subset \mathcal{LQS}^n$ to be the symmetric monoidal subcategory obtained by discarding all the nontransparent $(k+1)$- and higher morphisms. Similarly, define $\widetilde\mathcal{QL}^n_k \subset \mathcal{QL}^n$ to be the symmetric monoidal subcategory obtained by discarding all the nontransparent $(k+1)$- and higher morphisms. We obtain two towers of symmetric monoidal $(n+1)$-categories $$\widetilde\mathcal{LQS}^n_0 \subset \widetilde\mathcal{LQS}^n_1 \subset \cdots \subset \widetilde\mathcal{LQS}^n_{n+1} = \mathcal{LQS}^n,$$ $$\widetilde\mathcal{QL}^n_0 \subset \widetilde\mathcal{QL}^n_1 \subset \cdots \subset \widetilde\mathcal{QL}^n_{n+1} = \mathcal{QL}^n$$ to encode the information of local quantum symmetries. Indeed, two $n$D quantum liquids are unitarily equivalent if and only if they are equivalent in $\widetilde\mathcal{QL}^n_0$. Two defects of codimension $k$ are unitarily equivalent if and only if they are equivalent in $\widetilde\mathcal{QL}^n_k$. \begin{rem} [Holographic principle] \label{rem:holo} In \cite{KWZ15,KWZ17}, a new kind of morphism between potentially anomalous quantum liquids was introduced. In the present context, they are nothing but the 1-morphisms of the coslice $(n+1)$-category $\one^n/\widetilde\mathcal{QL}^n_1$. An object of $\one^n/\widetilde\mathcal{QL}^n_1$ is referred to as an $(n-1)$D {\em potentially anomalous quantum liquid}, which by definition is a 1-morphism $\mathcal{X}:\one^n\to\mathcal{A}$ of $\widetilde\mathcal{QL}^n_1$. Note that $\mathcal{A}$ is an $n$D anomaly-free quantum liquid and $\mathcal{X}$ is a boundary of $\mathcal{A}$. Also note that the holographic principle holds trivially: the bulk $\mathcal{A}$ is uniquely determined by the boundary $\mathcal{X}$. However, one can be more concrete about this principle. The reasoning in \cite{KWZ15,KWZ17} shows that $\mathcal{A}$ is the center of $\mathcal{X}$ in the sense that the trivial domain wall of $\mathcal{A}$, viewed as an $(n-1)$D potentially anomalous quantum liquid by folding $\mathcal{A}$, satisfies the universal property of the center of $\mathcal{X}$.
\end{rem} \section{Construction of categories of topological nets} \label{sec:fus-def} The purpose of this section is to sketch a construction of $\widehat\mathcal{N}et^n$ and $\mathcal{N}et^n$. \subsection{Nets on manifolds} Unless specified explicitly, manifolds are smooth, oriented, compact without boundary, and possibly disconnected. Let $M$ be an $n$-manifold. A {\em disk region} of $M$ is a closed region that is diffeomorphic to the $n$-disk. Let $\Disk(M)$ denote the 1-category of disk regions of $M$ whose morphisms are inclusions of disk regions. A {\em partial net} on $M$ is a functor $\mathcal{A}:\mathcal{D}\to\VN$ where $\mathcal{D}$ is a full subcategory of $\Disk(M)$. A {\em sector} of a partial net $\mathcal{A}:\mathcal{D}\to\VN$ is defined literally as in Definition \ref{defn:sector}. \begin{defn} A {\em stratified net} on $M$ consists of the following data: \begin{itemize} \item a partial net $\mathcal{A}:\mathcal{D}\to\VN$ on $M$; \item a stratification $M=M_n\supset M_{n-1}\supset\cdots\supset M_0$ where $M_i$ is a closed submanifold of dimension $i$ around which $M_{i+1}$ is equipped with a local parametrization. \end{itemize} These data are subject to the following condition: \begin{itemize} \item For every $I\in\mathcal{D}$, there exists a diffeomorphism $h:S^n_\uparrow\to I$ such that $h(S^{n-i}_\uparrow)=I\cap M_{n-i}$ and $h$ transforms the local parametrization around $S^{n-i}_\uparrow$ into that around $I\cap M_{n-i}$ for $1\le i\le d(I)$, where $d(I)$, called the {\em depth} of $I$, is the maximal integer such that $I\cap M_{n-d(I)}\ne\emptyset$. \end{itemize} We use the notation $\mathcal{A}$ to denote a stratified net. \end{defn} \begin{exam} A defect $n$-net of codimension $k$ defines a stratified net on $S^{n-1}$ with respect to the stratification $S^{n-1} \supset\cdots\supset S^{n-1-k} \supset \emptyset \supset\cdots\supset \emptyset$ and the standard local parametrizations. \end{exam} \subsection{Sewing operations} Let $\mathcal{A}_\pm:\mathcal{D}_\pm\to\VN$ be a pair of stratified nets on $(n-1)$-manifolds $M_\pm$ and let $\mathcal{B}$ be a defect $n$-net of codimension $k$. We consider an operation of sewing $\mathcal{A}_\pm$ along $\mathcal{B}$ as follows. Define subsets of $S^{n-1}$: $$S_\pm = \{ x\in S^{n-1} \mid \pm x_{n-1-k}\ge0 \}, \quad S_0=S_+\cap S_-,$$ $$S_{\pm\epsilon} = \{ x\in S^{n-1} \mid \pm x_{n-1-k}\ge-\epsilon \}, \quad S_\epsilon = S_{+\epsilon}\cap S_{-\epsilon},$$ where $\epsilon>0$ is a small real number fixed once and for all. Suppose we are given a pair of disk regions $I_\pm\in\mathcal{D}_\pm$ of depth $k$, a pair of diffeomorphisms $\phi_\pm: S_{\pm\epsilon} \to I_\pm$ that preserve the orientation, the stratification and the local parametrizations, and a pair of natural isomorphisms $\mathcal{A}_\pm(\phi_\pm(K_\pm)) \cong \mathcal{B}(K_\pm)$ for $S_{\pm\epsilon}\supset K_\pm\in\Disk^{n-1}_k$ (in particular, $\phi_\pm(K_\pm)\in\mathcal{D}_\pm$).
\medskip \begin{center} \begin{tikzpicture}[scale=1.2] \fill[color=lightgray] (1.5,0.866) arc (60:-60:1) node[above=2.5em,right=0em][color=black]{$I_-$}; \draw[thick] (1,0) circle (1) node[below=3.5em]{$M_-$}; \fill[color=lightgray] (3,0.866) arc (120:240:1) node[above=2.5em,left=0em][color=black]{$I_+$}; \draw[thick] (3.5,0) circle (1) node[below=3.5em]{$M_+$}; \end{tikzpicture} \quad\quad \begin{tikzpicture}[scale=1.2] \fill[color=lightgray] (-0.257,0.866) arc (60:40:1) arc (140:120:1) -- (0.257,-0.866) arc (240:220:1) arc (-40:-60:1) node[above=2.5em,right=0em][color=black]{$S_\epsilon$}; \draw[thick] (0,0.643) arc (40:320:1) arc (-140:140:1) node[below=5.5em]{$M$}; \end{tikzpicture} \end{center} Gluing $M_\pm\setminus\phi_\pm(S_\pm)$ and $S_\epsilon$ via the diffeomorphisms $\phi_\pm$, we obtain an $(n-1)$-manifold $$M = (M_-\setminus\phi_-(S_-)) \cup S_\epsilon \cup (M_+\setminus\phi_+(S_+))$$ which inherits a stratification from $M_\pm$. Let $\mathcal{D}\subset\Disk(M)$ be the full subcategory consisting of the disk regions $M_\pm\setminus\phi_\pm(S_\pm) \supset J_\pm\in\mathcal{D}_\pm$ and $S_\epsilon\supset J\in\Disk^{n-1}_k$. Assembling the algebras $\mathcal{A}_\pm(J_\pm)$ and $\mathcal{B}(J)$ then yields a stratified net on $M$ $$\mathcal{A}:\mathcal{D}\to\VN.$$ \smallskip We next consider an operation of sewing a pair of sectors $\mathcal{H}_\pm$ of $\mathcal{A}_\pm$. Let $$\mathcal{H} = \mathcal{H}_-\boxtimes_{\mathcal{B}(S_+)}\mathcal{H}_+.$$ Then $\mathcal{H}$ carries the structure of a sector of $\mathcal{A}$ in the following way. On the one hand, $\mathcal{A}_\pm(J_\pm)$ acts on $\mathcal{H}$ via $\mathcal{H}_\pm$ for $M_\pm\setminus\phi_\pm(S_\pm) \supset J_\pm\in\mathcal{D}_\pm$. On the other hand, we have $$\mathcal{H} \cong \mathcal{H}_- \boxtimes_{\mathcal{B}(S_{+\epsilon})} L^2(\mathcal{B}(S_{+\epsilon})) \boxtimes_{\mathcal{B}(S_+)} L^2(\mathcal{B}(S_{+\epsilon})) \boxtimes_{\mathcal{B}(S_{+\epsilon})} \mathcal{H}_+$$ and $L^2(\mathcal{B}(S_{+\epsilon})) \boxtimes_{\mathcal{B}(S_+)} L^2(\mathcal{B}(S_{+\epsilon})) \cong \mathcal{H}_\mathcal{B} \boxtimes_{\mathcal{B}(S_+)} \mathcal{H}_\mathcal{B} \cong \mathcal{H}_\mathcal{B}$. Then $\mathcal{B}(J)$ acts on $\mathcal{H}$ via $\mathcal{H}_\mathcal{B}$ for $S_\epsilon\supset J\in\Disk^{n-1}_k$. \begin{exam}[Fusion of sectors] The composition of two $n$-morphisms $\mathcal{H}_+:\mathcal{P}\to\mathcal{B}$ and $\mathcal{H}_-:\mathcal{B}\to\mathcal{Q}$ of $\widehat\mathcal{N}et^n$ is precisely defined by sewing the sectors $\mathcal{H}_\pm$ along $\mathcal{B}$, where $\mathcal{A}_+=\mathcal{S}_{\mathcal{B}|\mathcal{P}}$, $\mathcal{A}_-=\mathcal{S}_{\mathcal{Q}|\mathcal{B}}$, $M_\pm=S^{n-1}$ and $\phi_\pm=\Id_{S_{\pm\epsilon}}$. Indeed, $\mathcal{S}_{\mathcal{Q}|\mathcal{P}}$ is a completion of $\mathcal{A}$ so that $\mathcal{H}=\mathcal{H}_-\boxtimes_{\mathcal{B}(S_+)}\mathcal{H}_+$ is a sector of $\mathcal{S}_{\mathcal{Q}|\mathcal{P}}$ (note that $S_+=S^{n-1}_\uparrow$ in this case). \end{exam}
The above sewing operation can be generalized to sew $\mathcal{A}_\pm$ along $\mathcal{B}$ over a generic closed region $R\subset S^{n-1}$ with smooth boundary instead of a disk region. The local observable algebras around the seam are defined to be those of $\mathcal{B}$ around $\partial R$. \begin{prop} \label{prop:sec-ss-dual} Let $\mathcal{S}_{\mathcal{B}|\mathcal{A}}$ be the partial $n$-net from the definition of an $n$-morphism $\mathcal{A}\to\mathcal{B}$ of $\mathcal{N}et^n$. Then $\Sect(\mathcal{S}_{\mathcal{B}|\mathcal{A}})$ is equivalent to a finite direct sum of $\widehat\mathrm{Hilb}$. A sector of $\mathcal{S}_{\mathcal{B}|\mathcal{A}}$ is semisimple if and only if it is a dualizable $\mathcal{B}(S^{n-1}_\uparrow)$-$\mathcal{A}(S^{n-1}_\uparrow)$-bimodule. \end{prop} \begin{proof}[Sketch of proof] The proof follows the lines of \cite[Theorem 3.14]{BDH15} (see also \cite{KLM01} and \cite[Theorem 1.10]{BDH19b}). Replacing $\mathcal{A}$ and $\mathcal{B}$ by $\mathcal{A}\oplus\mathcal{B}$ if necessary, we may assume without loss of generality that $\mathcal{A}=\mathcal{B}$. Then $\mathcal{S}_{\mathcal{B}|\mathcal{A}}=\mathcal{A}$. Sewing $\mathcal{A}$ and $\mathcal{A}^\mathrm{op}$ over the cylinder $\{x\in S^{n-1} \mid |x_0|\le1/2\}$, we obtain a stratified net $\mathcal{S}$ as well as a sector $\mathcal{H}$, where $\mathcal{S}$ is a disjoint union of two copies of $\mathcal{A}$. By the split property and the duality condition, $\mathcal{H}$ is a dualizable $\mathcal{A}(S^{n-1}_\uparrow)\bar\otimes\mathcal{A}(S^{n-1}_\downarrow)$-$\mathcal{A}(S^{n-1}_\uparrow)\bar\otimes\mathcal{A}(S^{n-1}_\downarrow)$-bimodule. Note that the duality maps can be chosen to be homomorphisms of sectors of $\mathcal{S}$. Therefore, $\mathcal{H}$ has to be a semisimple sector of $\mathcal{S}$. That is, $\mathcal{H}$ has the form $\mathcal{H}_1\otimes\mathcal{K}_1\oplus\cdots\oplus\mathcal{H}_m\otimes\mathcal{K}_m$ where $\mathcal{H}_i$ and $\mathcal{K}_i$ are irreducible sectors of $\mathcal{A}$. Moreover, $\mathcal{H}_i$ and $\mathcal{K}_i$ are dualizable $\mathcal{A}(S^{n-1}_\uparrow)$-$\mathcal{A}(S^{n-1}_\uparrow)$-bimodules. Applying an argument as in the proof of \cite[Theorem 3.14]{BDH15}, we see that every sector of $\mathcal{A}$ is a direct sum of possibly infinitely many copies of the $\mathcal{H}_i$'s. \end{proof} \begin{cor} \label{cor:net-sec-dual} The composition of $n$-morphisms of $\mathcal{N}et^n$ is well-defined. Moreover, all the $n$-morphisms of $\mathcal{N}et^n$ are dualizable. \end{cor} \subsection{Fusion of defects} Let $\mathcal{A}_+:\mathcal{P}\to\mathcal{B}$ and $\mathcal{A}_-:\mathcal{B}\to\mathcal{Q}$ be two $(k+1)$-morphisms of $\widehat\mathcal{N}et^n$ where $0\le k\le n-2$. We construct the composition $\mathcal{A}_-\circ\mathcal{A}_+:\mathcal{P}\to\mathcal{Q}$ by using the sewing operation defined in the previous subsection. \smallskip Let $M_\pm=S^{n-1}$ and let $\phi_\pm: S_{\pm\epsilon} \to \overline{S^{n-1}\setminus S_{\mp\epsilon}}$ be the dilation maps. Sewing $\mathcal{A}_\pm$ along $\mathcal{B}$ via $\phi_\pm$ then yields a stratified net $\mathcal{A}:\mathcal{D}\to\VN$ on an $(n-1)$-manifold $M$ which is diffeomorphic to $S^{n-1}$.
Moreover, we have a sector of $\mathcal{A}$ $$\mathcal{H} = \mathcal{H}_{\mathcal{A}_-}\boxtimes_{\mathcal{B}(S_+)}\mathcal{H}_{\mathcal{A}_+}.$$ Note that the isometry group $\Isom(S_0)$ acts on $M$ and $S^{n-1}$ leaving the $x_{n-1-k}$-axis fixed and that $M$ contains a copy of $S^{n-1}\setminus S_\epsilon$. Extend the identity map of $S^{n-1}\setminus S_\epsilon$ to an $\Isom(S_0)$-equivariant diffeomorphism $\pi_0:M\to S^{n-1}$. Let $C\subset M_{n-1-k}$ be the closed subset lying between the two copies of $S_0$ in $M$. Note that $C$ is a cylinder over $S^{n-2-k}$. Modify $\pi_0$ on a sufficiently small tubular neighborhood of $C$ to an $\Isom(S^{n-2-k})$-equivariant smooth map $\pi:M\to S^{n-1}$ such that $\pi$ restricts to the projection $C\twoheadrightarrow S^{n-2-k}$ and maps the complement of $C$ diffeomorphically onto that of $S^{n-2-k}$ preserving the stratification and the local parametrizations. \medskip \begin{center} \begin{tikzpicture}[scale=1.2] \fill[color=lightgray] (-0.257,0.866) arc (60:40:1) arc (140:120:1) -- (0.257,-0.866) arc (240:220:1) arc (-40:-60:1) node[above=2.5em,right=0em][color=black]{$S_\epsilon$} node[above=2.5em,left=1em][color=black]{$S_{+\epsilon}$} node[above=2.5em,right=3em][color=black]{$S_{-\epsilon}$}; \draw[thick] (0,0.643) arc (40:320:1) arc (-140:140:1) node[below=5.5em]{$M$}; \draw[ultra thick,color=red] (-0.757,1) arc (90:40:1) arc (140:90:1) node[left=1.7em]{$C$}; \draw[ultra thick,color=red] (-0.757,-1) arc (-90:-40:1) arc (-140:-90:1); \draw[->] (3.5,0) -- (3.5,.7) node[right]{$x_0$}; \draw[->] (3.5,0) -- (2.8,0) node[below]{$x_{n-1-k}$}; \end{tikzpicture} \end{center} \medskip Now we are ready to define the composition $\mathcal{A}_-\circ\mathcal{A}_+:\mathcal{P}\to\mathcal{Q}$: $$(\mathcal{A}_-\circ\mathcal{A}_+)(I) = \begin{cases} \mathcal{P}(I), & \text{if $I\cap S^{n-1-k}_+=\emptyset$}, \\ \mathcal{Q}(I), & \text{if $I\cap S^{n-1-k}_-=\emptyset$}, \\ \mathcal{O}(\mathcal{H},\pi^{-1}(I)), & \text{if $I\cap S^{n-2-k}\ne\emptyset$}, \\ \end{cases} $$ where $\mathcal{O}(\mathcal{H},R)$ is the von Neumann algebra on $\mathcal{H}$ generated by the images of $\mathcal{A}(J)$ for all $R\supset J\in\mathcal{D}$. \begin{prop} The pair $(\mathcal{A}_-\circ\mathcal{A}_+,\mathcal{H})$ is a well-defined $(k+1)$-morphism of $\widehat\mathcal{N}et^n$. \end{prop} \begin{proof}[Sketch of proof] We need to show that $(\mathcal{A}_-\circ\mathcal{A}_+,\mathcal{H})$ is a defect $n$-net of codimension $k+1$. The axioms of locality, additivity and covariance are clear. The difficult part of the proof is to show the vacuum property (aka the $1\boxtimes1$-isomorphism in \cite{BDH19a}). Let $A = (\mathcal{A}_-\circ\mathcal{A}_+)(S^{n-1}_\uparrow)$. Identify $\mathcal{H}_{\mathcal{A}_\pm}$ with $L^2(\mathcal{A}_\pm(S^{n-1}_\uparrow))$ so that it comes equipped with a modular conjugation $j_\pm:\mathcal{H}_{\mathcal{A}_\pm}\to\bar\mathcal{H}_{\mathcal{A}_\pm}$. Then $j_+$ and $j_-$ induce a conjugation $j:\mathcal{H}\to\bar\mathcal{H}$. Let $B = \mathcal{B}(S_+)\vee\mathcal{A}_+(S^{n-1}_\uparrow)$ be the von Neumann algebra on $\mathcal{H}_{\mathcal{A}_+}$. Choose a decomposition of the left $B$-module $\mathcal{H}_{\mathcal{A}_+} = \bigoplus \overline{B\xi_\alpha}$ where $\xi_\alpha\in\mathcal{H}_{\mathcal{A}_+}$ is $j_+$-invariant. Let $p_\alpha: \mathcal{H}_{\mathcal{A}_+} \to \overline{B\xi_\alpha}$ be the projection so that $p_\alpha\xi_\alpha=\xi_\alpha$ and $\sum p_\alpha = 1$. 
Since $p_\alpha \in B' = \mathcal{B}(S_+)'\cap\mathcal{A}_+(S^{n-1}_\downarrow)$, $j p_\alpha j$ defines a projection in $A$. Consider the trivial case $\mathcal{A}_+=\Id_\mathcal{B}$ so that $\mathcal{H}_{\mathcal{A}_+}\cong L^2(\mathcal{B}(S_+))$. Recall that $\mathcal{H}$ is a completion of $\hom_{\mathcal{B}(S_-)}(L^2(\mathcal{B}(S_-)),\mathcal{H}_{\mathcal{A}_-}) \otimes_{\mathcal{B}(S_+)} \mathcal{H}_{\mathcal{A}_+}$. Choose a decomposition of the left $A$-module $\mathcal{H} = \bigoplus\overline{A(\phi_\beta\otimes\mathcal{H}_{\mathcal{A}_+})}$ where $\phi_\beta\in\hom_{\mathcal{B}(S_-)}(L^2(\mathcal{B}(S_-)),\mathcal{H}_{\mathcal{A}_-})$ is $j_-$-invariant. Let $q_\beta:\mathcal{H}\to\overline{A(\phi_\beta\otimes\mathcal{H}_{\mathcal{A}_+})}$ be the projection so that $\sum q_\beta=1$. Then $q_\beta$ can be regarded as a projection in $\mathcal{B}(S_-)'\cap\mathcal{A}_-(S^{n-1}_\downarrow)$ such that $q_\beta\phi_\beta=\phi_\beta$. For the general case, we have $\mathcal{H} = \bigoplus\overline{A(\phi_\beta\otimes\xi_\alpha)}$. Let $\omega_{\alpha\beta}$ be the positive linear functional on $A$ associated to the $j$-invariant vector $\phi_\beta\otimes\xi_\alpha$. Then $(j p_\alpha q_\beta j)\omega_{\alpha\beta}=\omega_{\alpha\beta}$. Thus the $\omega_{\alpha\beta}$ are mutually orthogonal. Therefore, the faithful left $A$-module $\mathcal{H}$ is induced by the weight $\sum\omega_{\alpha\beta}$. Since the weight $\sum\omega_{\alpha\beta}$ is compatible with $j$, it has to be faithful and the associated modular conjugation is $j$. This shows that the $A$-$A$-bimodule $\mathcal{H}$ is isometric to $L^2(A)$, as desired. \end{proof} \begin{prop} If both of $\mathcal{A}_{\pm}$ belong to $\mathcal{N}et^n$, so does $\mathcal{A}_-\circ\mathcal{A}_+$. \end{prop} \begin{proof}[Sketch of proof] We need to show that $\mathcal{A}_-\circ\mathcal{A}_+$ is finite. The split property is clear. The duality condition is proved by constructing duality maps explicitly. By constructing duality maps explicitly again, one shows that the $\mathcal{B}(S_+)$-$\mathcal{O}(\mathcal{H}_{\mathcal{A}_+},S_+')^\mathrm{op}$-bimodule $\mathcal{H}_{\mathcal{A}_+}$ is dualizable and similarly for the $\mathcal{O}(\mathcal{H}_{\mathcal{A}_-},S_-')$-$\mathcal{B}(S_+)$-bimodule $\mathcal{H}_{\mathcal{A}_-}$. Therefore, the $\mathcal{O}(\mathcal{H}_{\mathcal{A}_-},S_-')$-$\mathcal{O}(\mathcal{H}_{\mathcal{A}_+},S_+')^\mathrm{op}$-bimodule $\mathcal{H}$ is dualizable. This implies that $\mathcal{H}$ is a semisimple sector of $\mathcal{A}_-\circ\mathcal{A}_+$. \end{proof} \begin{rem} The fusion of defects was carried out in \cite{BDH19a} by using (a variant of) fiber product of von Neumann algebras defined in \cite{Ti08}. However, the construction relies on the strong additivity axiom which is not satisfied by general topological nets, for example, those constructed in Section \ref{sec:net-oss} and Section \ref{sec:lw-net}. In the special case where $\mathcal{B}$ does satisfy the strong additivity axiom, our construction of the fusion $\mathcal{A}_-\circ\mathcal{A}_+$ for $n=2$ clearly recovers that of \cite{BDH19a}. \end{rem} We have defined all ``vertical'' compositions of morphisms of $\widehat\mathcal{N}et^n$. ``Horizontal'' compositions are defined in the same way as ``vertical'' ones. It remains to construct various coherence relations of $\widehat\mathcal{N}et^n$ to establish Theorem \ref{thm:netn}. This part of the work is essentially tautological but appeals to more sophisticated sewing operations of stratified nets.
So far, we have made sufficient preparation for studying the condensation theory of topological nets, which is the main topic of this paper. We shall leave the details of the construction of $\widehat\mathcal{N}et^n$ to a future work. \begin{rem} One can show that $\mathcal{N}et^n$ has duals: the dual of an object $(\mathcal{A},\mathcal{H}_\mathcal{A})$ is $(\mathcal{A}^\mathrm{op},\bar\mathcal{H}_\mathcal{A})$, the dual of an $n$-morphism $\mathcal{H}$ is the complex conjugate $\bar\mathcal{H}$ and the dual of a $k$-morphism $(\mathcal{F},\mathcal{H}_\mathcal{F})$ for $1\le k<n$ is $(r_k^*\mathcal{F},\bar\mathcal{H}_\mathcal{F})$ where $r_k$ is the reflection across the hyperplane $x_{n-k}=0$. The proof is an application of sewing arguments and will not be given here. See \cite{BDH19b} for a dualizability result of the symmetric monoidal $*$-3-category of conformal nets. We do not need this fact in this paper. \end{rem} \section{Condensation theory of topological nets} \label{sec:condense-net} The purpose of this section is to sketch a proof of Theorem \ref{thm:lqs-cc}. \subsection{Condensation of 1D defects} Let $\mathcal{R}$ be a $*$-condensation monad on an $(n-1)$-morphism $\mathcal{A}$ of $\widehat\mathcal{N}et^n$ (in particular, $\mathcal{R}$ is a sector of $\mathcal{A}$). Then $\mathcal{R}$ induces a $*$-condensation monad on the von Neumann algebra $\mathcal{A}(S^{n-1}_\uparrow)$ in the Morita $*$-2-category of von Neumann algebras. Applying Theorem \ref{thm:vn-cc} and Remark \ref{rem:vn-cc} then yields a canonical $*$-condensation $\mathcal{A}\condense\mathcal{B}$ extending the $*$-condensation monad $\mathcal{R}$. More precisely, the $*$-condensation $\mathcal{A}\condense\mathcal{B}$ is defined by a pair of $n$-morphisms $\mathcal{H}_\mathcal{B}:\mathcal{A}\to\mathcal{B}$ and $\mathcal{R}:\mathcal{B}\to\mathcal{A}$. Now we assume that the $*$-condensation monad $\mathcal{R}$ is defined in $\mathcal{N}et^n$. Since $n$-morphisms of $\mathcal{N}et^n$ are dualizable by Corollary \ref{cor:net-sec-dual}, we may assume that $\mathcal{R}$ is unital (see \cite[Theorem 3.1.7]{GJF19}). According to Remark \ref{rem:vn-cc-dual}, $\mathcal{R}:\mathcal{B}\to\mathcal{A}$ is right dual to $\mathcal{H}_\mathcal{B}:\mathcal{A}\to\mathcal{B}$. It is easy to verify that $\mathcal{A}\condense\mathcal{B}$ is a well-defined $*$-condensation in $\mathcal{N}et^n$: Since $\mathcal{R}:\mathcal{A}\to\mathcal{A}$ is semisimple and since the action of $\mathcal{A}(S^{n-1}_\uparrow)$ on $\mathcal{R}$ factors through $\mathcal{B}(S^{n-1}_\uparrow)$, $\mathcal{R}:\mathcal{B}\to\mathcal{A}$ is semisimple. Then $\mathcal{H}_\mathcal{B}:\mathcal{A}\to\mathcal{B}$ is also semisimple. Since the action of $\mathcal{A}(S^{n-1}_\downarrow)$ on $\mathcal{H}_\mathcal{B}$ factors through $\mathcal{B}(S^{n-1}_\downarrow)$, $\mathcal{H}_\mathcal{B}$ is a semisimple sector of $\mathcal{B}$. Since $\Id_\mathcal{B}$ is a direct summand of $\mathcal{H}_\mathcal{B}\circ\mathcal{R}$, the duality condition of $\mathcal{A}$ implies that of $\mathcal{B}$. Hence $\mathcal{B}$ is finite. \subsection{Condensation of topological 2-nets} We consider in this subsection the simplest nontrivial case $n=2$. Let $\mathcal{A}=\underline{\mathbb{C}}$ be the trivial topological 2-net. We establish Theorem \ref{thm:lqs-cc} by showing that every $*$-condensation monad on $\mathcal{A}$ admits a $*$-condensate in $\mathcal{LQS}^2$ and every $*$-condensation bimodule is induced by a 1-morphism of $\mathcal{LQS}^2$.
Let $\mathcal{R}$ be a unital $*$-condensation monad on $\mathcal{A}$. To unpack the definition of $\mathcal{R}$, we assume a priori that $\mathcal{R}$ is induced by a $*$-condensation $f:\mathcal{A}\condense\mathcal{E}$ defined by the counit map $v:f\circ f^\vee\to\Id_\mathcal{E}$ and a 3-morphism $\alpha:v\circ v^\vee\to\Id_{\Id_\mathcal{E}}$ such that $\alpha\circ\alpha^*=1$. (Indeed, $\mathcal{E}$ lives in the $*$-condensation completion of $\mathcal{LQS}^2$ which is equivalent to $3\mathrm{Hilb}$.) See the following picture. By composing the 1-morphisms in the left configuration horizontally to eliminate $\mathcal{E}$, we obtain a configuration in terms of the defining data of $\mathcal{R}$ on the right. \medskip \begin{center} \begin{tikzpicture}[scale=1] \fill[lightgray] (0,0) circle (1.5); \draw[thick] (1.5,0) node[right]{$f$} arc(0:90:1.5) node{$\bullet$} node[above]{$u^\vee$} arc(90:180:1.5) node[left]{$f^\vee$} arc(180:270:1.5) node{$\bullet$} node[below]{$u$} arc(270:360:1.5); \fill[fill=white] (0,0) circle (0.5) node{$\mathcal{A}$} node[left=5em,above=3em]{$\mathcal{A}$} node[left=2.5em,above=1.2em]{$\mathcal{E}$}; \draw[thick] (.5,0) node[right]{$f^\vee$} arc(0:90:.5) node{$\bullet$} node[above]{$v$} arc(90:180:.5) node[left]{$f$} arc(180:270:.5) node{$\bullet$} node[below]{$v^\vee$} arc(270:360:.5); \end{tikzpicture} \quad\quad \raisebox{5.3em}{$=$} \quad\quad \begin{tikzpicture}[scale=1] \draw[thick] (0,0.5) node{$\bullet$} node[above left]{$m$} -- (0,1.5) node{$\bullet$} node[above]{$u^\vee$} node[below=1.2em,right]{$\mathcal{R}$}; \draw[thick] (0,-1.5) node{$\bullet$} node[below]{$u$} -- (0,-0.5) node{$\bullet$} node[below left]{$m^\vee$} node[below=1.2em,right]{$\mathcal{R}$}; \draw[thick] (0,0) circle (0.5) node{$\mathcal{A}$} node[left=3.5em,above=3em]{$\mathcal{A}$} node[left=1.2em]{$\mathcal{R}$} node[right=1.2em]{$\mathcal{R}$}; \end{tikzpicture} \end{center} \medskip The 2-morphisms $u:\Id_\mathcal{A}\to\mathcal{R}$ and $m:\mathcal{R}\circ\mathcal{R}\to\mathcal{R}$ exhibit $\mathcal{R}$ as an algebra in the monoidal $*$-2-category $\Hom_{\mathcal{LQS}^2}(\mathcal{A},\mathcal{A})$. Moreover, the 2-morphism $m^\vee\circ u:\Id_\mathcal{A}\to\mathcal{R}\circ\mathcal{R}$ exhibits $\mathcal{R}$ as self-dual. Let $\theta:\Id_\mathcal{A}\to\mathcal{R}$ be a 2-morphism which contains all the simple 2-morphisms $\Id_\mathcal{A}\to\mathcal{R}$ as direct summands and admits an isometric embedding $\mu:u\hookrightarrow\theta$. Note that $\theta^\vee:\mathcal{R}\to\Id_\mathcal{A}$ induces a 2-morphism $\Id_\mathcal{A}\to\mathcal{R}^\vee$ which, by the self-duality of $\mathcal{R}$, determines a 2-morphism $\theta^*:\Id_\mathcal{A}\to\mathcal{R}$. We follow the lines of Section \ref{sec:lw-net} to construct a $*$-condensate $\mathcal{B}$ of $\mathcal{R}$. Let $P=\{p_1,\dots,p_k\}\subset S^1$ be a finite subset where the points $p_i$ are in cyclic order. We have a sector of $\mathcal{A}$ defined by the following composition: $$\mathcal{H}_P: \Id_\mathcal{A} \xrightarrow{(\theta\theta^*)^k} \mathcal{R}^{2k} \xrightarrow{m} \mathcal{R} \xrightarrow{u^\vee} \Id_\mathcal{A}$$ where $\mathcal{R}^k$ denotes the $k$-fold composition $\mathcal{R}\circ\mathcal{R}\circ\cdots\circ\mathcal{R}$. For any inclusion $P\subset Q$, the isometric embedding $\mu:u\hookrightarrow\theta$ induces an isometric embedding $\mathcal{H}_P\hookrightarrow\mathcal{H}_Q$.
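To make the definition concrete, consider the smallest case $P=\{p_1\}$ (a sketch only, suppressing the coherence data of $\Hom_{\mathcal{LQS}^2}(\mathcal{A},\mathcal{A})$): the defining composition reads $$\mathcal{H}_{\{p_1\}}:\ \Id_\mathcal{A} \xrightarrow{\theta\theta^*} \mathcal{R}^2 \xrightarrow{m} \mathcal{R} \xrightarrow{u^\vee} \Id_\mathcal{A},$$ that is, one copy of $\theta$ and one copy of $\theta^*$ are inserted at $p_1$ and then contracted by the multiplication $m$ and the map $u^\vee$. For $|P|=k$ there are $2k$ such insertions, one pair $\theta\theta^*$ for each point of $P$, and the embedding $\mathcal{H}_P\hookrightarrow\mathcal{H}_Q$ simply inserts additional pairs at the points of $Q\setminus P$ via $\mu$.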
Form a direct limit in $\Sect(\mathcal{A})$ $$\mathcal{H}_\mathcal{B} = \varinjlim_P \mathcal{H}_P.$$ To facilitate the generalization to higher dimensions, we give a diagrammatic description of $\mathcal{H}_P$ by means of the postulated $*$-condensate $\mathcal{E}$. Consider the following configuration where we insert a copy of the 2-morphism $\theta$ (resp. $\theta^*$) on the left (resp. right) of each point of $P$: \medskip \begin{center} \begin{tikzpicture}[scale=1] \filldraw[thick,fill=lightgray] (0,0) circle (1.5) node[left=5em,above=2em]{$\mathcal{A}$} node[]{$\mathcal{E}$}; \draw (1.5,0) arc (0:25:1.5) node{$\bullet$} node[right]{$\theta$} arc (25:45:1.5) node{$+$} node[above=.5em,right]{$p_1$} arc (45:65:1.5) node{$\bullet$} node[right=.5em,above]{$\theta^*$} arc (65:205:1.5) node{$\bullet$} node[left]{$\theta$} arc (205:225:1.5) node{$+$} node[below=.5em,left]{$p_2$} arc (225:250:1.5) node{$\bullet$} node[below=.5em,left]{$\theta^*$} arc (250:295:1.5) node{$\bullet$} node[below=.5em,right]{$\theta$} arc (295:315:1.5) node{$+$} node[below=.5em,right]{$p_3$} arc (315:340:1.5) node{$\bullet$} node[right]{$\theta^*$} ; \end{tikzpicture} \quad\quad \raisebox{5em}{$:=$} \quad\quad \raisebox{.5em}{\begin{tikzpicture}[scale=.9] \filldraw[thick,fill=lightgray] (1.65,0) arc (0:180:1.65 and 1.65) node[above=2.3em]{$\mathcal{A}$} -- (-1.65,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.15) -- (-1.35,-.5) arc (180:90:.15) node{$\times$} node[above]{$p_1$} arc (90:0:.15) -- (-1.05,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta^*$} arc (-90:0:.15) -- (-.75,-.5) arc (180:0:.15) -- (-.45,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.15) -- (-.15,-.5) arc (180:90:.15) node{$\times$} node[above]{$p_2$} node[above=2em]{$\mathcal{E}$} arc (90:0:.15) -- (.15,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta^*$} arc (-90:0:.15) -- (.45,-.5) arc (180:0:.15) -- (.75,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.15) -- (1.05,-.5) arc (180:90:.15) node{$\times$} node[above]{$p_3$} arc (90:0:.15) -- (1.35,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta^*$} arc (-90:0:.15) -- (1.65,0); \end{tikzpicture}} \end{center} \medskip The left diagram is interpreted by the right one. By composing the 1-morphisms horizontally to eliminate $\mathcal{E}$, we obtain a configuration of which the vertical composition recovers the above definition of $\mathcal{H}_P$. 
\medskip \begin{center} \begin{tikzpicture}[scale=1] \filldraw[thick,fill=lightgray] (0,0) circle (1.5) node[left=5em,above=2em]{$\mathcal{A}$} node[]{$\mathcal{E}$}; \draw (1.5,0) arc (0:25:1.5) node{$\bullet$} node[right]{$\theta$} arc (25:45:1.5) node{$+$} node[above=.5em,right]{$p_1$} arc (45:65:1.5) node{$\bullet$} node[right=.5em,above]{$\theta^*$} arc (65:205:1.5) node{$\bullet$} node[left]{$\theta$} arc (205:225:1.5) node{$+$} node[below=.5em,left]{$p_2$} arc (225:250:1.5) node{$\bullet$} node[below=.5em,left]{$\theta^*$} arc (250:295:1.5) node{$\bullet$} node[below=.5em,right]{$\theta$} arc (295:315:1.5) node{$+$} node[below=.5em,right]{$p_3$} arc (315:340:1.5) node{$\bullet$} node[right]{$\theta^*$} ; \draw[ultra thick,color=red] (1.061,1.061) node[below=2.5em,right=1.2em]{$I$} arc (45:-135:1.5); \end{tikzpicture} \quad \raisebox{5em}{$=$} \quad \raisebox{.5em}{\begin{tikzpicture}[scale=1] \filldraw[thick,fill=lightgray] (-1.05,0) node{$\times$} node[left]{$p_2$} node[left=1.5em,above=1.5em]{$\mathcal{A}$} node[right=1em,above=.5em]{$\mathcal{E}$} -- (-1.05,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta^*$} arc (-90:0:.15) -- (-.75,-.5) arc (180:0:.15) -- (-.45,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.15) -- (-.15,-.5) arc (180:90:.15) node{$\times$} node[above]{$p_3$} arc (90:0:.15) -- (.15,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta^*$} arc (-90:0:.15) -- (.45,-.5) arc (180:0:.15) -- (.75,-1) arc (-180:-90:.15) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.15) -- (1.05,0) node{$\times$} node[right]{$p_1$} -- (1.05,.8) arc (0:90:.35) node{$\bullet$} node[above]{$\theta^\vee$} arc (90:180:.35) arc (0:-180:.35) arc (0:90:.35) node{$\bullet$} node[above]{$\theta^{*\vee}$} arc (90:180:.35) -- (-1.05,0); \end{tikzpicture}} \quad \raisebox{5em}{$=$} \raisebox{1.5em}{\begin{tikzpicture}[scale=1] \draw[thick] (0,-.7) node{$\bullet$} node[below]{$\mathcal{H}_{P,I}$} -- (0,0) node[right]{$\mathcal{R}$} -- (0,.7) node{$\bullet$} node[above]{$\bar\mathcal{H}_{P,I'}$}; \end{tikzpicture}} \end{center} \medskip Let $I\subset S^1$ be an arc and let $P\subset S^1$ be a finite subset. Enlarging $P$ if necessary, we assume $\partial I\subset P$. Note that $I$ and $I'$ divide the collection of the inserted 2-morphisms $\theta$ and $\theta^*$ into two disjoint parts. Correspondingly, $\mathcal{H}_P$ admits a decomposition as depicted in the above picture $$\mathcal{H}_P: \Id_\mathcal{A} \xrightarrow{\mathcal{H}_{P,I}} \mathcal{R} \xrightarrow{\bar\mathcal{H}_{P,I'}} \Id_\mathcal{A}.$$ Passing to the direct limit yields a decomposition $$\mathcal{H}_\mathcal{B}: \Id_\mathcal{A} \xrightarrow{\mathcal{H}_{\mathcal{B},I}} \mathcal{R} \xrightarrow{\bar\mathcal{H}_{\mathcal{B},I'}} \Id_\mathcal{A}.$$ Define $\mathcal{B}(I)$ to be the von Neumann algebra $$\mathcal{B}(I) = \hom_{\mathcal{R}(S^1_\uparrow)}(\mathcal{H}_{\mathcal{B},I},\mathcal{H}_{\mathcal{B},I}).$$ We need to verify that $(\mathcal{B},\mathcal{H}_\mathcal{B})$ is a finite topological 2-net. Note that $\bar\mathcal{H}_{P,I}\boxtimes_{\mathcal{R}(S^1_\uparrow)}\mathcal{H}_{P,I}$ defines a unital $*$-condensation monad on $\Id_\mathcal{A}$, determining a $*$-condensation $\Id_\mathcal{A}\condense\tilde\mathcal{B}$ in $\mathcal{LQS}^2$ with $\tilde\mathcal{B}(S^1_\uparrow) = \hom_{\mathcal{R}(S^1_\uparrow)}(\mathcal{H}_{P,I},\mathcal{H}_{P,I})$ and $\mathcal{H}_{\tilde\mathcal{B}} = \bar\mathcal{H}_{P,I}\boxtimes_{\mathcal{R}(S^1_\uparrow)}\mathcal{H}_{P,I}$. 
In particular, $\mathcal{H}_{\tilde\mathcal{B}} \cong L^2(\tilde\mathcal{B}(S^1_\uparrow))$. Passing to the direct limit yields $\mathcal{H}_\mathcal{B} \cong L^2(\mathcal{B}(S^1_\uparrow))$. This proves the vacuum property of $\mathcal{B}$. The center $Z(\mathcal{O}(\mathcal{H}_\mathcal{B}))$ can be computed by any finite configuration. Hence $\mathcal{H}_\mathcal{B}$ is a semisimple sector of $\mathcal{B}$. The duality condition of $\mathcal{B}$ is also proved by reducing the problem to finite configurations. In order to obtain a 1-morphism $\mathcal{A}\to\mathcal{B}$ to extend the $*$-condensation monad $\mathcal{R}$, one simply applies the above construction on a half circle instead of $S^1$. \begin{rem} Since $\mathcal{R}$ is a unital $*$-condensation algebra in $\mathcal{LQS}^1 \simeq 2\mathrm{Hilb}$, $\mathcal{R}$ may be regarded as a unitary multi-fusion 1-category $\mathcal{C}$ so that $m:\mathcal{R}\circ\mathcal{R}\to\mathcal{R}$ is identified with the tensor product $\otimes:\mathcal{C}\boxtimes\mathcal{C}\to\mathcal{C}$ and $u:\Id_\mathcal{A}\to\mathcal{R}$ defines the tensor unit of $\mathcal{C}$. Moreover, the 1-morphism $\theta: \Id_\mathcal{A} \to \mathcal{R}$ of $\mathcal{LQS}^1$ defines an object $V\in\mathcal{C}$. The topological 2-net $(\mathcal{B},\mathcal{H}_\mathcal{B})$ constructed above recovers the Levin-Wen net from Section \ref{sec:lw-net} associated to $\mathcal{C}$ and $V$. \end{rem} Now we consider a $*$-condensation bimodule $\mathcal{S}$ over unital $*$-condensation monads $\mathcal{R}$ and $\mathcal{R}'$ on $\mathcal{A}$. To unpack the definition of $\mathcal{S}$, we assume a priori that $\mathcal{R}$ and $\mathcal{R}'$ are induced by $*$-condensations $f:\mathcal{A}\condense\mathcal{E}$ and $f':\mathcal{A}\condense\mathcal{E}'$, respectively, and $\mathcal{S}=f'^\vee\circ g\circ f$ where $g:\mathcal{E}\to\mathcal{E}'$ is a 1-morphism of $\mathcal{LQS}^2$. See the following picture: \medskip \begin{center} \begin{tikzpicture}[scale=1] \fill[fill=lightgray] (-.2,1.5) -- (.2,1.5) -- (.2,1.183) arc (80.4:-80.4:1.2) -- (.2,-1.5) -- (-.2,-1.5) -- (-.2,-1.183) arc (260.4:99.6:1.2) -- (-.2,1.5); \draw[thick] (.2,1.5) -- (.2,1.183) arc (80.4:-80.4:1.2) -- (.2,-1.5); \draw[thick] (-.2,-1.5) -- (-.2,-1.183) arc (260.4:99.6:1.2) -- (-.2,1.5); \draw[thick] (0,1.5) -- (0,0) node{$g$} -- (0,-1.5); \node at (-1.4,.9) {$\mathcal{A}$}; \node at (-1,0) {$\mathcal{E}'$}; \node at (1,0) {$\mathcal{E}$}; \node at (-1.4,0) {$f'^\vee$}; \node at (1.4,0) {$f$}; \filldraw[thick,fill=white] (.2,.77) arc(75.5:-75.5:.8) -- (.2,0) node[right]{$\mathcal{A}$} -- (.2,.77); \filldraw[thick,fill=white] (-.2,.77) arc(104.5:255.5:.8) -- (-.2,0) node[left]{$\mathcal{A}$} -- (-.2,.77); \end{tikzpicture} \quad\quad \raisebox{4em}{$=$} \quad\quad \begin{tikzpicture}[scale=1] \draw[thick] (0,1.5) -- (0,1) node[above=1em,right]{$\mathcal{S}$} node{$\bullet$} -- (0,0) node[right]{$\mathcal{S}$} -- (0,-1) node[below=1em,right]{$\mathcal{S}$} node{$\bullet$} -- (0,-1.5) ; \draw[thick] (0,0) circle (1) node[left=2.7em]{$\mathcal{R}'$} node[right=2.7em]{$\mathcal{R}$}; \node at (-1.4,.9) {$\mathcal{A}$}; \end{tikzpicture} \end{center} \medskip Let $\zeta:\Id_\mathcal{A}\to\mathcal{S}$ be a 2-morphism which contains all the simple 2-morphisms $\Id_\mathcal{A}\to\mathcal{S}$ as direct summands. Let $I\in\Disk^1_1$ be an arc and let $P\subset S^1$ be a finite subset containing $\partial I$. Assume without loss of generality that $I$ intersects $S^0_\downarrow$.
We have a sector $\mathcal{H}_P$ of $\mathcal{A}$ as depicted in the following picture: \medskip \begin{center} \begin{tikzpicture}[scale=1] \filldraw[thick,fill=lightgray] (0,0) circle (1.5) node[left=5em,above=2em]{$\mathcal{A}$} node[left=1em]{$\mathcal{E}'$} node[right=1em]{$\mathcal{E}$}; \draw (1.5,0) arc (0:25:1.5) node{$\bullet$} node[right]{$\theta$} arc (25:45:1.5) node{$+$} node[above=.5em,right]{$p_1$} arc (45:65:1.5) node{$\bullet$} node[right=.5em,above]{$\theta^*$} arc (65:205:1.5) node{$\bullet$} node[left]{$\theta'$} arc (205:225:1.5) node{$+$} node[below=.5em,left]{$p_2$} arc (225:250:1.5) node{$\bullet$} node[below=.5em,left]{$\theta'^*$} ; \draw[thick] (0,1.5) node[above]{$\zeta^*$} node{$\bullet$} -- (0,0) -- (0,-1.5) node[below]{$\zeta$} node{$\bullet$}; \draw[ultra thick,color=red] (1.061,1.061) node[below=2.5em,right=1.2em]{$I$} arc (45:-135:1.5); \end{tikzpicture} \quad \raisebox{5.5em}{$:=$} \quad \raisebox{1em}{\begin{tikzpicture}[scale=1] \filldraw[thick,fill=lightgray] (-1,0) node{$\times$} node[left]{$p_2$} node[left=1.5em,above=1.5em]{$\mathcal{A}$} -- (-1,-1) arc (-180:-90:.2) node{$\bullet$} node[below]{$\theta'^*$} arc (-90:0:.2) -- (-.6,-.5) arc (180:0:.2) -- (-.2,-1) arc (-180:-90:.2) node{$\bullet$} node[below]{$\zeta$} arc (-90:0:.2) -- (.2,-.5) arc (180:0:.2) -- (.6,-1) arc (-180:-90:.2) node{$\bullet$} node[below]{$\theta$} arc (-90:0:.2) -- (1,0) node{$\times$} node[right]{$p_1$} -- (1,1) arc (0:90:.2) node{$\bullet$} node[above]{$\theta^\vee$} arc (90:180:.2) -- (.6,.5) arc (0:-180:.2) -- (.2,1) arc (0:90:.2) node{$\bullet$} node[above]{$\zeta^\vee$} arc (90:180:.2) -- (-.2,.5) arc (0:-180:.2) -- (-.6,1) arc (0:90:.2) node{$\bullet$} node[above]{$\theta'^{*\vee}$} arc (90:180:.2) -- (-1,0); \draw[thick] (0,1.2) -- (0,0) node[left=.8em]{$\mathcal{E}'$} node[right=.8em]{$\mathcal{E}$} -- (0,-1.2); \end{tikzpicture}} \quad \raisebox{5.5em}{$=$} \raisebox{2em}{\begin{tikzpicture}[scale=1] \draw[thick] (0,-.7) node{$\bullet$} node[below]{$\mathcal{H}_{P,I}$} -- (0,0) node[right]{$\mathcal{S}$} -- (0,.7) node{$\bullet$} node[above]{$\bar\mathcal{H}_{P,I'}$}; \end{tikzpicture}} \end{center} \medskip Form direct limits in $\Sect(\mathcal{A})$ $$\mathcal{H}_\mathcal{F} = \varinjlim_P \mathcal{H}_P, \quad \mathcal{H}_{\mathcal{F},I} = \varinjlim_P \mathcal{H}_{P,I}$$ and define $\mathcal{F}(I)$ to be the von Neumann algebra $$\mathcal{F}(I) = \hom_{\mathcal{S}(S^1_\uparrow)}(\mathcal{H}_{\mathcal{F},I},\mathcal{H}_{\mathcal{F},I}).$$ Then $(\mathcal{F},\mathcal{H}_\mathcal{F})$ defines a 1-morphism of $\mathcal{LQS}^2$ inducing the $*$-condensation bimodule $\mathcal{S}$. \subsection{Condensation of topological $n$-nets} \label{sec:cond-nnet} Let $\mathcal{A}=\underline{\mathbb{C}}$ be the trivial topological $n$-net where $n\ge2$. By induction on $n$, we have $\Hom_{\mathcal{LQS}^n}(\mathcal{A},\mathcal{A}) \simeq n\mathrm{Hilb}$. Let $\mathcal{R}$ be a unital $*$-condensation monad on $\mathcal{A}$. We construct explicitly a $*$-condensate $\mathcal{B}$ of $\mathcal{R}$ by generalizing the construction from the previous subsection. To unpack the definition of $\mathcal{R}$, we assume a priori that $\mathcal{R}$ is induced by a $*$-condensation $f:\mathcal{A}\condense\mathcal{E}$. Consider the cone $\Lambda = \{ x\in\mathbb{R}^n \mid x_i\ge0 \text{ for all } i \}$. We label the interior of $\Lambda$ by $\mathcal{E}$ and the complement of $\Lambda$ by $\mathcal{A}$.
Use $\theta_1$ to denote the 1-morphism $f:\mathcal{A}\to\mathcal{E}$ and assign it to all the faces of $\Lambda$ of codimension one. Then by induction on $k$ for $2\le k\le n$, we choose a $k$-morphism $\theta_k:\Id_{\cdots\Id_\mathcal{A}}\to f_k$ and assign it to all the faces of codimension $k$, where $f_k$ is obtained by composing the morphisms around such a face (for example, $f_2=f^\vee\circ f=\mathcal{R}$). We require that $\theta_k$ extend to a $*$-condensation and contain the unit map $u_{\theta_{k-1}}: \Id_{\cdots\Id_\mathcal{A}} \to \theta_{k-1}^\vee\circ\theta_{k-1}$ as a direct summand. Moreover, let $\theta_n^*:\Id_{\cdots\Id_\mathcal{A}}\to f_n$ be the $n$-morphism induced by $\theta_n$ due to the self-duality of $\mathcal{R}$. Note that the morphisms $\theta_2,\dots,\theta_n$ and $\theta_n^*$ do not depend on $\mathcal{E}$. \medskip \begin{center} \begin{tikzpicture}[scale=1] \draw[thick] (-1.3,1)--(1.3,1) (-1.3,-1)--(1.3,-1) (1,-1.3)--(1,1.3) (-1,-1.3)--(-1,1.3); \draw[dashed,color=blue] (-1.3,-1.3) -- (1.3,1.3) (1.3,-1.3) -- (-1.3,1.3) (0,1.3) -- (0,-1.3) (1.3,0) -- (-1.3,0) (.7,1.3)--(1.3,.7) (.7,-1.3)--(1.3,-.7) (-.7,1.3)--(-1.3,.7) (-.7,-1.3)--(-1.3,-.7); \draw[ultra thick,color=red] (.3,.7) node[color=black]{$\circlearrowleft$} -- (.7,.3) node[color=black]{$\circlearrowright$} -- (.7,-.3) node[color=black]{$\circlearrowleft$} -- (.3,-.7) node[color=black]{$\circlearrowright$} -- (-.3,-.7) node[color=black]{$\circlearrowleft$} -- (-.7,-.3) node[color=black]{$\circlearrowright$} -- (-.7,.3) node[color=black]{$\circlearrowleft$} -- (-.3,.7) node[color=black]{$\circlearrowright$} -- (.3,.7) (-.3,.7)--(-.3,1.3) (.3,.7)--(.3,1.3) (-.3,-.7)--(-.3,-1.3) (.3,-.7)--(.3,-1.3) (.7,.3)--(1.3,.3) (.7,-.3)--(1.3,-.3) (-.7,.3)--(-1.3,.3) (-.7,-.3)--(-1.3,-.3) ; \end{tikzpicture} \end{center} \medskip Let $P$ be a regular cell decomposition of $S^{n-1}$ (compatible with the smooth structure). Let $\hat P$ be a barycentric subdivision of $P$ and let $\hat P^\vee$ be a dual decomposition of $\hat P$. In particular, the vertices of $\hat P$ are in bijection with all the cells of $P$ and the $i$-cells of $\hat P^\vee$ are in bijection with the $(n-1-i)$-cells of $\hat P$. See the above picture, where $P$ is depicted by the black lines and $\hat P^\vee$ is depicted by the red lines. Indeed, $\hat P$ is a triangulation of $S^{n-1}$ and each cell of $\hat P$ carries a canonical orientation (determined by the evident linear order on its vertices). Then, each vertex of $\hat P^\vee$ carries an induced orientation. Assign the $k$-morphism $\theta_k$ to each $(n-k)$-cell of $\hat P^\vee$ for $1\le k<n$ and assign the $n$-morphism $\theta_n$ or $\theta_n^*$ to each vertex according to the orientation. Composing all the morphisms in the configuration then yields a sector $\mathcal{H}_P$ of $\mathcal{A}$ which does not depend on $\mathcal{E}$. Since $\hat P$ and $\hat P^\vee$ are unique up to isotopy, $\mathcal{H}_P$ is independent of these choices. If $Q$ is a subdivision of $P$, then $\hat Q^\vee$ is canonically a subdivision of $\hat P^\vee$ up to isotopy. The inclusions $u_{\theta_k}\hookrightarrow\theta_{k+1}$ then induce an isometric embedding $\mathcal{H}_P\hookrightarrow\mathcal{H}_Q$. Form a direct limit in $\Sect(\mathcal{A})$ $$\mathcal{H}_\mathcal{B} = \varinjlim_P \mathcal{H}_P.$$ Let $I\subset S^{n-1}$ be a disk region and let $P$ be a regular cell decomposition of $S^{n-1}$.
Passing to a subdivision of $P$ if necessary, we assume that $\partial I$ is a union of cells of $P$ so that $\partial I$ is transverse to all the cells of $\hat P^\vee$. Then $\mathcal{H}_P$ admits a decomposition $$\mathcal{H}_P: \Id_{\cdots\Id_\mathcal{A}} \xrightarrow{\mathcal{H}_{P,I}} \mathcal{T}_{P,I} \xrightarrow{\bar\mathcal{H}_{P,I'}} \Id_{\cdots\Id_\mathcal{A}}$$ where $\mathcal{T}_{P,I}$ is obtained by composing all the morphisms lying on $\partial I$ and $\mathcal{H}_{P,I}$ is obtained by composing all the morphisms inside $I$. Passing to the direct limit yields a decomposition $$\mathcal{H}_\mathcal{B}: \Id_{\cdots\Id_\mathcal{A}} \xrightarrow{\mathcal{H}_{\mathcal{B},I}} \mathcal{T}_I \xrightarrow{\bar\mathcal{H}_{\mathcal{B},I'}} \Id_{\cdots\Id_\mathcal{A}}.$$ Define $\mathcal{B}(I)$ to be the von Neumann algebra $$\mathcal{B}(I) = \hom_{\mathcal{T}_I(S^{n-1}_\uparrow)}(\mathcal{H}_{\mathcal{B},I},\mathcal{H}_{\mathcal{B},I}).$$ Then $(\mathcal{B},\mathcal{H}_\mathcal{B})$ defines a $*$-condensate of the $*$-condensation monad $\mathcal{R}$. \begin{rem} Since $\mathcal{R}$ is a $*$-condensation algebra in $\mathcal{LQS}^{n-1} \simeq n\mathrm{Hilb}$, $\mathcal{R}$ may be regarded as a unitary multi-fusion $(n-1)$-category $\mathcal{C}$. Then $\theta_2$ defines an object of $\mathcal{C}$ and $\theta_k$ defines a $(k-2)$-morphism of $\mathcal{C}$ for $3\le k\le n$. The construction of the topological $n$-net $(\mathcal{B},\mathcal{H}_\mathcal{B})$ can be reduced to a computation in the unitary multi-fusion $(n-1)$-category $\mathcal{C}$ without referring to the local observable algebras of $\mathcal{R}$. Moreover, $\mathcal{T}_I$ lives in the completion of $\Omega^{n-1}\mathcal{LQS}^n = \mathcal{LQS}^1$ by infinite direct sums; therefore $\mathcal{B}(I)$ is an atomic type I von Neumann algebra. \end{rem} Generalizing the above construction, we see that all $*$-condensation bimodules and bimodule maps, etc., are induced by morphisms of $\mathcal{LQS}^n$. This completes the proof of Theorem \ref{thm:lqs-cc}.
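The following consistency check may help orient the reader; it is a sketch only, ignoring the isotopies involved. \begin{rem} For $n=2$, a regular cell decomposition $P$ of $S^1$ is a finite set of points together with the complementary arcs, and the prescription above assigns $\theta_1=f$ to the 1-cells of $\hat P^\vee$ and $\theta_2$ or $\theta_2^*$ to its vertices, where $$f_2=f^\vee\circ f=\mathcal{R}, \qquad \theta_2:\Id_\mathcal{A}\to\mathcal{R}.$$ The two vertices of $\hat P^\vee$ flanking each point of $P$ carry opposite orientations and hence receive $\theta_2$ and $\theta_2^*$ respectively. With $\theta_2$ playing the role of the 2-morphism $\theta$, this recovers the insertions of $\theta$ and $\theta^*$ around each point of $P$ used in the construction for topological 2-nets above. \end{rem}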
\section{Introduction} A \textbf{circle packing} is a collection $P$ of discs in the Riemann sphere ${\mathcal C} \cup \{\infty\}$ such that distinct discs in $P$ do not overlap (i.e., have disjoint interiors), but may be tangent. Given a circle packing $P$, its \textbf{tangency graph} (or \textbf{nerve}) is the graph whose vertices are the discs in $P$ and where two vertices are connected by an edge if and only if their corresponding discs are tangent. The Circle Packing Theorem \cite{K36,Th78} states that every finite, simple\footnote{A graph is said to be \textbf{simple} if it does not contain any loops or multiple edges.} planar graph may be represented as the tangency graph of a circle packing, and that if the graph is a triangulation (i.e., every face has three sides) then the circle packing is unique up to M\"obius transformations and reflections. See e.g.\ \cite{St05,Rohde11} for further background on circle packing. The Circle Packing Theorem was extended to infinite, simple planar triangulations by He and Schramm \cite{HS93,HeSc,Schramm91,he1999rigidity}. In particular, they showed that if the triangulation is \emph{simply connected}, meaning that the surface formed by gluing triangles according to the combinatorics of the triangulation is homeomorphic to the plane, then the triangulation can be circle packed in either the disc or the plane, but not both\footnote{Here the word \textbf{in} is being used in a technical sense to mean that the \textbf{carrier} of the circle packing is equal to either the disc or the plane, see \cref{subsec:mapsdcp}.}; we call the triangulation \textbf{CP parabolic} or \textbf{CP hyperbolic} accordingly. More generally, they showed that, in the CP hyperbolic case, the triangulation can be circle packed in \emph{any} simply-connected domain $D \subsetneq {\mathcal C}$. These results can be viewed as discrete analogues of the Riemann mapping theorem and of the uniformization theorem for Riemann surfaces. Indeed, the theory of circle packing is closely related to the theory of conformal mapping and geometric function theory, see e.g.\ \cite{HS93,RS87,Rohde11,St05,MR3755822} and references therein. He and Schramm also pioneered the use of circle packing to study probabilistic questions about planar graphs, showing in particular that a bounded degree, simply connected, planar triangulation is CP parabolic if and only if it is recurrent for simple random walk \cite{HeSc}. This result was recently generalised by Gurel-Gurevich, Nachmias, and Souto \cite{GGNS15}, who proved that a (not necessarily simply connected) bounded degree planar triangulation admitting a circle packing in a domain $D$ is recurrent for simple random walk if and only if the domain is recurrent for Brownian motion. A more detailed study of the relationship between circle packing and random walks was initiated by Benjamini and Schramm \cite{BS96a}, who proved in particular that if $T$ is a bounded degree triangulation circle packed in the unit disc $\mathbb D$, then the random walk on $T$ converges almost surely to a point in the boundary $\partial \mathbb D$, and the law of this limit point is non-atomic. They used this to deduce the existence of various kinds of \emph{harmonic functions} on transient, bounded degree planar graphs. Recall that a function $h$ on the vertex set of a simple, locally finite graph $G=(V,E)$ is said to be \textbf{harmonic} if \[h(v)=\frac{1}{\deg(v)}\sum_{u\sim v}h(u) \] for every $v\in V$.
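For instance (a standard example, not specific to the planar setting), the first coordinate function on the square lattice $\mathbb Z^2$ is harmonic: \[ h(x,y)=x \quad\Longrightarrow\quad \frac{1}{4}\big(h(x+1,y)+h(x-1,y)+h(x,y+1)+h(x,y-1)\big)=x=h(x,y), \] whereas, by the maximum principle, a finite connected graph admits no non-constant harmonic functions, so interesting examples are necessarily infinite.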
Here and elsewhere, we write $V$ and $E$ for the vertex and edge sets of a graph $G$, and write $u\sim v$ if the vertices $u$ and $v$ are adjacent in $G$. Three particularly important and probabilistically meaningful classes of harmonic functions are the \emph{bounded harmonic functions}, the \emph{positive harmonic functions}, and the \emph{harmonic Dirichlet functions}. It is an easy consequence of the Benjamini-Schramm convergence theorem that every bounded degree, transient planar graph admits non-constant harmonic functions in each of these three classes. Here, a \textbf{harmonic Dirichlet function} on a graph with oriented edge set $E^\rightarrow$ is a harmonic function $h$ such that \[ \mathcal E(h) = \frac{1}{2}\sum_{e\in E^\rightarrow} \left[ h\big(e^+\big)-h\big(e^-\big) \right]^2 <\infty. \] We denote the space of harmonic Dirichlet functions on a graph $G$ by $\mathbf{HD}(G)$ and the space of bounded harmonic Dirichlet functions on $G$ by $\mathbf{BHD}(G)$. For each vertex $v$ of $G$, $\|h\|=\left(h(v)^2+\mathcal E(h)\right)^{1/2}$ is a norm on $\mathbf{HD}(G)$, and $\mathbf{BHD}(G)$ is dense in $\mathbf{HD}(G)$ with respect to this norm \cite[Theorem 3.73]{Soardibook}. (Without the $h(v)^2$ term this would be a seminorm rather than a norm.) More recently, Angel, Barlow, Gurel-Gurevich, and Nachmias \cite{ABGN14} showed that \emph{every} bounded harmonic function and every positive harmonic function on a bounded degree, simply connected, simple planar triangulation can be represented geometrically in terms of the triangulation's circle packing in the unit disc. A similar representation theorem for bounded (but not positive) harmonic functions using a different embedding, the \emph{square tiling}, was obtained slightly earlier by Georgakopoulos~\cite{G13}. Simpler proofs of both results for bounded harmonic functions have since been obtained by Peres and the author \cite{hutchcroft2015boundaries}. In this paper we establish a similar representation theorem for harmonic Dirichlet functions. We begin with a simple form of the result that can be stated with minimal preparation. We say that two functions $\phi$ and $\psi$ on the vertex set of a graph are \textbf{asymptotically equal} if the set $\{v\in V: |\phi(v)-\psi(v)|\geq \varepsilon\}$ is finite for every $\varepsilon>0$. \begin{theorem} \label{thm:isomorphismdisc} Let $T$ be a bounded degree, simply connected, simple, planar triangulation, let $P$ be a circle packing of $T$ in the unit disc $\mathbb D$, and let $z:V\to \mathbb D$ be the function sending vertices to the centres of their corresponding discs. \begin{enumerate} \item For each bounded harmonic Dirichlet function $h \in \mathbf{BHD}(T)$, there exists a unique harmonic Dirichlet function $H \in \mathbf{HD}(\mathbb D)$ such that $h$ and $H\circ z$ are asymptotically equal. \item For each bounded harmonic Dirichlet function $H \in \mathbf{BHD}(\mathbb D)$, there exists a unique harmonic Dirichlet function $h \in \mathbf{HD}(T)$ such that $h$ and $H \circ z$ are asymptotically equal. \end{enumerate} Moreover, the function assigning each $h\in \mathbf{BHD}(T)$ to the unique $H \in \mathbf{HD}(\mathbb D)$ such that $H\circ z$ is asymptotically equal to $h$ can be uniquely extended to a bounded linear isomorphism from $\mathbf{HD}(T)$ to $\mathbf{HD}(\mathbb D)$.
\end{theorem} By a bounded linear isomorphism we mean a bounded linear map with a bounded inverse; such an isomorphism need not be an isometry. A more general form of our theorem, applying in particular to bounded degree, multiply-connected planar triangulations circle packed in arbitrary domains, is given in \cref{thm:isomorphismgeneral}. See \eqref{eq:discdef} and \eqref{eq:contdef} for an explicit description of the isomorphism. Note that \cref{thm:isomorphismdisc,thm:isomorphismgeneral} are much stronger than the corresponding results available for bounded or positive harmonic functions. For example, the representation theorem for bounded harmonic functions \cite{ABGN14} requires one to take integrals over the harmonic measure on the boundary, which is not particularly well understood and can be singular with respect to the corresponding measure for Brownian motion. As a consequence, there can exist bounded harmonic functions $h$ on $T$ such that $h$ is not asymptotically equal to $H \circ z$ for any bounded harmonic function $H$ on $\mathbb D$. The difference in strength between these theorems is unsurprising given that the \emph{existence} of non-constant harmonic Dirichlet functions is known to be stable under various perturbations of the underlying space \cite{Soardi93,holopainen1997p,Gab05}, while the existence of non-constant bounded harmonic functions is known to be unstable in general under similar perturbations \cite{BS96a}. \subsection{Applications} \cref{thm:isomorphismdisc} also allows us to deduce various facts about the boundary behaviour of harmonic Dirichlet functions on circle packings of triangulations from the corresponding facts about harmonic Dirichlet functions on the unit disc. For example, we immediately obtain a representation theorem for the harmonic Dirichlet functions on $T$ in terms of boundary functions, similar to that obtained for bounded harmonic functions in \cite{ABGN14}. We say that a Borel function $\phi: \partial \mathbb D \to \mathbb R$ is \textbf{Douglas integrable} if \begin{equation} \label{eq:Douglas} \mathcal D(\phi):= \frac{1}{4\pi}\int_{\partial \mathbb D} \int_{\partial \mathbb D} \left| \frac{\phi(\xi)-\phi(\zeta)}{\xi - \zeta}\right|^2 \dif \xi \dif \zeta <\infty. \end{equation} Note in particular that every Lipschitz function on $\partial \mathbb D$ is Douglas integrable. It is a classical theorem of Douglas \cite{MR1501590} that a harmonic function $H:\mathbb D\to\mathbb R$ is Dirichlet if and only if it is the harmonic extension of a Douglas integrable function $\phi:\partial\mathbb D\to\mathbb R$, and in this case $\mathcal D(\phi)=\mathcal E(H)$. This equality is known as the \emph{Douglas integral formula}. Thus, we obtain the following corollary to \cref{thm:isomorphismdisc}. \begin{corollary} \label{cor:douglas} Let $T$ be a bounded degree, simply connected, simple, planar triangulation and let $P$ be a circle packing of $T$ in the unit disc $\mathbb D$. Then a function $h:V\to \mathbb R$ is a harmonic Dirichlet function if and only if there exists a Douglas integrable Borel function $\phi:\partial \mathbb D\to\mathbb R$ such that \[h(v) = \mathbf E_v \left[ \phi\left(\lim_{n\to\infty} z(X_n) \right) \right] \qquad \text{for every vertex $v$.}\] \end{corollary} We remark that there is a generalization of the Douglas integral formula to other domains due to Doob \cite{doob1962boundary}, and that related results for graphs have been announced by Georgakopoulos and Kaimanovich \cite{georgakopoulos2015group}.
The results of Doob could be combined with \cref{thm:isomorphismgeneral} to obtain versions of \cref{cor:douglas} for more general domains. We do not pursue this here. \medskip Similarly, we can immediately deduce the following very strong boundary convergence result from \cref{thm:isomorphismdisc} together with a theorem of Nagel, Rudin, and Shapiro \cite{MR672838}. \begin{corollary}[Boundary convergence in exponentially tangential approach regions] \label{cor:exponentiallytangential} Let $T$ be a bounded degree, simply connected, simple, planar triangulation, let $P$ be a circle packing of $T$ in the unit disc $\mathbb D$, and let $z:V\to \mathbb D$ be the function sending vertices to the centres of their corresponding discs. Then for each $h\in \mathbf{BHD}(T)$, the following holds for Lebesgue-a.e.\ $\xi\in \partial \mathbb D$: For every sequence of vertices $v_1,v_2,\ldots$ of $T$ such that $z(v_i) \to \xi$ and \[\limsup_{i\to\infty} |z(v_i)-\xi| \log \frac{1}{1-|z(v_i)|} <\infty,\] the limit $\lim_{i\to\infty} h(v_i)$ exists. \end{corollary} See \cite{MR3185375} and references therein for several further results concerning the boundary behaviour of harmonic Dirichlet functions on the unit disc. Together with the Poisson boundary identification result of \cite{ABGN14}, \cref{cor:douglas} gives us a good understanding of the relationship between the space of bounded harmonic Dirichlet functions $\mathbf{BHD}(T)$ and the space of all bounded harmonic functions, denoted $\mathbf{BH}(T)$: The latter is identified with the space of bounded Borel functions $L^\infty(\partial \mathbb D)$, while the former is identified with the space of bounded Douglas integrable functions on $\partial \mathbb D$. In particular, this allows us to easily generate many examples of bounded harmonic functions on $T$ that are not Dirichlet, such as harmonic extensions of indicator functions. Moreover, since the identification of $\mathbf{BH}(T)$ and $L^\infty(\partial\mathbb D)$ is easily seen to be a homeomorphism when $\mathbf{BH}(T)$ is equipped with the topology of pointwise convergence and $L^\infty(\partial\mathbb D)$ is given the subspace topology from $L^1(\partial \mathbb D)$, and since the Lipschitz functions are dense in $L^1(\partial \mathbb D)$, we obtain the following interesting corollary concerning harmonic functions on triangulations. \begin{corollary} \label{cor:Ori} Let $T$ be a bounded degree, simply connected, simple, planar triangulation. Then $\mathbf{BHD}(T)$ is dense in $\mathbf{BH}(T)$ with respect to the topology of pointwise convergence. \end{corollary} A nice feature of this corollary is that it is an `intrinsic' result, whose statement does not make any reference to circle packing. \cref{cor:douglas,cor:exponentiallytangential,cor:Ori} all have straightforward extensions to simply connected, weighted, polyhedral planar maps with bounded codegrees and bounded local geometry, all of which follow from \cref{thm:isomorphismgeneral}. \cref{thm:isomorphismdisc} and its generalization \cref{thm:isomorphismgeneral} are also useful in the study of uniform spanning forests of planar graphs, for which closed linear subspaces of $\mathbf{HD}(T)$ correspond, roughly speaking, to possible boundary conditions at infinity for the spanning forest measure. In particular, \cref{thm:isomorphismgeneral} will be applied in forthcoming work with Nachmias on uniform spanning forests of multiply-connected planar maps.
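As a sanity check on \eqref{eq:Douglas}, we record a standard Fourier computation (a sketch, with $\dif\xi$ denoting arc length; the constants depend on this normalization). Write $\phi(e^{i\vartheta})=\sum_{n\in\mathbb Z}c_n e^{in\vartheta}$ with $c_{-n}=\bar c_n$, so that the harmonic extension of $\phi$ is $H(re^{i\vartheta})=\sum_{n\in\mathbb Z}c_n r^{|n|}e^{in\vartheta}$. Using $|e^{i\alpha}-e^{i\beta}|^2=4\sin^2\frac{\alpha-\beta}{2}$, Parseval's identity, and Fej\'er's identity $\int_0^{2\pi}\sin^2(nt/2)/\sin^2(t/2)\dif t=2\pi|n|$, one computes \[ \int_{\partial\mathbb D}\int_{\partial \mathbb D}\left|\frac{\phi(\xi)-\phi(\zeta)}{\xi-\zeta}\right|^2\dif\xi\dif\zeta = 4\pi^2\sum_{n\in\mathbb Z}|n|\,|c_n|^2, \qquad \mathcal E(H)=2\pi\sum_{n\in\mathbb Z}|n|\,|c_n|^2. \] In particular, $\phi$ is Douglas integrable if and only if $\sum_n|n|\,|c_n|^2<\infty$ (the classical description of the fractional Sobolev space $H^{1/2}(\partial\mathbb D)$), and $\mathcal D(\phi)$ and $\mathcal E(H)$ agree up to an absolute constant.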
\subsection{The Dirichlet space} \label{subsec:Dirichlet} We begin by reviewing the definitions of the Dirichlet spaces in both the discrete and continuous cases, as well as some of their basic properties. For further background, we refer the reader to \cite{LP:book,Soardibook} in the discrete case, and \cite{AnLyPe99} and references therein for the continuous case. Recall that a \textbf{network} is a graph $G=(V,E)$ (which in this paper will always be locally finite and connected) together with an assignment $c:E\to(0,\infty)$ of positive \textbf{conductances} to the edges of $G$. The random walk on a locally finite network is the Markov process that, at each step, chooses an edge to traverse from among those edges emanating from its current position, where the probability of choosing a particular edge is proportional to its conductance. Let $G=(V,E)$ be a network, and let $E^\rightarrow$ be the set of oriented edges of $G$. The \textbf{Dirichlet energy} of a function $\phi: V\to \mathbb R$ is defined to be \[\mathscr E(\phi) = \frac{1}{2}\sum_{e\in E^\rightarrow} c(e) \left(\phi\left(e^-\right)-\phi\left(e^+\right)\right)^2.\] We say that $\phi$ is a \textbf{Dirichlet function} (or equivalently that $\phi$ has \textbf{finite energy}) if $\mathscr E(\phi)<\infty$. The space of Dirichlet functions on $G$ and the space of harmonic Dirichlet functions on $G$ are denoted by $\mathbf{D}(G)$ and $\mathbf{HD}(G)$ respectively. These spaces are both Hilbert spaces with respect to the inner product \begin{equation} \label{eq:innerproductdefdisc} \langle \phi,\psi \rangle = \phi(o)\psi(o)+\frac{1}{2}\sum_{e\in E^\rightarrow} c(e) \left[\phi\left(e^-\right)-\phi\left(e^+\right)\right]\left[\psi\left(e^-\right)-\psi\left(e^+\right)\right], \end{equation} where $o$ is a fixed root vertex. (It is easily seen that different choices of $o$ yield equivalent norms.) We denote the space of \emph{bounded Dirichlet functions} by $\mathbf{BD}(G)$ and the space of \emph{bounded harmonic Dirichlet functions} by $\mathbf{BHD}(G)$. These spaces are dense in $\mathbf{D}(G)$ and $\mathbf{HD}(G)$ respectively, see \cite[Page 314]{LP:book} and \cite[Theorem 3.73]{Soardibook}. Let $\mathbf{D}_0(G)$ be the closure in $\mathbf{D}(G)$ of the space of finitely supported functions. If $G$ is transient, then every Dirichlet function $\phi \in \mathbf{D}(G)$ has a unique decomposition \begin{equation} \label{eq:Royden} \phi = \phi_{\mathbf{D}_0} + \phi_{\mathbf{HD}} \end{equation} where $\phi_{\mathbf{D}_0}\in \mathbf{D}_0(G)$ and $\phi_{\mathbf{HD}} \in \mathbf{HD}(G)$, known as the \textbf{Royden decomposition} of $\phi$ \cite[Theorem 3.69]{Soardibook}. In other words, $\mathbf{D}(G)=\mathbf{D}_0(G)\oplus \mathbf{HD}(G)$. (Note that this is \emph{not} necessarily an orthogonal decomposition, although $\mathbf{D}_0(G)$ and $\mathbf{HD}(G)$ are orthogonal with respect to the energy seminorm $\mathcal E$, see \cite[Lemma 3.66]{Soardibook}.) Let $\langle X_n \rangle_{n\geq0}$ be a random walk on $G$.
It is a theorem of Ancona, Lyons, and Peres \cite{AnLyPe99}, which complements earlier results of Yamasaki \cite{yamasaki1986ideal}, that the limit $\lim_{n\to\infty} \phi(X_n)$ exists almost surely for each $\phi \in \mathbf{D}(G)$, that \begin{equation} \label{eq:ALPlimitdisc} \lim_{n\to\infty} \phi(X_n) = \lim_{n\to\infty} \phi_{\mathbf{HD}}(X_n) \end{equation} almost surely, and moreover that $\phi_{\mathbf{HD}}$ can be expressed as \begin{equation} \label{eq:ALPlimitidentification} \phi_{\mathbf{HD}}(v) = \mathbf E_v\left[ \lim_{n\to\infty} \phi(X_n) \right], \end{equation} where $\mathbf E_v$ denotes the expectation with respect to the random walk $\langle X_n \rangle_{n\geq0}$ started at $v$. See also \cite[Theorem 9.11]{LP:book}. [The referee has informed us that the almost sure existence of the limit $\lim_{n\to\infty}\phi(X_n)$ was in fact originally proven by Silverstein in 1974 \cite{MR0350876}, independently of Ancona, Lyons, and Peres.] A similar theory holds in the continuum. If $D \subseteq {\mathcal C}$ is a domain, the \textbf{Dirichlet energy} of a locally $L^2$, weakly differentiable\footnote{Recall that a function or vector field $\Phi:D\to \mathbb R^d$, $d\geq 1$, is said to be \textbf{locally integrable} if $\int_A \|\Phi(z)\|\dif z<\infty$ for every precompact open subset $A$ of $D$, and \textbf{locally $L^2$} if $\int_A \|\Phi(z)\|^2\dif z<\infty$ for every precompact open subset $A$ of $D$. A locally integrable vector field $W:D\to \mathbb R^2$ is said to be a \textbf{weak gradient} of the locally integrable function $\Phi : D\to \mathbb R$ if the identity $\int_D \Psi W \dif z = -\int_D \Phi \nabla \Psi \dif z$ holds for every smooth, compactly supported function $\Psi$ on $D$. We say that a locally integrable function $\Phi:D\to \mathbb R$ is \textbf{weakly differentiable} if it admits a weak gradient. The weak gradient of a locally integrable, weakly differentiable $\Phi:D\to \mathbb R$ is unique up to almost-everywhere equivalence, and is denoted by $\nabla \Phi$. The weak gradient coincides with the usual gradient of $\Phi$ at $z$ if $\Phi$ is differentiable on an open neighbourhood of $z$. } function $\Phi: D \to \mathbb R$ on $D$ is defined to be \[\mathscr E(\Phi) = \int_D \|\nabla \Phi (z)\|^2 \dif z.\] As in the discrete case, we say that $\Phi$ is a \textbf{Dirichlet function} (or equivalently that $\Phi$ has \textbf{finite energy}) if it is locally $L^2$, weakly differentiable, and satisfies $\mathscr E(\Phi)<\infty$. We let $\mathbf{D}(D)$ and $\mathbf{HD}(D)$ be the spaces of Dirichlet functions (modulo almost everywhere equivalence) and harmonic Dirichlet functions respectively. The spaces $\mathbf{D}(D)$ and $\mathbf{HD}(D)$ are Hilbert spaces with respect to the inner product \begin{equation} \label{eq:innerproductdefcont} \langle \Phi,\Psi \rangle = \int_{O} \Phi(z) \Psi(z) \dif z + \int_D \nabla \Phi(z) \cdot \nabla \Psi(z) \dif z, \end{equation} where $O$ is a fixed precompact open subset of $D$. (The Poincar\'e inequality implies that different choices of $O$ yield equivalent norms. In particular, convergence in this norm implies local $L^2$ convergence.) The spaces $\mathbf{D}(D)$ and $\mathbf{HD}(D)$ contain the spaces of \emph{bounded Dirichlet functions} $\mathbf{BD}(D)$ and of \emph{bounded harmonic Dirichlet functions} $\mathbf{BHD}(D)$ as dense subspaces respectively \cite[Proposition 16]{MR0049396}. Let $\mathbf{D}_0(D)$ be the closure in $\mathbf{D}(D)$ of the space of compactly supported Dirichlet functions.
As in the discrete case, if $D$ is transient for Brownian motion, then every $\Phi \in \mathbf{D}(D)$ has a unique Royden decomposition $\Phi=\Phi_{\mathbf{D}_0}+\Phi_{\mathbf{HD}}$ where $\Phi_{\mathbf{D}_0} \in \mathbf{D}_0(D)$ and $\Phi_{\mathbf{HD}}\in \mathbf{HD}(D)$ \cite{MR0049396}. Let $\langle B_t \rangle_{t=0}^{T_{\partial D}}$ be a Brownian motion stopped at the first time it hits $\partial D$, denoted $T_{\partial D}$. Ancona, Lyons, and Peres \cite{AnLyPe99} proved that if $\Phi \in \mathbf{D}(D)$, then the limit $\lim_{t\uparrow T_{\partial D}} \Phi(B_t)$ exists almost surely\footnote{Strictly speaking, since $\Phi$ is only defined up to almost everywhere equivalence, we choose a \emph{quasi-continuous version} of $\Phi$ before applying it to the Brownian motion $B_t$. This ensures that $\Phi(B_t)$ is well-defined and continuous in $t$ almost surely. See \cite{AnLyPe99} for details.}, that \begin{equation} \label{eq:ALPcontinuouslimit} \lim_{t\uparrow T_{\partial D}}\Phi(B_t) = \lim_{t\uparrow T_{\partial D}}\Phi_{\mathbf{HD}}(B_t) \end{equation} almost surely, and that \begin{equation} \label{eq:ALPcontinuousidentification} \Phi_{\mathbf{HD}}(z) = \mathbb E_z\left[\lim_{t\uparrow T_{\partial D} } \Phi(B_t) \right]\end{equation} for every $z\in D$, where $\mathbb E_z$ denotes the expectation with respect to the Brownian motion $\langle B_t \rangle_{t=0}^{T_{\partial D}}$ started at $z$. The almost sure existence of the limit $\lim_{t\uparrow T_{\partial D}} \Phi(B_t)$ also follows from the earlier work of Doob \cite{MR0109961,MR0173783}. \subsection{Planar maps and double circle packing} \label{subsec:mapsdcp} Let us briefly recall the definitions of planar maps; see e.g.\ \cite{LZ,miermont2014aspects,unimodular2} for detailed definitions. Recall that a (locally finite) \textbf{map} $M$ is a connected, locally finite graph $G$ together with an equivalence class of proper embeddings of $G$ into orientable surfaces, where two such embeddings are equivalent if there is an orientation preserving homeomorphism between the two surfaces sending one embedding to the other. Equivalently, maps can be defined combinatorially as graphs equipped with cyclic orderings of the oriented edges emanating from each vertex, see \cite{LZ} or \cite[Section 2.1]{unimodular2}. We call a graph endowed with \emph{both} a map structure and a network structure (i.e., specified conductances) a \textbf{weighted map}. A map is \textbf{planar} if the surface is homeomorphic to an open subset of the sphere, and is \textbf{simply connected} if the surface is simply connected, that is, homeomorphic to either the sphere or the plane. Given a specified embedding of a map $M$, the \textbf{faces} of $M$ are defined to be the connected components of the complement of the embedding. We write $F$ for the set of faces of $M$, and write $f\perp v$ if the face $f$ is incident to the vertex $v$. Given an oriented edge $e$ of $M$, we write $e^\ell$ for the face to the left of $e$ and $e^r$ for the face to the right of $e$. Every map $M$ has a \textbf{dual} map $M^\dagger$ that has the faces of $M$ as vertices, the vertices of $M$ as faces, and for each oriented edge $e$ of $M$, $M^\dagger$ has an oriented edge $e^\dagger$ from $e^\ell$ to $e^r$. The definitions of $F$ and $M^\dagger$ are independent of the choice of embedding of $M$, as different embeddings give rise to face sets that are in canonical bijection with each other and dual maps that are canonically isomorphic to each other.
It is also possible to define $F$ and $M^\dagger$ entirely combinatorially, see \cite{LZ} or \cite[Section 2.1]{unimodular2} for details. The \textbf{carrier} of a circle packing $P$, $\operatorname{carr}(P)$, is defined to be the union of the discs in $P$ together with the components of ${\mathcal C}\cup\{\infty\} \setminus \bigcup P$ whose boundaries are contained in a union of finitely many discs in $P$. Note that every circle packing $P$ in the Riemann sphere whose tangency graph is locally finite also defines a locally finite \textbf{tangency map}, where we embed the tangency graph into the carrier of $P$ by drawing straight lines between the centres of tangent circles. \begin{figure}[t] \centering \includegraphics[height=0.32\textwidth]{dcpgraph.pdf} \hspace{1.1cm} \includegraphics[height=0.32\textwidth]{dcp.pdf} \caption{ { A finite polyhedral planar map (left) and its double circle packing (right). Primal circles are filled and have solid boundaries, dual circles have dashed boundaries.} } \label{fig.dcp} \end{figure} Let $M$ be a locally finite map with locally finite dual $M^\dagger$. A \textbf{double circle packing} of $M$ is a pair of circle packings $(P,P^\dagger)$ in the Riemann sphere such that the following conditions hold (see \cref{fig.dcp}). \begin{enumerate}[leftmargin=*] \item $M$ is the tangency map of $P=\{P(v) : v \in V\}$ and $M^\dagger$ is the tangency map of $P^\dagger=\{P^\dagger(f) : f \in F\}$. \item If $v$ is a vertex of $M$ and $f$ is a face of $M$, then the discs $P(v)$ and $P^\dagger(f)$ intersect if and only if $v$ is incident to $f$, and in this case their boundaries intersect orthogonally. \end{enumerate} Observe that if $(P,P^\dagger)$ is a double circle packing of a locally finite map with locally finite dual then $\operatorname{carr}(P)=\operatorname{carr}(P^\dagger)=\bigcup P \cup \bigcup P^\dagger$. It follows from Thurston's interpretation \cite{Th78,marden1990thurston} of Andreev's theorem \cite{andreev1970convex} that a finite planar map has a double circle packing in the Riemann sphere if and only if it is \textbf{polyhedral}, that is, simple and $3$-connected. The corresponding infinite theory\footnote{He worked in a more general setting, see \cite[Section 2.5]{HutNach15b} for a discussion of how his results imply those claimed here.} was developed by He \cite{he1999rigidity}, who proved that every simply connected, locally finite, polyhedral map $M$ with locally finite dual admits a double circle packing in either the plane or the disc, and that this packing is unique up to M\"obius transformations. (Note that reflections are no longer needed now that we are considering maps instead of graphs.) See \cite{HS93} for a related uniformization theorem for \emph{countably-connected} triangulations. Without any topological assumptions, we still have by an easy compactness argument that every locally finite polyhedral planar map with locally finite dual admits a double circle packing in \emph{some} domain, although possibly a very wild one. \subsection{The isomorphism} We are now ready to describe our isomorphism theorem in its full generality. We say that a weighted map (or more generally a network) has \textbf{bounded local geometry} if it has bounded degree and the conductances of its edges are bounded between two positive constants. We say that a map has \textbf{bounded codegree} if its dual has bounded degree.
\begin{theorem}[The isomorphism] \label{thm:isomorphismgeneral} Let $M$ be a transient weighted polyhedral planar map with bounded codegrees and bounded local geometry, let $(P,P^\dagger)$ be a double circle packing of $M$ in a domain $D \subset {\mathcal C} \cup \{\infty\}$, and let $z:V\to D$ be the function sending each vertex $v$ to the centre of the corresponding disc $P(v)$. Then the following hold: \begin{enumerate} \item For every harmonic Dirichlet function $h \in \mathbf{HD}(M)$, there exists a unique harmonic Dirichlet function $H \in \mathbf{HD}(D)$ such that $h-H\circ z \in \mathbf{D}_0(M)$. We denote this function $H$ by $\mathsf{Cont}[h]$. \item For every harmonic Dirichlet function $H \in \mathbf{HD}(D)$, there exists a unique harmonic Dirichlet function $h \in \mathbf{HD}(M)$ such that $h-H \circ z \in \mathbf{D}_0(M)$. We denote this function $h$ by $\mathsf{Disc}[H]$. \end{enumerate} Moreover, the functions $\mathsf{Cont}:\mathbf{HD}(M)\to \mathbf{HD}(D)$ and $\mathsf{Disc}:\mathbf{HD}(D)\to\mathbf{HD}(M)$ are bounded linear operators, and these operators are inverses of each other. \end{theorem} Note that even in the simply connected case there are many choices of domain $D$ and double circle packing $(P,P^\dagger)$ for any given map $M$, and the theorem should be understood as giving us an isomorphism for each such choice of $D$ and $(P,P^\dagger)$. There are several ways to characterise the space $\mathbf{D}_0(G)$, leading to several alternative characterisations of the functions $\mathsf{Cont}[h]$ and $\mathsf{Disc}[H]$. In particular, the following hold under the assumptions of \cref{thm:isomorphismgeneral}: \begin{itemize} \item For each $h \in \mathbf{HD}(M)$, $H=\mathsf{Cont}[h]$ is the unique harmonic Dirichlet function on $D$ such that \begin{equation} \label{eq:othercharswalk} \lim_{n\to\infty} \big|h(X_n) - H \circ z(X_n)\big|=0 \end{equation} almost surely when $\langle X_n \rangle_{n \geq0}$ is a random walk on $M$. Similarly, for each $H\in \mathbf{HD}(D)$, $h=\mathsf{Disc}[H]$ is the unique harmonic Dirichlet function on $M$ such that \eqref{eq:othercharswalk} holds almost surely. Given \cref{thm:isomorphismgeneral}, both statements are implied by \eqref{eq:ALPlimitdisc}. \item For each $h\in \mathbf{HD}(M)$, $H=\mathsf{Cont}[h]$ is the unique harmonic Dirichlet function on $D$ such that $h$ and $H\circ z$ are \textbf{quasi-asymptotically equal}, meaning that \begin{equation} \label{eq:othercharscap} \Cap\big(\big\{v\in V : |h(v)-H\circ z(v)| \geq \varepsilon\big\}\big)<\infty \end{equation} for every $\varepsilon>0$. See \cref{subsec:capacity} for the definition of capacity. Similarly, for each $H\in \mathbf{HD}(D)$, $h=\mathsf{Disc}[H]$ is the unique harmonic Dirichlet function on $M$ such that $h$ is quasi-asymptotically equal to $H \circ z$. Given \cref{thm:isomorphismgeneral}, both statements are implied by Proposition \ref{lem:D0char}. \end{itemize} We can get the stronger characterisation of $\mathsf{Cont}$ and $\mathsf{Disc}$ in terms of asymptotic equality if we make additional assumptions on the domain. We say that a domain $D$ is \textbf{uniformly transient} if \[ \inf_{z\in D} \Cap\Big(B\big(z,\varepsilon d(z,\partial D)\big)\Big)>0 \] for every $\varepsilon>0$. For example, the unit disc is uniformly transient, as is any finitely connected domain none of whose complementary components are points.
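A simple non-example (included for illustration) is the punctured disc $\mathbb D \setminus \{0\}$: it is transient, but it is not uniformly transient. Indeed, since a single point is polar we have $\mathbf{D}_0(\mathbb D\setminus\{0\})=\mathbf{D}_0(\mathbb D)$, so capacities computed relative to the two domains coincide, while for $z$ near the puncture we have $d(z,\partial D)=|z|$ and the capacity $\Cap\big(B(z,\varepsilon|z|)\big) = 2\pi\big(1+o(1)\big)/\log\big(1/(\varepsilon|z|)\big)$ tends to zero as $z\to 0$.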
\begin{itemize} \item \emph{If $D$ is uniformly transient}, then for each \emph{bounded} $h\in \mathbf{BHD}(M)$, $H=\mathsf{Cont}[h]$ is the unique harmonic Dirichlet function on $D$ such that $h$ and $H\circ z$ are asymptotically equal. Similarly, for each \emph{bounded} $H\in \mathbf{BHD}(D)$, $h=\mathsf{Disc}[H]$ is the unique harmonic Dirichlet function on $M$ such that $h$ is asymptotically equal to $H \circ z$. As we will see, given \cref{thm:isomorphismgeneral}, both statements are implied by Proposition \ref{prop:uniformlytransient}, and yield \cref{thm:isomorphismdisc} as a special case. \end{itemize} Note that the weighted map $M$ is \emph{not} required to be uniformly transient. \subsection{Related work and an alternative proof} A related result concerning linear isomorphisms between harmonic Dirichlet spaces induced by rough isometries between bounded degree graphs was shown by Soardi \cite{Soardi93}, who proved that if $G_1$ and $G_2$ are bounded degree, rough isometric graphs, then $G_1$ admits non-constant harmonic Dirichlet functions if and only if $G_2$ does. See e.g.\ \cite{Soardibook,LP:book} for definitions of and background on rough isometries. Soardi's result was subsequently generalized by Holopainen and Soardi \cite{holopainen1997p} to rough isometries between bounded degree graphs and a certain class of Riemannian manifolds. This result was then strengthened by Lee \cite{lee1999rough}, who showed that the dimension of the space of harmonic Dirichlet functions is preserved under rough isometry. By a small improvement on the methods in the works mentioned (or, alternatively, using the methods of this paper), it is not difficult to show the stronger result that for each rough isometry $\rho : G_1 \to G_2$, we have that $h \mapsto (h \circ \rho)_{\mathbf{HD}}$ is a bounded linear isomorphism $\mathbf{HD}(G_2)\to\mathbf{HD}(G_1)$. Similar statements hold for rough isometries between graphs and manifolds and between two manifolds (under appropriate assumptions on the geometry in both cases). Indeed, in the discrete case the fact that $h \mapsto (h \circ \rho)_{\mathbf{HD}}$ is a bounded linear isomorphism can easily be read off from the proof of Soardi's result presented in \cite{LP:book}. Another setting in which one very easily obtains an isomorphism between harmonic Dirichlet spaces is given by quasi-conformal mappings between domains (or other Riemannian manifolds). Recall that a homeomorphism $q:D\to D'$ is said to be \textbf{quasi-conformal} if it is orientation preserving, weakly differentiable, and there exists a constant $C$ such that \[\|D q(z)\|^2 \leq C \,|\!\det \left[D q(z)\right]\!| \] for a.e.\ $z\in D$. It is trivial to verify by change of variables that $\mathcal E(\phi \circ q) \leq C \mathcal E(\phi)$ for every $\phi \in \mathbf{D}(D')$ and $\mathcal E(\psi \circ q^{-1}) \leq C \mathcal E(\psi)$ for every $\psi \in \mathbf{D}(D)$, so that composition with $q$ defines a bounded linear isomorphism from $\mathbf{D}(D')$ to $\mathbf{D}(D)$. Moreover, it is immediate that $\psi \circ q \in \mathbf{D}_0(D)$ if and only if $\psi \in \mathbf{D}_0(D')$, and it follows that $H \mapsto (H\circ q)_{\mathbf{HD}}$ is a bounded linear isomorphism from $\mathbf{HD}(D')$ to $\mathbf{HD}(D)$.
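To spell out the change-of-variables computation behind these inequalities (we include it for the reader's convenience): for $\phi \in \mathbf{D}(D')$ we have $\nabla(\phi\circ q)(z) = Dq(z)^T\, \nabla\phi(q(z))$ for a.e.\ $z \in D$, and hence
\[
\mathcal E(\phi\circ q) = \int_D \big\|Dq(z)^T \nabla\phi(q(z))\big\|^2 \dif z \leq \int_D \|Dq(z)\|^2\, \|\nabla\phi(q(z))\|^2 \dif z \leq C \int_D \|\nabla\phi(q(z))\|^2\, |\!\det Dq(z)|\, \dif z = C\, \mathcal E(\phi),
\]
where the final equality is the substitution $w=q(z)$. The bound for $q^{-1}$ follows in the same way, since the inverse of a quasi-conformal homeomorphism is again quasi-conformal.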
Using these ideas, one could obtain an alternative, less direct proof of \cref{thm:isomorphismgeneral}, sketched as follows: First, let $S$ be the `piecewise flat' surface obtained by gluing regular polygons according to the combinatorics of the map $M$, which is Riemannian apart from having conical singularities at its vertices. The assumption that $M$ has bounded degrees and codegrees readily implies that the function $i$ sending each vertex of $M$ to the corresponding point of $S$ is a rough isometry. One can then show that $H \mapsto (H \circ i)_{\mathbf{HD}}$ is a bounded linear isomorphism $\mathbf{HD}(S)\to\mathbf{HD}(M)$, similar to the above discussion. Next, the Ring Lemma easily allows us to construct, face-by-face, a quasi-conformal map $q:S\to D$ such that $q\circ i = z$. One can then arrive at \cref{thm:isomorphismgeneral} by composing the isomorphism $\mathbf{HD}(S)\to\mathbf{HD}(M)$, $H \mapsto (H \circ i)_{\mathbf{HD}}$ and the isomorphism $\mathbf{HD}(D)\to\mathbf{HD}(S)$, $H \mapsto (H \circ q)_{\mathbf{HD}}$. \section{Proof} \subsection{Capacity characterisation of $\mathbf{D}_0$} \label{subsec:capacity} Recall that the \textbf{capacity} of a finite set of vertices $A$ in a network $G$ is defined to be \[\Cap(A)=\sum_{v\in A} c(v) \mathbf P_v(\tau^+_A =\infty),\] where $\mathbf P_v(\tau^+_A=\infty)$ is the probability that a random walk on $G$ started at $v$ never returns to $A$ after time zero and $c(v)=\sum_{e\in E^\rightarrow : e^- =v} c(e)$ is the total conductance of all oriented edges emanating from the vertex $v$. The capacity of an infinite set $A$ is defined to be $\Cap(A)=\sup\{\Cap(A') : A' \subseteq A \text{ finite}\}.$ Another way to compute capacities is via \textbf{Dirichlet's principle}, which gives the following variational formula for the capacity of a (finite or infinite) set $A$ in a network $G$ (see e.g.\ \cite[Chapter 2]{LP:book}): \[ \Cap(A)=\inf\left\{\mathcal E(\phi): \phi \in \mathbf{D}_0(G),\, \phi|_A \geq 1 \right\}, \] where we set $\inf \emptyset = \infty$. (For example, if $G=(V,E)$ is transient then $\Cap(V)=\infty$ and the set $\{\phi \in \mathbf{D}_0(G),\, \phi|_V \geq 1\}$ is empty.) A similar formula can also be taken as the definition of the capacity of a set $A$ in a domain $D$ (see e.g.\ \cite{AnLyPe99}): \[ \Cap(A):= \inf\big\{\mathcal E(\Phi): \Phi \in \mathbf{D}_0(D),\, \Phi \geq 1 \text{ a.e.\ on an open neighbourhood of $A$}\big\}. \] A network is transient if and only if some (and hence every) finite set of its vertices has positive capacity, and a domain is transient if and only if some (and hence every) precompact open subset of it has positive capacity. The following characterisation of $\mathbf{D}_0$ is presumably well-known to experts. \begin{prop} \label{lem:D0char} \hspace{1cm} \begin{enumerate} \item Let $G$ be a network and let $\phi \in \mathbf{D}(G)$. Then $\phi \in \mathbf{D}_0(G)$ if and only if it is quasi-asymptotically equal to the zero function, that is, if and only if \[ \Cap\left(\{v\in V : |\phi(v)| \geq \varepsilon \}\right)<\infty \] for every $\varepsilon>0$. \item Let $D$ be a domain and let $\Phi \in \mathbf{D}(D)$. Then $\Phi \in \mathbf{D}_0(D)$ if and only if it is quasi-asymptotically equal to the zero function, that is, if and only if \[ \Cap\big(\{z\in D : |\Phi(z)| \geq \varepsilon \text{ a.e.\ on an open neighbourhood of $z$} \}\big)<\infty \] for every $\varepsilon>0$. \end{enumerate} \end{prop}
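Before giving the proof, we illustrate the continuum definitions with a classical example (included for orientation): if $D=\mathbb D$ is the unit disc and $A=\overline{B(0,r)}$ for some $0<r<1$, then near-optimal competitors in Dirichlet's principle are the equilibrium potentials $\Phi_{r'}(z)=\min\{1,\log|z|/\log r'\}$ with $r'\downarrow r$, whose energies are
\[
\mathcal E(\Phi_{r'}) = \frac{1}{(\log r')^2}\int_{r'<|z|<1} \frac{\dif z}{|z|^2} = \frac{2\pi}{\log(1/r')},
\]
so that $\Cap\big(\overline{B(0,r)}\big)=2\pi/\log(1/r)$. This is positive for each fixed $r$, as it must be since $\mathbb D$ is transient, and tends to zero as $r\to 0$, reflecting the fact that points are polar.
\begin{proof}
We prove item $1$; item $2$ is similar.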
If $G$ is recurrent, then $\mathbf{D}_0(G)=\mathbf{D}(G)$ \cite[Theorem 3.63]{Soardibook} and every set has capacity zero, so that the claim holds trivially. Thus, it suffices to consider the case that $G$ is transient. Let $\phi \in \mathbf{D}(G)$. If $\phi \in \mathbf{D}_0(G)$ then for each $\varepsilon>0$, the function $\psi=\varepsilon^{-1}|\phi|$ satisfies $\psi \geq 1$ on the set $\{v\in V: |\phi(v)| \geq \varepsilon\}$. It is easily verified that $\psi \in \mathbf{D}_0(G)$ and that $\mathcal E(\psi) \leq \varepsilon^{-2} \mathcal E(\phi)$, and so Dirichlet's principle implies that \begin{equation} \Cap\big(\{v\in V: |\phi(v)| \geq \varepsilon\}\big) \leq \mathcal E(\psi) \leq \varepsilon^{-2} \mathcal E(\phi) <\infty \end{equation} as claimed. Conversely, suppose that $\Cap(\{v\in V : |\phi(v)| \geq \varepsilon\})<\infty$ for every $\varepsilon>0$. Then for every $\varepsilon>0$ there exists $\psi_\varepsilon \in \mathbf{D}_0(G)$ such that $\psi_\varepsilon \geq 1$ on the set $\{v\in V : |\phi(v)|\geq \varepsilon\}$. Let $\langle X_n \rangle_{n \geq0}$ be a random walk on $G$. We deduce from the uniqueness of the Royden decomposition \eqref{eq:Royden} and from \eqref{eq:ALPlimitdisc} and \eqref{eq:ALPlimitidentification} that $\lim_{n\to\infty} \psi_\varepsilon(X_n) =0$ almost surely, and hence that $\limsup_{n\to\infty} |\phi(X_n)| \leq \varepsilon$ almost surely. Since $\varepsilon>0$ was arbitrary it follows that $\lim_{n\to\infty} \phi(X_n) =0$ almost surely, and we deduce from \eqref{eq:ALPlimitidentification} that $\phi \in \mathbf{D}_0(G)$ as claimed. \qedhere \end{proof} \subsection{Proof of the main theorems} We begin by recalling the Ring Lemma of Rodin and Sullivan \cite{RS87}, which was originally proven for circle packings of triangulations and was generalized to double circle packings of polyhedral maps in \cite{HutNach15b}. See \cite{Hansen,Ahar97} for quantitative versions in the case of triangulations. Given a double circle packing $(P,P^\dagger)$ in a domain $D \subseteq {\mathcal C}$ of a map $M$ we write $r(v)$ for the radius of $P(v)$ and $r(f)$ for the radius of $P^\dagger(f)$ for each $v\in V$ and $f\in F$. \begin{theorem}[The Ring Lemma] There exists a family of positive constants $\langle k_{n,m} : n\geq 3, m\geq 3\rangle$ such that if $(P,P^\dagger)$ is a double circle packing of a polyhedral planar map $M$ in a domain $D \subseteq {\mathcal C}$, then \[r(v)/r(f) \leq k_{\deg(v),\max_{g \perp v}\deg(g)}\] for every vertex $v\in V$ and every $f\in F$ incident to $v$. \end{theorem} For the rest of this section $M$ will be a transient weighted polyhedral map with bounded codegrees and bounded local geometry, $(P,P^\dagger)$ will be a double circle packing of $M$ in a domain $D \subseteq {\mathcal C} \cup \{\infty\}$, and $z$ will be the associated embedding of $M$. By applying a M\"obius transformation if necessary, we can and will assume that $D \subseteq {\mathcal C}$ (in which case $D\subsetneq {\mathcal C}$ by the He-Schramm theorem since $M$ is transient). We write $\mathbf{M}=\mathbf{M}(M)$ for the data \[\mathbf{M}(M) = \big(\max_{v\in V} \deg(v),\; \max_{f\in F} \deg(f),\; \sup_{e\in E} c(e),\; \sup_{e\in E} c^{-1}(e)\big).\] We say that two quantities are \textbf{comparable} if they differ up to positive multiplicative constants depending only on $\mathbf{M}$, and write $\asymp$, $\preceq$, and $\succeq$ for equalities and inequalities that hold up to positive multiplicative constants depending only on the data $\mathbf{M}$.
We also use standard big-O notation, where again the implicit positive multiplicative constants depend only on $\mathbf{M}$. A consequence of the Ring Lemma is that the embedding of $M$ given by drawing straight lines between the centres of circles in its double circle packing is \emph{good}\footnote{We remark that all our results hold more generally for good straight-line embeddings of $M$, not just those produced using double circle packing. However, we are not aware of any general method of producing good embeddings that does not rely on double circle packing.} in the sense of \cite{ABGN14}, meaning that adjacent edges have comparable lengths and that the faces in the embedding have internal angles uniformly bounded away from zero and $\pi$. We will require the following useful geometric property of good embeddings of planar graphs, stated here for double circle packings. For each $v\in V$ and $\delta>0$, we write $P_{\delta}(v)$ for the disc that has the same centre as $P(v)$ but has radius $\delta r(v)$. Given a set of vertices $A \subseteq V$, we write $P_{\delta}(A)$ for the union $P_{\delta}(A)=\bigcup_{v\in A} P_{\delta}(v)$. \begin{lemma}[The Sausage Lemma \cite{ABGN14}] There exists a positive constant $\delta_1=\delta_1(\mathbf{M})$ such that for each two oriented edges $e_1,e_2\in E^\rightarrow$ of $M$ that do \emph{not} share an endpoint, the convex hull of $P_{\delta_1}(e^-_1)\cup P_{\delta_1}(e_1^+)$ and the convex hull of $P_{\delta_1}(e_2^-)\cup P_{\delta_1}(e^+_2)$ are disjoint. \end{lemma} We now define the two operators that will be the key players in the proof of \cref{thm:isomorphismgeneral}. \begin{definition}[The operator $\mathsf{R}$] Fix $\delta_0=\delta_0(\mathbf{M}) \leq 1/2$ sufficiently small that $\delta_0$ is less than or equal to the sausage lemma constant $\delta_1$ and that $\frac{1}{4}|z(u)-z(v)| \geq \delta_0 r(v)$ for every adjacent pair $u,v\in V$. For each locally integrable $\Phi : D\to \mathbb R$, we define $\mathsf{R}[\Phi]:V\to \mathbb R $ by setting $\mathsf{R}[\Phi](v)$ to be the average value of $\Phi$ on the disc $P_{\delta_0}(v)$ for each $v\in V$, that is, \[ \mathsf{R}[\Phi](v) = \frac{1}{\pi \delta_0^2 r(v)^2}\int_{P_{\delta_0}(v)} \Phi(z) \dif z.\] \end{definition} If $H\in \mathbf{HD}(D)$, then it follows from harmonicity that $\mathsf{R}[H](v) = H \circ z(v)$ for every $v\in V$. \begin{definition}[The operator $\mathsf{A}$] Consider the triangulation $T$ embedded with straight lines in $D$ that is obtained by drawing a straight line between $z(v)$ and $z(u)$ whenever $u$ and $v$ are adjacent vertices of $M$, and a straight line between $z(v)$ and $z(f)$ (the centre of $P^\dagger(f)$) whenever $v$ is a vertex of $M$ and $f\perp v$ is a face of $M$ incident to $v$. For each function $\phi:V\to \mathbb R$, we define the \textbf{piecewise-affine extension} $\mathsf{A}[\phi]$ of $\phi$ to $D$ to be the unique function on $D$ that takes the values \[\mathsf{A}[\phi](z(v)) = \phi(v) \text{ for every } v\in V\quad \text{ and } \quad \mathsf{A}[\phi](z(f))=\phi(f):=\frac{1}{\deg(f)}\sum_{v \perp f} \phi(v) \text{ for every } f \in F \] on $z(V)=\{z(v): v\in V\}$ and $z(F)=\{z(f): f \in F\}$, and is affine on each edge and each face of the triangulation $T$. 
\end{definition} We fix a root vertex $o$ of $M$ with which to define the inner product on $\mathbf{D}(M)$ in \eqref{eq:innerproductdefdisc}, and take the interior of $P_{\delta_0}(o)$ to be the precompact open set $O$ used to define the inner product on $\mathbf{D}(D)$ in \eqref{eq:innerproductdefcont}. \begin{lemma} \label{lem:AandRenergy} $\mathsf{R}:\mathbf{D}(D)\to\mathbf{D}(M)$ and $\mathsf{A}:\mathbf{D}(M)\to\mathbf{D}(D)$ are bounded linear operators with norms bounded by constants depending only on $\mathbf{M}(M)$, and also satisfy \[\mathcal E(\mathsf{R}[\Phi]) \preceq \mathcal E(\Phi) \quad \text{ and } \quad \mathcal E(\mathsf{A}[\phi]) \preceq \mathcal E(\phi)\] for every $\Phi \in \mathbf{D}(D)$ and $\phi \in \mathbf{D}(M)$. In particular, $\mathsf{R}[\Phi]\in \mathbf{D}(M)$ for every $\Phi \in \mathbf{D}(D)$ and $\mathsf{A}[\phi]\in \mathbf{D}(D)$ for every $\phi\in \mathbf{D}(M)$. \end{lemma} The main estimates needed for this lemma are implicit in \cite{GGNS15}, and our proof is closely modeled on the arguments in that paper. \begin{proof}[Proof of \cref{lem:AandRenergy}] We begin with $\mathsf{A}$. We wish to show that $\mathcal E(\mathsf{A}[\phi])\preceq \mathcal E(\phi)$. Let $\phi\in \mathbf{D}(M)$, let $e\in E^\rightarrow$ be an oriented edge of $M$, and let $T_{e}$ be the triangle with corners at $z(e^-),z(e^+),$ and $z(e^\ell)$. For each $e\in E^\rightarrow$, let $\psi_e$ be the affine map sending $T_{e}$ to the convex hull of $\{0,1,i\}$ that sends $z(e^\ell)$ to $0$, $z(e^-)$ to $1$, and $z(e^+)$ to $i$. It follows from the Ring Lemma that $\|\mathrm{D} \psi_e(z)\|\asymp r(e^-)^{-1}$ for all $z\in T_e$, where $\mathrm{D}\psi_e$ denotes the total derivative of $\psi_e$. On the other hand, $\mathsf{A}[\phi]\circ \psi_e^{-1}$ is equal to the affine function $x+iy \mapsto (1-x-y)\phi(e^\ell) + x\phi(e^-) + y \phi(e^+)$, and we deduce that \begin{align*}\|\nabla \mathsf{A}[\phi](z)\| &\leq \|\mathrm{D} \psi_e(z)\|\left\|\nabla \!\left(\mathsf{A}[\phi] \circ \psi_e^{-1}\right)(\psi_e(z))\right\|\\ &\asymp r(e^-)^{-1} \max \bigl\{|\phi(e^-)-\phi(e^+)|,\, |\phi(e^-)-\phi(e^\ell)|,\, |\phi(e^+)-\phi(e^\ell)|\bigr\}. \end{align*} Integrating over $z \in T_{e}$ and summing over $e\in E^\rightarrow$, we obtain that \begin{align} \mathcal E(\mathsf{A}[\phi]) &= \sum_{e\in E^\rightarrow} \int_{T_{e}} \|\nabla \mathsf{A}[\phi](z)\|^2 \dif z \preceq \sum_{e\in E^\rightarrow} \max \big\{|\phi(e^-)-\phi(e^+)|,\, |\phi(e^-)-\phi(e^\ell)|,\, |\phi(e^+)-\phi(e^\ell)|\big\}^2 \nonumber \\ &\preceq \sum_{e\in E^\rightarrow} |\phi(e^-)-\phi(e^+)|^2 \quad + \sum_{v \in V, f\in F, f \perp v} |\phi(v)-\phi(f)|^2, \label{eq:EAtwoterms} \end{align} where in the first inequality we have used the fact that, by the Ring Lemma, the area of $T_{e}$ is comparable to $r(e^-)^2$ for every $e\in E^\rightarrow$. Now, for each face $f$ of $M$, we have that \[\max_{u,v \perp f} |\phi(u)-\phi(v)| \leq \sum_{e:e^\ell=f}|\phi(e^+)-\phi(e^-)|,\] and hence by Cauchy-Schwarz we have that \begin{align} \sum_{v \in V, f\in F, f \perp v} |\phi(v)-\phi(f)|^2 &\leq \sum_{v \in V, f\in F, f \perp v} \max_{u \perp f} |\phi(u)-\phi(v)|^2 \leq \sum_{v \in V, f\in F, f \perp v} \left[\sum_{e:e^\ell=f}|\phi(e^+)-\phi(e^-)|\right]^2 \nonumber \\ &\leq \sum_{v \in V, f \in F, f \perp v} \deg(f) \sum_{e: e^\ell=f} |\phi(e^+)-\phi(e^-)|^2.
\label{eq:EAsecondterm} \end{align} Since each oriented edge is counted at most a constant number of times in this sum we obtain from \eqref{eq:EAtwoterms} and \eqref{eq:EAsecondterm} that \begin{equation} \label{eq:Aenergyfinal} \mathcal E(\mathsf{A}[\phi]) \preceq \sum_{e\in E^\rightarrow} |\phi(e^+)-\phi(e^-)|^2 \preceq \mathcal E(\phi) \end{equation} as required. To control the other term in $\langle \mathsf{A}[\phi],\mathsf{A}[\phi]\rangle$, observe that \begin{align*} \int_{P_{\delta_0}(o)} \mathsf{A}[\phi](z)^2 \dif z &\preceq \max\big\{|\phi(u)|^2 : u \text{ shares a face with $o$}\big\}\\ &\preceq \phi(o)^2 + \max\big\{|\phi(u)-\phi(o)|^2 : u \text{ shares a face with $o$}\big\}, \end{align*} where we say that two vertices $u$ and $v$ \textbf{share a face} if there exists $f\in F$ such that $u\perp f$ and $v\perp f$. A simple Cauchy-Schwarz argument similar to the above then shows that \begin{equation} \label{eq:Anormother} \int_{P_{\delta_0}(o)} \mathsf{A}[\phi](z)^2 \dif z \preceq \phi(o)^2 + \mathcal E(\phi), \end{equation} and combining \eqref{eq:Aenergyfinal} and \eqref{eq:Anormother} yields that $\langle \mathsf{A}[\phi],\mathsf{A}[\phi]\rangle \preceq \langle \phi,\phi \rangle$ as required. \medskip We now show that $\mathsf{R}$ is bounded. We wish to show that $\langle \mathsf{R}[\Phi],\mathsf{R}[\Phi]\rangle \preceq \langle \Phi,\Phi\rangle$ and moreover that $\mathcal E(\mathsf{R}[\Phi]) \preceq \mathcal E(\Phi)$ for every $\Phi\in \mathbf{D}(D)$. Let us first suppose that $\Phi$ is continuously differentiable. It is well known, and can be seen by a simple mollification argument, that such $\Phi$ are dense in $\mathbf{D}(D)$ (as indeed are the smooth Dirichlet functions). For each $v\in V$, let $X_v$ be a random point chosen uniformly from the disc $P_{\delta_0}(v)$, independently of each other, so that $\mathsf{R}[\Phi](v)=\mathbb E \Phi(X_v)$. For each $u,v\in V$, let $\Gamma_{u,v}$ be the random line segment connecting $X_u$ to $X_v$. By Jensen's inequality and the assumption that $\Phi$ is continuously differentiable we have that \[ \left(\mathsf{R}[\Phi](u)-\mathsf{R}[\Phi](v)\right)^2 = \mathbb E\left[ \Phi(X_u)-\Phi(X_v) \right]^2 \leq \mathbb E\left[ \left(\Phi(X_u)-\Phi(X_v)\right)^2 \right] \leq \mathbb E \Big[ \big(\int_{\Gamma_{u,v}}\!\|\nabla \Phi(z)\|\dif z\big)^2 \Big]. \] For each adjacent $u,v \in V$, conditional on $\Gamma_{u,v}$, let $Z_{u,v}$ be a random point chosen uniformly on the line segment $\Gamma_{u,v}$. The Cauchy-Schwarz inequality implies that \[ \Bigl(\int_{\Gamma_{u,v}}\!\|\nabla \Phi(z)\|\dif z\Bigr)^2 \leq |\Gamma_{u,v}| \int_{\Gamma_{u,v}} \|\nabla \Phi(z)\|^2 \dif z \leq |\Gamma_{u,v}|^2 \,\mathbb E\left[ \|\nabla \Phi(Z_{u,v})\|^2 \mid \Gamma_{u,v}\right]. \] Next, the Ring Lemma implies that $|\Gamma_{u,v}|\preceq r(v)$, and we deduce that \begin{equation} \label{eq:RZbound} \left(\mathsf{R}[\Phi](u)-\mathsf{R}[\Phi](v)\right)^2 \leq \mathbb E\left[\Bigl(\int_{\Gamma_{u,v}}\!\|\nabla \Phi(z)\|\dif z\Bigr)^2\right] \preceq r(v)^2 \mathbb E\left[ \|\nabla \Phi(Z_{u,v})\|^2 \right]. \end{equation} Let $\mu_{u,v}$ be the law of $Z_{u,v}$ and let $A_{u,v}$ be its support, i.e., the convex hull of $P_{\delta_0}(u)\cup P_{\delta_0}(v)$. We claim that the Radon-Nikodym derivative of $\mu_{u,v}$ with respect to the Lebesgue measure on $A_{u,v}$ is $O(r(v)^{-2})$.
This is equivalent to the claim that \begin{equation} \label{eq:RadonNikodym} \P\left(Z_{u,v} \in B(z,\delta r(v))\right)\preceq \delta^2 \end{equation} for every $z \in A_{u,v}$ and $\delta>0$. Suppose without loss of generality that $|z-z(v)|\leq |z-z(u)|$, and condition on the value of $X_u$, so that $|X_u-z|\geq |z(u)-z(v)|/4\succeq r(v)$ by definition of $\delta_0$. In order for $Z_{u,v}$ to be in the ball $B(z,\delta r(v))$, we must have that $X_v$ is in the cone $K$ that has its vertex at $X_u$ and that is tangent to $B(z,\delta r(v))$, see \cref{fig:cone}. Since $|X_u-z|\geq |z(u)-z(v)|/4$, it follows by elementary trigonometry that the internal angle at the vertex of $K$ is $O(\delta)$, and consequently that the intersection of $K$ with $P_{\delta_0}(v)$ (or indeed with all of $A_{u,v}$), being contained inside a triangle with height $O(r(v))$ and width $O(\delta r(v))$, has area at most $O(\delta r(v)^{2})$. Thus, the probability that $X_v$ lies in this region is at most $O(\delta)$. Conditioned on the event that $X_v$ lies in $K$, the intersection of $\Gamma_{u,v}$ with $B(z,\delta r(v))$ has length at most $2\delta r(v)$, and so the conditional probability that $Z_{u,v}$ lies in this segment is $O(\delta)$. The estimate \eqref{eq:RadonNikodym} follows. \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{cone.pdf} \caption{\small{Illustration of the proof of the boundedness of $\mathsf{R}$. Suppose that $z$ (green square) is closer to $z(v)$ (navy disc) than to $z(u)$ (brown disc). Then conditional on the location of $X_u$ (red square), in order for $Z_{u,v}$ to be located in $B(z,\delta r(v))$ (purple disc), $X_v$ must be located in the intersection (blue segment) of $P_{\delta_0}(v)$ with the cone whose vertex is at $X_u$ and that is tangent to $B(z,\delta r(v))$. The dashed line is the perpendicular bisector of the line from $z(u)$ to $z(v)$. This intersection is contained within a triangle (grey) whose sides have lengths of order $O(r(v))$, $O(r(v))$ and $O(\delta r(v))$, and consequently has area $O(\delta r(v)^2)$.} } \label{fig:cone} \end{figure} Integrating the Radon-Nikodym estimate \eqref{eq:RadonNikodym} we obtain that \[ \mathbb E\left[ \|\nabla \Phi(Z_{u,v})\|^2 \right] = \int_{A_{u,v}}\frac{d\mu_{u,v}(z)}{\dif z} \| \nabla \Phi(z) \|^2 \dif z \preceq r(v)^{-2} \int_{A_{u,v}} \| \nabla \Phi(z) \|^2 \dif z \] and hence by \eqref{eq:RZbound} that \begin{equation} \label{eq:Rmainestimate} \left(\mathsf{R}[\Phi](u)-\mathsf{R}[\Phi](v)\right)^2 \preceq \int_{A_{u,v}} \| \nabla \Phi(z) \|^2 \dif z \end{equation} for every adjacent $u,v\in V$. Since \eqref{eq:Rmainestimate} holds uniformly for all continuously differentiable $\Phi\in \mathbf{D}(D)$ and the expressions on both sides of the inequality are continuous functions of $\Phi \in \mathbf{D}(D)$, we deduce by density that the inequality holds for \emph{all} $\Phi \in \mathbf{D}(D)$. Since $\delta_0$ was taken to be less than the Sausage Lemma constant, we have that each point $z$ is in at most $\max_{v\in V}\deg(v)=O(1)$ different regions of the form $A_{u,v}$, so that applying \eqref{eq:Rmainestimate} yields that \begin{equation} \label{eq:ERfinalbound} \mathcal E(\mathsf{R}[\Phi]) \asymp \sum_{e\in E^\rightarrow} \left(\mathsf{R}[\Phi](e^-)-\mathsf{R}[\Phi](e^+)\right)^2 \preceq \sum_{e\in E^\rightarrow} \int_{A_{e^-,e^+}} \| \nabla \Phi(z)\|^2 \dif z \preceq \int_D \| \nabla \Phi(z)\|^2 \dif z = \mathcal E(\Phi) \end{equation} as required.
The other term in $\langle \mathsf{R}[\Phi],\mathsf{R}[\Phi]\rangle$ can be bounded using Jensen's inequality, which yields that \begin{equation} \label{eq:normRotherbound}|\mathsf{R}[\Phi](o)|^2 \preceq \int_{P_{\delta_0}(o)}\Phi^2(z) \dif z. \end{equation} Combining \eqref{eq:ERfinalbound} and \eqref{eq:normRotherbound} yields that $\langle \mathsf{R}[\Phi],\mathsf{R}[\Phi]\rangle \preceq \langle \Phi,\Phi\rangle$ as required. \qedhere \end{proof} It is an immediate consequence of the closed graph theorem that if a Banach space $V$ is written as the direct sum of two closed subspaces $V=V_1 \oplus V_2$ then the associated projections onto each of the subspaces are bounded. (This can also be argued directly.) Applying this fact in our setting we obtain that the projections $\phi \mapsto \phi_{\mathbf{HD}}$ and $\Phi \mapsto \Phi_{\mathbf{HD}}$ are bounded. Thus, it follows as an immediate corollary to \cref{lem:AandRenergy} that the operators $\mathsf{Disc}:\mathbf{HD}(D)\to\mathbf{HD}(M)$ and $\mathsf{Cont}:\mathbf{HD}(M)\to\mathbf{HD}(D)$ defined by \begin{align}\mathsf{Disc}[H](v) &= (\mathsf{R}[H])_{\mathbf{HD}}(v) = (H \circ z)_{\mathbf{HD}}(v) = \mathbf E_v\left[\lim_{n\to\infty} H \circ z(X_n) \right] \qquad &H\in \mathbf{HD}(D),\; v\in V \label{eq:discdef}\\ \mathsf{Cont}[h](z) &= \,(\mathsf{A}[h])_{\mathbf{HD}}(z)\, = \mathbb E_{z}\left[\lim_{t\uparrow T_{\partial D}} \mathsf{A}[h](B_t) \right] \qquad & h \in \mathbf{HD}(M),\; z \in D \label{eq:contdef} \end{align} are also well defined and bounded. Here the final equalities of \eqref{eq:discdef} and \eqref{eq:contdef} follow from \eqref{eq:ALPlimitidentification} and \eqref{eq:ALPcontinuousidentification} respectively. \medskip A second immediate corollary is the following. \begin{corollary} \label{lem:d0tod0easy} If $\phi \in \mathbf{D}_0(M)$ then $\mathsf{A}[\phi] \in \mathbf{D}_0(D)$. Similarly, if $\Phi\in \mathbf{D}_0(D)$ then $\mathsf{R}[\Phi] \in \mathbf{D}_0(M)$. \end{corollary} \begin{proof} We prove the first sentence, the second being similar. It is immediate from the definitions that if $\phi \in \mathbf{D}_0(M)$ is finitely supported, then $\mathsf{A}[\phi]$ is compactly supported. We conclude by applying the boundedness of $\mathsf{A}$. \end{proof} The following lemma, which is proved below and is also an easy corollary of \cref{lem:AandRenergy}, is also implicit in \cite{GGNS15}; indeed, it can be thought of as a quantitative form of the main result of that paper. \begin{lemma} \label{lem:CapComparison} For every $0<\delta \leq 1/2$, we have that \[ \delta^4 \Cap(A) \preceq \Cap(P_{\delta}(A)) \preceq \, \Cap(A) \] for every set of vertices $A$ in $M$. \end{lemma} We will require the following simple estimates. \begin{lemma}[Continuity estimates] \label{lem:continuity} \hspace{1cm} \begin{enumerate} \item Let $\phi:V\to \mathbb R$ be a function. Then \[ \sup_{z\in P_\delta(v)}\big| \mathsf{A}[\phi](z) - \phi(v)\big| \leq \delta\sup \left\{ |\phi(u)-\phi(v)| : \text{$u$ and $v$ share a face of $M$}\right\} \preceq \delta \sqrt{\mathcal E(\phi)} \] for every $v\in V$ and $0<\delta<1$. \item Let $H:D\to\mathbb R$ be a harmonic function. Then for every $r>0$, $\alpha>1$, and $z_0 \in D$ such that $B(z_0,\alpha r) \subseteq D$ we have that \[\sup_{z\in B(z_0,r)} |H(z)-H(z_0)|^2 \leq \frac{1}{\pi} \log\left[ \frac{\alpha^2}{\alpha^2-1} \right] \int_{B(z_0,\alpha r)} \| \nabla H(z) \|^2 \dif z.
\] \end{enumerate} \end{lemma} \begin{proof} The first inequality of item $1$ is immediate from the definition of $\mathsf{A}[\phi]$, while the second follows since \begin{multline*} \sup \left\{ |\phi(u)-\phi(v)| : \text{$u$ and $v$ share a face of $M$}\right\} \leq \sup_{f\in F} \sum_{e \in E^\rightarrow : e^\ell = f} |\phi(e^+)-\phi(e^-)| \\\preceq \sup_{e\in E^\rightarrow} |\phi(e^+)-\phi(e^-)| \preceq \sqrt{\mathcal E(\phi)}. \end{multline*} Item $2$ follows by taking $\Phi: B(z_0,\alpha r)\to {\mathcal C}$ to be holomorphic with real part $H$ and applying the inequality of \cite[Theorem 1.2.1]{MR3185375} to the function $\Psi:\mathbb D\to {\mathcal C}$ defined by $\Psi(z)=\Phi(z_0+\alpha r z)$. (Note that their definition of the energy of $\Psi$ disagrees with ours by a factor of $\pi$.) \end{proof} \begin{proof}[Proof of \cref{lem:CapComparison}] We start with the upper bound. Let $\phi\in \mathbf{D}_0(M)$ be such that $\phi|_A \geq 1$, and let $\psi = (\phi \wedge 1)\vee 0$. It is easily verified that $\mathcal E(\psi) \leq \mathcal E(\phi)$ and $\psi |_A = 1$, and it follows from \cref{lem:D0char} that $\psi\in \mathbf{D}_0(M)$ (this is also easy to verify directly). \cref{lem:continuity} implies that $\mathsf{A}[\psi](z) \geq 1- \delta$ for every $z\in P_\delta(A)$. Thus, by \cref{lem:d0tod0easy}, we have that $2(1-\delta)^{-1}\mathsf{A}[\psi] \in \mathbf{D}_0(D)$ and that $2(1-\delta)^{-1}\mathsf{A}[\psi] \geq 1$ on an open neighbourhood of $P_\delta(A)$, so that, by Dirichlet's principle and \cref{lem:AandRenergy}, \[\Cap(P_\delta(A)) \leq \mathcal E(2(1-\delta)^{-1}\mathsf{A}[\psi]) \preceq \mathcal E(\psi) \leq \mathcal E(\phi). \] The claimed upper bound follows by taking the infimum over $\phi$. We now turn to the lower bound. Let $\Phi\in \mathbf{D}_0(D)$ be such that $\Phi \geq 1$ on an open neighbourhood of $P_\delta(A)$, and let $\Psi=(\Phi \wedge 1) \vee 0$. As before, we have that $\mathcal E(\Psi)\leq \mathcal E(\Phi)$ and that $\Psi = 1$ on an open neighbourhood of $P_\delta(A)$. For every $v\in A$ we have that \[ \mathsf{R}[\Psi](v) = \frac{1}{\pi \delta_0^2 r(v)^2} \int_{P_{\delta_0}(v)} \Psi(z) \dif z \geq \frac{1}{\pi \delta_0^2 r(v)^2} \int_{P_{\delta_0}(v)} \mathbf{1}\left[z\in P_\delta(v)\right] \dif z = \frac{\delta^2}{\delta_0^2}. \] Thus, by \cref{lem:d0tod0easy}, the function $\delta_0^2\mathsf{R}[\Psi]/\delta^2 \in \mathbf{D}_0(M)$ is at least $1$ on $A$, and so, by Dirichlet's principle and \cref{lem:AandRenergy}, \[ \Cap(A) \leq \mathcal E\left(\frac{\delta_0^2}{\delta^2}\mathsf{R}[\Psi]\right) \preceq \delta^{-4} \mathcal E(\mathsf{R}[\Psi])\preceq \delta^{-4}\mathcal E(\Psi) \leq \delta^{-4} \mathcal E(\Phi). \] The claimed lower bound follows by taking the infimum over $\Phi$. \end{proof} There is one more lemma to prove before we prove \cref{thm:isomorphismgeneral}. \begin{lemma}\hspace{1cm} \label{lem:RandAsendD0toD0} \begin{enumerate} \item If $\phi \in \mathbf{D}(M)$, then $\phi-\mathsf{R}[\mathsf{A}[\phi]]\in \mathbf{D}_0(M)$. \item If $\phi\in \mathbf{D}(M)$, then $\mathsf{A}[\phi] \in \mathbf{D}_0(D)$ if and only if $\phi\in \mathbf{D}_0(M)$. \item If $\Phi\in \mathbf{D}(D)$, then $\mathsf{R}[\Phi] \in \mathbf{D}_0(M)$ if and only if $\Phi\in \mathbf{D}_0(D)$. \end{enumerate} \end{lemma} \begin{proof}[Proof of \cref{lem:RandAsendD0toD0}] We begin with item $1$.
Observe that, by the definitions of $\mathsf{R}$ and $\mathsf{A}$, we have that \[\big|\phi(v)-\mathsf{R}[\mathsf{A}[\phi]](v)\big| \leq \sup \left\{|\phi(v)-\phi(u)| : u \text{ shares a face with $v$}\right\}\] for every vertex $v\in V$. It follows by a straightforward argument with the Cauchy-Schwarz inequality, similar to that used in the proof of \cref{lem:AandRenergy}, that \[\sum_{v\in V} \big|\phi(v)-\mathsf{R}[\mathsf{A}[\phi]](v)\big|^2 \preceq \mathcal E(\phi), \] and hence that, for each $\varepsilon>0$, \[ \Cap\Big(\big\{ v \in V : \big|\phi(v)-\mathsf{R}[\mathsf{A}[\phi]](v)\big| \geq \varepsilon \big\}\Big) \preceq \Big|\big\{ v \in V : \big|\phi(v)-\mathsf{R}[\mathsf{A}[\phi]](v)\big| \geq \varepsilon \big\}\Big| \preceq \mathcal E(\phi) \varepsilon^{-2}. \] The right hand side is finite for every $\varepsilon>0$, and so we conclude by applying \cref{lem:D0char}. We now turn to items $2$ and $3$. The `if' parts of the statements are covered by \cref{lem:d0tod0easy}; it remains to prove only the `only if' parts of the statements. We begin with item 2. Let $\phi\in \mathbf{D}(M)$ be such that $\mathsf{A}[\phi] \in \mathbf{D}_0(D)$ and let $\varepsilon>0$. It follows from \cref{lem:continuity} that there exists a constant $\delta=\delta(\varepsilon,\mathcal E(\phi),\mathbf{M}(M))$ such that \[ \{v\in V: |\phi(v)| \geq \varepsilon \} \subseteq \left\{v\in V: |\mathsf{A}[\phi](z)| \geq \frac{\varepsilon}{2} \text{ for all $z\in P_\delta(v)$} \right\}, \] and it follows from \cref{lem:CapComparison} that there exists a constant $C=C(\varepsilon,\mathcal E(\phi),\mathbf{M}(M))$ such that \[ \Cap\left(\left\{v\in V: |\phi(v)|\geq \varepsilon\right\}\right) \leq C\, \Cap\left(\left\{z\in D : |\mathsf{A}[\phi](z)|\geq \frac{\varepsilon}{2} \right\}\right). \] Here we have used the fact that if $A \subseteq B$ then $\Cap(A)\leq \Cap(B)$, which is an immediate consequence of the Dirichlet principle. \cref{lem:D0char} and the assumption that $\mathsf{A}[\phi]\in \mathbf{D}_0(D)$ imply that the right hand side is finite, so that the left hand side is finite also. Since $\varepsilon>0$ was arbitrary, applying \cref{lem:D0char} a second time shows that $\phi \in \mathbf{D}_0(M)$ as claimed. It remains to prove item 3. We begin by proving that for every $H\in \mathbf{HD}(D)$ and $\varepsilon>0$ there exists a compact set $K \subset D$ such that \begin{equation} \label{eq:movedclaim} \Cap\bigl(\{z\in D : |H(z)| \geq \varepsilon \}\bigr)\preceq \Cap(K) + \Cap\left[ \left\{ v \in V : |H\circ z(v)| \geq \varepsilon/4 \right\}\right]. \end{equation} For each $v\in V$, define $\mathrm{Fl}(v)$ to be the union of the disc $P(v)$ with all of the discs $P^\dagger(f)$ where $f$ is a face of $M$ incident to $v$, and let $N(v)$ be the set of all vertices of $M$ that share a face with $v$. Let $H\in \mathbf{HD}(D)$ and let $\varepsilon>0$. Observe that \begin{align*} \{z\in D : |H(z)|\geq \varepsilon \} &\subseteq \bigcup \left\{ P(v) : v\in V,\, \sup\{|H(z)| : z \in P(v) \} \geq \varepsilon \right\} \\ &\hspace{3.8cm}\cup\, \bigcup \left\{ P^\dagger(f) : f\in F,\, \sup\{|H(z)| : z \in P^\dagger(f) \} \geq \varepsilon\right\} \\ &\subseteq \bigcup \left\{ \mathrm{Fl}(v) : v\in V,\, \sup\left\{|H(z)| : z \in P(v) \right\} \geq \varepsilon \right\}, \end{align*} where the second inclusion follows from the maximum principle. Define the sets $A_{\varepsilon,1} = \{v\in V : |H\circ z(v)| \geq \varepsilon/2 \}$ and \[ A_{\varepsilon,2} = \left\{v\in V : \sup\bigl\{ |H(z)|: z \in P(v) \bigr\} \geq \varepsilon\right\}.
\] We claim that $A_{\varepsilon,2} \setminus A_{\varepsilon,1}$ is finite. Indeed, suppose for contradiction that $A_{\varepsilon,2} \setminus A_{\varepsilon,1}$ is infinite. It follows from the Ring Lemma that there exists a constant $C>1$ such that $B(z(v),Cr(v)) \subseteq D$ for every $v\in V$, and since the point set $\{z(v):v \in V\}$ is locally finite in $D$, we can find an infinite set $A_{\varepsilon,3} \subseteq A_{\varepsilon,2} \setminus A_{\varepsilon,1}$ such that the balls $B(z(v),Cr(v))$ and $B(z(u),Cr(u))$ are disjoint whenever $u,v\in A_{\varepsilon,3}$ are distinct. Applying item 2 of \cref{lem:continuity} we obtain that \[ \mathcal E(H) \geq \sum_{v\in A_{\varepsilon,3}} \int_{B(z(v),Cr(v))} \|\nabla H(z)\|^2 \dif z \succeq \sum_{v\in A_{\varepsilon,3}} \varepsilon^2 = \infty, \] contradicting the assumption that $H\in \mathbf{HD}(D)$. It follows that if $H\in \mathbf{HD}(D)$ then \[ \{z\in D : |H(z)| \geq \varepsilon \} \subseteq K' \cup \bigcup \left\{\mathrm{Fl}(v) : v \in V,\, |H\circ z(v)| \geq \varepsilon/2 \right\} \] where $K' \subset D$ is compact. Now, since $H \circ z \in \mathbf{D}(M)$ by \cref{lem:AandRenergy}, it follows by similar reasoning to above that $\{v\in V : |H \circ z(u)|\geq \varepsilon/2$ for some $u\in N(v)\} \setminus \{v\in V : |H\circ z(u)| \geq \varepsilon/4$ for every $u\in \{v\}\cup N(v)\}$ is finite, and it follows that there exists a compact set $K \subset D$ such that \begin{multline*} \{z\in D : |H(z)| \geq \varepsilon \} \subseteq K \cup \bigcup \left\{\mathrm{Fl}(v) : v \in V,\, |H\circ z(u)| \geq \varepsilon/4 \text{ for every } u\in \{v\}\cup N(v) \right\}. \end{multline*} Now suppose that $\psi \in \mathbf{D}_0(M)$ is such that $\psi \geq 1$ on the set $\{ v \in V : |H \circ z(v)| \geq \varepsilon/4\}$. Then we clearly have that $\mathsf{A}[\psi] \geq 1$ on the set $\bigcup \left\{\mathrm{Fl}(v) : v \in V,\, |H\circ z(u)| \geq \varepsilon/4 \text{ for every $u\in \{v\}\cup N(v)$}\right\}$, and optimizing over $\psi$ it follows that \begin{multline*} \Cap\bigl(\{z\in D : |H(z)| \geq \varepsilon \}\bigr)\\ \leq \Cap(K) + \Cap\left[ \bigcup \left\{\mathrm{Fl}(v) : v \in V,\, |H\circ z(u)| \geq \varepsilon/4 \text{ for every $u\in \{v\}\cup N(v)$}\right\}\right] \\\preceq \Cap(K) + \Cap\left[ \left\{ v \in V : |H\circ z(v)| \geq \varepsilon/4 \right\}\right] \end{multline*} as claimed. Now let $\Phi= \Phi_{\mathbf{D}_0} + \Phi_{\mathbf{HD}} \in \mathbf{D}(D)$ and suppose that $\mathsf{R}[\Phi] \in \mathbf{D}_0(M)$. We have by \cref{lem:d0tod0easy} that $\mathsf{R}[\Phi_{\mathbf{D}_0}] \in \mathbf{D}_0(M)$, and it follows that $\mathsf{R}[\Phi_{\mathbf{HD}}] = \Phi_{\mathbf{HD}}\circ z = \mathsf{R}[\Phi]-\mathsf{R}[\Phi_{\mathbf{D}_0}] \in \mathbf{D}_0(M)$ also. Let $\varepsilon>0$. Then we have by \eqref{eq:movedclaim} and \cref{lem:D0char} that there exists a compact subset $K$ of $D$ such that \begin{equation*} \Cap\bigl(\{z\in D : |\Phi_{\mathbf{HD}}(z)| \geq \varepsilon \}\bigr) \leq \Cap(K) + \Cap\left[\left\{ v \in V : |\Phi_{\mathbf{HD}}\circ z(v)| \geq \varepsilon/4 \right\}\right] <\infty \end{equation*} where we have used the fact that compact subsets of transient domains have finite capacity. Since $\varepsilon>0$ was arbitrary it follows from \cref{lem:D0char} that $\Phi_{\mathbf{HD}}\in \mathbf{D}_0(D)$, and hence that $\Phi_{\mathbf{HD}}\equiv 0$ by uniqueness of the Royden decomposition. Thus, $\Phi\in \mathbf{D}_0(D)$ as claimed. \qedhere \end{proof} We are now ready to prove \cref{thm:isomorphismgeneral}.
\begin{proof}[Proof of \cref{thm:isomorphismgeneral}.] As discussed after the proof of \cref{lem:AandRenergy}, \cref{lem:AandRenergy} implies that $\mathsf{Disc}$ and $\mathsf{Cont}$ are both bounded. Thus, it suffices to prove the following: \begin{enumerate} \item For each $H \in \mathbf{HD}(D)$, $h=\mathsf{Disc} [H]=(\mathsf{R}[H])_{\mathbf{HD}}$ is the unique element of $\mathbf{HD}(M)$ such that $\mathsf{R}[H] - h \in \mathbf{D}_0(M)$. \item For each $h \in \mathbf{HD}(M)$, $H=\mathsf{Cont} [h]$ is the unique element of $\mathbf{HD}(D)$ such that $h - \mathsf{R}[H] \in \mathbf{D}_0(M)$. \item $h=\mathsf{Disc}[\mathsf{Cont}[h]]$ and $H=\mathsf{Cont}[\mathsf{Disc}[H]]$ for every $h\in \mathbf{HD}(M)$ and $H\in \mathbf{HD}(D)$ respectively. \end{enumerate} Each of these items has a highly elementary but slightly tricky proof. Let $\mathsf{P}_{\mathbf{D}_0(M)},$ $\mathsf{P}_{\mathbf{HD}(M)}$, $\mathsf{P}_{\mathbf{D}_0(D)},$ and $\mathsf{P}_{\mathbf{HD}(D)}$ be the projections associated to the Royden decompositions of $\mathbf{D}(M)$ and $\mathbf{D}(D)$ respectively. \begin{enumerate} \item This follows immediately from the uniqueness of the Royden decomposition (i.e., the fact that $\mathbf{D}(M)=\mathbf{D}_0(M)\oplus \mathbf{HD}(M)$). \item We first wish to prove that $h-\mathsf{R}\mathsf{Cont}[h] = h - \mathsf{R} \mathsf{P}_{\mathbf{HD}(D)} \mathsf{A} h \in \mathbf{D}_0(M)$ for every $h\in \mathbf{D}(M)$. To see this, note that $h-\mathsf{R}\mathsf{P}_{\mathbf{HD}(D)} \mathsf{A} h = \left[h- \mathsf{R}\mathsf{A}h\right] + \mathsf{R} \mathsf{P}_{\mathbf{D}_0(D)} \mathsf{A} h$. Since $h-\mathsf{R}\mathsf{A}h \in \mathbf{D}_0(M)$ by item 1 of \cref{lem:RandAsendD0toD0} and $\mathsf{R} \mathsf{P}_{\mathbf{D}_0(D)} \mathsf{A} h \in \mathbf{D}_0(M)$ by \cref{lem:d0tod0easy}, we deduce that $h-\mathsf{R}\mathsf{Cont}[h] \in \mathbf{D}_0(M)$ as claimed. We now prove uniqueness. Suppose that $H\in \mathbf{HD}(D)$ is such that $h-\mathsf{R}[H]$ is in $\mathbf{D}_0(M)$. Then we must have that $\mathsf{R}\left[\mathsf{Cont}[h]-H\right] = (h-\mathsf{R}[H]) - (h-\mathsf{R}[\mathsf{Cont}[h]])$ is in $\mathbf{D}_0(M)$ also, and it follows from \cref{lem:RandAsendD0toD0} (more specifically the `only if' implication of item 3 of that lemma) that $\mathsf{Cont}[h]-H \in \mathbf{D}_0(D)$. But since $\mathsf{Cont}[h]-H \in \mathbf{HD}(D)$ we deduce that $H=\mathsf{Cont}[h]$ as claimed. \item We first prove that $h=\mathsf{Disc}[\mathsf{Cont}[h]]$ for every $h\in \mathbf{HD}(M)$. We have that $h-\mathsf{Disc}[\mathsf{Cont}[h]] =h- \mathsf{R} \mathsf{Cont} [h] + \mathsf{P}_{\mathbf{D}_0(M)} \mathsf{R} \mathsf{Cont} [h]$, and since $h- \mathsf{R} \mathsf{Cont} [h] \in \mathbf{D}_0(M)$ by item 2 and $\mathsf{P}_{\mathbf{D}_0(M)} \mathsf{R} \mathsf{Cont} [h] \in \mathbf{D}_0(M)$ by definition, it follows that $h-\mathsf{Disc}[\mathsf{Cont}[h]] \in \mathbf{D}_0(M)$ and hence that $h-\mathsf{Disc}[\mathsf{Cont}[h]]=0$ as claimed. It remains to prove that $H=\mathsf{Cont}[\mathsf{Disc}[H]]$ for every $H\in\mathbf{HD}(D)$. By item 2 we have that $\mathsf{Disc}[H] - \mathsf{R} \mathsf{Cont}[\mathsf{Disc}[H]] \in \mathbf{D}_0(M)$, and hence that \[\mathsf{R}\bigl[H - \mathsf{Cont}[\mathsf{Disc}[H]] \bigr]= \mathsf{P}_{\mathbf{D}_0(M)}\mathsf{R} [H] + \mathsf{Disc}[H] - \mathsf{R} \mathsf{Cont}[\mathsf{Disc}[H]] \in \mathbf{D}_0(M)\]also. It follows by \cref{lem:RandAsendD0toD0} that $H - \mathsf{Cont}[\mathsf{Disc}[H]]\in \mathbf{D}_0(D)$ and hence that $H - \mathsf{Cont}[\mathsf{Disc}[H]]=0$ as claimed.
\qedhere \end{enumerate} \end{proof} \subsection{Asymptotic equality in the uniformly transient case} We now prove the following proposition, which, together with Proposition \ref{lem:D0char}, allows us to deduce \cref{thm:isomorphismdisc} from \cref{thm:isomorphismgeneral}. \begin{prop} \label{prop:uniformlytransient} Let $M$ be a transient weighted polyhedral planar map with bounded codegrees and bounded local geometry, let $(P,P^\dagger)$ be a double circle packing of $M$ in a domain $D \subset {\mathcal C}$, and let $z:V\to D$ be the function sending each vertex $v$ to the centre of the corresponding disc $P(v)$. Let $h$ and $H$ be bounded harmonic functions on $M$ and $D$ respectively. If $D$ is uniformly transient, then $h$ and $H\circ z$ are asymptotically equal if and only if they are quasi-asymptotically equal. \end{prop} The proof of this proposition applies the elliptic Harnack inequality, which we now discuss. For each $z\in {\mathcal C}$ and $r>0$, let $B(z,r)$ denote the Euclidean ball of radius $r$ around $z$. Recall the classical elliptic Harnack inequality for the plane, which states that for every $z_0\in {\mathcal C}$, every non-negative harmonic function $h: B(z_0,r) \to \mathbb R$, and every $z\in B(z_0,r)$, we have that \begin{equation} \label{eq:classicerEHI} \frac{r-|z-z_0|}{r+|z-z_0|} h(z_0) \leq h(z)\leq \frac{r+|z-z_0|}{r-|z-z_0|} h(z_0). \end{equation} An immediate consequence of this inequality is that \begin{equation} \label{eq:classicerEHI2} |h(z)-h(z_0)|\leq \frac{2|z-z_0|}{r-|z-z_0|} h(z_0) \end{equation} under the same assumptions. If $h:B(z_0,r)\to \mathbb R$ is a harmonic function that is not necessarily non-negative, we can apply this inequality to the normalized function $h-\inf_{z\in B(z_0,r)} h(z)$ to obtain that \begin{multline} \label{eq:classicEHI} |h(z)-h(z_0)| \leq \frac{2|z-z_0|}{r-|z-z_0|} \bigl(h(z_0)-\inf_{z' \in B(z_0,r)} h(z')\bigr) \\ \leq \frac{2 |z-z_0|}{r-|z-z_0|}\sup\big\{|h(z_1)-h(z_2)| : z_1,z_2 \in B(z_0,r) \big\}. \end{multline} Angel, Barlow, Gurel-Gurevich, and Nachmias \cite{ABGN14} established a version of the elliptic Harnack inequality that holds for double circle packings with respect to the Euclidean metric. The version of the theorem that we state here follows from that stated in \cite{ABGN14} by a simple rearrangement and iteration argument, given below. \begin{theorem}[Elliptic Harnack Inequality] Let $M$ be a transient weighted polyhedral planar map with bounded codegrees and bounded local geometry, let $(P,P^\dagger)$ be a double circle packing of $M$ in a domain $D$. Then for each $\alpha<1$ there exist positive constants $\beta=\beta (\mathbf{M})$ and $C=C(\alpha,\mathbf{M})$ such that \begin{equation} \label{eq:DiscreteEHI} |h(u)-h(v)| \leq C \left(\frac{|z(u)-z(v)|}{r}\right)^\beta \sup\big\{ |h(w_1)-h(w_2)| : z(w_1),z(w_2) \in B(z(v),r) \big\} \end{equation} for every harmonic function $h$ on $V$, every $v\in V$, every $r \leq d(z(v),\partial D)$, and every $u\in V$ with $z(u) \in B(z(v),\alpha r)$. \end{theorem} \begin{proof} Let $X$ be the union of the straight lines between the centres of circles in $P$. The Ring Lemma implies that the path metric on $X$ is comparable to the subspace metric on $X$ \cite[Proposition 2.5]{ABGN14}. Given a function $\phi$ on the vertex set of $M$, we extend $\phi$ to $X$ by linear interpolation along each edge.
The version of the elliptic Harnack inequality stated in \cite[Theorem 5.4]{ABGN14} implies that for each $A>1$, there exists a constant $C=C(A,\mathbf{M})>1$ such that for every $x \in X$ with $d(x,\partial D) \geq Ar$, and every harmonic function $h$ on $M$ such that the extension of $h$ to $X$ is positive on $B(x,Ar)$, we have that \begin{equation} \label{eq:ABGNEHI} \sup_{y \in X \cap B(x,r)} h(y) \leq C \inf_{y\in X \cap B(x,r)} h(y). \end{equation} Now suppose that $h$ is a harmonic function on $M$ that is not necessarily positive. Write $B(r)=X\cap B(x,r)$. Applying this inequality to the normalized function $h(y)-\inf_{z \in B(Ar)}h(z)$, we deduce that \[ \sup_{y \in B(r)} h(y)-\inf_{y \in B(Ar)} h(y) \leq C \left[ \inf_{y \in B(r)} h(y)-\inf_{y \in B(Ar)} h(y) \right]. \] Adding $(C-1) \sup_{y \in B(r)} h(y) + \inf_{y\in B(Ar)} h(y) - C \inf_{y\in B(r)} h(y)$ to both sides of this inequality, we obtain that \begin{align*} C\left[\sup_{y \in B(r)} h(y) - \inf_{y \in B(r)} h(y)\right] &\leq (C-1) \sup_{y\in B(r)} h(y) - (C-1) \inf_{y \in B(Ar)} h(y)\\ &\leq (C-1)\left[ \sup_{y \in B(Ar)} h(y) - \inf_{y \in B(Ar)} h(y)\right]. \end{align*} By applying this inequality for different values of $r$ we obtain that \[ \sup_{y \in B(A^{-n}r)}h(y) - \inf_{y \in B(A^{-n}r)}h(y) \leq \left(\frac{C-1}{C}\right) \left[ \sup_{y \in B(A^{-n+1}r)} h(y) - \inf_{y \in B(A^{-n+1}r)} h(y) \right] \] for every $n\geq 1$, every harmonic function $h$ on $M$, every $r>0$, and every $x\in X$ such that $d(x,\partial D) \geq r$. It follows by induction that \[ \sup_{y \in B(A^{-n}r)}h(y) - \inf_{y \in B(A^{-n}r)}h(y) \leq \left(\frac{C-1}{C}\right)^n \left[ \sup_{y \in B(r)} h(y) - \inf_{y \in B(r)} h(y) \right] \] for every harmonic function $h$ on $M$, every $r>0$, every $n\geq 1$, and every $x\in X$ such that $d(x,\partial D) \geq r$. This is easily seen to imply the claimed inequality, with $\beta = \log\left(\frac{C}{C-1}\right)/\log A$: taking $n=\lfloor \log_A (r/|z(u)-z(v)|)\rfloor$ gives $((C-1)/C)^n \preceq (|z(u)-z(v)|/r)^\beta$. \end{proof} The following lemma is presumably well-known to experts, but we were not able to find a reference. \begin{lemma} \label{lem:muchmoredetail} Let $G$ be a transient network and suppose that $A$ is a set of vertices for which there exists $\varepsilon>0$ and infinitely many disjoint sets $A_1,A_2, \ldots \subseteq A$ such that $\Cap(A_i)\geq \varepsilon$ for every $i\geq 1$. Then $\Cap(A)=\infty$. \end{lemma} \begin{proof} First note that if $A$ has finite capacity then we must have that the random walk on $G$ visits $A$ at most finitely often almost surely. Indeed, if $\Cap(A)<\infty$ then there exists $\psi \in \mathbf{D}_0(G)$ with $\psi|_A\geq 1$, and it follows from \eqref{eq:ALPlimitidentification} that if $X$ is a random walk then $\psi(X_n)\to 0$ a.s.\ and hence that $X$ visits $A$ at most finitely often a.s. Thus, it suffices to consider the case that the random walk visits $A$ at most finitely often almost surely. For each $i\geq 1$, there exists a finite set $A_i' \subseteq A_i$ such that $\Cap(A'_i) \geq \Cap(A_i)/2 \geq \varepsilon/2$. We construct a subsequence $i_1,i_2,\ldots$ as follows. Let $i_1=1$. Since the random walk visits $A$ at most finitely often almost surely, we have for each fixed $v$ that $\mathbf P_v\bigl(\text{hit }\bigcup_{i\geq j} A'_i\bigr)\to 0$ as $j\to\infty$, and it follows that, given $i_1,\ldots,i_m$, there exists $j$ such that \[\sum_{\ell=1}^m \sum_{v \in A_{i_\ell}'} c(v) \mathbf P_v\Bigl(\text{hit }\bigcup_{i\geq j} A'_i\Bigr) \leq \varepsilon/8.\] Set $i_{m+1}$ to be the minimal such $j$; this gives a recursive procedure to define the entire sequence $i_1,i_2,\ldots$.
By the Dirichlet principle we have that $\Cap(A) \geq \Cap\Bigl(\bigcup_{\ell=1}^m A_{i_\ell}'\Bigr)$ for each $m\geq 1$, and so it suffices to prove that \begin{equation} \label{eq:capacity_claim} \Cap\Bigl(\bigcup_{\ell=1}^m A_{i_\ell}'\Bigr) \geq \frac{\varepsilon m}{4} \end{equation} for every $m\geq 1$. To see this, we use the elementary bound \begin{align*} \Cap\Bigl(\bigcup_{\ell=1}^m A_{i_\ell}'\Bigr) &= \sum_{\ell=1}^m \sum_{v \in A_{i_\ell}'} c(v)\mathbf{P}_v\Bigl(\text{do not return to $\bigcup_{\ell=1}^m A_{i_\ell}'$} \Bigr)\\ &\geq \sum_{\ell=1}^m \sum_{v\in A_{i_\ell}'} c(v)\mathbf{P}_v\Bigl(\text{do not return to $A_{i_\ell}'$} \Bigr) - \sum_{\ell=1}^m \sum_{v\in A_{i_\ell}'} c(v) \mathbf{P}_v\Bigl(\text{hit $\bigcup_{k\geq \ell+1 }A'_{i_k}$} \Bigr) \\ &\hspace{4cm}- \sum_{\ell=1}^m \sum_{v\in A_{i_\ell}'} \sum_{r=1}^{\ell-1} \sum_{u \in A_{i_r}'} c(v) \mathbf{P}_v\Bigl(\text{hit $u$, don't return to $\bigcup_{k\geq \ell} A'_{i_k}$} \Bigr), \end{align*} from which the bound \[ \Cap\Bigl(\bigcup_{\ell=1}^m A_{i_\ell}'\Bigr) \geq \frac{\varepsilon m}{2} - \frac{\varepsilon m}{8} - \sum_{\ell=1}^m \sum_{v\in A_{i_\ell}'} \sum_{r=1}^{\ell-1} \sum_{u \in A_{i_r}'} c(v) \mathbf{P}_v\Bigl(\text{hit $u$, don't return to $\bigcup_{k \geq \ell} A'_{i_k}$} \Bigr) \] follows immediately. To control the final term, we reverse time to get that \begin{multline*} \Cap\Bigl(\bigcup_{\ell=1}^m A_{i_\ell}'\Bigr) \geq \frac{3\varepsilon m}{8} - \sum_{r=1}^m \sum_{u \in A_{i_r}'} \sum_{\ell=r+1}^m \sum_{v\in A_{i_\ell}'} c(u) \mathbf{P}_u\Bigl(\text{hit $\bigcup_{k\geq \ell} A'_{i_k}$ for first time at $v$} \Bigr)\\ \geq \frac{3\varepsilon m}{8} - \sum_{r=1}^m \sum_{u \in A_{i_r}'} \sum_{\ell=r+1}^m c(u) \mathbf{P}_u\Bigl(\text{hit $\bigcup_{k\geq \ell} A'_{i_k}$} \Bigr)\\ \geq \frac{3\varepsilon m}{8} - m \sum_{r=1}^m \sum_{u \in A_{i_r}'} c(u) \mathbf{P}_u\Bigl(\text{hit $\bigcup_{k \geq r+1} A'_{i_k}$} \Bigr) \geq \frac{\varepsilon m}{4} \end{multline*} as claimed. The claim that $A$ has infinite capacity now follows immediately from \eqref{eq:capacity_claim}. \end{proof} \begin{proof}[Proof of \cref{prop:uniformlytransient}] Asymptotic equality clearly implies quasi-asymptotic equality. Suppose that $h$ and $H\circ z$ are not asymptotically equal, so that there exists $\varepsilon>0$ such that the set $A_{\varepsilon}=\{ v\in V : |h(v)-H\circ z(v)| \geq \varepsilon\}$ is infinite. Since $h$ and $H$ are bounded, it follows from the elliptic Harnack inequalities \eqref{eq:classicEHI} and \eqref{eq:DiscreteEHI} that there exists $\delta>0$ such that \[ \bigcup_{v\in A_\varepsilon}\Big\{u \in V : z(u) \in B\Big(z(v),\delta d\big(z(v),\partial D\big)\Big)\Big\} \subseteq A_{\varepsilon/2}.\] Since $D$ is uniformly transient, \cref{lem:CapComparison} implies that the sets \[ \Big\{z \in D : z \in B\Big(z(v),\delta d\big(z(v),\partial D\big)\Big)\Big\} \] have capacity bounded below by some positive constant, and a simple variation on the proof of \cref{lem:CapComparison} yields that the sets \[\Bigl\{u \in V : z(u) \in B\Big(z(v),\delta d\big(z(v),\partial D\big)\Big)\Big\}\] also have capacity bounded below by a positive constant. Since there must exist infinitely many disjoint sets of this form, we can apply \cref{lem:muchmoredetail} to deduce that $\Cap(A_{\varepsilon/2})=\infty$. It follows that $h$ and $H \circ z$ are not quasi-asymptotically equal, concluding the proof. \end{proof} \subsection*{Acknowledgments} The author was supported by a Microsoft Research PhD Fellowship.
We thank the anonymous referees for their comments and corrections.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and main result} Let $\Omega$ be a bounded domain in $\R^3$ with smooth boundary $ {\partial} \Omega$. We denote by $S^2$ the standard 2-sphere. We consider the {\em harmonic map flow} for maps from $\Omega$ into $S^2$, given by the semilinear parabolic equation \begin{equation} \label{har flow0} \left \{ \begin{aligned} u_t = \Delta u + |\nabla u|^2 u \quad &\text{in } \Omega\times(0,T)\\ u = u_b \quad &\text{on } {\partial} \Omega\times(0,T)\\ u(\cdot,0) = u_0 \quad & \text{in } \Omega \end{aligned}\right.\end{equation} for a function $u:\Omega \times [0,T)\to S^2$. Here $u_0:\bar \Omega \to S^2$ is a given smooth map and $u_b = u_0\big|_{ {\partial} \Omega}$. Local existence and uniqueness of a classical solution follow from the pioneering work by Eells and Sampson \cite{Ells} and K.C. Chang \cite{chang}. Equation \equ{har flow0} formally corresponds to the negative $L^2$-gradient flow for the Dirichlet energy $\int_\Omega |\nabla u|^2 dx$. This energy is decreasing along smooth solutions $u(x,t)$: $$ \frac { {\partial} }{ {\partial} t} \int_\Omega |\nabla u(\cdot, t)|^2 = -2\int_\Omega |u_t(\cdot, t)|^2. $$ Chen-Struwe \cite{chen-struwe} found a global $H^1$-weak solution in any dimension. In the two-dimensional case $\Omega\subset \R^2\mapsto S^2$ this solution can only become singular at a finite number of points in space-time \cite{Struwe}. \medskip If $T>0$ designates the first instant at which smoothness of the solution of \equ{har flow0} is lost, standard parabolic regularity leads to the fact that $$ \|\nabla u(\cdot, t)\|_\infty \, \to \, +\infty \quad\mbox{as}\quad t\uparrow T. $$ In the two-dimensional case, substantial knowledge on the possible blow-up structure has been obtained in \cite{Ding-Tian,Lin-Wang1,Qing1,Qing-Tian,Struwe,Topping2}. Blow-up takes place only about a finite number of points $q_1,\ldots, q_k$, around which the solution takes the approximate form $u(x,t) \approx U\left (\frac{x-\xi(t)}{\lambda(t)} \right )$, where $U$ is a finite-energy harmonic map, namely a solution of $$\Delta U +| {\nabla} U|^2 U=0, \quad |U|\equiv 1 {\quad\hbox{in } } \R^2 , \qquad \int_{\R^2} |\nabla U|^2 <+\infty, $$ and $\lambda(t)\to 0$ as $t\to T$. Moreover (up to subsequences), we have \begin{equation} \label{aa2} |\nabla u(\cdot, t)|^2 \ \rightharpoonup \ |\nabla u_*|^2 + \sum_{i=1}^k 4\pi m_i\,\delta_{q_i} \quad\mbox{as}\quad t\to T, \end{equation} for some positive integers $m_i$, where $\delta_q$ denotes the unit Dirac mass at $q$. \medskip Less is known in the higher-dimensional case $\Omega\subset \R^n\mapsto S^2$ of problem \equ{har flow0}. Chen-Struwe and Cheng \cite{chen-struwe,cheng} have proven that the blow-up set in $\Omega$ is at most $(n-2)$-dimensional in the Hausdorff sense. More refined information on the singular set has been derived by Lin and Wang in \cite{Lin-Wang5}, see also \cite{LinLibro}. \medskip While various important blow-up classification results are available, finding solutions explicitly exhibiting blow-up behavior has been rather difficult. In fact, in the two-dimensional case they were even believed not to exist, see \cite{Chang-Ding-Ye}. The first example of a blowing-up solution in the case $\Omega =B_2 \subset \R^2 $, the unit two-dimensional ball, was found by Chang-Ding-Ye \cite{Chang-Ding-Ye} in the {\em 1-corrotational symmetry class}, $$ u(x,t) = \left (\begin{matrix} e^{ i\theta} \sin v(r,t) \\ \cos v(r,t) \end{matrix} \right ) , \quad x= re^{i\theta} , $$ where $v(r,t)$ is a scalar function.
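In this symmetry class the Dirichlet energy reduces (a standard computation, which we record here for the reader's orientation) to $$ \int_{B_2} |\nabla u|^2 \, dx \,=\, 2\pi \int_0^1 \Big( v_r^2 + \frac{\sin^2 v}{r^2} \Big)\, r\, dr, $$ since $|\partial_r u|^2 = v_r^2$ and $r^{-2}|\partial_\theta u|^2 = \sin^2 v/r^2$, and the flow preserves the corrotational form of the ansatz.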
System \equ{har flow0} reduces to the radial scalar equation \begin{align*} v_t = v_{rr} + \frac{v_r}r - \frac {\sin v\cos v} {r^2}, \quad v(0,t)=0, \quad r\in (0,1). \end{align*} Suitable initial and boundary conditions and the use of barriers lead to finite-time blow-up at some $T>0$ in the form $v(r,t) \approx w(\frac{r}{\lambda(t)}) $ with $$ w(\rho) = \pi - 2\arctan (\rho). $$ Van den Berg, Hulshof and King \cite{bhk} formally found that, generically, \begin{align*} \lambda(t) \approx \kappa \frac{ T-t}{|\log (T-t)|^2} \quad\mbox{as}\quad t\to T \end{align*} for some $\kappa >0$. Raphael and Schweyer \cite{rs1} rigorously constructed an entire 1-corotational solution with this blow-up rate. At the level of $u$, the solutions mentioned above have the form $$ u(x,t) \approx W\left(\frac x{\lambda(t)} \right ) $$ where $W(y)$ is the canonical 1-corotational harmonic map \begin{equation} \label{U00} W(y) = \frac 1{1+ |y|^2} \left (\begin{matrix} 2y \\ |y|^2 -1 \end{matrix} \right ) , \quad y\in \R^2 , \end{equation} which satisfies $$ \int_{\R^2} |\nabla W |^2 = 4\pi, \quad W (\infty) = {\bf e}_3 , $$ where \begin{align} \label{e123} {\bf e}_1 =\left (\begin{matrix} 1 \\ 0\\ 0 \end{matrix} \right ), \quad {\bf e}_2 =\left (\begin{matrix} 0 \\ 1\\ 0 \end{matrix} \right ),\quad {\bf e}_3 =\left (\begin{matrix} 0 \\ 0\\ 1 \end{matrix} \right ). \end{align} We achieved in \cite{ddw} the first construction of a blow-up solution of \equ{har flow0} without symmetries, in the case of maps $\Omega \subset \R^2 \to S^2$: for an arbitrary $\Omega\subset \R^2$, given points $q_1,\ldots, q_k\in\Omega$ and $u_b ={\bf e}_3$, there is for any sufficiently small $T>0$ a solution $u(x,t)$ with precisely these $k$ blow-up points which, consistently with \equ{aa2}, satisfies \begin{equation} \label{aa3} |\nabla u(\cdot, t)|^2 \ \rightharpoonup \ |\nabla u_*|^2 + \sum_{i=1}^k 4\pi \,\delta_{q_i} \quad\mbox{as}\quad t\to T, \end{equation} and which near each $q_j$, after a rigid constant rotation, has the approximate form $$ u(x,t) \approx W\left(\frac {x-q_j}{\lambda_j(t)} \right ), \quad \lambda_j(t) = \kappa_j \frac{ T-t}{|\log (T-t)|^2} \quad\mbox{as}\quad t\to T. $$ Part of the difficulty in the construction is due to the {\em instability} of the blow-up phenomenon described here once the 1-corotational symmetry is violated, see \cite{ddw}. This instability had been numerically conjectured in \cite{williams}. \medskip In the case of maps $\Omega \subset \R^3\to S^2$ only one example has been known, again in the 1-corotational class, with $\Omega =B_3$, the unit ball in $\R^3$. In this case the ansatz takes the form $$ u(x,t) = \left (\begin{matrix} e^{ i\theta} \sin v(r,z,t) \\ \cos v(r,z,t) \end{matrix} \right ) , \quad x= \left (\begin{matrix} re^{ i\theta} \\ z \end{matrix} \right ) , $$ and \equ{har flow0} reduces to the scalar equation \begin{equation}\label{111} v_t = v_{rr} +\frac{v_r}r + v_{zz} - \frac {\sin v\cos v} {r^2}, \quad v(0,z,t)=0, \quad r\in (0,1). \end{equation} Adapting the barrier method in \cite{Chang-Ding-Ye}, Grotowski \cite{grotowski} found boundary and initial conditions and a solution to \equ{111} that blows up on a subset of the $z$-axis $r=0$. (See a related result in \cite{grotowski2}.) No information on the structure (or dimension) of this set or on the blow-up rate is provided. \medskip In this paper we construct the first example of a solution with a 1-dimensional blow-up set in an arbitrary axisymmetric bounded domain $\Omega \subset \R^3$.
We observe that this example saturates the estimate $n-2$ for the dimension of the singular set found in \cite{cheng} (for $n=3$). \medskip Before stating our main result we introduce the setting we will consider. We say that $\Omega\subset \R^3$ is an axisymmetric domain if it can be expressed in the form \begin{align} \label{axi1} \Omega = \{ ( re^{i\vartheta} , z) \ /\ (r,z) \in {\mathcal D}, \quad \vartheta\in [0,2\pi] \} , \end{align} where ${\mathcal D}\subset \{(r,z) /\ r\ge 0\}\subset \R^2 $. When $\Omega$ is axisymmetric, it is natural to look for solutions of \equ{har flow0} with the same axial symmetry, namely \begin{align*} u(x,t) = \tilde u ( r,z,t) , \quad x = ( re^{i\vartheta} , z), \quad (r,z)\in {\mathcal D} , \end{align*} for a function $\tilde u : {\mathcal D} \times (0,T) \to S^2$. \medskip We fix in what follows an axisymmetric, smooth and bounded domain $\Omega$ of the form \equ{axi1}. Let us consider a point $(r_0,z_0) \in {\mathcal D}$ with $r_0>0$ and let $\Gamma$ be the curve inside $\Omega$ given by the copy of $S^1$, \begin{equation}\label{Gamma} \Gamma := \{ (r_0e^{i\vartheta}, z_0) \ / \ \vartheta \in [0,2\pi) \} \subset \Omega . \end{equation} \begin{theorem}\label{teo1} Let $\Omega \subset \R^3$ be an axisymmetric domain and consider problem $\equ{har flow0}$ with boundary condition $u_b \equiv {\bf e}_3$. Then for all sufficiently small $T>0$ there exist an initial condition and a solution $u(x,t)$ that blows up exactly on the curve $\Gamma$ in $\equ{Gamma}$, with a profile of the form $$ u(x,t) = W \left (\frac { (r,z)- \xi(t) } {\lambda(t)} \right ) + u_*(x) , \quad x= (re^{i\vartheta}, z) , \quad\mbox{as}\quad t\to T, $$ where $W(y)$ is the standard two-dimensional 1-corotational map $\equ{U00}$, $u_* \in H^1(\Omega)$, $\lambda(t)\to 0$ and $\xi(t) \to (r_0,z_0)$. \end{theorem} The proof provides much finer information on the asymptotic profile. In particular we have, analogously to \equ{aa3}, \begin{align*} |\nabla u(\cdot, t)|^2 \ \rightharpoonup \ |\nabla u_*|^2 + 4\pi \,\delta_{\Gamma} \quad\mbox{as}\quad t\to T, \end{align*} with $\delta_\Gamma$ the uniform Dirac measure on the curve $\Gamma$. Moreover, writing $\xi(t) = (\xi_1(t), \xi_2(t))$ we have the asymptotic expressions \begin{align*} \left\{ \begin{aligned} \xi_1(t) &= \sqrt{ r_0^2 + 2(T-t) } + O ( (T-t)^{1+\sigma}) , \\ \xi_2(t) & = z_0+ O ( (T-t)^{1+\sigma}),\\ \lambda(t) & = |\kappa| \frac{ T-t}{|\log (T-t)|^2} (1+ o(1)) , \end{aligned} \right. \end{align*} as $t \uparrow T$, for some $\kappa\in {\mathbb C}$ and $ \sigma>0$. \bigskip The proof of this result takes strong advantage of the symmetry of revolution of the domain. In fact, restricting the problem to the class of axisymmetric functions, Problem \equ{har flow0} reduces to a problem only involving the variables $(r,z)$ and the two-dimensional domain ${\mathcal D}$. We will closely follow the steps of the main result in \cite{ddw} and make reference to intermediate technical results there. \medskip With a very similar proof we can construct simultaneous blow-up on any finite number of disjoint circles of the form \equ{Gamma}. It would be a very interesting issue to consider the case $r_0=0$, in which the singularity would asymptotically collapse onto a point on the $z$-axis. Lifting the symmetry-of-revolution assumption, thus potentially obtaining other blow-up sets, is a very interesting and difficult issue. \section{The axially symmetric problem} In the setting of Theorem \ref{teo1} it is natural to look for solutions which are axially symmetric.
More precisely, we look for a solution of \equ{har flow0} with boundary condition $u_b = {\bf e}_3$ of the form $$ u(x,t) := \tilde u (r,z,t) , \quad x= (r e^{i\vartheta}, z) , \quad (r,z) {\quad\hbox{in } } {\mathcal D} , $$ where $ \tilde u : {\mathcal D} \times (0,T) \to S^2$ and ${\mathcal D}\subset \R^2$. We directly check that in this situation our problem becomes \begin{equation}\left \{ \begin{aligned} \tilde u_t &= \tilde u_{rr} + \frac 1r \tilde u_{r} + \tilde u_{zz} + | {\nabla} \tilde u |^2 \tilde u {\quad\hbox{in } } {\mathcal D}\times (0,T) \\ \tilde u_r &=0 {\quad\hbox{on } } \{r=0\} \cap {\mathcal D} \times (0,T)\\ \tilde u & = {\bf e}_3 {\quad\hbox{on } } ( {\partial} {\mathcal D}\setminus \{r=0\}) \times (0,T) \\ \tilde u(\cdot, 0) &= \tilde u_0 , \end{aligned}\right. \label{hmf1}\end{equation} where $ {\nabla} \tilde u = ( \tilde u_r, \tilde u_z) $. We want to find a solution $\tilde u(r,z,t) $ that blows up exactly at the point $q = (r_0,z_0)$ as $t\to T$ in the form $$ \tilde u(r,z,t) \approx W \left (\frac {(r,z) -\xi(t) } {\lambda(t)} \right ), \quad \lambda(t)\to 0, \quad \xi(t)\to q . $$ To make a precise ansatz, we consider the family of two-dimensional 1-corotational harmonic maps \[ U_{\lambda, \xi , \omega} (r,z) := Q_\omega\, W ( y ) ,\quad y= \frac {(r,z)- \xi }{\lambda}, \quad \xi\in \R^2 ,\ \omega\in \R,\ \lambda>0, \] where $ W(y) $ is the canonical 1-corotational harmonic map \equ{U00} and $Q_\omega$ is the $\omega$-rotation matrix \[ Q_\omega:= \left [ \begin{matrix} \cos\omega & - \sin \omega & 0 \\ \sin\omega & \cos\omega & 0 \\ 0 & 0 & 1 \end{matrix} \right ]. \] All these functions satisfy the elliptic equation \begin{align} \label{hm0} U_{rr}+ U_{zz} + | {\nabla} U|^2 U =0 {\quad\hbox{in } } \R^2, \quad |U|=1 . \end{align} For any sufficiently small number $T>0$ we look for an initial datum $u_0$ such that the solution $\tilde u(r,z,t)$ of problem \equ{hmf1} looks at main order like \[ U_{\lambda(t), \xi(t) , \omega(t)} (r,z) = Q_{\omega(t)}\, W( y), \quad y= \frac {(r,z)- \xi(t) }{\lambda(t)} , \] for certain functions $\xi(t)$, $\lambda(t)$ and $\omega (t)$ of class $C^1([0,T])$ such that \[ \xi(T) =q, \quad \lambda(T)=0 , \quad \omega(T)= 0. \] We consider a first approximation $U (r,z,t)$ which smoothly interpolates between $U_{\lambda(t), \xi(t) , \omega(t)}(r,z)$ for $(r,z)\approx q$ and the constant vector ${\bf e}_3$ away from $q$. Let $\eta(\zeta)$ be a smooth cut-off function so that $$\eta(\zeta )= \begin{cases} 1 & \hbox{ for $\zeta <1$,}\\ 0 &\hbox{ for $\zeta >2$}. \end{cases}$$ For a fixed small number $\delta>0$ we let $$ \eta^\delta (r,z) := \eta\left ( \frac {|(r,z)-q |}\delta \right ) $$ and set \begin{equation}\label{UUU} U (r,z,t)\ :=\ \, \eta^\delta(r,z) \, U_{\lambda(t), \xi(t) , \omega(t)}(r,z) \,+\, (1-\eta^\delta (r,z) ) \, {\bf e}_3 . \end{equation} We shall find values for these functions so that for a small remainder $v(r,z,t)$ we have that $ \tilde u = U + v $ solves \equ{hmf1}. The condition $|U+ v|=1$ tells us that $\tilde u$ can be written as \begin{equation}\label{v} \tilde u = U+ \Pi_{U^\perp}\varphi + a(\Pi_{U^\perp}\varphi) U, \end{equation} where $\varphi$ is a small function with values in $\R^3$ and we denote $$ \Pi_{U^\perp} \varphi := \varphi - (\varphi\cdot U) U, \quad a(\zeta) := \sqrt{1 - |\zeta|^2}-1 . $$ The term $a(\Pi_{U^\perp}\varphi)$ is of quadratic size in $\varphi$.
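For the record, this is immediate from orthogonality: where $|U|=1$ (which holds except in the transition region of the cut-off $\eta^\delta$), setting $\zeta := \Pi_{U^\perp}\varphi$ we have $\zeta\cdot U = 0$, and hence
$$
|U + \zeta + a(\zeta) U|^2 \,=\, (1+a(\zeta))^2 + |\zeta|^2 \,=\, \bigl(1 - |\zeta|^2\bigr) + |\zeta|^2 \,=\, 1,
\qquad
a(\zeta) = -\tfrac 12 |\zeta|^2 + O(|\zeta|^4),
$$
which makes the quadratic smallness explicit.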
We choose to decompose the remainder $\varphi(r,z,t)$ in \equ{v} as the sum of an ``outer'' part, better expressed in the original variables $(r,z)$, and an ``inner'' part, which is supported near the singularity and is naturally expressed as a function of the rescaled variable $y$. More precisely, we let \begin{equation}\label{aris} \varphi(r,z,t) \ = \ \varphi^{out} (r,z,t) + \varphi^{in} (y,t), \quad y=\frac{(r,z)-\xi(t)}{\lambda(t)} , \end{equation} where \begin{align} \nonumber \varphi^{in} (y,t) \ = & \ \eta_{R(t)}\left (y \right) Q_{\omega(t)} \phi(y,t), \quad \phi(y,t)\cdot W(y) \equiv 0 , \end{align} and $ \eta_R(y) := \eta\left (\frac {|y|} R \right)$. The function $\phi(y,t)$ is defined for $|y|< 3R(t)$, where $R(t)\to +\infty$ and $\lambda(t) R(t)\to 0$ as $t\to T$. With these definitions we see that $\Pi_{U^\perp} \varphi^{in}= \varphi^{in}$. \medskip We choose to decompose the outer part $\varphi^{out}(x,t)$ in \equ{aris} as \begin{align} \nonumber \varphi^{out}(x,t) = \Phi^0[\omega,\lambda, \xi] + Z^*(x,t) \, + \, \psi(x,t), \end{align} where $ \Phi^0$ and $Z^*(x,t)$ are explicit functions chosen as follows: $\Phi^0[\omega,\lambda, \xi]$ is a function (which will be precisely described in the next section) that at main order eliminates the largest slow-decaying part of the error of approximation $E(r,z,t)$ in \equ{hmf1}, namely $E = S(U)$, where $$ S(\tilde u) := - \tilde u_t + \tilde u_{rr} + \frac {\tilde u_r}r + \tilde u_{zz} + | {\nabla} \tilde u|^2 \tilde u . $$ Writing $p(t) := \lambda(t) e^{i\omega(t)}$ and using polar coordinates $$(r,z)= \xi(t)+ s e^{i\theta},$$ we require $$ \Phi^0_t - \Phi_{rr}^0 -\Phi_{zz}^0 \approx \frac 2s \left [\begin{matrix} \dot p(t) e^{i\theta} \\ 0 \end{matrix}\right ] \approx E(r,z,t) . $$ With the aid of Duhamel's formula for the standard heat equation, we find that the following function is a good approximate solution: \begin{align} \label{defPhi0} \Phi^0[\omega,\lambda,\xi] (s,\theta,t) & := \left [ \begin{matrix} \varphi^0(s,t) e^{i\theta } \\ 0 \end{matrix} \right ] \\ \nonumber \varphi^0(s,t) & = -\int_{-T}^t \dot p(\tau) s k(z(s),t-\tau) \, d\tau \\ \nonumber z(s) & = \sqrt{ s^2+ \lambda^2} ,\quad k(z,t) = 2\frac{1-e^{-\frac{z^2}{4 t}}}{z^2} , \end{align} where for technical reasons $p(t)$ is assumed to be defined in $[-T,T]$, that is, also for some negative values of $t$. On the other hand, we let $Z^*:\Omega\times (0,\infty) \to \R^3$ satisfy \begin{align} \label{heatZ*} \left\{ \begin{aligned} Z_{t}^* &= \Delta_x Z^* {\quad\hbox{in } } \Omega\times(0,\infty), \\ Z^*(\cdot ,t) &= 0{\quad\hbox{in } } {\partial} \Omega \times (0,\infty),\\ Z^*(\cdot ,0) &= Z^*_0 {\quad\hbox{in } } \Omega , \end{aligned} \right. \end{align} where $Z_0^*(x)$ is a small, sufficiently regular, axially symmetric function; more precisely, \begin{align} \label{notationZ0star} Z_0^*(x) = \tilde Z_0^*(r,z) = \left [ \begin{matrix} \tilde z_0^*(r,z) \\ \tilde z_{03}^* (r,z) \end{matrix} \right ] , \quad \tilde z_0^*(r,z) = \tilde z^*_{01}(r,z) + i \tilde z^*_{02}(r,z), \quad x= (re^{i\vartheta}, z), \end{align} a function essentially satisfying $$\tilde Z_0^*(q)= 0, \quad \div \tilde z^*_0(q) + i \mathop{\rm curl} \tilde z^*_0(q) \ne 0 , $$ where we denote \begin{align} \label{div-curl-z0star} \div \tilde z^*_0(r,z) = {\partial} _r \tilde z^*_{01}(r,z) + {\partial} _z \tilde z^*_{02}(r,z), \quad \mathop{\rm curl} \tilde z^*_0(r,z)= {\partial} _r \tilde z^*_{02}(r,z) - {\partial} _z \tilde z^*_{01}(r,z).
\end{align} Of course we have $Z^*(x,t) = \tilde Z^*(r,z,t)$. Then for $(r,z,t)\in {\mathcal D} \times (0,T) $ we make the ansatz \begin{equation} \left\{ \begin{aligned} \tilde u(r,z,t)\ = & \ U(r,z,t) \, +\, v(r,z,t), \\ v(r,z,t)\ = &\ \Pi_{U^\perp} \big( \eta^\delta\, \Phi^0[\omega, \lambda , \xi] + \tilde Z^* + \psi\big ) \, + \, \eta_R Q_\omega \phi + a U \end{aligned} \right. \label{canave}\end{equation} for a blowing-up solution $\tilde u(r,z,t)$ of \equ{hmf1}, where $\phi$ and $\psi$ are lower order corrections. Our task is to find functions $\omega(t), \lambda(t) , \xi(t)$, $\psi(x,t)$ and $\phi(y,t)$ as described above, such that the remainder $v$ remains uniformly small. \medskip We will define a system of equations that we call the {\em inner-outer gluing system}, essentially of the form \begin{align} \nonumber \left\{ \begin{aligned} \lambda^2 \phi_t\ & =\ L_ W [\phi ]\ +\ H[p,\xi, \psi,\phi], \quad \phi\cdot W = 0 {\quad\hbox{in } } \R^2\times (0,T) \\ \psi_t\ &=\ \psi_{rr} + \frac{\psi_r}r + \psi_{zz} \ +\ G[p,\xi, \psi,\phi]\qquad \qquad \qquad\ \, {\quad\hbox{in } } {\mathcal D} \times (0,T) \end{aligned} \right. \end{align} where \begin{equation}\label{L} L_W[ \phi] = \Delta_y \phi + | {\nabla} _y W|^2 \phi + 2( {\nabla} _y\phi \cdot {\nabla} _y W)W , \quad \phi\cdot W = 0 \end{equation} is the linearized operator for equation \equ{hm0} around $U= W$, so that if the pair of functions $(\phi(y,t),\psi(x,t))$ solves this system, then $\tilde u$ given by \equ{canave} is a solution of \equ{hmf1}. The point is to adjust the parameter functions $\omega, \lambda,\xi$ so that the inner problem can be solved for a $\phi(y,t)$ which decays as $|y|\to \infty$. To fix ideas, let us consider the approximate elliptic equation, where time is regarded just as a parameter, $$ L_ W [\phi ]\ +\ H[p,\xi, 0,0] = 0 {\quad\hbox{in } } \R^2 . $$ As we will discuss, a space-decaying solution $\phi(y,t)$ to this problem exists provided that a set of orthogonality conditions of the form $$ \int_{\R^2} H[p,\xi, 0,0](y,t)\, Z(y)\, dy = 0 \quad\mbox{for all}\quad Z\in \mathcal Z $$ holds, where $\mathcal Z $ is a 4-dimensional space consisting of decaying functions $Z(y)$ with $L_W[Z]=0$. These solvability conditions lead to an essentially explicit system of equations for the parameter functions, which will tell us in particular that for some small $\sigma>0$ \begin{align*} \begin{aligned} p(t) & = - (\div \tilde z^*_0(q) + {i\mathop{\rm curl} \tilde z_0^* (q)} ) \frac {|\log T|\, (T-t) }{\log^2 (T-t)} (1+ O( |\log T|^{-1+\sigma} )) , \\ \xi_1(t) & = \sqrt{ r_0^2 + 2 (T-t) } + O ( (T-t)^{1+\sigma}) \\ \xi_2(t) & = z_0+ O ( (T-t)^{1+\sigma}),\end{aligned} \end{align*} and we recall that we are consistently asking that $ \div \tilde z^*_0(q) + i{\mathop{\rm curl} \tilde z_0^* (q)} \ne 0$. \medskip In the next sections we will carry out in detail the program for the construction sketched above. \section{The linearized operator around the bubble} We can represent $ W (y)$ in polar coordinates, $$ W (y) = \left (\begin{matrix} e^{ i\theta} \sin {w(\rho )} \\ \cos {w(\rho )} \end{matrix} \right ), \quad w(\rho ) = \pi - 2\arctan (\rho ), \quad y= \rho e^{i\theta}. $$ We notice that $$ w_\rho = - \frac 2{1+\rho^2} , \quad \sin w = -\rho w_\rho = \frac {2\rho} {1+\rho^2}, \quad \cos w = \frac {\rho^2 -1}{1+\rho^2} . $$
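In particular, $w$ solves the corotational steady-state equation $w_{\rho\rho} + \frac{w_\rho}{\rho} - \frac{\sin w\cos w}{\rho^2} = 0$ in $(0,\infty)$, the radial reduction already encountered in the introduction. This can be confirmed with a minimal symbolic check (ours, purely illustrative) using the Python library {\tt sympy}:
\begin{verbatim}
import sympy as sp

rho = sp.symbols('rho', positive=True)
w = sp.pi - 2*sp.atan(rho)

# stationary corotational operator: w'' + w'/rho - sin(w)cos(w)/rho^2
residual = (sp.diff(w, rho, 2) + sp.diff(w, rho)/rho
            - sp.sin(w)*sp.cos(w)/rho**2)
print(sp.simplify(residual))   # expected output: 0
\end{verbatim}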
For the linearized operator $L_W$ in \equ{L} we have that $L_ W [ Z_{lj}] =0 $, where \begin{align} \label{ZZ} \left\{ \begin{aligned} Z_{01}(y) & = \rho w_\rho(\rho)\, E_1(y) & Z_{02}(y) &= \rho w_\rho(\rho)\, E_2(y) \\ Z_{11}(y) & = w_\rho(\rho)\, [ \cos\theta \, E_1(y) + \sin\theta\, E_2(y) ] & Z_{12}(y) & =w_\rho(\rho)\, [ \sin\theta \, E_1(y) - \cos \theta\, E_2(y) ] \\ Z_{-1,1}(y) &= \rho^2 w_\rho(\rho)[ \cos \theta E_1(y) -\sin \theta E_2(y) ] & Z_{-1,2}(y) &= \rho^2 w_\rho (\rho)[ \sin \theta E_1(y) + \cos \theta E_2(y) ] \end{aligned} \right. \end{align} and \[ E_1(y) = \left (\begin{matrix} e^{ i\theta} \cos w(\rho ) \\ - \sin {w(\rho )} \end{matrix} \right ), \quad E_2(y) = \left (\begin{matrix} ie^{ i\theta} \\ 0 \end{matrix} \right ) . \] These vectors form an orthonormal basis of the tangent space to $S^2$ at the point $ W (y)$. \subsection*{The linearized operator at functions orthogonal to \texorpdfstring{$U$}{U}} We consider the linearized operator $L_U$ analogous to $L_W$ but taken around our basic approximation $U$, that is, \[ L_U[ \varphi] = \varphi_{rr} + \varphi_{zz} + | {\nabla} U|^2 \varphi + 2( {\nabla} \varphi \cdot {\nabla} U)U . \] It will be especially significant to compute the action of $L_U$ on functions with values pointwise orthogonal to $U$. In what remains of this section we will derive various formulas that will be very useful later on. \medskip For an arbitrary function $\Phi(r,z)$ with values in $\R^3$ we denote the projection $$ \Pi_{U^\perp} \Phi := \Phi - (\Phi\cdot U) U. $$ A direct computation shows the validity of the following: \begin{equation} \nonumber L_U[\Pi_{U^\perp}\Phi] = \Pi_{U^\perp} (\Phi_{rr}+ \Phi_{zz}) + \tilde L_U[\Phi ] \end{equation} where \[ \tilde L_U[ \Phi ] := | {\nabla} U|^2 \Pi_{U^\perp} \Phi - 2 {\nabla} (\Phi \cdot U ) {\nabla} U, \] with $ {\nabla} = ( {\partial} _r, {\partial} _z)$ and $$ {\nabla} (\Phi \cdot U ) {\nabla} U = {\partial} _{r} (\Phi \cdot U )\, {\partial} _{r} U + {\partial} _{z} (\Phi \cdot U )\, {\partial} _{z} U . $$ A very convenient expression for $\tilde L_U[ \Phi ]$ is obtained if we use polar coordinates. Writing in complex notation $$ \Phi(r,z) = \Phi(s,\theta), \quad (r,z) = \xi + s e^{i\theta}, $$ we find \begin{align} \label{Ltilde} \tilde L_U[\Phi] = - \frac 2{\lambda} w_\rho(\rho)\, [ (\Phi_s \cdot U)Q_\omega E_1 - \frac 1{s} (\Phi_\theta \cdot U) Q_\omega E_2 ], \quad \rho = \frac s\lambda. \end{align} \bigskip We mention two consequences of formula \equ{Ltilde}. Let us assume that $\Phi(x)$ is a $C^1$ function $\Phi : {\mathcal D} \to {\mathbb C} \times \R$, which we express in the form \begin{align} \label{notation-Phi} \Phi(r,z)\ =\ \left ( \begin{matrix} \varphi_1 (r,z) + i \varphi_2(r,z) \\ \varphi_3 (r,z) \end{matrix} \right ). \end{align} We also denote $$\varphi = \varphi_1 + i \varphi_2 , \quad \bar \varphi = \varphi_1 - i \varphi_2 $$ and define the operators $$ \div \varphi = {\partial} _{r}\varphi_1 + {\partial} _{z}\varphi_2, \quad \mathop{\rm curl} \varphi = {\partial} _{r}\varphi_2 - {\partial} _{z}\varphi_1 . $$
Then the following formula holds: \begin{equation} \tilde L_U [\Phi ] = \tilde L_U [\Phi ]_0 + \tilde L_U [\Phi ]_1 + \tilde L_U [\Phi ]_2\ , \label{Ltilde2} \end{equation} where \begin{align} \label{Ltilde-j} \left\{ \begin{aligned} \tilde L_U [\Phi ]_0 & = \lambda^{-1} \rho w_\rho^2\, \big [\, \div ( e^{-i\omega} \varphi)\, Q_\omega E_1 + \mathop{\rm curl} ( e^{-i\omega} \varphi)\, Q_\omega E_2 \, \big ]\, \\ \tilde L_U [\Phi ]_1 & = -\, 2\lambda^{-1} w_\rho \cos w \, \big [\,( {\partial} _{r} \varphi_3) \cos \theta + ( {\partial} _{z} \varphi_3) \sin \theta \, \big ]\,Q_{\omega} E_1 \\ & \quad - 2\lambda^{-1} w_\rho \cos w \, \big [\, ( {\partial} _{r} \varphi_3) \sin \theta - ( {\partial} _{z} \varphi_3) \cos \theta \, \big ]\, Q_{\omega}E_2\ , \\ \tilde L_U [\Phi ]_2 &= \quad \lambda^{-1} \rho w_\rho^2 \, \big [\, \div (e^{i\omega}\bar \varphi)\, \cos 2\theta - \mathop{\rm curl} ( e^{i\omega}\bar \varphi)\, \sin 2\theta \, \big ]\, Q_{\omega} E_1 \\ & \quad + \lambda^{-1} \rho w_\rho^2 \, \big [\, \div ( e^{i\omega}\bar \varphi)\, \sin 2\theta + \mathop{\rm curl} ( e^{i\omega}\bar \varphi)\, \cos 2\theta \, \big ]\,Q_{\omega}E_2 . \end{aligned} \right. \end{align} Another corollary of formula \equ{Ltilde} that we single out is the following: assume that $$ \Phi (r,z) = \left ( \begin{matrix} \phi(s) e^{i\theta} \\ 0 \end{matrix} \right) , \quad (r,z) = \xi + se^{i\theta} , \quad \rho =\frac s\lambda , $$ where $\phi(s)$ is complex valued. Then \begin{align} \label{uii} \tilde L_U [\Phi] = \frac 2\lambda w_\rho(\rho)^2 \left [ {\rm Re } \,( e^{-i\omega} {\partial_s \phi(s) } ) Q_\omega E_1 + \frac 1s {\rm Im} \,( e^{-i\omega} \phi(s)) Q_\omega E_2 \right ]. \end{align} For the proof of the formulas above see \cite{ddw}, section~2. \section{The ansatz and the inner-outer gluing system} The equation we want to solve is $S(\tilde u)=0$, with $\tilde u=U+v$. A useful observation is that, as long as the constraint $|\tilde u|=1$ is kept at all times and $\tilde u= U+ v$ with $|v|\le \frac 12 $ uniformly, for $\tilde u$ to solve equation \equ{hmf1} it suffices that \begin{equation} \label{bU} S(U+v) = b(r,z,t) U \end{equation} for some scalar function $b$. Indeed, we observe that since $|\tilde u|\equiv 1 $ we have \[ b\, (U\cdot \tilde u) = S(\tilde u) \cdot \tilde u = - \frac 12 \frac d{dt} {|\tilde u|^2} + \frac 12 ( {\partial} ^2_r + {\partial} ^2_z) {|\tilde u|^2} + \frac 1{2r} {\partial} _r |\tilde u |^2 = 0 , \] and since $U\cdot \tilde u \ge \frac 12 $, we find that $b\equiv 0$.
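For the reader's convenience, the middle equality in the display follows from the elementary identities
$$
\tilde u \cdot (\tilde u_{rr} + \tilde u_{zz}) = \tfrac 12 ( {\partial} ^2_r + {\partial} ^2_z)|\tilde u|^2 - |\tilde u_r|^2 - |\tilde u_z|^2 ,
\qquad
\tilde u \cdot \tilde u_t = \tfrac 12 {\partial} _t |\tilde u|^2 ,
$$
together with $| {\nabla} \tilde u|^2\, (\tilde u\cdot \tilde u) = | {\nabla} \tilde u|^2$ when $|\tilde u|\equiv 1$, so that the gradient terms cancel exactly.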
\medskip We find the following expansion for $S(U+v)$ with $v=\Pi_{U^\perp}\varphi + a(\Pi_{U ^\perp}\varphi)U$: \[ S( U + \Pi_{U^\perp} \varphi + aU ) = S(U) - {\partial} _t \Pi_{U^\perp} \varphi+ L_U(\Pi_{U^\perp} \varphi) + \frac 1r {\partial} _r (\Pi_{U^\perp} \varphi ) + N_U( \Pi_{U^\perp} \varphi ) + c(\Pi_{U^\perp} \varphi) U \] where, for $\zeta =\Pi_{U^\perp} \varphi $ and $a = a(\zeta )$, \begin{align} \nonumber L_U(\zeta ) &= \zeta_{rr}+ \zeta_{zz} + |\nabla U|^2 \zeta + 2(\nabla U\cdot \zeta ) U \\ \nonumber N_U( \zeta ) &= \big[ 2 {\nabla} (aU)\cdot {\nabla} (U+ \zeta ) + 2 \nabla U \cdot \nabla \zeta + |\nabla \zeta |^2 + |\nabla (a U ) |^2 \, \big] \zeta - aU_t + \frac ar {\partial} _rU \\ & \quad \nonumber + 2 {\nabla} a {\nabla} U , \\ \nonumber c(\zeta ) & = a_{rr}+ a_{zz} - a_t + ( | {\nabla} (U + \zeta + aU)|^2 - | {\nabla} U|^2 )(1 + a) - 2 {\nabla} U\cdot {\nabla} \zeta + \frac 1r ( {\partial} _r a) . \end{align} Since we just need to have an equation of the form \equ{bU} satisfied, we find that $$ \tilde u = U + \Pi_{U^\perp} \varphi + a(\Pi_{U^\perp} \varphi)U $$ solves \equ{hmf1} if and only if $\varphi$ satisfies \begin{align} \nonumber 0= S(U) - {\partial} _t \Pi_{U^\perp} \varphi+ L_U(\Pi_{U^\perp} \varphi) + \frac 1r {\partial} _r (\Pi_{U^\perp} \varphi ) + N_U(\Pi_{U^\perp}\varphi ) + b(r,z,t) U , \end{align} for some scalar function $b$. We use the ansatz \equ{canave} for $\tilde u$, namely \begin{align} \label{upa} \tilde u(r,z,t)\ = U + \Pi_{U^\perp}\varphi \, + \, a(\Pi_{U^\perp}\varphi) U, \quad \varphi := \Pi_{U^\perp} \big( \eta^\delta\, \Phi^0[\omega, \lambda , \xi] + \Psi^*\big ) + \eta_R Q_\omega \phi, \end{align} where we will later decompose $\Psi^* = \tilde Z^* + \psi$ for a suitable $\tilde Z^*$.
Equation $S(\tilde u)= 0$ then becomes \begin{align} \label{eqsys1} 0 & = \lambda^{-2} \eta Q_\omega [- \lambda^2 \phi_t + \ L_ W [\phi ] + \lambda^2 Q_{-\omega} \tilde L_U [\Psi^*] ] \\ \nonumber & \quad + \eta Q_\omega( \lambda^{-1}\dot\lambda y\cdot {\nabla} _y \phi + \lambda^{-1} \dot\xi \cdot {\nabla} _y \phi - \dot\omega J \phi ) \\ \nonumber & \quad + \eta^{\delta} \tilde L_U [ \Phi^0 ] + \eta^{\delta} \Pi_{U ^\perp} [ - {\partial} _t \Phi^0 + (\partial_r^2+\partial_z^2) \Phi^0 + S(U) ]+ {\mathcal E} ^{out,0} \\ \nonumber & \quad - {\partial} _t \Psi^* +\Delta \Psi^* + (1-\eta) \tilde L_U [\Psi^*] + Q_\omega[((\partial_r^2+\partial_z^2) \eta) \phi + 2 {\nabla} \eta {\nabla} \phi - \eta_t \phi] \\ \nonumber & \quad + \frac{1}{r} \partial_r \left( \Pi_{U^\perp} \big( \eta^\delta\, \Phi^0[\omega, \lambda , \xi] + \Psi^*\big ) + \eta_R Q_\omega \phi \right) \\ \nonumber & \quad + N_U( \eta Q_\omega \phi + \Pi_{U^\perp}( \Phi^0 +\Psi^*) ) + ((\Psi^*+ \Phi^0)\cdot U)U_t + b U , \end{align} where \begin{align*} {\mathcal E} ^{out,0} &= \tilde L_U[\eta^\delta \Phi^0] + \Pi_{U^\perp}[ (-\partial_t + \partial_r^2 + \partial_z^2)(\eta^\delta \Phi^0) ] \\ & \quad - \eta^{\delta} \tilde L_U [ \Phi^0 ] - \eta^{\delta} \Pi_{U ^\perp} [ - {\partial} _t \Phi^0 + (\partial_r^2+\partial_z^2) \Phi^0 ] + (1-\eta^\delta) S(U). \end{align*} We note that from the definition \eqref{UUU} and the fact that $U_{\lambda(t),\xi(t),\omega(t)}$ satisfies the harmonic map equation \eqref{hm0}, we have \begin{align*} S(U) &= -U_t + \frac{1}{r} \partial_r U+ {\mathcal E} ^{out,1} , \quad | {\mathcal E} ^{out,1}| + |\nabla {\mathcal E} ^{out,1}| \leq C \lambda. \end{align*} Invoking formulas \eqref{ZZ} to compute $U_t$ we get \begin{align*} U_t = \dot\lambda {\partial} _\lambda U_{\lambda, \xi , \omega} + \dot\omega {\partial} _\omega U_{\lambda, \xi , \omega} + {\partial} _{\xi } U_{\lambda, \xi , \omega}\cdot \dot \xi = {\mathcal E} _{0} + {\mathcal E} _{1} , \end{align*} where, setting $ y = \frac{(r,z)-\xi }{\lambda}= \rho e^{i\theta} $, we have \begin{align*} {\mathcal E} _{0} (r,z,t) & = - Q_\omega [ \frac{\dot \lambda}{\lambda} \rho w_\rho(\rho)\, E_1(y) \, + \, {\dot \omega } \rho w_\rho(\rho)\, E_2(y)\, ] \\ {\mathcal E} _{1} (r,z,t) & = -\frac{\dot \xi_{1}}{\lambda} \, w_\rho(\rho)\, Q_\omega [\ \cos\theta \, E_1(y) + \sin\theta\, E_2(y) ]\, \\ & \quad - \frac{\dot \xi_{2}}{\lambda}\,w_\rho(\rho) \, Q_\omega[ \sin\theta \, E_1(y) - \cos \theta\, E_2(y) \, ]. \end{align*} The choice \equ{defPhi0} of $\Phi^0$ is made so that it cancels ${\mathcal E} _0$ at main order. The remaining terms behave better: ${\mathcal E} _1$ has faster space decay in $\rho $, and the other terms in $S(U)$ are smaller.
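The kernel $k$ in \equ{defPhi0} is built on an exact mode-1 solution of the planar heat equation: the function $g(s,t) = \frac{1-e^{-s^2/(4t)}}{s} = \frac s2\, k(s,t)$ satisfies $g_t = g_{ss} + \frac{g_s}{s} - \frac{g}{s^2}$, which is why, away from the core where $z(s)\approx s$, the Duhamel-type formula \equ{defPhi0} nearly cancels the main error. A minimal symbolic check of this fact (ours, purely illustrative):
\begin{verbatim}
import sympy as sp

s, t = sp.symbols('s t', positive=True)
g = (1 - sp.exp(-s**2/(4*t))) / s    # = (s/2) * k(s,t), k as in (defPhi0)

# mode-1 radial heat operator: g_t - (g_ss + g_s/s - g/s^2)
residual = sp.diff(g, t) - (sp.diff(g, s, 2) + sp.diff(g, s)/s - g/s**2)
print(sp.simplify(residual))   # expected output: 0
\end{verbatim}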
We note that \begin{align*} {\mathcal E} _0(r,z,t) \approx \tilde {\mathcal E} _0 (r,z,t) := - \frac {2 s } {s^2+\lambda^2}\left [\begin{matrix} \dot p(t)e^{i\theta} \\ 0 \end{matrix}\right ] , \end{align*} and a direct computation yields $$ -\Phi^0_t + (\partial_r^2+\partial_z^2) \Phi^0 - \tilde {\mathcal E} _0 = \tilde {{\mathcal R}}_0 +\tilde {{\mathcal R}}_1 , \quad \tilde {{\mathcal R}}_0 = \left ( \begin{matrix} {{\mathcal R}}_0 \\ 0 \end{matrix} \right ),\quad \tilde {{\mathcal R}}_1 = \left ( \begin{matrix} {{\mathcal R}}_1 \\ 0 \end{matrix} \right ) $$ where \begin{align*} {{\mathcal R}}_0 &:= - se^{i\theta} \frac {\lambda^2}{z^4} \int_{-T}^t \dot p(\tau) ( z{k_z} - z^2 k_{zz}) (z(s),t-\tau) \, d\tau \\ {{\mathcal R}}_1 & := - e^{ i\theta} {\rm Re}\,( e^{-i\theta} \dot \xi(t)) \int_{-T}^t \dot p(\tau) \, k(z(s),t-\tau) \, d\tau \\ &\qquad + \frac s{z^2} e^{i\theta} \, (\lambda\dot\lambda(t) - s\,{\rm Re}\,( e^{-i\theta} \dot\xi(t)) ) \int_{-T}^t \dot p(\tau) \ {zk_z}(z(s),t-\tau)\, d\tau. \end{align*} We observe that ${{\mathcal R}}_1$ is actually a term of smaller order. Using formulas \equ{Ltilde2}, \equ{uii} and the facts $$ \frac {\lambda^2 s }{z^4} = \frac 1{4\lambda} \rho w_\rho^2, \quad \frac s {z^2} (1-\cos w) = \frac 1{2\lambda} \rho w_\rho^2 , $$ we derive an expression for the quantity \begin{align*} & \tilde L_U [ \Phi^0 ] + \Pi_{U ^\perp} [- {\partial} _t \Phi^0 +(\partial_r^2+\partial_z^2) \Phi^0 + S(U) ] \\ & = \tilde L_U[\Phi^0] -{\mathcal E} _1 + \Pi_{U^\perp} [\tilde {\mathcal E} _0] - {\mathcal E} _0 + \Pi_{U^\perp} [\tilde {{\mathcal R}}_0] + \Pi_{U^\perp} [\tilde {{\mathcal R}}_1] \\ &= {\mathcal K}_{0}[p,\xi] + {\mathcal K}_1[p,\xi] +\Pi_{U^\perp} [\tilde {{\mathcal R}}_1] + \Pi_{U^\perp} \Bigl[\frac{1}{r} {\partial} _r U + {\mathcal E} ^{out,1} \Bigr] \end{align*} where \begin{align*} {\mathcal K}_{0}[p,\xi] = {\mathcal K}_{01}[p,\xi] + {\mathcal K}_{02}[p,\xi] \end{align*} with \begin{align} \nonumber {\mathcal K}_{01}[p,\xi] &:= - \frac {2}{\lambda} \rho w_\rho^2 \int_{-T} ^t \left [ {\rm Re } \,( \dot p(\tau) e^{-i\omega(t)} ) Q_\omega E_1+ {\rm Im } \,( \dot p(\tau) e^{-i\omega(t)} ) Q_\omega E_2 \right ] \\ \label{K01} & \qquad \qquad \qquad \qquad \cdot k(z,t-\tau) \, d\tau \\ \nonumber {\mathcal K}_{02}[p,\xi] & := \frac 1{\lambda} \rho w_\rho^2 \left [ {\dot\lambda} - \int_{-T} ^t {\rm Re } \,( \dot p(\tau) e^{-i\omega(t)} ) s k_z(z,t-\tau) z_s \, d\tau\, \right] Q_\omega E_1 \\ \nonumber &\quad - \frac{1}{4\lambda} \rho w_\rho^2 \cos w \left [ \int_{-T}^t {\rm Re}\, ( \dot p(\tau)e^{-i\omega(t) } ) \, ( z{k_z} - z^2 k_{zz}) (z,t-\tau)\, d\tau\, \right ] Q_\omega E_1 \\ \label{K02} &\quad - \frac{1}{4\lambda} \rho w_\rho^2 \left [ \int_{-T}^t {\rm Im }\, ( \dot p(\tau)e^{-i\omega(t) } ) \, ( z{k_z} - z^2 k_{zz}) (z,t-\tau)\, d\tau\, \right ] Q_\omega E_2 , \\ \label{K1} {\mathcal K}_{1}[p,\xi] & := \frac 1\lambda w_\rho \, \big [ \Re \big ( (\dot \xi_1 - i \dot \xi_2) e^{i\theta } \big ) Q_\omega E_1 + \Im \big( (\dot \xi_1 - i \dot \xi_2) e^{i\theta } \big ) Q_\omega E_2 \big ].
\end{align} We insert this decomposition in equation \equ{eqsys1} and see that we will have a solution to the equation if the pair $(\phi,\Psi^*)$ solves the {\em inner-outer gluing system} \begin{align} \label{inner1} \left\{ \begin{aligned} \lambda^2 \phi_t & = L_ W [\phi ] + \lambda^2 Q_{-\omega} \left[ \tilde L_U [\Psi^* ] + {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi] \right] + \lambda^2 \chi_{D_{2R}} \frac{1}{r}Q_{-\omega} \partial_r U \quad \text{in } D_{2R} \\ \phi\cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi(\cdot, 0) & = 0=\phi(\cdot, T), \end{aligned} \right. \end{align} \smallskip \begin{align} \label{outer1} \partial_t \Psi^* &= (\partial_r^2+\partial_z^2) \Psi^* + g[p,\xi, \Psi^*,\phi]{\quad\hbox{in } } {\mathcal D} \times (0,T) , \end{align} where $\chi_A$ is the characteristic function of a set $A$, \begin{align} \label{GG} g[p,\xi, \Psi^*,\phi] & := (1-\eta) \tilde L_U [\Psi^*] + (\Psi^*\cdot U ) U_t \\ \nonumber & \quad + Q_\omega \bigl( ((\partial_r^2+\partial_z^2) \eta) \phi + 2 {\nabla} \eta {\nabla} \phi - \eta_t \phi \bigr) \\ \nonumber & \quad + \eta Q_\omega\bigl( - \dot\omega J \phi + \lambda^{-1}\dot\lambda y\cdot {\nabla} _y \phi + \lambda^{-1} \dot\xi \cdot {\nabla} _y \phi \bigr) \\ \nonumber & \quad + (1-\eta)[ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi]] + \Pi_{U^\perp}[ \tilde {{\mathcal R}}_1] + ( \Phi^0\cdot U)U_t \\ \nonumber & \quad + \frac{1}{r} \partial_r \left( \Pi_{U^\perp} \big( \eta^\delta\, \Phi^0[\omega, \lambda , \xi] + \Psi^*\big ) + \eta_R Q_\omega \phi \right) + (1-\eta) \frac{1}{r} \partial_r U + \eta^\delta {\mathcal E} ^{out,1} + {\mathcal E} ^{out,0} \\ \nonumber & \quad + N_U( \eta Q_\omega \phi + \Pi_{U^\perp}( \Phi^0 +\Psi^*) ) , \end{align} and we denote $$ D_{\gamma R} = \{(y,t)\in \R^2\times (0,T) \ /\ |y|< \gamma R(t) \}. $$ Indeed, if $(\phi,\Psi^*)$ solves this system, then $\tilde u$ given by \eqref{upa} solves equation \equ{hmf1}. The boundary condition $\tilde u= {\bf e}_3$ on $ ( {\partial} {\mathcal D}\setminus \{r=0\}) \times (0,T) $ amounts to $$ \Pi_{U^\perp} [ \Phi^0+ \Psi^* ] + a(\Pi_{U^\perp} [U+ \Phi^0+ \Psi^* ]) U = ({\bf e}_3 - U) $$ and then it suffices that we take the boundary condition for \equ{outer1}: \begin{align} \label{bcpsi} \Psi^*= {\bf e}_3 - U -\Phi^0 \quad \text{on } ( {\partial} {\mathcal D}\setminus \{r=0\}) \times (0,T) . \end{align} We also impose \begin{align} \label{bcpsi2} \partial _r \Psi^* =0 \quad \text{on } \{r=0\} \cap {\mathcal D} \times (0,T). \end{align} Since we want $\tilde u(r,z,t)$ to be a small perturbation of $U(x,t)$ when we are close to $(r_0,z_0,T)$, it is natural to require that $\Psi^*$ satisfies the final condition \[ \Psi^* (r_0,z_0,T) = 0. \] This constraint amounts to three Lagrange multipliers when we solve the problem, which we choose to put in the initial condition. Then we assume \[ \Psi^*(r,z,0) = Z_0^*(x) + c_1{\mathbf e_1} + c_2{\mathbf e_2} + c_3{\mathbf e_3} , \] where $c_1,c_2,c_3$ are undetermined constants and $Z_0^*(x)$ is a small function for which specific assumptions will later be made. \section{The reduced equations} In this section we will informally discuss the procedure to achieve our purpose, in particular deriving the order of vanishing of the scaling parameter $\lambda(t)$ as $t\to T$. The main term that couples equations \equ{inner1} and \equ{outer1} inside the second equation is the linear expression \[ Q_\omega[((\partial_r^2+\partial_z^2) \eta) \phi + 2 {\nabla} \eta {\nabla} \phi - \eta_t \phi], \] which is supported in $|y|= O(R)$.
This motivates requiring that $\phi$ exhibit some type of space decay in $|y|$: in that way $\Psi^*$ will eventually be smaller, which in turn makes the two equations {\em uncoupled} at main order. Equation \equ{inner1} has the form \begin{align*} \lambda^2 \phi_t & = L_ W [\phi ] + h[p,\xi, \Psi^*] (y,t) {\quad\hbox{in } } D_{2R} \\ \phi\cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi(\cdot, 0) & = 0 {\quad\hbox{in } } B_{2R(0)} , \end{align*} where, for convenience, we assume that $h(y,t)$ is defined for all $y\in \R^2$, extending it outside $D_{2R}$ as \begin{equation} \label{HH2} h[p,\xi, \Psi^*] = \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] + \lambda^2 Q_{-\omega} \left[ \tilde L_U [\Psi^* ] + {\mathcal K}_{1}[p,\xi] + \frac{1}{r} \partial_r U \right] \chi_{D_{2R} } , \end{equation} where ${\mathcal K}_0$ is defined in \eqref{K01}, \eqref{K02} and ${\mathcal K}_1$ in \eqref{K1}. If $\lambda (t)$ has a relatively smooth vanishing as $t\to T$, it seems natural that the term $\lambda^2 \phi_t $ be of smaller order, and then the equation is approximately represented by the elliptic problem \begin{align} \label{linearized-elliptic} L_ W [\phi ] + h[p,\xi, \Psi^*]=0, \quad \phi\cdot W =0 {\quad\hbox{in } } \R^2 . \end{align} Let us consider the decaying functions $Z_{lj}(y)$ defined in formula \eqref{ZZ}, which satisfy $L_ W [Z_{lj}]=0$. If $\phi(y,t)$ is a solution of \equ{linearized-elliptic} with sufficient decay, then necessarily \begin{equation}\label{ww1} \int_{\R^2 } h[p,\xi, \Psi^*](y,t)\cdot Z_{lj} (y)\, dy = 0 \quad \quad\mbox{for all}\quad t\in (0,T) , \end{equation} for $l=0,1$, $j=1,2$. These relations amount to an integro-differential system of equations for $p(t)$, $\xi(t)$, which, as a matter of fact, {\em determine} the correct values of the parameters so that the solution $(\phi,\Psi^*)$ with appropriate asymptotics exists. \medskip We derive next useful expressions for relations \equ{ww1}. Let us first define \begin{align} \label{defB0j} \mathcal B_{0j} [p] (t) &:= \frac{\lambda}{2\pi} \int_{\R^2} Q_{-\omega} [ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi] ] \cdot Z_{0j} (y)\, dy , \\ \nonumber \tilde {\mathcal B}_{0j}[p,\xi] & := \frac{\lambda}{2\pi} \int_{B_{2R}} Q_{-\omega} ( \frac{1 }{r} \partial_r U )\cdot Z_{0j} (y)\, dy . \end{align} Using \eqref{K01}, \eqref{K02}, the following expressions for $\mathcal B_{01}$, $\mathcal B_{02}$ are readily obtained: \begin{align*} \mathcal B_{01} [p](t) &= \int_{-T} ^t {\rm Re } \,(\dot p(\tau) e^{-i\omega(t)} )\, \Gamma_1 \left ( \frac {\lambda(t)^2}{t-\tau} \right ) \,\frac{ d\tau}{t-\tau}\, -2 \dot\lambda (t) \\ \mathcal B_{02}[p](t) & = \int_{-T} ^t {\rm Im } \,(\dot p(\tau) e^{-i\omega(t)} )\, \Gamma_2 \left ( \frac {\lambda(t)^2}{t-\tau} \right ) \,\frac{ d \tau}{t-\tau}\, \end{align*} where $\Gamma_j(\tau)$, $j=1,2$, are the smooth functions defined as follows: \begin{align*} \Gamma_1 (\tau) & = - \int_0^{\infty} \rho^3 w^3_\rho \left [ K ( \zeta ) + 2 \zeta K_\zeta (\zeta ) \frac {\rho^2} { 1+ \rho^2} -4\cos(w) \zeta^2 K_{\zeta\zeta} (\zeta) \right ]_{\zeta = \tau(1+\rho^2)} \, d\rho \\ \Gamma_2 (\tau) & = - \int_0^{\infty} \rho^3 w^3_\rho \left [K(\zeta) - \zeta^2 K_{\zeta\zeta}(\zeta) \right ]_{\zeta = \tau(1+\rho^2)} \, d\rho\, \end{align*} where \[ K(\zeta) = 2\frac {1- e^{-\frac{\zeta}4}} {\zeta} , \] and we have used that $\int_0^{\infty} \rho^3 w_\rho^3 d\rho=-2$.
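These normalizations can be checked symbolically. In the following minimal {\tt sympy} sketch (ours, purely illustrative) we also record the limit $K(0^+)=\frac 12$, which combined with $\int_0^\infty \rho^3 w_\rho^3\, d\rho = -2$ gives $\Gamma_l(0)=1$, consistent with the estimates stated next:
\begin{verbatim}
import sympy as sp

rho, zeta = sp.symbols('rho zeta', positive=True)
w_rho = -2/(1 + rho**2)       # from w(rho) = pi - 2*arctan(rho)

print(sp.integrate(rho**3 * w_rho**3, (rho, 0, sp.oo)))  # expected: -2
print(sp.integrate(rho * w_rho**2, (rho, 0, sp.oo)))     # expected: 2 (used later for B_1)

K = 2*(1 - sp.exp(-zeta/4))/zeta
print(sp.limit(K, zeta, 0))                              # expected: 1/2
\end{verbatim}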
Using these expressions we find that \begin{align} \nonumber | \Gamma_l (\tau)- 1| & \le C \tau(1+ |\log\tau|) \quad \hbox{ for }\tau<1 , \\ \nonumber |\Gamma_l (\tau)| & \le \frac C\tau\qquad \qquad\hbox{ for }\tau> 1,\ l=1,2. \end{align} Let us define \begin{align} \label{defB0-new} \mathcal B_0[p ] := \frac{1}{2}e^{i \omega(t) } \left( \mathcal B_{01}[p ] + i\mathcal B_{02}[p ] \right) ,\quad \tilde {\mathcal B}_0[p ] := \frac{1}{2}e^{i \omega(t) } \left( \tilde {\mathcal B}_{01}[p ] + i\tilde {\mathcal B}_{02}[p ] \right) \end{align} and \begin{align} \nonumber a_{0j}[p,\xi, \Psi^*] &:= - \frac{ \lambda}{2\pi} \int_{B_{2R}} Q_{-\omega} \tilde L_U [\Psi^* ] \cdot Z_{0j} (y)\, dy, \quad j=1,2, \\ \label{defA0} a_{0}[p,\xi, \Psi^*] & := \frac{1}{2} e^{i \omega(t)} \left( a_{01}[p,\xi, \Psi^*] + i a_{02}[p,\xi, \Psi^*] \right). \end{align} Similarly, we let \begin{align*} \mathcal B_{1j} [p,\xi ] (t) & := \frac{\lambda}{2\pi} \int_{\R^2} Q_{-\omega} \Bigl[ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi] +\chi_{D_{2R}} \frac{1}{r}\partial_r U \Bigr] \cdot Z_{1j} (y)\, dy, \quad j=1,2, \\ \mathcal B_{1} [p,\xi ] (t) & := \mathcal B_{11}[p,\xi](t) + i \mathcal B_{12}[p,\xi](t) . \end{align*} Finally, we set \begin{align*} a_{1j} [p,\xi, \Psi^* ] &:= \frac{\lambda}{2\pi} \int_{B_{2R}} Q_{-\omega} \tilde L_U [\Psi^* ] \cdot Z_{1j}(y) \,dy, \quad j=1,2, \\ a_1[p,\xi, \Psi^* ] & := - e^{i \omega(t) } ( a_{11}[p,\xi, \Psi^* ] + i a_{12} [p,\xi, \Psi^* ] ) . \end{align*} We get that the four conditions \equ{ww1} reduce to the system of two complex equations \begin{align} \label{eqB0} \mathcal B_0[p ]& = a_0[p,\xi,\Psi^* ] - \tilde{\mathcal B}_0[p,\xi ] ,\\ \label{eqB1} \mathcal B_1[\xi ] & = a_1[p,\xi,\Psi^* ]. \end{align} At this point we will make some preliminary considerations on this system that will allow us to find a first guess of the parameters $p(t)$ and $\xi(t)$. First, we observe that \begin{align*} \mathcal B_0[p ] = \int_{-T} ^{t-\lambda^2} \frac{\dot p(\tau)}{t-\tau}d\tau\, + O\big( \|\dot p\|_\infty \big) , \quad \tilde{\mathcal B}_0[p,\xi] = O(\lambda^{1-\sigma}), \end{align*} for any $\sigma>0$. To get an approximation for $a_0$, let us write $$ \Psi^* = \left [ \begin{matrix}\psi^* \\ \psi^*_3 \end{matrix} \right ] , \quad \psi^* = \psi^*_1 + i \psi^*_2 . $$
From formula \equ{Ltilde2} we find that \[ \tilde L_U [\Psi^* ](y) = [\tilde L_U]_0 [\Psi^* ] + [\tilde L_U]_1 [\Psi^* ]+ [\tilde L_U]_2 [\Psi^* ] , \] where \begin{align*} \lambda Q_{-\omega} [\tilde L_U]_0 [\Psi^* ]& = \ \ \rho w_\rho^2\, \big [\, \div ( e^{-i\omega} \psi^*)\, E_1 + \mathop{\rm curl} ( e^{-i\omega} \psi^* )\, E_2 \, \big ]\, \\ \lambda Q_{-\omega}[\tilde L_U]_1 [\Psi^* ] & = -\, 2 w_\rho \cos w \, \big [\,( {\partial} _{r} \psi^*_3) \cos \theta + ( {\partial} _{z} \psi^*_3) \sin \theta \, \big ]\, E_1 \\ & \quad - 2 w_\rho \cos w \, \big [\, ( {\partial} _{r} \psi^*_3) \sin \theta - ( {\partial} _{z} \psi^*_3) \cos \theta \, \big ]\, E_2\ , \\ \lambda Q_{-\omega}[\tilde L_U]_2 [\Psi^* ] &= \quad \rho w_\rho^2 \, \big [\, \div (e^{i\omega}\bar \psi^*)\, \cos 2\theta - \mathop{\rm curl} ( e^{i\omega}\bar \psi^*)\, \sin 2\theta \, \big ]\, E_1 \\ & \quad + \rho w_\rho^2 \, \big [\, \div ( e^{i\omega}\bar \psi^*)\, \sin 2\theta + \mathop{\rm curl} ( e^{i\omega}\bar \psi^*)\, \cos 2\theta \, \big ]\,E_2 , \end{align*} and the differential operators in $\Psi^*$ on the right hand sides are evaluated at $(r,z,t)$ with $(r,z)= \xi(t)+ \lambda(t) y$, $y = \rho e^{i\theta}$, while $E_l= E_l(y)$, $l=1,2$. From the above decomposition, assuming that $\Psi^*$ is of class $C^1$ in the space variable, we find that \[ a_{0}[p,\xi, \Psi^*] = [ \div \psi^*+ i\mathop{\rm curl} \psi^*](\xi,t ) + o(1) , \] where $o(1)\to 0$ as $t\to T$. Similarly, we have that \begin{align*} a_1[p,\xi,\Psi^*] & = 2 ( {\partial} _{r} \psi^*_3 + i {\partial} _{z} \psi^*_3) (\xi, t) \int_{0}^\infty \cos w \,w_\rho^2 \rho \, d\rho = o(1) \quad\mbox{as}\quad t\to T, \end{align*} since $\int_0^\infty w_\rho^2 \cos w\, \rho \, d\rho = 0 $. Using \equ{K1}, \eqref{ZZ} and the fact that $\int_0^{\infty} \rho w_\rho^2 d\rho\, =2$ we get \[ \mathcal B_{1} [\xi ](t)\, = \, 2[\, \dot \xi_1(t) + i\dot \xi_2(t)\,] + \frac{2}{\xi_1(t)} + O(\lambda^{\sigma}), \] for some $\sigma>0$ (actually $\sigma = 2\beta$, where $R \approx \lambda_*^{-\beta}$). \medskip Let us discuss informally how to handle \equ{eqB0}--\equ{eqB1}. To this end we rewrite the system in the simplified form \begin{align} \nonumber \int_{-T} ^{t-\lambda^2} \frac{\dot p(\tau)}{t-\tau}d\tau & = [ \div \psi^*+ i\mathop{\rm curl} \psi^*](\xi(t),t ) + o(1) + O(\|\dot p\|_\infty) \\ \dot \xi_1(t) & = -\frac{1}{\xi_1(t)} +o(1)\quad\mbox{as}\quad t\to T, \label{equB1} \\ \dot \xi_2(t) & = o(1)\quad\mbox{as}\quad t\to T. \label{equB1-2} \end{align} We assume for the moment that the function $\Psi^*(x,t)$ is fixed, sufficiently regular, and we regard $T$ as a parameter that will always be taken smaller if necessary. Recall that we want $\xi(T)=(r_0,z_0) $, where $(r_0,z_0) \in {\mathcal D}$, $r_0\not=0$ is given, and $\lambda(T)=0$. Equation \equ{equB1-2} suggests taking $\xi_2(t) \equiv z_0$ as a first approximation, while \eqref{equB1} suggests that $\xi_1$ is given at main order by \[ \xi_1(t) = \sqrt{r_0^2 + 2 (T-t)}. \] Neglecting lower order terms, we arrive at the ``clean'' equation for $p(t)= \lambda (t) e^{i\omega(t)}$, \begin{align} \label{kuj} \int_{-T} ^{t-\lambda(t)^2} \frac{ \dot p(s)}{t-s}ds = a_0^* , \end{align} where $a_0^* = \div \psi^*(q,0 ) + i\mathop{\rm curl} \psi^*(q,0 ) $. At this point we make the following assumption: \begin{align} \label{div+icurl-not-0} \div \psi^*(q,0 ) + i \mathop{\rm curl} \psi^*(q,0 ) \not=0.
\end{align} We claim that a good approximate solution of \equ{kuj} as $t\to T$ is given by \[ \dot p(t) = -\frac {\kappa} {\log^2(T-t)} \] for a suitable $\kappa \in {\mathbb C}$. In fact, substituting, we have \begin{align} \int_{-T}^{t-\lambda(t)^2} \frac {\dot p(s)}{t-s}\, ds & = \int_{-T}^{t- (T-t) } \frac{ \dot p (s)}{t-s} \, ds + \, \dot p (t)\left [ \log (T-t) - 2\log (\lambda(t)) \right ] + \int_{t-(T-t) } ^{ t- \lambda(t)^2}\frac{\dot p(s)-\dot p(t)}{t-s} ds \nonumber \\ \label{formal} & \approx \int_{-T}^{t } \frac{ \dot p (s)}{T-s}\, ds - \dot p (t) \log (T-t) \end{align} as $t\to T$. We see that, by the explicit form of $p$, \begin{align*} \frac{d}{dt} \left[ \int_{-T}^{t } \frac{ \dot p (s)}{T-s}\, ds - \dot p (t) \log (T-t) \right]=0 , \end{align*} and hence the right hand side of \eqref{formal} is constant. As a conclusion, equation \equ{kuj} is approximately satisfied if $\kappa$ is chosen so that $$ \int_{-T}^{T} \frac{ \dot p (s)}{T-s}\, ds =a_0^* . $$ Imposing $p(T)=0$ then gives the approximate expression \[ p(t) = -a_0^*\, \frac { |\log T|\, (T-t)}{\log^2(T-t)}\, (1+ o(1)) \quad\mbox{as}\quad t\to T, \] consistent with the expression anticipated in Section 2. \section{Solving the inner-outer gluing system} Our purpose is to determine, for a given $(r_0,z_0)\in {\mathcal D}$ and a sufficiently small $T>0$, a solution $(\phi,\Psi^*)$ of system \equ{inner1}-\equ{outer1} with boundary conditions of the form \equ{bcpsi}, \eqref{bcpsi2} such that $\tilde u(r,z,t)$ given by \equ{upa} blows up with $U(x,t)$ as its main order profile. This will only be possible for adequate choices of the parameter functions $\xi(t)$ and $p(t)= \lambda(t) e^{i\omega(t)}$. These functions will eventually be found by fixed point arguments, but a priori we need to make some assumptions regarding their behavior. First, we define \begin{align*} \lambda_*(t) = \frac{|\log T| ( T-t)}{ |\log(T-t)|^2}. \end{align*} We will assume that for some positive numbers $a_1,a_2,\sigma$ independent of $T$ the following hold: \begin{align} a_1 |\dot \lambda_* (t)| \le |\dot p (t)| & \le a_2 |\dot \lambda_* (t)| \quad\mbox{for all}\quad t\in (0,T), \nonumber \\ |\dot \xi(t) | & \le \lambda_*(t)^{\sigma}\quad\ \quad\mbox{for all}\quad t\in (0,T). \nonumber \end{align} We also take \begin{align*} R(t) =\lambda_*(t)^{-\beta}, \end{align*} where $\beta \in ( 0,\frac 12)$. \medskip To solve the outer equation \eqref{outer1} we will decompose $\Psi^*$ in the form $ \Psi^* = \tilde Z^* + \psi $, where we let $Z^*:\Omega\times (0,\infty) \to \R^3$ satisfy \eqref{heatZ*} with $Z_0^*(x)$ a function satisfying certain conditions to be described below. Since we would like that $\tilde u(r,z,t)$ given by \equ{upa} has a blow-up behavior given at main order by that of $U(x,t)$, we will require \[ \Psi^*(r_0,z_0,T)=0 . \] This constraint has three scalar components; therefore we need three ``Lagrange multipliers'', which we include in the initial datum. \subsection{Assumptions on \texorpdfstring{$Z_0^*$}{Z0star}} Let us recall that $Z^*$ solves the heat equation \eqref{heatZ*} with initial condition $Z_0^*$. We assume first that $Z_0^*$ is axially symmetric, so that $Z_0^*(x) = \tilde Z_0^*(r,z)$, and use the notation \eqref{notationZ0star} and \eqref{div-curl-z0star}. A first condition that we require, consistent with \eqref{div+icurl-not-0}, is $ \div \tilde z^*_0(q) + i \mathop{\rm curl} \tilde z^*_0(q) \not=0$. In addition we require that $\tilde Z_0^*(r_0,z_0)\approx 0 $ in a non-degenerate way.
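Parenthetically, two elementary facts used in the previous section can be verified directly: the main order trajectory $\xi_1(t) = \sqrt{r_0^2+2(T-t)}$ solves $\dot \xi_1 = -1/\xi_1$ with $\xi_1(T)=r_0$, and for $\dot p(t) = -\kappa \log^{-2}(T-t)$ the right hand side of \eqref{formal} is indeed constant in $t$. A minimal {\tt sympy} sketch (ours, purely illustrative):
\begin{verbatim}
import sympy as sp

t, T, r0, kappa = sp.symbols('t T r0 kappa', positive=True)

# xi_1(t) = sqrt(r0^2 + 2(T-t)) solves xi_1' = -1/xi_1 with xi_1(T) = r0
xi1 = sp.sqrt(r0**2 + 2*(T - t))
print(sp.simplify(sp.diff(xi1, t) + 1/xi1))   # expected: 0

# d/dt [ int_{-T}^t pdot(s)/(T-s) ds - pdot(t) log(T-t) ]
#     = 2 pdot(t)/(T-t) - pdot'(t) log(T-t)
pdot = -kappa / sp.log(T - t)**2
print(sp.simplify(2*pdot/(T - t) - sp.diff(pdot, t)*sp.log(T - t)))  # expected: 0
\end{verbatim}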
We also want $Z^*$ to be sufficiently small, but independently of $T$, so that the heat equation \eqref{heatZ*} is a good approximation of the linearized harmonic map flow far from the singularity. More precisely, we assume that for some $\alpha_0>0$ small and some $\alpha_1>0$, all independent of $T$, we have \begin{align} \label{condZ0} \left\{ \begin{aligned} & \|Z_0^*\|_{C^3(\overline \Omega)} \le \alpha_0, \\ & | \tilde Z_0^{*}(r_0,z_0)| \le 5T, \\ & |(D \tilde z_0^{*}(r_0,z_0))^{-1}| \le \alpha_1 , \\ & \alpha_0 \leq | \div \tilde z_0^{*}(r_0,z_0) + i \mathop{\rm curl} \tilde z_0^{*}(r_0,z_0) | . \end{aligned} \right. \end{align} (The notation here is analogous to \eqref{notationZ0star} and \eqref{div-curl-z0star}.) \medskip \subsection{Linear theory for the inner problem} The inner problem \equ{inner1} is written as \begin{align*} \left\{ \begin{aligned} \lambda^2 {\partial} _t \phi & = L_ W [\phi ] + h[p,\xi, \Psi^*] {\quad\hbox{in } } D_{2R} \\ \phi \cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi (\cdot, 0) & = 0 {\quad\hbox{in } } B_{2R(0)} \end{aligned} \right. \end{align*} where $h[p,\xi, \Psi^*] $ is given by \equ{HH2}. To find a good solution to this problem we would like that $ h[p,\xi, \Psi^*] $ satisfies the orthogonality conditions \eqref{ww1}. We split the right hand side $h[p,\xi, \Psi^*] $ and the inner solution into components with different roles regarding these orthogonality conditions. Recall that \begin{equation*} h[p,\xi, \Psi^*] = \lambda^2 Q_{-\omega} \tilde L_U [\Psi^* ] \chi_{D_{2R} } + \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] + \lambda^2 Q_{-\omega} {\mathcal K}_{1}[p,\xi] \chi_{D_{2R} } , \end{equation*} and recall the decomposition of $\tilde L_U$ given in \eqref{Ltilde2}: \begin{align} \nonumber \tilde L_U [\Psi^* ] = \tilde L_U [\Psi^* ]_0 + \tilde L_U [ \Psi^* ]_1 + \tilde L_U [ \Psi^* ]_2\ , \end{align} with $\tilde L_U[\Phi]_j$ defined in \eqref{Ltilde-j}. Using the notation \eqref{notation-Phi}, we then define \begin{align} \nonumber \tilde L_U [\Phi ]_1^{(0)} & = -\, 2\lambda^{-1} w_\rho \cos w \, \big [\,( {\partial} _{r} \varphi_3)(\xi(t),t) \cos \theta + ( {\partial} _{z} \varphi_3)(\xi(t),t) \sin \theta \, \big ]\,Q_{\omega} E_1 \\ \nonumber & \quad - 2\lambda^{-1} w_\rho \cos w \, \big [\, ( {\partial} _{r} \varphi_3)(\xi(t),t) \sin \theta - ( {\partial} _{z} \varphi_3)(\xi(t),t) \cos \theta \, \big ]\, Q_{\omega}E_2\ . \end{align} We then decompose the function $h$ defined in \eqref{HH2} as \[ h = h_1+ h_2 + h_3 , \] where \begin{align} \label{def-h1} h_1[p,\xi, \Psi^*] &= \lambda^2 Q_{-\omega} ( \tilde L_U [\Psi^* ]_0 +\tilde L_U [\Psi^* ]_2 ) \chi_{D_{2R} } + \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] , \\ \nonumber h_2[p,\xi, \Psi^*] &= \lambda^2 Q_{-\omega} \tilde L_U [\Psi^* ]_1^{(0)} \chi_{D_{2R} } + \lambda^2 Q_{-\omega} {\mathcal K}_{1}[p,\xi] \chi_{D_{2R} }, \\ \nonumber h_3[p,\xi, \Psi^*] &= \lambda^2 Q_{-\omega} ( \tilde L_U [\Psi^* ]_1 - \tilde L_U [\Psi^* ]_1 ^{(0)}) \chi_{D_{2R} } . \end{align} Next we decompose $\phi = \phi_1+ \phi_2 + \phi_3+\phi_4$.
The function $\phi_1$ will solve the inner problem with right hand side $ h_1[p,\xi, \Psi^*] $ projected so that it satisfies essentially \eqref{ww1}. The advantage of doing this is that $h_1$ has faster spatial decay, which gives better bounds for the solution. For this we let, for any function $h(y,t)$ defined in $\R^2 \times (0,T)$ with sufficient decay, \begin{align} \label{defCij} c_{lj}[h](t) := \frac 1 { \int_{\R^2} w_\rho^2 |Z_{lj}|^2 } \int_{\R^2} h (y ,t)\cdot Z_{lj}(y)\, dy . \end{align} Note that $h[p,\xi, \Psi^*] $ is defined in $\R^2\times (0,T)$, and for simplicity we will assume that the right hand sides appearing in the different linear equations are always defined in $\R^2\times (0,T)$. We would like that $\phi_1$ solves \begin{align*} \lambda^2 {\partial} _t \phi_1 &= L_ W [\phi_1 ] + h_1[p,\xi, \Psi^*] - \sum_{l=-1}^1 \sum_{j=1}^2 c_{lj}[h_1(p,\xi, \Psi^*)] w_\rho^2 Z_{lj} {\quad\hbox{in } } D_{2R} , \end{align*} but the estimates for $\phi_1$ are better if the projections $c_{0j}[h(p,\xi, \Psi^*)] $ are modified slightly. Here is the precise result that we will use later. We define the norms \begin{align} \label{norm-h} \|h\|_{\nu,a} = \sup_{\R^2 \times (0,T)} \ \frac{ |h(y,t)| }{ \lambda_*^\nu (1+|y|)^{-a}} , \end{align} and \begin{align} \label{norm-phi1} \| \phi \|_{*,\nu,a,\delta} = \sup_{D_{2R}} \frac{| \phi(y,t) | + (1+|y|) |\nabla_y \phi(y,t)|}{ \lambda_*^\nu \max(\frac{R^{\delta(5-a)}}{(1+|y|)^3} , \frac{1}{(1+|y|)^{a-2} })} . \end{align} \begin{prop} \label{prop1.0} Let $a \in (2,3)$, $\delta \in (0,1)$, $\nu>0$. Assume $\| h \|_{\nu,a}<\infty$. Then there exist a solution $\phi = {\mathcal T} _{\lambda,1} [h]$ and scalars $\tilde c_{0j}[h]$, depending linearly on $h$, solving \begin{align} \nonumber \left\{ \begin{aligned} \lambda^2 \partial_t \phi & = L_ W [\phi ] + h - \sum_{ j=1,2} \tilde c_{0j}[h] Z_{0j} \chi_{B_1} - \sum_{ \substack{l=-1,1\\ j=1,2}} c_{lj}[h] Z_{lj} \chi_{B_1} \quad \text{in } D_{2R} \\ \phi\cdot W & = 0 \quad \text{in } D_{2R} \\ \phi(\cdot, 0) & = 0 \quad \text{in } B_{2R(0)} \end{aligned} \right. \end{align} where $c_{lj}$ is defined in \eqref{defCij}, such that \begin{align*} \| \phi \|_{*,\nu,a,\delta} \leq C \|h\|_{\nu,a} \end{align*} and such that \begin{align} \nonumber |c_{0j}[h] - \tilde c_{0j}[h]| \leq C \lambda_*^{\nu} R^{-\frac{1}{2}\delta(a-2)} \| h \|_{\nu,a}. \end{align} \end{prop} \medskip The function $\phi_2$ solves the equation with right hand side $h_2[p,\xi,\Psi^*]$, which is in {\em mode 1}, a notion that we define next. Let $h(y,t)\in \R^3$ be defined in $\R^2 \times (0,T)$ or $D_{2R}$ with $h\cdot W = 0$. We say that $h$ is in mode $k\in \Z$ if $h$ has the form \[ h(y,t)= \Re ( \tilde h_k(|y|,t) e^{ik\theta}) E_1 + \Im ( \tilde h_k(|y|,t) e^{ik\theta}) E_2 , \] for some complex valued function $\tilde h_k(\rho,t)$. Consider \begin{align} \label{1.11-mode1} \left\{ \begin{aligned} \lambda^2 \partial_t \phi & = L_ W [\phi ] + h - \sum_{ j=1,2} c_{1j}[h] w_\rho^2 Z_{1j} \quad \text{in } D_{2R} \\ \phi\cdot W & = 0 \quad \text{in } D_{2R} \\ \phi(\cdot, 0) & = 0 \quad \text{in } B_{2R(0)} \end{aligned} \right. \end{align} \begin{prop} Let $a \in (2,3)$, $\delta \in (0,1)$, $\nu>0$. Assume that $h$ is in mode 1 and $\| h \|_{\nu,a}<\infty$. Then there is a solution $\phi ={\mathcal T} _{\lambda,2} [h]$ of \eqref{1.11-mode1}, which is linear in $h$, such that \begin{align*} \| \phi \|_{\nu,a-2} \leq C \|h\|_{\nu,a} .
\end{align*} \end{prop} In the above statemen the norm $\|\phi\|_{\nu,a-2}$ analogous to the one in \eqref{norm-h}, but the supremum is taken in $D_{2R}$. Another piece of the inner solution, $\phi_3$, will handle $h_3[p,\xi,\Psi^*]$, which does not satisfy orthogonality conditions in mode 0. We will still project it to satisfy the orthogonality condition in mode 1. Let us consider then \eqref{1.11-mode1} without any orthogonality conditions on $h$ in mode 0. We define \begin{align} \label{norm-starstar} \|\phi\|_{**,\nu} = \sup_{D_{2R}} \ \frac{ |\phi(y,t)| + (1+|y|)\left | {\nabla} _y \phi(y,t)\right | } { \lambda_*(t)^{\nu} R(t)^{2} ( 1+|y| )^{-1} } . \end{align} \begin{prop} \label{prop02} Let $1<a<3$ and $\nu>0$. There exists a $C>0$ such that if $\|h\|_{a,\nu} <+\infty$ there is a solution $ \phi = {\mathcal T} _{\lambda,3} [h]$ of \eqref{1.11-mode1}, which is linear in $h$ and satisfies the estimate $$ \|\phi\|_{**,\nu} \ \le\ C \|h\|_{a,\nu} . $$ \end{prop} Note that we allow $a$ to be less than 2 in the previous proposition. Next we have a variant of Proposition~\ref{prop02} when $h$ is in mode -1. \begin{prop} \label{prop03} Let $2<a<3$ and $\nu>0$. There exists a $C>0$ such that for any $h$ in mode -1 with $\|h\|_{a,\nu} <+\infty$, there is a solution $\phi = {\mathcal T} _{\lambda,4} [h]$ of problem \eqref{1.11-mode1}, which is linear in $h$ and satisfies the estimate $$ \|\phi\|_{***,\nu} \leq C \|h\|_{a,\nu} , $$ where \begin{align} \nonumber \|\phi\|_{***,\nu} = \sup_{D_{2R}} \ \frac{ |\phi(y,t)| + (1+|y|)\left | {\nabla} _y \phi(y,t)\right | } { \lambda_*(t)^{\nu} \log(R(t)) } . \end{align} \end{prop} Propositions~\ref{prop1.0}--\ref{prop03} are proved in \cite{ddw}, section~6. \subsection{The equations for \texorpdfstring{$p = \lambda e^{i\omega}$}{}} We need to choose the free parameters $p$, $\xi$ so that $c_{lj}[h(p,\xi, \Psi^*)]=0$ for $l=-1,0,1$, $j=1,2$. This will be easy to do for $l=1$ (mode 1), but mode $l=0$ is more complicated. To handle $c_{0j}$ we note that by definitions \eqref{HH2}, \eqref{defB0j}, \eqref{defA0} \begin{align*} c_{0,j}[h(p,\xi,\Psi^*)] = \frac{2\pi \lambda }{\int_{\R^2} w_\rho^2 |Z_{0j}|^2}\left( \mathcal B_{0j}[p] - a_{0j}[p,\xi,\Psi^*] \right) \end{align*} where $B_0$, $a_0$ are defined in \eqref{defB0-new}, \eqref{defA0} and we recall that $p = \lambda e^{i \omega}$. \noanot{ By the definition of $h[ p,\xi,\Psi^*]$ \eqref{HH2} \[ h[p,\xi, \Psi^*] = \lambda^2 Q_{-\omega}\tilde L_U [\Psi^* ] \chi_{D_{2R} } + \lambda^2 Q_{-\omega} [ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi]] , \] and then \begin{align*} c_{0j} [ h[p,\xi, \Psi^*] ] &= \frac 1 { \int_{\R^2} w_\rho^2 |Z_{lj}|^2 } \int_{\R^2} h[p,\xi, \Psi^*] \cdot Z_{0j}(y)\, dy \\ &= \frac{\lambda^2}{ \int_{\R^2} w_\rho^2 |Z_{lj}|^2 } \int_{B_{2R}} Q_{-\omega}\tilde L_U [\Psi^* ] \cdot Z_{0j}(y)\, dy \\ & \quad + \frac{\lambda^2}{ \int_{\R^2} w_\rho^2 |Z_{lj}|^2 } \int_{\R^2}Q_{-\omega} [ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi]] \cdot Z_{0j}(y)\, dy \\ &=\frac{2\pi \lambda }{\int_{\R^2} w_\rho^2 |Z_{0j}|^2}\left( \mathcal B_{0j}[p] - a_{0j}[p,\xi,\Psi^*] \right). \end{align*} } So to achieve $c_{0j}[h(p,\xi, \Psi^*)]=0$ we should solve \begin{align} \label{eqAbc} \mathcal B_0[p ](t) = a_0[p,\xi,\Psi^* ](t), \quad t\in [0,T], \end{align} adjusting the parameters $\lambda(t)$ and $\omega(t)$. We define the following norms. Let $I$ denote either the interval $[0,T]$ or $[-T,T]$. 
For $\Theta\in (0,1)$, $l\in \R$ and a continuous function $g:I\to {\mathbb C}$ we let \begin{align} \nonumber \|g\|_{\Theta,l} = \sup_{t\in I} \, (T-t)^{-\Theta} |\log(T-t)|^{l} |g(t)| , \end{align} and for $\gamma \in (0,1)$, $m \in (0,\infty) $, and $l \in \R$ we let \begin{align} \nonumber [ g]_{\gamma,m,l} = \sup \, (T-t)^{-m} |\log(T-t)|^{l} \frac{|g(t)-g(s)|}{(t-s)^\gamma} , \end{align} where the supremum is taken over $s \leq t$ in $ I$ such that $t-s \leq \frac{1}{10}(T-t)$. We have then the following result, whose proof is in \cite{ddw}, Section 13. \begin{prop} \label{propIntegralOp} Let $\alpha , \gamma \in (0,\frac{1}{2})$, $l\in \R$, $C_1>1$. There is $\alpha_0>0$ such that if $\Theta \in (0,\alpha_0)$, $m \leq \Theta - \gamma$, and $a:[0,T]\to {\mathbb C}$ is such that \begin{align} \label{hypA00} \left\{ \begin{aligned} & \frac{1}{C_1} \leq | a(T) | \leq C_1 , \\ & T^\Theta |\log T|^{1+\sigma-l} \| a(\cdot) - a(T) \|_{\Theta,l-1} + [a]_{\gamma,m,l-1} \leq C_1 , \end{aligned} \right. \end{align} for some $\sigma>0$, then, for $T>0$ small enough, there are two operators $\mathcal P $ and ${\mathcal R}_0$ so that $p = \mathcal P[a]: [-T,T]\to {\mathbb C}$ satisfies \begin{align} \label{eq-modified0} \mathcal B_0[p](t) = a(t) + {\mathcal R}_0[a](t) , \quad t \in [0,T], \end{align} with \begin{align} \nonumber & |{\mathcal R}_0[a](t) | \\ \nonumber & \leq C \Bigl( T^{\sigma} + T^\Theta \frac{\log |\log T|}{|\log T|} \| a(\cdot) - a(T) \|_{\Theta,l-1} + [a]_{\gamma,m,l-1} \Bigr) \frac{(T-t)^{m+(1+\alpha ) \gamma}}{ |\log(T-t)|^{l}} , \end{align} for some $\sigma>0$. \end{prop} The idea of the proof of Proposition~\ref{propIntegralOp} is to notice that \begin{align*} \mathcal B_0[p ] \approx \int_{-T} ^{t-\lambda_*(t)^2} \frac{\dot p(s)}{t-s}ds \end{align*} and to decompose \begin{align} \nonumber S_\alpha [ g] & := g(t) [ - 2\log \lambda_*(t) + (1+\alpha ) \log (T-t)] + \int_{-T} ^{t-(T-t)^{1+\alpha }} \frac{g(s)}{t-s}ds , \\ \label{defRem} R_{\alpha }[g] & := -\int_{t-(T-t)^{1+\alpha } } ^{t-\lambda_*^2} \frac {g(t) -g(s)}{t-s} ds, \end{align} where $\alpha>0$ is fixed. We solve a modified equation in which the term $R_{\alpha }[\dot p]$ is dropped from \eqref{eq-modified0}, and so the remainder ${\mathcal R}_0$ is essentially $R_{\alpha}[\dot p]$. Another modification to equation \eqref{eqAbc} that we introduce is to replace $a_0[p,\xi,\Psi^*]$ by its main term. To do this we write \begin{align*} a_0[p,\xi,\Psi] =a_0^{(0)}[p,\xi,\Psi] + a_0^{(1)}[p,\xi,\Psi] + a_0^{(2)}[p,\xi,\Psi] \end{align*} where \begin{align} \nonumber a_0^{(l)}[p,\xi,\Psi] = -\frac{\lambda}{4\pi} e^{i\omega} \int_{B_{2R}} \left( Q_{-\omega}\tilde L_U[\Psi]_l \cdot Z_{01} + i Q_{-\omega}\tilde L_U[\Psi]_l \cdot Z_{02} \right)\,dy \end{align} for $l=0,1,2$. We define \begin{align} \nonumber c_0^*[p,\xi,\Psi^*](t) & := \frac{ 4 \pi \lambda}{\int_{\R^2} w_\rho^2 |Z_{01}|^2 } e^{- i\omega} \Bigl( {\mathcal R}_0\left[ a_0^{(0)}[p,\xi,\Psi^*] \right](t) +a_0^{(1)}[p,\xi,\Psi^*](t) \\ \nonumber & \qquad + a_0^{(2)}[p,\xi,\Psi^*](t) \Bigr) - (c_0[ h[ p,\xi,\Psi^* ]]-\tilde c_0[ h_1[ p,\xi,\Psi^* ] ] ) - \tilde{\mathcal B}_0[p,\xi] , \end{align} and \begin{align*} c_{01}^* := \Re(c_{0}^*) , \quad c_{02}^* := \Im(c_{0}^*) , \end{align*} where ${\mathcal R}_0$ is the operator given in Proposition~\ref{propIntegralOp} and $\tilde c_0 = \tilde c_{01} + i \tilde c_{02}$ are the operators defined in Proposition~\ref{prop1.0}.
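For later reference we record the elementary identity behind the decomposition \eqref{defRem}: splitting the integral at $t-(T-t)^{1+\alpha}$ and writing $g(s) = g(t) - (g(t)-g(s))$ on the remaining piece gives, for any continuous $g$,
\begin{align*}
\int_{-T}^{t-\lambda_*^2} \frac{g(s)}{t-s}\,ds
&= \int_{-T}^{t-(T-t)^{1+\alpha}} \frac{g(s)}{t-s}\,ds
+ g(t)\log\frac{(T-t)^{1+\alpha}}{\lambda_*^2}
- \int_{t-(T-t)^{1+\alpha}}^{t-\lambda_*^2} \frac{g(t)-g(s)}{t-s}\,ds \\
&= S_\alpha[g] + R_\alpha[g] .
\end{align*}
Thus dropping $R_\alpha[\dot p]$ replaces the nonlocal operator by $S_\alpha[\dot p]$, whose leading part is the local quantity $\dot p(t)\,[-2\log\lambda_*(t) + (1+\alpha)\log(T-t)]$.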
\subsection{The system of equations} We transform the system \eqref{inner1}-\eqref{outer1} into the problem of finding functions $\psi(r,z,t)$, $\phi_1(y,t),\ldots,\phi_4(y,t)$, parameters $p(t) = \lambda(t)e^{i\omega(t)} $, $\xi(t)$ and constants $c_1,c_2,c_3$ such that the following system is satisfied: \begin{align} \label{eq-psi} \left\{ \begin{aligned} \psi_t &= (\partial_r^2+\partial_z^2) \psi + g(p,\xi, Z^*+ \psi,\phi_1+ \phi_2+\phi_3+\phi_4) {\quad\hbox{in } } {\mathcal D} \times (0,T) \\ \psi &= ({\bf e}_3 - U) -\Phi^0 \qquad\qquad\ \ {\quad\hbox{on } } (\partial{\mathcal D}\setminus\{r=0\})\times (0,T) \\ \partial_r \psi &=0 \qquad\qquad\ \ {\quad\hbox{on } } ( \{r=0\} \cap {\mathcal D}) \times (0,T) \\ \psi(\cdot ,0) &= (c_1 \, \mathbf{e_1} + c_2 \, \mathbf{e_2} + c_3\, \mathbf{e_3})\chi + \ (1-\chi) ({\bf e}_3 - U -\Phi^0) {\quad\hbox{in } } {\mathcal D} \\ \psi(r_0,z_0,T) & = - Z^* (r_0,z_0,T) \end{aligned} \right. \end{align} \begin{align} \left\{ \begin{aligned} \lambda^2 \partial_t \phi_1 &= L_W [\phi_1] + h_1[p,\xi, \Psi^*] - \sum_{ j=1,2} \tilde c_{0j}[ h_1[p,\xi, \Psi^*] ] w_\rho^2 Z_{0j} \\ \label{eqphi1} & \qquad - \sum_{ \substack{l=-1,1\\ j=1,2}} c_{lj}[ h_1[p,\xi, \Psi^*] ] w_\rho^2 Z_{lj} {\quad\hbox{in } } D_{2R} \\ \phi_1\cdot W &= 0 {\quad\hbox{in } } D_{2R} \\ \phi_1(\cdot, 0) &=0 {\quad\hbox{in } } B_{2R(0)} \end{aligned} \right. \end{align} \begin{align} \left\{ \begin{aligned} \label{eqphi2} \lambda^2 \partial_t \phi_2 &= L_W [\phi _2] + h_2[p,\xi, \Psi^*] - \sum_{ j=1,2} c_{1j}[ h_2[p,\xi, \Psi^*] ] w_\rho^2 Z_{1j} {\quad\hbox{in } } D_{2R} \\ \phi_2\cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi_2(\cdot, 0) & =0 {\quad\hbox{in } } B_{2R(0)} \end{aligned} \right. \end{align} \begin{align} \label{eqphi3} \left\{ \begin{aligned} \lambda^2 \partial_t \phi_3 &= L_W [\phi_3] + h_3 - \sum_{ j=1,2} c_{1j}[ h_3[p,\xi, \Psi^*] ] w_\rho^2 Z_{1j} \\ & \quad + \sum_{j=1,2} c_{0j}^*[p,\xi,\Psi^*] w_\rho^2 Z_{0j} {\quad\hbox{in } } D_{2R} \\ \phi_3\cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi_3(\cdot, 0) & =0 {\quad\hbox{in } } B_{2R(0)} \end{aligned} \right. \end{align} \begin{align} \label{eqphi4} \left\{ \begin{aligned} \lambda^2 \partial_t \phi_4 &= L_W [\phi_4 ] + \sum_{j=1,2} c_{-1,j}[ h_1[p,\xi, \Psi^*] ] w_\rho^2 Z_{-1j} {\quad\hbox{in } } D_{2R} \\ \phi_4\cdot W & = 0 {\quad\hbox{in } } D_{2R} \\ \phi_4(\cdot, t) & = 0 {\quad\hbox{on } } {\partial} B_{2R(t)} \\ \phi_4(\cdot, 0) & =0 {\quad\hbox{in } } B_{2R(0)} \end{aligned} \right. \end{align} \begin{align} \label{1.3} c_{0j}[h[p,\xi, \Psi^*]](t) - \tilde c_{0j}[p,\xi,\Psi^*] (t) &= 0 \quad\mbox{for all}\quad t\in (0,T), \quad j=1,2, \\ \label{1.4} c_{1j}[h[p,\xi, \Psi^*]](t) &= 0 \quad\mbox{for all}\quad t\in (0,T), \quad j=1,2. \end{align} In \eqref{eq-psi} $\chi$ is a smooth cut-off function with compact support in ${\mathcal D}$ which is identically 1 on a fixed neighborhood of $(r_0,z_0)$ independent of $T$, and the function $ g(p,\xi, \Psi^*,\phi)$ is given by \equ{GG}. \medskip We see that if $(\phi_1,\phi_2,\phi_3,\phi_4,\psi,p,\xi)$ satisfies the system \equ{eq-psi}--\equ{1.4} then the functions $$ \phi= \phi_1 + \phi_2+\phi_3+\phi_4, \quad \Psi^* = Z^*+ \psi $$ solve the outer-inner gluing system \equ{inner1}--\equ{outer1}. \subsection{The fixed point formulation} We consider the inner-outer system, including the equations for the parameters $p$ and $\xi$, \eqref{eq-psi}--\eqref{1.4}, as a fixed point problem for certain operators that we describe below.
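Schematically, and anticipating the notation introduced below, we will represent $p = p_{0,\kappa} + p_1$ and $\xi = \xi^0 + \xi^1$, and look for a fixed point of an operator acting on the tuple of unknowns,
\begin{align*}
(\psi,\Phi,\kappa, p_1,\xi^1) = \mathcal A (\psi,\Phi,\kappa, p_1,\xi^1) , \qquad \Phi = (\phi_1,\phi_2,\phi_3,\phi_4),
\end{align*}
where each component of $\mathcal A$ solves one of the equations \eqref{eq-psi}--\eqref{1.4}, with the remaining unknowns regarded as given.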
First we define the functional spaces we will use for the functions $\psi,\phi_1,\ldots,\phi_4,p,\xi$. For the outer problem \eqref{eq-psi} we define, given $\Theta>0$, $\gamma \in (0,\frac{1}{2})$, the norm \begin{align} \nonumber \| \psi\|_{\sharp, \Theta,\gamma} &:= \lambda_*(0)^{-\Theta} \frac{1}{|\log T| \lambda_*(0) R(0) }\|\psi\|_{L^\infty(\Omega\times (0,T))} + \lambda_*(0)^{-\Theta} \|\nabla \psi\|_{L^\infty(\Omega\times (0,T))} \\ \nonumber &\quad + \sup_{\Omega\times (0,T)} \lambda_*(t)^{-\Theta-1} R(t)^{-1} \frac{1}{|\log(T-t)|} |\psi(x,t)-\psi(x,T)| \\ \nonumber &\quad + \sup_{\Omega\times (0,T)} \, \lambda_*(t)^{-\Theta} |\nabla \psi(x,t)-\nabla \psi(x,T) | \\ \label{normPsi} & \quad + \sup \lambda_*(t)^{-\Theta} (\lambda_*(t) R(t))^{2\gamma} \frac {| {\nabla} \psi(x,t) - {\nabla} \psi(x',t') |}{ ( |x-x'|^2 + |t-t'|)^{\gamma }} , \end{align} where the last supremum is taken in the region \[ x,x'\in \Omega,\quad t,t'\in (0,T), \quad |x-x'|\le 2 \lambda_*R(t), \quad |t-t'| < \frac 14 (T-t) . \] Then we define \begin{align} \nonumber F = \{ \psi \in L^\infty({\mathcal D}\times (0,T) ) & : \text{$\psi$ is Lipschitz continuous with respect to $(r,z)$ in ${\mathcal D}\times (0,T)$} \\ \label{space-F} & \quad \text{ and } \|\psi\|_{\sharp,\Theta,\gamma}<\infty\} \end{align} with the norm $\| \ \|_{\sharp,\Theta,\gamma}$. For the functions $\phi_i$ in the inner equations \eqref{eqphi1}--\eqref{eqphi4} we consider the spaces \begin{align*} E_1 &= \{ \phi_1 \in L^\infty(D_{2R}) : \nabla_y \phi_1 \in L^\infty(D_{2R}), \ \|\phi_1\|_{*,\nu_1,a_1,\delta} <\infty \} \\ E_2 &= \{ \phi_2 \in L^\infty(D_{2R}) : \nabla_y \phi_2 \in L^\infty(D_{2R}), \ \|\phi_2\|_{\nu_2,a_2-2}<\infty \} \\ E_3 &= \{ \phi_3 \in L^\infty(D_{2R}) : \nabla_y \phi_3 \in L^\infty(D_{2R}), \ \|\phi_3\|_{**,\nu_3} < \infty \} \\ E_4 &= \{ \phi_4 \in L^\infty(D_{2R}) : \nabla_y \phi_4 \in L^\infty(D_{2R}), \ \|\phi_4\|_{***,\nu_4} <\infty \} \end{align*} and use the notation \begin{align*} E &= E_1\times E_2 \times E_3 \times E_4, \\ \Phi &= ( \phi_1,\phi_2,\phi_3,\phi_4) \in E , \\ \|\Phi\|_E &= \|\phi_1\|_{*,\nu_1,a_1,\delta} +\|\phi_2\|_{\nu_2,a_2-2} +\|\phi_3\|_{**,\nu_3} +\|\phi_4\|_{***,\nu_4} . \end{align*} \medskip To introduce the space for the parameter $p$, we recall the integral operator $\mathcal B_0$ defined in \eqref{defB0-new}, which has the approximate form \begin{align*} \mathcal B_0[p ] = \int_{-T} ^{t-\lambda^2} \frac{\dot p(s)}{t-s}ds\, + O\big( \|\dot p\|_\infty \big). \end{align*} Proposition~\ref{propIntegralOp} gives an approximate inverse $\mathcal P$ of the operator $\mathcal B_0$: given $a$ satisfying \eqref{hypA00}, the function $ p := \mathcal P \left[ a \right] $ satisfies the equation \[ \mathcal B_0[ p ] = a +{\mathcal R}_0[ a] \quad \text{in }[0,T], \] for a small remainder ${\mathcal R}_0[ a]$. The proof of that proposition in \cite{ddw} gives a decomposition \begin{align} \label{decompP} \mathcal P[a] = p_{0,\kappa} + \mathcal P_1[a], \end{align} where $p_{0,\kappa}$ is defined by \begin{align} \nonumber p_{0,\kappa}(t) = \kappa |\log T| \int_t^T \frac{1}{|\log(T-s)|^2} \,ds , \quad t\leq T , \end{align} $\kappa = \kappa[a] \in {\mathbb C}$, and the function $ p_1 = \mathcal P_1[a]$ satisfies the estimate \[ \| p_1 \|_{*,3-\sigma} \leq C |\log T|^{1-\sigma} \log^2(|\log T|) , \] where $\| \ \|_{*,3-\sigma}$ is defined by \begin{align} \nonumber \|g\|_{*,k} = \sup_{t\in [-T,T]} |\log(T-t)|^{k} |\dot g(t)|, \end{align} and $\sigma \in (0,1)$.
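A short computation clarifies the relative sizes of the two pieces in \eqref{decompP}. By L'H\^opital's rule,
\begin{align*}
p_{0,\kappa}(t) = \kappa\, |\log T|\, \frac{T-t}{|\log(T-t)|^{2}}\,(1+o(1)) \quad \text{as } t \to T,
\qquad
|\dot p_{0,\kappa}(t)| = |\kappa|\, \frac{|\log T|}{|\log(T-t)|^{2}} ,
\end{align*}
so that $\|p_{0,\kappa}\|_{*,2} = |\kappa|\,|\log T|$, while the estimate for $p_1 = \mathcal P_1[a]$ above says that $\dot p_1$ is smaller than $\dot p_{0,\kappa}$ by almost a full power of $|\log(T-t)|$.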
This leads us to define the space \[ X_1 := \{ p_1 \in C([-T,T];{\mathbb C}) \cap C^1([-T,T];{\mathbb C}) \ | \ p_1(T) = 0 , \ \|p_1\|_{*,3-\sigma}<\infty \} , \] with the norm $ \|p_1\|_{*,3-\sigma} $, and to represent $p$ by the pair $(\kappa,p_1)$ in the form $p = p_{0,\kappa} + p_1$. Finally, for the parameter $\xi$ we denote by $\xi^0$ the explicit function \[ \xi^0(t) = ( \sqrt{r_0^2 + 2(T-t)} , z_0) , \quad t\in [0,T], \] and represent $\xi = \xi^0 + \xi^1$ with $\xi^1$ in the space \begin{align*} X_2 &= \{ \xi \in C^{1}([0,T];\R^2) \ : \ \dot \xi(T) = 0\} \end{align*} with the norm \[ \| \xi \|_{X_2} = \|\xi\|_{L^\infty(0,T)} + \sup_{t\in (0,T)} \lambda_*(t)^{-\sigma} |\dot \xi(t)| , \] where $\sigma \in (0,1)$ is fixed. \medskip Let $\mathcal B$ denote the closed subset of $F \times E \times {\mathbb C} \times X_1 \times X_2$ defined by $ (\psi,\Phi,\kappa, p_1,\xi^1) \in \mathcal B$ if: \begin{align} \label{def-B} \left\{ \begin{aligned} \|\psi\|_F + \|\Phi\|_E & \leq 1 \\ |\kappa-\kappa_0| & \leq \frac{1}{|\log T|^{1/2}} \\ \| p_1 \|_{*,3-\sigma } &\leq C_0 |\log T|^{1-\sigma} \log^2(|\log T|) \\ \|\xi^1\|_{X_2} & \leq 1, \end{aligned} \right. \end{align} where $\kappa_0 = \mathop{\rm div} \tilde z_0^*(r_0,z_0) + i \mathop{\rm curl} \tilde z_0^*(r_0,z_0) $ and $C_0$ is a large fixed constant. \medskip Next we define an operator $ \mathcal A : \mathcal B \to F \times E \times {\mathbb C} \times X_1 \times X_2$ so that a fixed point of it will give a solution to the full system \eqref{eq-psi}--\eqref{1.4}. This operator is defined by \begin{align*} \mathcal A = (\mathcal A_0, \mathcal F, \mathcal K, \tilde{\mathcal P}_1,\mathcal X_1), \end{align*} where \begin{align*} \mathcal A_0 &:\mathcal B \to F , \quad \mathcal F :\mathcal B \to E , \\ \mathcal K &:\mathcal B \to {\mathbb C} , \quad \tilde{\mathcal P}_1 :\mathcal B \to X_1 , \quad \mathcal X_1 :\mathcal B \to X_2 , \end{align*} and where $\mathcal A_0$ will handle \eqref{eq-psi}, $\mathcal F$ is related to \eqref{eqphi1}--\eqref{eqphi4}, and $\mathcal K $, $ \tilde{\mathcal P}_1 $, $ \mathcal X_1 $ deal with the equations for $p$ and $\xi$, \eqref{1.3}, \eqref{1.4}.
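We note in passing an elementary consistency check: $\xi^0$ solves explicitly the leading-order equation for the translation parameter, since a direct differentiation gives
\begin{align*}
\dot \xi^0_1(t) = \frac{d}{dt}\sqrt{r_0^2+2(T-t)} = -\frac{1}{\xi^0_1(t)} , \qquad \dot\xi^0_2 \equiv 0 ,
\end{align*}
and $-1/\xi^0_1(t)$ is precisely the term subtracted in the definition of $b_{11}$ below.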
\medskip To define $\mathcal A_0$, we need first a linear result about the outer problem \eqref{eq-psi}. Thus we consider the inhomogeneous linear heat equation \begin{align} \label{heat-eq0} \left\{ \begin{aligned} \psi_t & = (\partial_r^2+\partial_z^2) \psi + \frac{1}{r} \partial_r \psi + f(r,z,t) {\quad\hbox{in } } {\mathcal D} \times (0,T) \\ \psi & = 0 {\quad\hbox{on } } ( {\partial} {\mathcal D} \setminus \{ r=0\} ) \times (0,T) \\ \partial_r \psi &= 0 {\quad\hbox{on } } ( {\mathcal D} \cap \{ r=0\} ) \times (0,T) \\ \psi(r_0,z_0,T) & = 0 \\ \ \psi(r,z,0) & = (c_1 \, \mathbf{e_1} + c_2 \, \mathbf{e_2} + c_3\, \mathbf{e_3}) \eta_1 {\quad\hbox{in } } {\mathcal D}, \end{aligned} \right. \end{align} for suitable constants $c_1,c_2,c_3$, where $\mathbf{e_1}$, $\mathbf{e_2}$, $\mathbf{e_3}$ are defined in \eqref{e123}, $(r_0,z_0) \in {\mathcal D}$, $r_0>0$, and $T>0$ is sufficiently small. The fixed smooth cut-off $\eta_1$ has compact support in ${\mathcal D}$ and is such that $\eta_1\equiv 1$ in a small neighborhood of $(r_0,z_0)$. The right hand side is assumed to satisfy $\| f\|_{**}<\infty$ where \begin{align} \nonumber \|f\|_{**} : = \sup_{ {\mathcal D} \times (0,T)} \Big ( 1 + \sum_{i=1}^3 \varrho_i(r,z,t)\, \Big )^{-1} {|f(r,z,t)|} , \end{align} and the weights are defined by \begin{align*} \varrho_1 & := \lambda_*^{\Theta} (\lambda_* R)^{-1} \chi_{ \{ s \leq 3R\lambda_* \} } ,\quad \varrho_2 := T^{-\sigma_0} \frac{\lambda_*^{1-\sigma_0}}{s^2} \chi_{ \{ s \geq R\lambda_* \} } , \quad \varrho_3 := T^{-\sigma_0} , \end{align*} where $s= |(r,z)-(r_0,z_0)|$, $\Theta>0$ and $\sigma_0>0$ is small. (The factor $T^{-\sigma_0}$ in front of $\varrho_2$ and $\varrho_3$ is a simple way to make parts of the error small in the outer problem.) These weights naturally adapt to the form of the outer error $g$ in \eqref{GG}. The next lemma gives a solution to \eqref{heat-eq0} as a linear operator of $f$. \begin{lemma} \label{lemma3} Assume $ \beta \in ( 0,\frac{1}{2}) $, $ \Theta \in (0,\beta)$. For $T>0$ small there is a linear operator that maps a function $f:{\mathcal D} \times (0,T) \to \R^3$ with $\|f\|_{**}<\infty$ into $\psi$, $c_1,c_2,c_3$ so that \eqref{heat-eq0} is satisfied. Moreover the following estimate holds \begin{align} \label{estPsi0} \| \psi\|_{\sharp, \Theta ,\gamma} + \frac{\lambda_*(0)^{-\Theta} ( \lambda_*(0) R(0) )^{-1} }{ |\log T| }( |c_1| +|c_2| +|c_3| ) \leq C \|f\|_{**} , \end{align} where $\gamma\in(0,\frac{1}{2})$. \end{lemma} \begin{proof} We use Lemmas~8.1--8.3 in \cite{ddw}, which, put together, can be summarized as follows. \medskip {\em Assume $ \beta \in ( 0,\frac{1}{2}) $, $ \Theta \in (0,\beta)$. If $f:\R^2 \times (0,T) \to \R^3$ satisfies $\|f\|_{**}<\infty$, the solution $\psi$ to \begin{align} \label{heat-eq-r2} \partial_t \psi = ( \partial_r^2 + \partial_z^2)\psi + f(r,z,t) \quad \text{in }\R^2 \times (0,T) \end{align} given by Duhamel's formula satisfies \begin{align} \nonumber \| \psi\|_{\sharp, \Theta ,\gamma} \leq C \|f\|_{**} , \end{align} where $\gamma\in(0,\frac{1}{2})$. } \medskip To obtain Lemma~\ref{lemma3} from the above statement we first show that the solution to \begin{align} \label{heat-eq0b} \left\{ \begin{aligned} \psi_t & = (\partial_r^2+\partial_z^2) \psi + \frac{1}{r} \partial_r \psi + f(r,z,t) {\quad\hbox{in } } {\mathcal D} \times (0,T) \\ \psi & = 0 {\quad\hbox{on } } ( {\partial} {\mathcal D} \setminus \{ r=0\} ) \times (0,T) \\ \partial_r \psi &= 0 {\quad\hbox{on } } ( {\mathcal D} \cap \{ r=0\} ) \times (0,T) \\ \ \psi(r,z,0) & = 0 {\quad\hbox{in } } {\mathcal D}, \end{aligned} \right. \end{align} satisfies \begin{align} \label{est-psi-ini0} \| \psi\|_{\sharp, \Theta ,\gamma} \leq C \|f\|_{**} . \end{align} Indeed, let $\psi[f]$ be the solution of \eqref{heat-eq-r2} given by Duhamel's formula. We then rewrite the solution $\psi$ of \eqref{heat-eq0b} as $\psi=\eta_1 \psi[f] + \tilde \psi_1$, where $\eta_1$ is as before. Then $\tilde \psi_1$ satisfies \begin{align} \label{heat-eq0c} \left\{ \begin{aligned} \partial_t\tilde\psi_1 & = (\partial_r^2+\partial_z^2) \tilde\psi _1 + \frac{1}{r} \partial_r \tilde\psi_1 + \frac{1}{r} \partial_r (\eta_1 \psi[f] ) + (\partial_r^2+\partial_z^2) \eta_1 \psi[f] + 2\nabla \eta_1\nabla \psi[f] {\quad\hbox{in } } {\mathcal D} \times (0,T) \\ \tilde\psi_1 & = -\psi[f] {\quad\hbox{on } } ( {\partial} {\mathcal D} \setminus \{ r=0\} ) \times (0,T) \\ \partial_r \tilde\psi_1 &= 0 {\quad\hbox{on } } ( {\mathcal D} \cap \{ r=0\} ) \times (0,T) \\ \tilde \psi_1(r,z,0) & = 0 {\quad\hbox{in } } {\mathcal D}, \end{aligned} \right.
\end{align} The function $\tilde \psi_1$ can be regarded as $\tilde \psi_1(r,z,t) = \psi_1(x,t)$ where $\psi_1$ solves a non-homogeneous problem in the three dimensional axially symmetric domain $\Omega$. The estimate $\| \psi[f] \|_{\sharp,\Theta,\gamma} \leq C \|f\|_{**}$ gives sufficient control of the terms involving $\psi[f] $ in \eqref{heat-eq0c}, so that for $\tilde \psi_1$ we also obtain $\| \tilde \psi_1 \|_{\sharp,\Theta,\gamma} \leq C \|f\|_{**}$. This proves \eqref{est-psi-ini0}. Finally, using \eqref{est-psi-ini0} one can show that for the problem \eqref{heat-eq0} there are choices of the $c_i$ so that $\psi(r_0,z_0,T)=0$, and these constants satisfy \eqref{estPsi0}. \end{proof} Let $\psi = \mathcal U(f)$ be the operator constructed in Lemma~\ref{lemma3} and set \begin{align} \nonumber \tilde g[p,\xi, \Psi^*,\phi] & := g[p,\xi, \Psi^*,\phi] - \frac{1}{r} \partial_r \psi \end{align} with $g$ defined in \eqref{GG}. We then define \begin{align} \label{def-A0} \mathcal A_0(\psi,\Phi,\kappa, p_1,\xi^1) = \mathcal U ( \tilde g[p_{0,\kappa}+p_1,\xi^0+\xi^1,Z^* + \psi,\phi] ) \end{align} where $\phi = \phi_1+\phi_2+\phi_3+\phi_4$ and $\Phi= (\phi_1,\ldots,\phi_4)$. \medskip Next we define \begin{align} \label{def-F} \mathcal F (\psi,\Phi,\kappa, p_1,\xi^1) = ( \mathcal F_1(\psi,\Phi,\kappa, p_1,\xi^1) , \mathcal F_2(\psi,\Phi,\kappa, p_1,\xi^1) , \mathcal F_3(\psi,\Phi,\kappa, p_1,\xi^1) , \mathcal F_4(\psi,\Phi,\kappa, p_1,\xi^1) ) \end{align} where \begin{align*} \mathcal F_1(\psi,\Phi,\kappa, p_1,\xi^1) &= \mathcal T_{\lambda,1} ( h_1[p,\xi, \Psi^* ] ) \\ \mathcal F_2(\psi,\Phi,\kappa, p_1,\xi^1)&= \mathcal T_{\lambda,2} ( h_2[p,\xi, \Psi^* ] ) \\ \mathcal F_3(\psi,\Phi,\kappa, p_1,\xi^1) &= {\mathcal T} _{\lambda,3 } \Bigl( h_3[p,\xi, \Psi^* ]+ \sum_{j=1}^2 c_{0j}^*[p,\xi, \Psi^* ] w_\rho^2 Z_{0j} \Bigr) \\ \mathcal F_4(\psi,\Phi,\kappa, p_1,\xi^1)&= {\mathcal T} _{\lambda,4 } \Bigl( \sum_{j=1}^2 c_{-1,j}[h_1[p,\xi, \Psi^* ]] w_\rho^2 Z_{-1,j} \Bigr) , \end{align*} where $p = p_{0,\kappa}+p_1$, $\xi= \xi^0+\xi^1$, $\Psi^* = Z^* + \psi$. \medskip To define the operators $\mathcal K$ and $\tilde{\mathcal P}_1$, we recall that Proposition~\ref{propIntegralOp} gives the decomposition \eqref{decompP}, where $\kappa= \kappa[a]$ and $p_1 = \mathcal P_1[a]$. We define \begin{align} \label{def-K} \mathcal K(\psi,\Phi,\kappa, p_1,\xi^1) & = \kappa \left[ a_0^{(0)}[p,\xi,\Psi^*]\right] \\ \label{def-tilde-P1} \tilde{\mathcal P_1} (\psi,\Phi,\kappa, p_1,\xi^1) &=\mathcal P_1 \left[ a_0^{(0)}[p,\xi,\Psi^*]\right] , \end{align} where, again, $p = p_{0,\kappa}+p_1$, $\xi= \xi^0+\xi^1$, $\Psi^* = Z^* + \psi$. \medskip Finally, we introduce the operator $\mathcal X_1$. By \eqref{defCij}, \eqref{1.4} is equivalent to \begin{align*} \int_{\R^2} h[p,\xi, \Psi^*] \cdot Z_{1j}(y)\, dy =0 , \quad t\in (0,T), \ j=1,2, \end{align*} and recalling \eqref{HH2}, this is equivalent to \begin{align} \nonumber \dot \xi_j = - \frac{1}{4\pi}(1+(2R)^{-2}) \int_{B_{2R}} Q_{-\omega} \Bigl( \tilde L_U[\Psi^*] + \frac{1}{r}\partial_r U \Bigr) \cdot Z_{1j}, \quad j=1,2.
\end{align} Then we define \begin{align} \label{def-X1} \mathcal X_1 (\psi,\Phi,\kappa, p_1,\xi^1) = (r_0,z_0) + \int_t^T b(\psi,\Phi,\kappa, p_1,\xi^1) (s) \,ds \end{align} with \begin{align*} b_{11}(\psi,\Phi,\kappa, p_1,\xi^1)(t)&= \frac{1}{4\pi}(1+(2R)^{-2}) \int_{B_{2R}} Q_{-\omega} \Bigl( \tilde L_U[\Psi^*] + \frac{1}{r}\partial_r U \Bigr) \cdot Z_{11} - \frac{1}{\xi^0_1(t)} \\ b_{12}(\psi,\Phi,\kappa, p_1,\xi^1)(t)&= \frac{1}{4\pi}(1+(2R)^{-2}) \int_{B_{2R}} Q_{-\omega} \Bigl( \tilde L_U[\Psi^*] + \frac{1}{r}\partial_r U \Bigr) \cdot Z_{12} . \end{align*} \subsection{Choice of constants} \label{constants} We state here the constraints we impose on the parameters involved in the different norms. The values assumed will be sufficient for the inner-outer gluing scheme to work. \medskip \begin{itemize} \item $\beta \in (0,\frac 12)$ is such that $ R(t) = \lambda_*(t)^{-\beta}$. \item $\alpha \in (0,\frac{1}{2})$ appears in Proposition~\ref{propIntegralOp}. It is the parameter used to define the remainder $R_\alpha$ in \eqref{defRem}. \item We use the norm $\| \ \|_{*,\nu_1,a_1,\delta}$ \eqref{norm-phi1} to measure the solution $\phi_1$ of \eqref{eqphi1}. Here we will ask that $ \nu_1 \in (0,1)$, $ a_1 \in (2,3)$, and $\delta > 0$ be small and fixed. \item We use the norm $\| \ \|_{\nu_2,a_2-2}$ \eqref{norm-h} to measure the solution $\phi_2$ of \eqref{eqphi2}, with $\nu_2 \in (0,1) $, $a_2 \in (2,3)$. \item We use the norm $\| \ \|_{**,\nu_3}$ \eqref{norm-starstar} for the solution $\phi_3$ of \eqref{eqphi3}, with $\nu_3>0$. \item We use the norm $\| \ \|_{***,\nu_4}$ for the solution $\phi_4$ of \eqref{eqphi4}, with $\nu_4>0$. \item We use the norm $\| \ \|_{\sharp,\Theta,\gamma}$ with parameters $\Theta$, $\gamma$ satisfying some restrictions given below. \item We have parameters $m$, $l$ in Proposition~\ref{propIntegralOp}. We work with $m$ given by \begin{align*} m= \Theta -2\gamma(1-\beta) \end{align*} and with $l$ satisfying $l<1+2m$. \end{itemize} We will assume that \[ \alpha-1+2\beta>0 , \] which ensures that $m+(1+\alpha)\gamma > \Theta$. To get the estimates for the outer problem \eqref{eq-psi}, we need $\beta \in ( 0,\frac{1}{2} )$ , $ \Theta \in (0,\beta) $ and \begin{align*} \Theta < \min \Bigl( \beta , \frac{1}{2} - \beta , \nu_1-1+\beta(a_1-1) , \nu_2-1+\beta(a_2-1) , \nu_3-1,\nu_4-1+\beta \Bigr) \end{align*} \begin{align*} \Theta < \min \Bigl( \nu_1 - \delta \beta (5-a_1) -\beta , \nu_2 - \beta , \nu_3-3\beta , \nu_4 - \beta \Bigr) . \end{align*} Also, to control the nonlinear terms in \eqref{eq-psi} we need $\delta>0$ in $\| \ \|_{*,\nu_1,a_1,\delta}$ to be small. To find $\Theta$ in the range above we need \begin{align*} \nonumber \nu_1 & > \max\bigl(1-\beta (a_1-1),\delta \beta (5-a_1) - \beta \bigr) , & \nu_2 & > \max\bigl(1-\beta (a_2-1), \beta \bigr) , \\ \nu_3 & > \max ( 1 , 3\beta) , & \nu_4 & > \max(1-\beta,\beta) . \end{align*} To solve the inner system given by equations \eqref{eqphi1}, \eqref{eqphi2}, \eqref{eqphi3}, and \eqref{eqphi4} we will need \begin{align} \nonumber \nu_1 & < 1 , & \nonumber \nu_2 & < 1-\beta(a_2-2) , \\ \nonumber \nu_3 &< \min\bigl( 1+\Theta+\sigma_1 , 1+\Theta + 2\gamma \beta , \nu_1 + \frac{1}{2} \delta \beta ( a_1-2 ) \bigr) , & \nonumber \nu_4 & < 1, \end{align} where $\sigma_1 \in (0,\gamma(\alpha-1+2\beta))$.
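To see that all of the above constraints are compatible, one may for instance fix the constants in the following order (we record one admissible, by no means optimal, scheme, which the reader may verify directly):
\begin{align*}
& \beta \in \Bigl(\tfrac14 , \tfrac13\Bigr), \quad \alpha \in \Bigl(1-2\beta , \tfrac12\Bigr), \quad a_1, a_2 \in (2,3), \quad \delta>0 \ \text{small} , \\
& \nu_1 \in \Bigl( 1- \tfrac{\delta\beta(a_1-2)}{2} ,\, 1\Bigr), \quad \nu_3 \in \Bigl( \max(1,3\beta) ,\, \nu_1 + \tfrac{\delta\beta(a_1-2)}{2} \Bigr), \\
& \nu_2 \in \bigl( \max(1-\beta(a_2-1),\beta) ,\, 1-\beta(a_2-2) \bigr), \quad \nu_4 \in \bigl( \max(1-\beta,\beta) ,\, 1 \bigr),
\end{align*}
then $\Theta \in (0,\nu_3-1)$ close enough to $\nu_3-1$, then $\gamma$ small (so that $m = \Theta-2\gamma(1-\beta)>0$), and finally $\sigma_1 \in (\nu_3-1-\Theta,\, \gamma(\alpha-1+2\beta))$.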
\subsection {The proof of Theorem~\ref{teo1}} Let us consider the operator \begin{equation} \begin{aligned} \mathcal A = (\mathcal A_0,\mathcal F,\mathcal K,\tilde{\mathcal P}_1,\mathcal X_1) \end{aligned}\label{operador1}\end{equation} where $\mathcal A_0$, $\mathcal F$, $\mathcal K$, $\tilde{\mathcal P}_1$, $\mathcal X_1$ are given in \eqref{def-A0}, \eqref{def-F}, \eqref{def-K}, \eqref{def-tilde-P1}, \eqref{def-X1}. The proof of Theorem~\ref{teo1} consists in showing that $\mathcal A:\mathcal B \subset F \times E \times {\mathbb C} \times X_1 \times X_2 \to F \times E \times {\mathbb C} \times X_1 \times X_2$ has a fixed point, where $\mathcal B$ is defined by \eqref{def-B}. We do this using the Schauder fixed point theorem. The estimates needed to show that $\mathcal A$ maps $\mathcal B$ into itself, and those needed for compactness, are obtained in a similar way. They are based on the following estimates for the operators $\mathcal A_0$, $\mathcal F$, $\mathcal K$, $\tilde{\mathcal P}_1$, $\mathcal X_1$. We claim that if $(\psi,\Phi,\kappa,p_1,\xi^1) \in \mathcal B$ then \begin{align} \label{estimates} \left\{ \begin{aligned} \| \mathcal A_0(\psi,\Phi,\kappa,p_1,\xi^1) \|_{\sharp,\Theta,\gamma} & \leq C T^\sigma \\ \| \mathcal F (\psi,\Phi,\kappa,p_1,\xi^1) \|_E & \leq C T^\sigma \\ |\mathcal K (\psi,\Phi,\kappa,p_1,\xi^1) - \kappa_0| & \leq \frac{C}{|\log T|} \\ \| \tilde{\mathcal P}_1(\psi,\Phi,\kappa,p_1,\xi^1) \|_{*,3-\sigma} & \leq C |\log T|^{1-\sigma} \log^2( |\log T| ) \\ \| \mathcal X_1(\psi,\Phi,\kappa,p_1,\xi^1) \|_{X_2} & \leq C T^\sigma. \end{aligned} \right. \end{align} We give below the proof of some of the estimates stated above. We first show that for $(\psi,\Phi,\kappa,p_1,\xi^1)\in \mathcal B$, \begin{align} \nonumber \| \mathcal A_0(\psi,\Phi,\kappa,p_1,\xi^1) \|_{\sharp,\Theta,\gamma} \leq C T^\sigma . \end{align} For the proof let us write $ \tilde g = g_1 + g_2 + g_3 + g_4 + g_5 $ where \begin{align*} g_1 & = Q_\omega \bigl( ((\partial_r^2+\partial_z^2) \eta) \phi + 2 {\nabla} \eta {\nabla} \phi - \eta_t \phi \bigr) \\ & \quad + \eta Q_\omega\bigl( - \dot\omega J \phi + \lambda^{-1}\dot\lambda y\cdot {\nabla} _y \phi + \lambda^{-1} \dot\xi \cdot {\nabla} _y \phi \bigr) \\ g_2 & = (1-\eta) \tilde L_U [\Psi^*] + (\Psi^*\cdot U ) U_t \\ g_3 & = (1-\eta)[ {\mathcal K}_{0}[p,\xi]+ {\mathcal K}_{1}[p,\xi]] + \Pi_{U^\perp}[ \tilde {{\mathcal R}}_1] + ( \Phi^0\cdot U)U_t , \\ g_4 & = N_U( \eta Q_\omega \phi + \Pi_{U^\perp}( \Phi^0 + \Psi^*) ) \\ g_5&=\frac{1}{r} \partial_r \left( \Pi_{U^\perp} \big( \eta^\delta\, \Phi^0[\omega, \lambda , \xi] + Z^*\big ) + \eta_R Q_\omega \phi \right) + (1-\eta) \frac{1}{r} \partial_r U + \eta^\delta {\mathcal E} ^{out,1} + {\mathcal E} ^{out,0} . \end{align*} We claim that \begin{align} \nonumber \|g_1\|_{**} \leq C T^\sigma \|\Phi\|_E , \end{align} for some $\sigma>0$. Indeed, we have \begin{align*} | (\partial_r^2+\partial_z^2) \eta \phi_1 | &\leq C \lambda_*^{\nu_1-2} R^{-a_1} \chi_{[ |x-q| \leq 3 \lambda_* R]} \|\phi_1\|_{*,\nu_1,a_1,\delta} \\ | (\partial_r^2+\partial_z^2) \eta \phi_2 | &\leq C \lambda_*^{\nu_2-2} R^{-a_2} \chi_{[ |x-q| \leq 3 \lambda_* R]} \|\phi_2\|_{\nu_2,a_2-2} \\ | (\partial_r^2+\partial_z^2) \eta \phi_3 | &\leq C \lambda_*^{\nu_3-2} R^{-1} \chi_{[ |x-q| \leq 3 \lambda_* R]} \|\phi_3\|_{**,\nu_3} \\ | (\partial_r^2+\partial_z^2) \eta \phi_4 | &\leq C \lambda_*^{\nu_4-2} R^{-2} \log R \chi_{[ |x-q| \leq 3 \lambda_* R]} \|\phi_4\|_{***,\nu_4} . \end{align*}
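For instance, the first of these bounds is obtained as follows. Assuming, as in the construction of the ansatz, that the cut-off $\eta$ varies on the scale $\lambda_* R$ around the concentration point, we have $(\partial_r^2+\partial_z^2)\eta = O((\lambda_* R)^{-2})$ with support in the region $|y| \sim R$, where \eqref{norm-phi1} gives (for $\delta$ small) $|\phi_1| \leq C \lambda_*^{\nu_1} R^{2-a_1} \|\phi_1\|_{*,\nu_1,a_1,\delta}$; multiplying the two bounds,
\begin{align*}
| ((\partial_r^2+\partial_z^2) \eta)\, \phi_1 | \leq \frac{C}{(\lambda_* R)^{2}} \, \lambda_*^{\nu_1} R^{2-a_1} \|\phi_1\|_{*,\nu_1,a_1,\delta} = C \lambda_*^{\nu_1-2} R^{-a_1} \|\phi_1\|_{*,\nu_1,a_1,\delta} .
\end{align*}
The remaining three bounds follow in the same way from the norms \eqref{norm-h}, \eqref{norm-starstar} and $\| \ \|_{***,\nu_4}$.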
If \begin{align*} \Theta < \min( \nu_1-1+\beta(a_1-1) , \nu_2-1+\beta(a_2-1) , \nu_3-1,\nu_4-1+\beta ) , \end{align*} we find that for any $j=1,2,3,4$: \begin{align*} | \phi_j (\partial_r^2+\partial_z^2) \eta |\leq C T^\sigma \lambda_*^{\Theta-1+\beta} \chi_{[ |x-q| \leq 3 \lambda_* R]} \| \Phi \|_E, \end{align*} for some $\sigma>0$. Then we have \[ \| Q_\omega ((\partial_r^2+\partial_z^2) \eta) \phi \|_{**} \leq C T^\sigma \|\Phi\|_E \] and similarly \[ \| ( \partial_t \eta) Q_\omega \phi \|_{**} +\| Q_\omega \lambda^{-1} \nabla\eta\nabla_y \phi \|_{**} \leq C T^\sigma \|\Phi\|_E . \] The other terms $g_2$, $g_3$, $g_4$, $g_5$ can be estimated in the same way. In the estimate for $g_2$ it is important to have the property that $\Psi^* = Z^* + \psi$ vanishes at $(r_0,z_0,T)$. \medskip Next we estimate the operator $\mathcal F_1$. The other operators $\mathcal F_2,\ldots,\mathcal F_4$ are handled similarly. We claim that for $(\psi,\Phi,\kappa,p_1,\xi^1)\in \mathcal B$, we have \begin{align} \label{estF1-1} \| \mathcal F_1(\psi,\Phi,\kappa,p_1,\xi^1) \|_{*,\nu_1,a_1,\delta} & \leq C \lambda_*(0)^{\sigma} ( \| \psi \|_{\sharp,\Theta,\gamma} + \| \dot p \|_{L^\infty(-T,T)} + \| Z_0 \|_{C^2} ) . \end{align} Indeed, by Proposition~\ref{prop1.0} we have \begin{align} \label{from-prop1.0} \| \mathcal F_1(\psi,\Phi,\kappa,p_1,\xi^1) \|_{*,\nu_1,a_1,\delta} & \leq C \| h_1[p,\xi, \Psi^* ] \|_{\nu_1,a_1}. \end{align} From the definition of $h_1$ \eqref{def-h1}, and recalling that $ \Psi^* = Z^* + \psi $, we get \begin{align*} & \| h_1[p,\xi, \Psi^* ] \|_{\nu_1,a_1} \\ & \quad \leq \| \lambda^2 Q_{-\omega} (\tilde L_U [ \psi ]_0 +\tilde L_U [ \psi ]_2 ) \chi_{D_{2R} } \|_{\nu_1,a_1} +\| \lambda^2 Q_{-\omega} (\tilde L_U [ Z^* ]_0+\tilde L_U [ Z^* ]_2 ) \chi_{D_{2R} } \|_{\nu_1,a_1} \\ & \quad \quad + \| \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] \|_{\nu_1,a_1} . \end{align*} We claim that for $j=0$ and $j=2$: \begin{align} \label{phi1RHS1-0} \| \lambda^2 Q_{-\omega} \tilde L_U [ \psi ]_j \, \chi_{D_{2R} } \|_{\nu_1,a_1} &\leq C T^\sigma \lambda_*(0)^\Theta \| \psi \|_{\sharp,\Theta,\gamma} . \end{align} Indeed, from \eqref{Ltilde-j} we get, for $j=0$ and $j=2$: \begin{align*} | \lambda^2 Q_{-\omega} \tilde L_U [ \psi ]_j| & \leq C \frac{\lambda_*}{(1+|y|)^3} \|\nabla \psi \|_{L^\infty} . \end{align*} We use $ \nu_1 <1 $ and $a_1<3$ to estimate, for $|y|\leq 2 R$, \begin{align} \nonumber \frac{\lambda_*}{(1+|y|)^3} & \leq \frac{\lambda_*^{\nu_1}}{(1+|y|)^{a_1}} \lambda_*(0)^{1-\nu_1} . \end{align} Then for $|y|\leq 2 R$ and $j=0,2$: \begin{align*} | \lambda^2 Q_{-\omega} \tilde L_U [ \psi ]_j | & \leq C \frac{\lambda_*^{\nu_1}}{(1+|y|)^{a_1}} \lambda_*(0)^{1-\nu_1} \|\nabla \psi \|_{L^\infty} \leq C \frac{\lambda_*^{\nu_1}}{(1+|y|)^{a_1}} \lambda_*(0)^{1-\nu_1} \lambda_*(0)^\Theta \| \psi \|_{\sharp,\Theta,\gamma}, \end{align*} and \eqref{phi1RHS1-0} follows. Next we claim that \begin{align} \label{phi1RHS2} \| \lambda^2 Q_{-\omega} \tilde L_U [Z^* ]_j \chi_{D_{2R} } \|_{\nu_1,a_1} \leq C T^\sigma \| Z_0 \|_{C^2}, \end{align} for $j=0,2$ and some $\sigma>0$. Indeed, we use the assumption \eqref{condZ0} and standard estimates for the heat equation to obtain, for $j=0,2$: \begin{align*} | \lambda^2 Q_{-\omega}\tilde L_U [ Z^* ]_j \, \chi_{D_{2R} } | \leq C \frac{\lambda_*}{(1+\rho)^3} \| Z_0 \|_{C^2(\overline \Omega)} . \end{align*} Since $\nu_1<1$, we get \begin{align*} \| \lambda^2 Q_{-\omega}\tilde L_U [ Z^* ]_j \, \chi_{D_{2R} } \|_{\nu_1,a_1} & \leq C \lambda_*(0)^{1-\nu_1} \| Z_0 \|_{C^2(\overline \Omega)} .
\end{align*} This implies \eqref{phi1RHS2}. Next we estimate $ \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] $. We claim that \begin{align} \label{phi1RHS3} \| \lambda^2 Q_{-\omega} {\mathcal K}_{0}[p,\xi] \|_{\nu_1,a_1} & \leq C T^\sigma \| \dot p \|_{L^\infty(-T,T)}. \end{align} Indeed, consider ${\mathcal K}_{01}$ given in \eqref{K01}. We have \begin{align*} |\lambda^2 Q_{-\omega} {\mathcal K}_{01}[p,\xi] | & \leq C \frac{\lambda_*}{(1+\rho)^3} \int_{-T} ^t | \dot p(s) k(z,t-s) | \, ds . \end{align*} A direct computation shows that \begin{align*} \| \lambda^2 Q_{-\omega} {\mathcal K}_{01}[p,\xi] \, \chi_{D_{2R} } \|_{\nu_1,a_1} & \leq C \lambda_*(0)^{1-\nu_1} \| \dot p \|_{L^\infty(-T,T)} \\ & \leq CT^\sigma \| \dot p \|_{L^\infty(-T,T)} , \end{align*} for some $\sigma>0$. The estimate for ${\mathcal K}_{02}$ is similar, and we obtain \eqref{phi1RHS3}. Combining \eqref{phi1RHS1-0}, \eqref{phi1RHS2}, and \eqref{phi1RHS3} we finally obtain \begin{align} \nonumber & \| h_1[p,\xi, \Psi^* ] \|_{\nu_1,a_1} \leq C T^\sigma( \| \psi \|_{\sharp,\Theta,\gamma} + \| \dot p \|_{L^\infty(-T,T)} + \| Z_0 \|_{C^2} ) , \end{align} and combining with \eqref{from-prop1.0} we get \eqref{estF1-1}. Compactness of the operator $\mathcal A $ in \equ{operador1} is proved using suitable variants of \eqref{estimates}. Indeed, the previous computations show that if we vary the parameters $\Theta,\gamma,\nu_j,a_j,\delta, \sigma$ of the norms slightly, so that the restrictions in Section~\ref{constants} are kept, then we still obtain \eqref{estimates}, where the norms on the left hand side are defined with the new parameters while $\mathcal B$ is defined with the old parameters. More precisely, one can show, for example, that if $\Theta',\gamma'$ are fixed close to $\Theta,\gamma$, then for $(\psi,\Phi,\kappa,p_1,\xi^1)\in \mathcal B$ (this set still defined with $\Theta, \gamma, \ldots$) we get \begin{align} \nonumber \| \mathcal A_0(\psi,\Phi,\kappa,p_1,\xi^1) \|_{\sharp,\Theta',\gamma'} \leq C T^\sigma , \end{align} for a possibly different $\sigma>0$. Then one proves that if $\gamma<\gamma'$ and $\Theta'-\Theta>2(\gamma'-\gamma)$, one has a compact embedding in the sense that if $(\psi_n)_n$ is a bounded sequence in the norm $\| \ \|_{\sharp,\Theta',\gamma'} $, then a subsequence converges in the norm $\| \ \|_{\sharp,\Theta,\gamma} $. This compact embedding is a direct consequence of a standard diagonal argument using Ascoli's theorem, combined with an examination of the estimates, which give a uniform smallness control of the functions near time $T$. Similar statements hold for the other components $\Phi,\kappa,p_1,\xi^1$. The proof is concluded. \qed \medskip \noindent {\bf Acknowledgements:} The research of J.~Wei is partially supported by NSERC of Canada. J.~D\'avila and M.~del Pino have been supported by grants Fondecyt 1130360, 1150066, and Fondo Basal CMM (AFB170001).
\section{ Definitions. Notations. Previous results. Statement of the problem.} \vspace{4mm} \begin{center} \ {\it A. Grand Lebesgue \ $ \ G(\psi) \ $ spaces.} \par \end{center} \vspace{4mm} \ Let $ \ (X,B, \mu) \ $ be a non-trivial measurable space with a separable diffuse measure $ \ \mu. \ $ This implies that for an arbitrary set $ \ A \in B \ $ with $ \ \mu(A) \in (0,\infty) \ $ there exists a subset $ \ A_1 \subset A \ $ such that $ \ \mu(A_1) = \mu(A)/2. \ $ \par \ The {\it separability} is understood relative to the usual distance function $$ \rho(A_1,A_2) \stackrel{def}{=} \mu(A_1 \setminus A_2) + \mu(A_2 \setminus A_1), \ A_1, A_2 \in B. $$ \ Note in particular that although these measures are atomless, the random variables (measurable functions) defined on these spaces may have discrete distributions. \par \ Let also $ \psi = \psi(p), \ p \in (a,b), \ a = \const \in [1,\infty), \ b = \const \in (a, \infty], $ be a numerical valued function which is bounded from below, $ \inf \psi(p) > 0, $ and continuous inside the semi-open interval $ p \in (a,b). $ We can and will suppose without loss of generality $$ \inf_{p \in (a,b)} \psi(p) = 1 \eqno(1.0) $$ and $ \ a = \inf \{ p, \ \psi(p) < \infty \}; \ \hspace{4mm} \ b = \sup \{ p, \ \psi(p) < \infty \}, $ so that $ \supp \ \psi = [a, b) $ or $ \supp \ \psi = [a, b], \ $ or $ \supp \ \psi = (a, b), $ or in turn $ \supp \ \psi = (a, b]. $ The set of all such functions will be denoted by $ \ \Psi(a,b) = \{ \psi(\cdot) \}; \ $ $$ \ \Psi := \cup_{(a,b): \ 1 \le a < b \le \infty} \Psi(a,b). $$ \ One can define formally $ \ \psi(p) = +\infty, \ p \notin \supp \ \psi, \ $ and $ \ C/\infty := 0. \ $ \par \ Of course, in the case when the measure $ \ \mu \ $ is finite one can assume without loss of generality $ \ \mu(X) = 1 \ $ (the probabilistic case); then it is reasonable to admit only $ \ \supp \ \psi = [1,b) \ $ or $ \ \supp \ \psi = [1,b], \ $ where the last possibility may hold only if $ \ b < \infty. \ $ \par \vspace{4mm} {\bf Definition 1.1.} (See [4], [5]-[7], [11]-[12], [13], [10].) \par \ By definition, the (Banach) Grand Lebesgue Space (GLS) $ \ G \psi = G\psi(a,b) $ consists of all the real (or complex) numerical valued measurable functions (random variables, r.v., in the case when $ \ \mu(X) = 1) \hspace{5mm} \ f: \ X \to R \ $ defined on our measurable space and having a finite norm $$ || \ f \ || = ||f||G\psi \stackrel{def}{=} \sup_{p \in (a,b)} \left[ \frac{|f|_p}{\psi(p)} \right]. \eqno(1.1) $$ \ Here and in what follows the notation $ \ |f|_p \ $ denotes the ordinary Lebesgue-Riesz $ \ L_p = L_p(X,\mu) \ $ norm of the measurable function $ \ f: \ $ $$ |f|_p \stackrel{def}{=} \left[ \ \int_X |f(x)|^p \ \mu(dx) \ \right]^{1/p}, \ p \ge 1. $$ \vspace{4mm} \ The function $ \ \psi = \psi(p) \ $ is said to be the {\it generating function } for this space. \par \ If for instance $ \ \mu(X) = 1 \ $ and $ \ \psi(p) = \psi_{r}(p) = 1, \ p \in [1,r], \ $ where $ \ r = \const \in [1,\infty) \ $ (an extremal case), then the corresponding $ \ G\psi_{r} \ $ space coincides with the classical Lebesgue-Riesz space $ \ L_r(X,\mu) = L_r(X) = L_r: \ $ $$ ||\xi|| G\psi_{r} = |\xi|_r, \ r \in [1, \infty). $$ \vspace{4mm} \ Furthermore, let $ \eta = \eta(z), \ z \in S, $ be an arbitrary family of random variables indexed by some set $ \ S \ $ and such that $$ \exists a, b = \const\in (1,\infty], \ a < b, \ \forall p \in [1,b) \ \Rightarrow \psi_S(p) := \sup_{z \in S} |\eta(z)|_p < \infty. $$ \ The function $ p \to \psi_S(p) $ is called the {\it natural} function for the family of random variables $ S. $ Obviously, $$ \sup_{z \in S} ||\eta(z)||G\psi_S = 1. $$
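\ As a basic illustration (the computation is standard, and we include it only for orientation), let $ \ \xi \ $ be a standard Gaussian random variable on a probability space. Then $$ |\xi|_p = \left[ \ \frac{2^{p/2}}{\sqrt{\pi}} \ \Gamma \left( \frac{p+1}{2} \right) \ \right]^{1/p} \sim \sqrt{p/e}, \ p \to \infty, $$ so that the natural function of the single-element family $ \ \{\xi\} \ $ satisfies $ \ \psi_{\xi}(p) \asymp p^{1/2}; \ $ this is the source of the generating function $ \ \psi_2(p) = p^{1/2} \ $ of the subgaussian case considered below. \par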
\ The family $ \ S \ $ may consist of a single r.v., say $ \ \Delta: \ $ $$ \psi_{\Delta}(p):= |\Delta|_p, $$ if of course the last function is finite for some values $ \ p \in (p_1,p_2), \ p_1 \ge 1, \ p_2 \in (p_1, \infty]. \ $\par \ These spaces appeared first in [15]; see also [11]-[13], [4], [6]-[7], [10], [20], etc. They are applied in statistics, in the theory of random processes and fields, in the theory of Partial Differential Equations, in operator theory, and so on. \par \vspace{4mm} \begin{center} \ {\it B. \ $ \ B(\phi) \ $ spaces. } \par \end{center} \ A very important subclass of these spaces is formed by the so-called $ \ B(\phi) \ $ spaces. Let now $ \ (\Omega = \{\omega\}, F, {\bf P} ) \ $ be a certain sufficiently rich probability space. Let also $ \ \phi = \phi(\lambda), \ \lambda \in (-\lambda_0, \ \lambda_0), \ \lambda_0 = \const \in (0, \infty], $ be a certain even, strictly convex, twice continuously differentiable function, taking positive values for positive arguments (briefly: a Young-Orlicz function), such that $$ \phi(0) = 0; \ \phi^{''}(0) \in (0,\infty). $$ \ For instance, $ \ \phi(\lambda) = \phi_2(\lambda) = \lambda^2/2 \ $ gives the so-called subgaussian case.\par \ We denote the set of all these Young-Orlicz functions by $ \ \Phi = \Phi_{\lambda_0}= \{ \ \phi \ \}. \ $\par \vspace{4mm} {\bf Definition 1.2.} (See [15], [11]-[12], [13].)\par \ We will say by definition that the centered numerical valued random variable (r.v.) $ \ \xi \ $ belongs to the space $ B(\phi), \ \phi \in \Phi,\ $ if there exists a certain non-negative constant $ \ \tau \ge 0 \ $ such that $$ \forall \lambda \in (- \lambda_0, \ \lambda_0) \ \Rightarrow {\bf E}\exp(\lambda \ \xi) \le \exp(\phi(\lambda \ \tau)). \eqno(1.2) $$ \ The minimal non-negative value $ \ \tau \ $ satisfying (1.2) for all the values $ \ \lambda \in (- \lambda_0, \ \lambda_0) \ $ is called the $ \ B(\phi) \ $ norm of the variable $ \ \xi, \ $ and we write $$ ||\xi||B(\phi) \stackrel{def}{=} \inf \{\tau, \tau > 0: \ \forall \lambda \in (- \lambda_0, \ \lambda_0) \ \Rightarrow {\bf E}\exp(\lambda \ \xi) \le \exp(\phi(\lambda \ \tau)) \}. \eqno(1.3) $$ \vspace{4mm} \ These spaces are very convenient for the investigation of random variables with exponentially decreasing tails of distribution: for instance, for the study of limit theorems, of exponential bounds of distributions for sums of random variables, of non-asymptotic properties, of problems of continuity and weak compactness of random fields, of the Central Limit Theorem in Banach spaces, etc. A detailed investigation of these spaces may be found in [15], [4], [6]-[7], [8], [13]-[14], [16], [17], [20]. \par \ One unexpected new application of these spaces may be found in the recent article [1]. \par \ The space $ \ B(\phi), \ \phi \in \Phi, \ $ with respect to the norm $ || \xi ||B(\phi) \ $ and the ordinary algebraic operations, is a rearrangement invariant Banach space in the classical sense [3], chapters 1,2; it is in turn isomorphic to the subspace of the space $ \ G\psi_{\phi}, \ $ where $$ \psi_{\phi}(p) := \frac{p}{\phi^{-1}(p)}, $$ consisting of all the centered r.v. having a finite norm $ \ ||\xi||G\psi_{\phi} < \infty. \ $ \par
\vspace{4mm} \ {\bf Our aim in this short report is the description of the associate and dual spaces for the Grand Lebesgue Spaces.} \par \vspace{4mm} \ We intend to simplify the descriptions given in the articles [6]-[7], [17]. \par \vspace{4mm} \section{ Main result: structure of associate spaces. } \vspace{4mm} \ Recall first of all that the {\it associate} space $ \ Y' \ $ of an arbitrary rearrangement invariant space $ \ Y \ $ built over the source measurable space $ \ (X,B, \mu) \ $ consists by definition of all the {\it linear} functionals of the form $$ l(f) = l_g(f) \stackrel{def}{=} \int_X f(x) \ g(x) \ \mu(dx), \ f(\cdot) \in Y, \eqno(2.1) $$ for (certain) measurable functions $ \ g: X \ \to R. \ $ It will be presumed that this functional $ \ l_g(\cdot) \ $ is bounded: $$ ||l_g||Y' \stackrel{def}{=} \sup_{f: ||f||Y \le 1} |l_g(f)| < \infty. \eqno(2.2) $$ \ A detailed description of these spaces is given in the classical monograph of C. Bennett and R. Sharpley [3], chapters 1,2. \par \ As usual, we will for convenience identify the linear functional $ \ l_g \ $ with its generating function $ \ g: \ $ $$ ||g||Y' := ||l_g||Y'. $$ \vspace{4mm} \ Let now $ \ Y = G\psi \ $ for some $ \ \psi = \psi(\cdot) \in \Psi(a,b), \ $ where as above $ \ a = \const \ge 1, \ b = \const \in (a, \infty]. \ $ Let also $ \ 0 \ne f \in G\psi; \ $ one can suppose without loss of generality $ \ ||f||G\psi= 1. \ $ Therefore $$ |f|_p \le \psi(p), \ p \in (a,b). $$ \ Denote as usual $$ q= q(p) = p' := p/(p-1), \ a' = a/(a-1), \ b' = b/(b-1), $$ so that $ \ \infty' = 1. \ $ Introduce the so-called {\it adjacent} function $ \ \nu = \nu(q) = \nu[\psi](q), \ q \in (b', a'), \ $ as follows: $$ \nu(q) = \nu[\psi](q) \stackrel{def}{=} \frac{1}{\psi(q/(q-1))}. \eqno(2.3) $$ \ Let us estimate the functional $ \ l_g(f) \ $ from the relation (2.1). We apply the classical H\"older inequality for the values $ \ p \in (a,b) \ $ and, correspondingly, $ \ q = q(p) \in (b',a'): $ $$ |l_g(f)| \le |f|_p \ |g|_{q(p)} \le \psi(p) \ |g|_q = \frac{|g|_q}{\nu[\psi](q)}. $$ \ Define the following functional: $$ V(g) = V[\psi](g) := \inf_{q \in (b', a')} \left[ \ \frac{|g|_q}{\nu[\psi](q)} \ \right]. \eqno(2.4) $$ \ We have actually proved the following proposition. \vspace{4mm} \ {\bf Theorem 2.1. } Assume that $ \ V(g) \ < \infty; \ $ then $ \ g \in (G\psi)' \ $ and moreover $ \ ||g||(G\psi)' \le V(g). \ $ \par \ As a slight consequence: $$ G\nu \subset (G\psi)'; \ ||l_g||(G\psi)' \le ||g||G\nu[\psi]. \eqno(2.4a) $$ \vspace{4mm} \ {\bf Example 2.1.} Let the measure $ \ \mu \ $ be probabilistic: $ \ \mu(X) = 1. \ $ If $ \ \psi(p) := \psi_{(r)}(p) = 1, \ p \in [1,r], \ $ where $ \ r = \const \in [1, \infty), \ $ so that $ \ a = 1, \ b = r, \ \supp \ \psi_{(r)} = [1,r], \ $ then the space $ \ G\psi_{(r)} \ $ coincides with the classical Lebesgue - Riesz space $ \ L_r = L_r(X). \ $ \par \ In this case $$ V(g) = |g|_{r'}, \ r' = r/(r-1). $$ \ Thus, in this example the associate space coincides with the conjugate (dual) one. \par
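\ Indeed, for the reader's convenience we include the one-line verification. Here $ \ b' = r', \ a' = \infty, \ $ and $ \ \nu_{(r)}(q) = 1/\psi_{(r)}(q/(q-1)) = 1 \ $ for all $ \ q > r', \ $ since $ \ q/(q-1) \in [1,r] \ $ exactly when $ \ q \ge r'. \ $ As $ \ \mu(X) = 1, \ $ the function $ \ q \to |g|_q \ $ is non-decreasing and continuous where it is finite; hence $$ V(g) = \inf_{q \in (r', \infty)} |g|_q = |g|_{r'}. $$ \par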
\vspace{4mm} \ {\bf Example 2.2.} Let now, again in the probabilistic case, i.e. when $ \ \mu(X) = 1, \ $ $$ \psi(p) = \psi_m(p) \stackrel{def}{=} p^{1/m}, \ p \in [1,\infty), \ m = \const > 0. $$ \ The case $ \ m = 2 \ $ corresponds to the subgaussian case. \par \ We have $$ \nu_m(q) \stackrel{def}{=} \nu \left[\psi_m \right](q) = \left( \frac{q-1}{q} \right)^{1/m}; $$ therefore $$ ||g||(G\psi_m)' \le \inf_{q \in (b',a')} \left\{ \ \left( \ \frac{q}{q-1} \ \right)^{1/m} \ |g|_q \ \right\}. $$ Of course, $$ \nu_m(q) \asymp (q-1)^{1/m}, \ q \in (1,2); \ \nu_m(q) \asymp 1, \ q \in [2, \infty). $$ \ Notice that in the considered case the estimate (2.4a) gives only a weak result. Namely, since for any r.v. $ \ \eta \ $ $$ \sup_{q \ge 2} |\eta|_q = \lim_{q \to \infty} |\eta|_q = \vraisup_{x \in X} |\eta(x)| = |\eta|_{\infty}, $$ we deduce on the basis of relation (2.4a) only the trivial estimate $$ ||g||(G\psi_m)' \le C(m) |g|_{\infty}, \ C(m) \in (0,\infty). $$ \vspace{4mm} \section{ Main result: structure of dual spaces. } \vspace{4mm} \ We need to introduce the following additional notations, conditions and definitions: $$ h(p) = h[\psi](p) := p \ \ln\psi(p), \ p \in (a,b), $$ $$ V(u) = V[\psi](u):= h^*[\psi](\ln |u|), \ \psi \in \Psi. $$ Define also the following Young-Orlicz function $$ \ N[\psi](u) := \exp( V(u) ) = \exp( V[\psi](u) ) := $$ $$ \exp \left[ \ h^*[\psi](\ln |u|) \ \right], \ u \ge e; \ N[\psi](u) = C \ u^2, \ |u| < e. \eqno(3.1) $$ \ Recall that the well-known Young-Fenchel, or Legendre, transform $ \ h^* \ $ of a real valued function $ \ h = h(z), \ $ not necessarily convex, is defined as follows: $$ h^*(v) \stackrel{def}{=} \sup_{z \in \Dom(h) } ( v z - h(z)). $$ \ Further, denote by $ \ \Gamma = \{ \ \gamma \ \} \ $ the collection of all {\it finitely additive} numerical valued set functions defined on the sigma-field $ \ B. \ $ \par \ Of course, finite additivity does not exclude the ``ordinary'' countable (sigma-) additivity. \par \ Define, following [18] (see also [19]), the norm on the set $ \ \Gamma: \ $ $$ |||\gamma||| = |||\gamma|||_{\psi} \stackrel{def}{=} \sup \left\{ \ \int_X f(x) \ \gamma(dx): \ ||f||G\psi \le 1 \ \right\}, \eqno(3.2) $$ where the integral in (3.2) may be understood in the sense of R.G. Bartle [2]; see also [18]. On the other hand, one can restrict the functions $ \ f(\cdot) \ $ to be {\it simple}, in other words {\it step functions}, i.e. taking only finitely many values, each on a set of finite ``measure'' $ \ \gamma: \ $ \par $$ f(x) = \sum_{i=1}^n c_i \ \chi_{D(i)}(x), \ D(i) \in B, \ c_i \in R, \ n = 1,2, \ldots, \ \gamma(D(i)) \in R; $$ where as usual $ \ \chi_D(x) \ $ denotes the indicator function of the measurable set $ \ D: \ \chi_D(x) = 1, \ x \in D; \ \chi_D(x) = 0, \ x \notin D. \ $ \par \ Obviously, for such functions $$ \int_X f(x) \ \gamma(dx) = \sum_{i=1}^n c_i \ \gamma(D(i)). $$ \vspace{4mm} {\bf Theorem 3.1.} Assume, in addition to the foregoing conditions imposed on the function $ \ \psi(\cdot), \ $ that the function $ \ V(x) = V[\psi](x) \ $ satisfies the following restriction: $$ \exists \ \alpha = \const \in (0,1), \ \exists K = \const > 1, \forall x \in (0,\infty) \ \Rightarrow V(x/K) \le \alpha \ V(x), \eqno(3.3) $$ which is in turn a slight analog of the famous $ \ \Delta_2 \ $ condition.\par \ We assert that every continuous linear functional $ \ l(f), \ f \in G\psi, \ $ has a unique representation of the form $$ l(f) = l^{\gamma}(f) = \int_X \ f(x) \ \gamma(dx), \ \gamma \in \Gamma, \ |||\gamma|||_{\psi} < \infty, \eqno(3.4) $$ and conversely each functional $ \ l^{\gamma}(\cdot) \ $ of the form (3.4) belongs to the conjugate (dual) space $ \ (G\psi)^*; \ $ moreover $$ || l^{\gamma}||(G\psi)^* = |||\gamma|||_{\psi}; \eqno(3.5) $$ in other words, the (Banach) space $ \ S = S(\psi) :=(\Gamma, \ ||| \ \cdot \ ||| ) \ $ coincides with the dual (conjugate) space $ \ (G\psi)^* \ $ with respect to the equality (3.5).
\par \vspace{4mm} \ {\bf Proof.} {\bf I.} Suppose $ \ \gamma \in S, \ $ so that $ \ |||\gamma|||_{\psi} < \infty. \ $ The equality (3.5) follows immediately from the very definition of the spaces $ \ S(\psi); \ $ obviously, it is sufficient to verify (3.5) for step functions only. \par \ It is worth noting that this conclusion holds true even without the condition (3.3). \par \vspace{4mm} \ {\bf II.} Conversely, let the space $ \ G\psi, \ \psi \in \Psi(a,b), \ 1 \le a < b \le \infty, \ $ be given. Assume also that the condition (3.3) is satisfied. \par \ It is proved, in particular, in [12] that in this case the Grand Lebesgue Space $ \ G\psi \ $ coincides with the so-called {\it exponential} Orlicz space $ \ L(N[\psi]) \ $ built over our measurable triplet $ \ (X,B,\mu) \ $ and equipped with the corresponding Young-Orlicz function $$ N[\psi](u) := \exp( V(u) ) = \exp(V[\psi](u)). $$ \ The norm in this space may be taken to be the ordinary Luxemburg norm. \par \ The conjugate spaces of these Orlicz spaces are calculated in [18]; see also [19]. They have the form (3.5). \par \ This completes the proof of Theorem 3.1. \par \vspace{4mm} \ {\bf Remark 3.1.} The condition (3.3) is satisfied for a very popular class of $ \ G \psi \ $ spaces; indeed, it holds if for instance $$ V(x) = C \ x^m, \ C,m = \const \in (0,\infty), \ x > 0, $$ as well as in the case when $$ V(x) = C \ x^m \ L(x), \ C,m = \const \in (0,\infty), \ x > 0, $$ where $ \ L = L(x) \ $ is a positive continuous function, slowly varying both as $ \ x \to 0+ \ $ and as $ \ x \to \infty, \ $ and such that $$ L(x/K) \le \alpha \ K^m \ L(x), \ x > 0. \eqno(3.6) $$ \ In turn, the last condition is satisfied if the function $ \ L(x) \ $ is in addition strictly increasing. \par \vspace{4mm} \section{ Again about the associate space.} \vspace{4mm} \ Let us return to the problem of finding the associate space of the Grand Lebesgue ones. We intend to apply the approach described above, through the embedding into Orlicz spaces [13], [14], [17]. \par \ In detail, let $ \ \mu(X) = 1 \ $ and let the function $ \ \psi(\cdot) \ $ satisfy the condition (3.3). Suppose also that the function $ \ f: X \to R \ $ belongs to the space $ \ G\psi; $ then it belongs also to the corresponding Orlicz space $ \ L(N[\psi]), \ $ and herewith $$ ||f||L(N[\psi]) \le C(\psi) \ ||f||G\psi, \ C(\psi) < \infty, $$ and the inverse inequality is also true.\par \ One can use the famous H\"older inequality $$ |l_g(f)| = \left| \ \int_X f(x) \ g(x) \ \mu(dx) \ \right| \le 2 \ ||f||L(N[\psi]) \ ||g||L(N^*[\psi]) \le $$ $$ 2 \ C(\psi) \ ||f||G\psi \ ||g||L(N^*[\psi]), $$ see e.g. [9], [19], [20]. \par \vspace{4mm} \ To summarize: \par \vspace{4mm} \ {\bf Theorem 4.1.} Suppose once again that the function $ \ \psi(\cdot) \ $ belongs to the set $ \ \Psi \ $ and satisfies the condition (3.3). The Orlicz space $ \ L(N^*[\psi]) \ $ over our measurable space $ \ (X, B, \ \mu) \ $ coincides with the associate space $ \ (G\psi)' \ $ of the Grand Lebesgue Space $ \ G\psi, \ $ and moreover $$ ||l_g||(G\psi)' \le 2 \ C(\psi) \ ||g||L(N^*[\psi]). \eqno(4.1) $$ \vspace{4mm} \ {\bf An example.} \par \ Let $ \ \mu(X) = 1 \ $ and $ \ \psi(p) = \psi_2(p) := p^{1/2} \ $ (the subgaussian case, see e.g. [5]). Then the corresponding conjugate Young-Orlicz function has the form $$ N^*[\psi_2](y) \asymp |y| \ \ln^{1/2}(e + |y|), \ |y| \ge 1. $$ \ A slight generalization: let $ \ \psi(p) = \psi_m(p) := p^{1/m}, \ m = \const > 0. \ $ Then $$ N^*[\psi_m](y) \asymp |y| \ \ln^{1/m}(e + |y|), \ |y| \ge 1. $$
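\ These asymptotics can be checked directly from the definitions of Section 3; we sketch the (elementary) computation, without tracking sharp constants. For $ \ \psi_m(p) = p^{1/m} \ $ one has $ \ h[\psi_m](p) = p \ln \psi_m(p) = m^{-1} \, p \ln p, \ $ and the supremum in the Young-Fenchel transform is attained at $ \ p = |u|^m/e, \ $ so that $$ V[\psi_m](u) = \sup_{p} \left( \ p \ln |u| - \frac{p \ln p}{m} \ \right) = \frac{|u|^m}{e \, m}, \qquad N[\psi_m](u) = \exp \left( \frac{|u|^m}{e \, m} \right), \ |u| \ge e; $$ and the Young conjugate of a function of the form $ \ \exp( c \, |u|^m ) \ $ is indeed $ \ \asymp |y| \, \ln^{1/m}(e + |y|) \ $ for large $ \ |y|. \ $ \par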
\ Many other examples of the calculation of these functions $ \ N^*[\psi], \ $ including the cases when $ \ \psi(p) = p^{1/m} \ L(p), \ p \in [1, \infty), \ $ where $ \ L = L(p) \ $ is a positive continuous function slowly varying as $ \ p \to \infty, \ $ or $ \ \psi(p) = \exp \left(C p^{\beta} \right), \ C, \beta = \const \in (0,\infty), \ $ etc., may be found in the articles [12], [14]. \par \vspace{6mm} \section{Concluding remarks.} \vspace{4mm} {\bf A. } It is not hard, in our opinion, to generalize the results obtained here to the multidimensional case of $ \ G\psi \ $ spaces.\par \vspace{4mm} \ {\bf B. } Since the $ \ B(\phi) \ $ spaces are particular cases of the Grand Lebesgue ones, we have along the way also obtained the associate and dual spaces for these spaces as well. \par \vspace{4mm} \ {\bf C. } It would be interesting, in our opinion, to extend the integral representation of the continuous linear functionals in the dual and associate spaces to other wide classes of rearrangement invariant spaces.\par \begin{center} \vspace{6mm} {\bf References.} \vspace{4mm} \end{center} {\bf 1. Rodrigo Banuelos and Adam Osekovski.} {\it Weighted square function estimates.} \\ arXiv:1711.08754v1 [math.PR] 23 Nov 2017 \\ \vspace{3mm} {\bf 2. R. G. Bartle.} {\it A general bilinear vector integral.} Studia Math. 15 (1956), 337-352.\\ \vspace{3mm} {\bf 3. Bennett C., Sharpley R.} {\it Interpolation of operators.} Orlando, Academic Press Inc., (1988). \\ \vspace{3mm} {\bf 4. Buldygin V.V., Kozachenko Yu.V. } {\it Metric Characterization of Random Variables and Random Processes.} 1998, Translations of Mathematics Monograph, AMS, v.188. \\ \vspace{3mm} {\bf 5. X. Fernique.} {\it Regularite des trajectoires des fonctions aleatoires gaussiennes.} Lecture Notes in Mathematics, {\bf 480}, (1975). Springer Verlag, Berlin-Heidelberg.\\ \vspace{4mm} {\bf 6. A. Fiorenza.} {\it Duality and reflexivity in grand Lebesgue spaces. } Collect. Math. {\bf 51,} (2000), 131-148. \\ \vspace{3mm} {\bf 7. A. Fiorenza and G.E. Karadzhov.} {\it Grand and small Lebesgue spaces and their analogs.} Consiglio Nazionale delle Ricerche, Istituto per le Applicazioni del Calcolo ``Mauro Picone'', Sezione di Napoli, Rapporto tecnico 272/03, (2005).\\ \vspace{3mm} {\bf 8. Dudley R.M.} {\it Uniform Central Limit Theorems.} Cambridge, University Press, (1999), 352-367.\\ \vspace{3mm} {\bf 9. Daniele Imparato.} {\it Martingale inequalities in exponential Orlicz spaces.} Journal of Inequalities in Pure and Applied Mathematics, Volume 10 (2009), Issue 1, Article 1, pp. 1-10.\\ \vspace{3mm} {\bf 10. T. Iwaniec and C. Sbordone.} {\it On the integrability of the Jacobian under minimal hypotheses. } Arch. Rat. Mech. Anal., 119, (1992), 129-143. \\ \vspace{3mm} {\bf 11. Kozachenko Yu. V., Ostrovsky E.I. } (1985). {\it The Banach Spaces of random Variables of subgaussian Type. } Theory of Probab. and Math. Stat. (in Russian). Kiev, KSU, 32, 43-57. \\ \vspace{3mm} {\bf 12. Kozachenko Yu.V., Ostrovsky E., Sirota L.} {\it Relations between exponential tails, moments and moment generating functions for random variables and vectors.} \\ arXiv:1701.01901v1 [math.FA] 8 Jan 2017 \\ \vspace{3mm} {\bf 13. Ostrovsky E.I. } (1999). {\it Exponential estimations for Random Fields and its applications,} (in Russian). Moscow-Obninsk, OINPE. \\ \vspace{3mm} {\bf 14. Ostrovsky E. and Sirota L.} {\it Vector rearrangement invariant Banach spaces of random variables with exponential decreasing tails of distributions.} \\ arXiv:1510.04182v1 [math.PR] 14 Oct 2015 \\ \vspace{3mm} {\bf 15.
Ostrovsky E.I.} {\it Generalization of norm of Buldygin-Kozachenko and Central Limit Theorem in Banach space. } Probability Theory Applications, 1982, V. 27, Issue 3, p. 618. \\ \vspace{3mm} {\bf 16. Ostrovsky E., Rogover E. } {\it Exact exponential bounds for the random field maximum distribution via the majorizing measures (generic chaining).} \\ arXiv:0802.0349v1 [math.PR] 4 Feb 2008 \\ \vspace{3mm} {\bf 17. Ostrovsky E. and Sirota L. } {\it Bilateral small Lebesgue spaces.} \\ arXiv:0904.4535v1 [math.FA] 29 Apr 2009 \\ \vspace{3mm} {\bf 18. M.M. Rao.} {\it Linear functionals on Orlicz spaces.} Pacific Journal of Mathematics, Vol. 25, No. 3, 535-585, 1968. \\ \vspace{3mm} {\bf 19. M.M. Rao, R.Z. Ren.} {\it Theory of Orlicz spaces.} Marcel Dekker, Berlin-Heidelberg, 1992. \\ \vspace{3mm} {\bf 20. O.I. Vasalik, Yu.V. Kozachenko, R.E. Yamnenko.} {\it $ \ \phi \ - $ subgaussian random processes. } Monograph, Kiev, KSU, 2008; (in Ukrainian). \\ \vspace{3mm} \end{document}
\section{Introduction} \label{s_intro} We will come to these reformulations through the theory of semigroups. Recall that a semigroup $S$ is a set equipped with a closed associative binary operation. A group $G$ can be thought of as a semigroup. However in general, unlike groups, semigroups need not contain an identity element or inverse elements, although there are semigroups with an identity element as well as semigroups with inverse elements. See \cite{H} or \cite {P} for instance. Herein, though, when we speak of a semigroup we will only be interested in it as a set closed under some specified associative binary operation unless otherwise stated. The elements in the subsemigroup Orientable($S$) are characterized as those elements of $S$ that are solutions to an \emph{orientable} equation, a certain type of simple equation in one variable over $S$. Two elements $u$ and $v$ in $S$ are $\sigma_{orient}$-congruent to one another if they are solutions to a two-variable orientable equation, a natural extension of the one-variable variety. We note (and the reader can verify this after we review orientable equations in Section \ref{orientable equations}) that if $S$ did contain an identity element then the elements belonging to Orientable($S$) could also be characterized as those elements that are $\sigma_{orient}$-congruent to $S$'s identity element. Now in a certain sense, as somewhat indicated in the above paragraph, the subsemigroup Orientable($S$) determines the congruence $\sigma_{orient}$ on $S$ and, as demonstrated in \cite{CCT}, the quotient semigroup $S / \sigma_{orient}$ is a commutative semigroup and Orientable($S$) serves as its identity element, in the case when Orientable($S$) is non-empty. Already, at least, this ``looks like'' and ``acts like'' how the commutator subgroup $[G,G]$ determines a congruence on $G$ that gives rise to a quotient group $G / [G,G]$ that is a commutative group, the abelianization of $G$, and where the commutator subgroup $[G,G]$ serves as its identity element. To further strengthen these resemblances it was also shown in \cite{CCT} (using diagrams embedded on orientable surfaces, hence the origin of the modifier orientable for orientable equations) that when the semigroup $S$ and the group $G$ are both defined by the same semigroup presentation $P = \langle \hspace{.1cm} X \hspace{.1cm} \vert \hspace{.1cm} R \hspace{.1cm} \rangle$, then $w$ belongs to Orientable($S$) if and only if $w$ belongs to $[G,G]$, where $w$ is a positive word on the alphabet $X$, and the natural map from $S$ into $G$ factors through $\sigma_{orient}$ as an embedding into $G / [G, G]$. The aim of this article is to show that when the semigroup $S$ is itself a group $G$ then these resemblances are in fact identically the same. In Section \ref{orientable equations} we review orientable equations and reference some propositions concerning them found in \cite{CCT}. We will also review and reference material concerning the commutator subgroup and the abelianization of a group. In Section \ref{main results} we demonstrate our main results, namely, $ G/\sigma_{orient} = G/[G,G]$ and Orientable($G$) $=$ $[G,G]$ for any group $G$. Our proofs will be carried out simply by mathematical induction; at this stage there will be no need for the geometry of diagrams embedded on orientable surfaces.
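A concrete example to keep in mind, which the reader can check directly against the definitions reviewed in Section \ref{orientable equations}: in any group $G$, a commutator $g = xyx^{-1}y^{-1}$ satisfies $xy = g \hspace{.1cm} yx$, so $g$ is a solution of the one-variable equation

\bigskip \begin{center} $xy = t \hspace{.1cm} yx$ \end{center} \bigskip

\noindent in which the factors $x$ and $y$ on the left-hand side are paired with the factors $y$ and $x$ on the right-hand side; this is an orientable equation, and it will reappear as the base case of the induction in the proof of Theorem \ref{Orientable(G)}.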
\section{Orientable Equations} \label{orientable equations} The following definitions and results - with a slight change in notation, namely, we now use only lower case letters to represent elements of $S$ - are taken directly from \cite{CCT}. Our main results in Section \ref{main results} will follow simply from mathematical induction and from the definitions of equations $(1)$ and $(2)$ below. We include the following propositions merely as context for $(1)$ and $(2)$. Let $S$ be a semigroup. We pick a symbol $t\notin S$ to be a variable. A {\it simple equation over $S$ in the variable $t$} is an expression of the form \bigskip \begin{center} $a=btc$ \qquad$(1)$\end{center} \bigskip \noindent where $a,b,c\in S$, $a$ is non-empty, and at least one of the elements $b$ and $c$ is non-empty. An element $w \in S$ satisfies this equation if, upon substitution, $a=bwc$ in $S$. If there are non-empty elements $x_1, ..., x_n$ (not necessarily distinct) in $S$ and factorizations $a=\prod_{i=1}^{n_1}a_i$, $b=\prod_{j=1}^{n_2}b_j$ and $c=\prod_{k=1}^{n_3}c_k$ where $n_1 + n_2 + n_3 = 2n$, $n_1 = n_2 + n_3$ and each $x_\nu$ occurs once among $a_1, ..., a_{n_1}$ and once among $b_1, ..., b_{n_2}, c_1, ..., c_{n_3}$ then we call the simple equation an {\it orientable equation}. In other words, a simple equation is orientable if the factors $x_\nu$ can somehow be paired up across the equality sign of the simple equation. Elements of $S$ that satisfy orientable equations are called {\it orientable} elements of $S$ and by {\it Orientable($S$)} we mean the set $\{ w \mid w \text{ is an orientable element of } S \}$. \bigskip The following appears in Proposition 1.1 in \cite{CCT}. In fact, there it is also claimed that Orientable($S$) is a {\it unitary} subsemigroup of $S$, although we will have no need of this concept and fact. \begin{proposition} \label{orientable(S)} Let $S$ be a semigroup. Then Orientable($S$) is a subsemigroup of $S$. \end{proposition} We now introduce two-variable orientable equations, modeled after the single-variable orientable equations. \bigskip Let $S$ be a semigroup. We pick two symbols $t_{1}$ and $t_{2}$, both $\notin S$, to be variables. A {\it two-variable simple equation over $S$} in the variables $t_{1}$ and $t_{2}$ is an expression of the form \bigskip \begin{center} $at_{1}b=ct_{2}d$ \qquad$(2)$\end{center} \bigskip \noindent where $a, b, c$, and $d \in S$, some or possibly all of which may be empty. An ordered pair $(u,v) \in S \times S$ satisfies this equation if, upon substitution, $aub=cvd$ in $S$. Analogously to the one-variable equations above, if there are non-empty elements $x_1, ..., x_n$ (not necessarily distinct) in $S$ and factorizations $a=\prod_{i=1}^{n_1}a_i$, $b=\prod_{j=1}^{n_2}b_j$, $c=\prod_{k=1}^{n_3}c_k$, and $d=\prod_{l=1}^{n_4}d_l$ where $n_1 + n_2+n_3 + n_4=2n$, $n_1 + n_2 = n_3 + n_4$ and each $x_\nu$ occurs once among $a_1, ..., a_{n_1}, b_1, ..., b_{n_2}$ and once among $c_1, ..., c_{n_3}, d_1, ..., d_{n_4}$ then we call such a two-variable simple equation a {\it two-variable orientable equation over $S$}. \bigskip Let $u$ and $v$ be two elements in $S$. We say $u$ is {\it orientably-equivalent} to $v$ in $S$, and write $u \mathrel{\sigma_{orient}} v$, if the ordered pair $(u,v)$ satisfies a two-variable orientable equation over $S$. \bigskip The following proposition follows from Claims 4.1, 4.2, 4.3, and 4.4 in \cite{CCT}.
\begin{proposition} \label{congruence} Let $S$ be a semigroup. Then $\sigma_{orient}$ is a congruence on $S$. \end{proposition} \bigskip The following follows from Proposition 4.1 in \cite{CCT}, where it is also shown that the quotient semigroup $S/\sigma_{orient}$ is {\it cancellative}, although we will have no need for this concept or fact. \begin{proposition} \label{commutative} Let $S$ be a semigroup. Then the quotient semigroup $S/\sigma_{orient}$ is a commutative semigroup. \end{proposition} The following follows from Proposition 4.3 in \cite{CCT}. \begin{proposition} \label{identity} Let $S$ be a semigroup. Then $Orientable(S)$ is the identity element for $S/\sigma_{orient}$, when Orientable($S$) is non-empty. \end{proposition} \section{Main Results} \label{main results} Recall that for a group $G$ a {\it commutator} in $G$ is an element of the form $xyx^{-1}y^{-1}$ where $x$ and $y$ are elements of $G$ and $x^{-1}$ and $y^{-1}$ are their inverses, respectively. The commutator subgroup $[G,G]$ of $G$ is the smallest subgroup of $G$ containing all of $G$'s commutators. It is well known that $[G,G]$ is a normal subgroup and that $[G,G]$ determines a congruence $\sim$ on $G$ where for elements $g$ and $h$ of $G$ we have $g \sim h$ when $gh^{-1} \in [G,G]$. It is also well known that $\sim$ determines the abelian (commutative) quotient group $G/[G,G]$ - the abelianization of $G$ - where $[G,G]$ serves as its identity element. All of these claims can be found in most undergraduate abstract algebra textbooks; we cite two: \cite{Hu, L}. For an element $g \in G$ we will use the notation $[g]$ to denote its congruence class in $G/[G,G]$, i.e. $[g] = \{ h \mid h \in G \text{ and } h \sim g \}$. \begin{theorem} \label{Orientable(G)} If $G$ is a group then Orientable($G$) $=$ $[G,G]$, the commutator subgroup of $G$. \end{theorem} \begin{proof} $\Rightarrow$ Let $g \in G$ satisfy some orientable equation $a=btc$. Since $a=bgc$ in $G$, clearly $[a] = [b][g][c]$ in $G / [G,G]$. As $G / [G,G]$ is abelian, $[b^{-1}][a][c^{-1}] = [g]$. Since $a=btc$ is an orientable equation and $G/[G,G]$ is abelian, clearly $[b^{-1}][a][c^{-1}] = [1]$. Hence $[g] = [1]$ and so $g \in [G,G]$. $\Leftarrow$ Let $g \in [G,G]$. So there must exist elements $x_{i}, y_{i} \in G$ for $1 \leq i \leq n$ such that $g = \prod_{i=1}^{n}x_{i}y_{i}x_{i}^{-1}y_{i}^{-1}$. We induct on $n$. For $n=1$ we have $x_{1}y_{1}x_{1}^{-1}y_{1}^{-1} = g$. Hence $x_{1}y_{1} = gy_{1}x_{1}$ and therefore $g$ satisfies the orientable equation $x_{1}y_{1} = ty_{1}x_{1}$. So next assume $g = \prod_{i=1}^{n}x_{i}y_{i}x_{i}^{-1}y_{i}^{-1}$. Hence $gy_{n}x_{n}y_{n}^{-1}x_{n}^{-1} = \prod_{i=1}^{n-1}x_{i}y_{i}x_{i}^{-1}y_{i}^{-1}$ and therefore by induction $gy_{n}x_{n}y_{n}^{-1}x_{n}^{-1}$ satisfies some orientable equation $a=btc$, i.e., $a=bgy_{n}x_{n}y_{n}^{-1}x_{n}^{-1}c$. Since $x_{n}x_{n}^{-1}y_{n}y_{n}^{-1} = 1$, we have $ax_{n}x_{n}^{-1}y_{n}y_{n}^{-1} = bgy_{n}x_{n}y_{n}^{-1}x_{n}^{-1}c$. Now lastly observe that $ax_{n}x_{n}^{-1}y_{n}y_{n}^{-1} = bty_{n}x_{n}y_{n}^{-1}x_{n}^{-1}c$ is also an orientable equation that $g$ satisfies. \end{proof} \begin{theorem} If $G$ is a group then $G / \sigma_{orient} = G / [G,G]$, the abelianization of $G$. \end{theorem} \begin{proof} Let $g$ be some element of the group $G$ and consider the congruence classes $[g]_{\sigma}$ and $[g]$ of $G / \sigma_{orient}$ and $G / [G,G]$, respectively. We will show $[g]_{\sigma} = [g]$. So consider $h \in [g]_{\sigma}$.
Hence $h$ and $g$ satisfy a two-variable orientable equation as given in $(2)$. Therefore for some elements $a$, $b$, $c$, and $d$ in $G$ we have $agb = chd$ in $G$. Hence $[agb] = [chd]$ in $G / [G,G]$. Hence $[ab][g] = [cd][h]$ in $G / [G,G]$, as $G / [G,G]$ is abelian. By the definition of a two-variable orientable equation it is easy to see that we must also have $[ab] = [cd]$ in $G / [G,G]$. Hence $[g] = [h]$ and therefore $[g]_{\sigma} \subseteq [g]$. Lastly, consider $h \in [g]$. Hence $gh^{-1} \in [G,G]$. Therefore, by Theorem \ref{Orientable(G)}, $gh^{-1} \in Orientable(G)$. Hence $gh^{-1}$ satisfies an orientable equation as given in $(1)$. Therefore there exist elements $a$, $b$, and $c$ in $G$ such that $a = bgh^{-1}c$ in $G$. It is easy to check that $at_{1}h^{-1} = bt_{2}h^{-1}c$ is a two-variable orientable equation over $G$ and that $g$ and $h$ satisfy it upon substituting $h$ into $t_1$ and $g$ into $t_2$. Therefore $h \in [g]_{\sigma}$ and $[g] \subseteq [g]_{\sigma}$. \end{proof} \section{Acknowledgments} \label{acknowledge} The authors would like to express their deep gratitude to Maritza Martinez, Director of the Educational Opportunities Program at the University at Albany, State University of New York, for her support and encouragement throughout this undergraduate research project. We would also like to thank Anahi Bolanos, Grace Coste, Stephine Rodriguez, and Yasser Teouri for their valued participation in a weekly math seminar in part devoted to the second named author's presentation of some of the materials herein.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In his 2002 Ph.D. thesis \cite{Zwegers} Zwegers gave an intrinsic definition of mock theta functions and provided new insight into three families of such functions, constructed \begin{enumerate} \item in terms of Appell--Lerch sums, \item as the Fourier coefficients of meromorphic Jacobi forms, and \item via theta functions attached to cones in lattices of indefinite signature. \end{enumerate} The first two constructions have played a central role in recently observed moonshine connections between finite groups and mock theta functions. These started with the observation in \cite{Eguchi2010} that the elliptic genus of a K3 surface has a decomposition into characters of the $N=4$ superconformal algebra with multiplicities that at low levels are equal to the dimensions of irreducible representations of the Mathieu group $M_{24}$. Appell--Lerch sums appear in this analysis in the so-called ``massless'' characters. This Mathieu moonshine connection was conjectured in \cite{UM,MUM} to be part of a much more general phenomenon, known as umbral moonshine, which attaches a vector-valued mock modular form $H^X$, a finite group $G^X$, and an infinite-dimensional graded $G^X$-module $K^X$ to the root systems of each of the $23$ Niemeier lattices. The analysis in \cite{MUM} relied heavily on the construction of mock modular forms in terms of meromorphic Jacobi forms and built on the important work in \cite{Dabholkar:2012nd} extending the analysis of \cite{Zwegers} and characterizing special Jacobi forms in terms of growth conditions. Whilst the existence of the $G^X$-modules $K^X$ has now been proved \cite{Gannon:2012ck,umrec} for all Niemeier root systems $X$, no explicit construction of the modules $K^X$ is yet known. However, in this paper we construct the $G^X$-module $K^X$ for the case that $X=E_8^3$. To do so we employ the third characterization of mock theta functions, in terms of indefinite theta functions. This enables us to use the formalism of vertex operator algebras \cite{Bor_PNAS,FLM} which has been so fruitfully employed (in \cite{FLM,borcherds_monstrous} to name just two) in the understanding of monstrous moonshine \cite{MR554399,Tho_FinGpsModFns,Tho_NmrlgyMonsEllModFn}. See \cite{mnstmlts} for a recent review of moonshine both monstrous and umbral, and many more references on these subjects. To explain the methods of this paper in more detail, we first recall the {\em Pochhammer symbol} \begin{gather}\label{eqn:intro-poch} (x;q)_n :=\prod_{k=0}^{n-1} (1-xq^k), \end{gather} and the fifth order mock theta functions \begin{gather} \begin{split} \chi_0(q)&:=\sum_{n\geq 0} \frac{q^{n} }{(q^{n+1};q)_n},\\ \chi_1(q)&:= \sum_{n\geq 0} \frac{q^{n} }{(q^{n+1};q)_{n+1}}, \end{split} \end{gather} from Ramanujan's last letter to Hardy \cite{MR2280843,MR947735}. The conjectures of \cite{MUM} (see also \cite{mumcor}) imply the existence of a bi-graded super vector space $K^X=\bigoplus_r K^X_r=\bigoplus_{r,d}K^{X}_{r,d}$ that is a module for $G^X\simeq S_3$ and satisfies \begin{gather}\label{eqn:intro:KXmocktheta} \begin{split} \sdim_qK^X_1&=-2q^{-1/120}+\sum_{n>0}\dim K^X_{1,n-1/120}q^{n-1/120}=2q^{-1/120}(\chi_0(q)-2),\\ \sdim_qK^X_7&=\sum_{n>0}\dim K^X_{7,n-49/120}q^{n-49/120}=2q^{71/120}\chi_1(q). \end{split} \end{gather} Here $\sdim_q V:=\sum_{n}(\dim (V_{\bar 0})_n-\dim (V_{\bar 1})_n)q^n$ for $V$ a $\QQ$-graded super space with even part $V_{\bar 0}$ and odd part $V_{\bar 1}$.
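The mock theta functions $\chi_0$ and $\chi_1$ are straightforward to expand numerically from the definitions just given. The following sketch - ours, included only as a sanity check on (\ref{eqn:intro:KXmocktheta}) - computes their $q$-expansions with sympy; the truncation order $N$ is an arbitrary choice:

```python
from sympy import symbols, series, S

q = symbols('q')

def chi(j, N):
    # chi_0 (j = 0) and chi_1 (j = 1): sum_{n >= 0} q^n / (q^{n+1}; q)_{n+j}.
    # Terms with n >= N cannot contribute below q^N, so the sum is truncated there.
    total = S(0)
    for n in range(N):
        poch = S(1)  # the Pochhammer symbol (q^{n+1}; q)_{n+j}
        for k in range(n + j):
            poch *= 1 - q**(n + 1 + k)
        total += q**n / poch
    return series(total, q, 0, N)

print(chi(0, 10))  # q-expansion of chi_0; compare with the tables in the literature
print(chi(1, 10))  # q-expansion of chi_1
```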
According to work \cite{MR2558702} of Zwegers, we have identities \begin{gather}\label{eqn:intro:zwegers} \begin{split} 2-\chi_0(q)&=\frac{1}{(q;q)_{\infty}^2}\left(\sum_{k,l,m\geq 0}+\sum_{k,l,m<0}\right)(-1)^{k+l+m}q^{(k^2+l^2+m^2)/2+2(kl+lm+mk)+(k+l+m)/2},\\ \chi_1(q)&=\frac{1}{(q;q)_{\infty}^2}\left(\sum_{k,l,m\geq 0}+\sum_{k,l,m<0}\right)(-1)^{k+l+m}q^{(k^2+l^2+m^2)/2+2(kl+lm+mk)+3(k+l+m)/2}, \end{split} \end{gather} where $(x;q)_{\infty}:=\prod_{n\geq 0}(1-xq^n)$. In this article we use (\ref{eqn:intro:zwegers}) as a starting point for the construction of a super vertex operator algebra $V^X$ (cf. (\ref{eqn:va:cnstn-VX})). We show that canonically-twisted modules for $V^X$, constructed explicitly in \S\ref{sec:va:cnstn} (cf. (\ref{eqn:va:cnstn-Vtw})), furnish a bi-graded $G^X$-module for which the graded trace functions are exactly compatible with the predictions of \cite{MUM}. In other words, we construct the analogue of the moonshine module $\vn$, of Frenkel--Lepowsky--Meurman \cite{FLMBerk,FLMPNAS,FLM}, for the $X=E_8^3$ case of umbral moonshine. To prove that our construction is indeed the $X=E_8^3$ counterpart to $\vn$, we verify the $X=E_8^3$ analogue of the Conway--Norton moonshine conjecture, proven by Borcherds in \cite{borcherds_monstrous} in the case of the monster, which predicts that the trace functions arising are uniquely determined by their automorphy and their asymptotic behavior near cusps. Thus we verify the $X=E_8^3$ analogues of both of the two major conjectures of monstrous moonshine. To prepare for precise statements of results, recall that vector-valued functions $H^X_g(\tau)=(H^X_{g,r}(\tau))$ on the upper half plane $\HH$ are considered in \cite{MUM}, for $g\in G^X\simeq S_3$, where the components are indexed by $r\in \ZZ/60\ZZ$. Define $o(g)$ to be the order of an element $g\in G^X$. The $H^X_g$ are not uniquely determined in \cite{MUM}, except for the case that $g=e$ is the identity, $o(g)=1$. But it is predicted that $H^X_g$ is a mock modular form of weight $1/2$ for $\Gamma_0(o(g))$, with shadow given by a certain vector-valued unary theta function $S^X_g$ (cf. (\ref{eqn:mcktht-SXg})), and specified polar parts at the cusps of $\Gamma_0(o(g))$. In more detail, $H^X_g$ should have the same polar parts as $H^X:=H^X_e$ at the infinite cusp of $\Gamma_0(o(g))$, but should have vanishing polar parts at any non-infinite cusps. In practice, this amounts to the statement that we should have \begin{gather} H^X_{g,r}(\tau)=\begin{cases} \mp 2q^{-1/120}+O(q^{119/120}),&\text{ if $r=\pm 1,\pm11,\pm19,\pm29\pmod{60}$,}\\ O(1),&\text{ otherwise,} \end{cases} \end{gather} for $q=e^{2\pi \ii\tau}$, and all components of $H^X_g(\tau)$ should remain bounded as $\tau\to 0$, if $g\neq e$. (For if $g\neq e$ then $o(g)=2$ or $o(g)=3$, and then $\Gamma_0(o(g))$ has only one cusp other than the infinite one, and this is the cusp represented by $0$.) Our main result is the following, where the functions $T^X_g$ are defined in \S\ref{sec:mcktht:um} (cf. (\ref{eqn:mcktht-TXg})) in terms of traces of operators on canonically-twisted modules for $V^X$. \begin{thm}\label{thm:intro-maintheorem} Let $g\in G^X$. If $o(g)\neq 3$ then $2T^X_{g}$ is the Fourier expansion of the unique vector-valued mock modular form of weight $1/2$ for $\Gamma_0(o(g))$ whose shadow is $S^X_g$, and whose polar parts coincide with those of $H^X_g$. 
If $o(g)=3$ then $2T^X_g$ is the Fourier expansion of the unique vector-valued modular form of weight $1/2$ for $\Gamma_0(3)$ which has the multiplier system $\rho_{3|3}\overline{\sigma^X}$, and polar parts coinciding with those of $H^X_g$. \end{thm} Here $\sigma^X:\SL_2(\ZZ)\to\GL_{60}(\CC)$ denotes the multiplier system of $S^X:=S^X_e$ (cf. (\ref{eqn:mcktht:um-sigmaX})), and $\rho_{3|3}:\Gamma_0(3)\to \CC^\times$ is defined in (\ref{eqn:mcktht:um-rho33}). Armed with Theorem \ref{thm:intro-maintheorem}, we may now define the $H^X_g$ concretely and explicitly, for $g\in G^X$, by setting \begin{gather}\label{eqn:intro-HXg} H^X_g(\tau):=2T^X_g(\tau), \end{gather} where $T^X_g(\tau)$ denotes the function obtained by substituting $e^{2\pi \ii \tau}$ for $q$ in the series expression (\ref{eqn:mcktht-TXg}) for $T^X_g$. Expressions for the components of $H^X_g$ are given in \S5.4 of \cite{MUM}, in terms of fifth order mock theta functions of Ramanujan, for the cases that $o(g)=1$ and $o(g)=2$, but it is not verified there that these prescriptions define mock modular forms with the specified shadows. Our work confirms these statements, as the following theorem demonstrates. \begin{thm}\label{thm:intro-rammcktht} We have the following identities. \begin{gather} H^{X}_{1A,r}(\tau) =\begin{cases}\label{eqn:intro-rammcktht1} \pm 2q^{-1/120} \left( \chi_0(q) - 2 \right),&\text{ if $r=\pm 1,\pm 11,\pm 19,\pm 29$,} \\ \pm 2q^{71/120} \chi_1(q), &\text{ if $r=\pm 7,\pm 13,\pm 17,\pm 27$.} \end{cases}\\ H^{X}_{2A,r}(\tau) =\begin{cases}\label{eqn:intro-rammcktht2} \mp 2 q^{-1/120} \phi_0(-q), &\text{ if $r=\pm 1,\pm 11,\pm 19,\pm 29$,} \\ \pm 2 q^{-49/120} \phi_1(-q), &\text{ if $r=\pm 7,\pm 13,\pm 17,\pm 27$.} \end{cases} \end{gather} \end{thm} The fifth order mock theta functions $\phi_0$ and $\phi_1$ were defined by Ramanujan (also in his last letter to Hardy), by setting \begin{gather} \begin{split}\label{eqn:intro-phi01} \phi_0(q)&:=\sum_{n\geq 0} q^{n^2} {(-q;q^2)_n},\\ \phi_1(q)&:=\sum_{n\geq 0} q^{(n+1)^2} {(-q;q^2)_n}. \end{split} \end{gather} The identities (\ref{eqn:intro-rammcktht1}) follow immediately from Theorem \ref{thm:intro-maintheorem}, since the $V^X$-modules used to define the $T^X_g$ have been constructed specifically so as to make Zwegers' identity (\ref{eqn:intro:zwegers}) manifest. By contrast, the $o(g)=2$ case of Theorem \ref{thm:intro-rammcktht} requires some work, since the expressions we obtain naturally from our construction of $T^X_g$ do not obviously coincide with (\ref{eqn:intro-rammcktht2}). Thus the proof of Theorem \ref{thm:intro-rammcktht} entails non-trivial $q$-series identities which may be of independent interest. \begin{cor}\label{cor:intro-qseriesid} We have \begin{gather} \begin{split}\label{eqn:intro-qseriesid1} &\left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right)_{k=m \pmod{2}} (-1)^m q^{k^2/2+m^2/2+4km+k/2+3m/2} \\ &\qquad= {\prod_{n>0} (1+q^n)} \left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right) (-1)^{k+m} q^{3 k^2+m^2/2 +4km+k+m/2}, \end{split}\\ \begin{split}\label{eqn:intro-qseriesid7} &\left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right)_{k=m \pmod{2}} (-1)^m q^{k^2/2+m^2/2+4km+3k/2+5m/2} \\ &\qquad= {\prod_{n>0} (1+q^n)} \left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right) (-1)^{k+m} q^{3 k^2+m^2/2 +4km+3k+3m/2}. \end{split} \end{gather} \end{cor} The reader who is familiar with modularity results on trace functions attached to vertex operator algebras (cf. \cite{Zhu_ModInv,Dong2000,MR2046807}) and super vertex operator algebras (cf.
\cite{DonZha_MdlrOrbVOSA}) may find it surprising that the functions we construct are (generally) mock modular, rather than modular, and have weight $1/2$, rather than weight $0$. In light of Zwegers' work \cite{Zwegers,MR2558702}, it is clear that we can obtain trace functions with mock modular behavior by considering vertex algebras constructed according to the usual lattice vertex algebra construction, but with a cone (or union of cones, cf. \S\ref{sec:va:cva}) taking on the role usually played by a lattice. A suitably chosen cone is the main ingredient for our construction of $V^X$. A general procedure for constructing super vertex operator algebras from cones in arbitrary signature is formalized in Theorem \ref{thm:va:cva-VD}. Note however that the cone vertex algebra construction does not, on its own, naturally give rise to trace functions with weight $1/2$. For this we introduce a single ``free fermion'' to the cone vertex algebra that we use to construct $V^X$, and we insert the zero mode (i.e. $L(0)$-degree preserving component) of the canonically-twisted vertex operator attached to a generator when we compute graded traces on canonically-twisted modules for $V^X$. In practice, this has the effect of multiplying the cone vertex algebra trace functions by $\eta(\tau):=q^{1/24}\prod_{n>0}(1-q^n)$. We remark that this technique may be profitably applied to other situations. For example, it is known (cf. e.g. \cite{MR1650637}) that the moonshine module $V^\natural$, when regarded as a module for the Virasoro algebra, is a direct sum of modules $L(h,24)$, for $h$ ranging over non-negative integers, satisfying \begin{gather} \operatorname{{tr}}_{L(h,24)}q^{L(0)-c/24}=\begin{cases} (1-q)q^{-23/24}\eta(\tau)^{-1}&\text{ for $h=0$,}\\ q^{h-23/24}\eta(\tau)^{-1}&\text{ for $h>0$,} \end{cases} \end{gather} where $c=24$. Also, the multiplicity of $L(0,24)$ is $1$, and the multiplicity of $L(1,24)$ is $0$. Consequently, the weight $1/2$ modular form $\eta(\tau)J(\tau)$, with $J(\tau)=q^{-1}+O(q)$ the (so normalized) elliptic modular invariant, is almost the generating function of the dimensions of the homogeneous spaces of Virasoro highest weight vectors in $V^\natural$. Indeed, the actual generating function is just $q^{1/24}\eta(\tau)J(\tau)+1$. Certainly $\eta(\tau)J(\tau)$ has nicer modular properties than the Virasoro highest weight generating function of $V^\natural$, and moreover, an even more striking connection to the monster, as four of the dimensions of non-trivial irreducible representations for the monster appear as coefficients: \begin{gather}\label{eqn:intro-fourans} \eta(\tau)J(\tau)=\cdots+196883q^{25/24}+21296876q^{49/24}+842609326q^{73/24}+19360062527q^{97/24}+\cdots \end{gather} (cf. p.220 of \cite{atlas}). This function $\eta(\tau)J(\tau)$ can be obtained naturally as a trace function on a canonically-twisted module for a super vertex operator algebra. For if we take $V$ to be the tensor product of $V^\natural$ with the super vertex operator algebra obtained by applying the Clifford module construction to a one-dimensional vector space (see \S\ref{sec:va:cliffmod} for details), then, choosing an irreducible canonically-twisted module $V_{\rm tw}$ for $V$, and denoting by $p(0)$ the coefficient of $z^{-1}$ in the canonically-twisted vertex operator attached to a suitably scaled element $p\in V$ with $L(0)p=\frac12 p$, we have \begin{gather}\label{eqn:intro-VetaJ} \operatorname{{tr}}_{V_{\rm tw}}p(0)q^{L(0)-c/24}=\eta(\tau)J(\tau), \end{gather} where now $c=49/2$.
(See \S\ref{sec:va:cliffmod} for more detail.) The importance of trace functions such as (\ref{eqn:intro-VetaJ}) within the broader context of modularity for super vertex operator algebras is analyzed in detail in \cite{MR3077918}. (See also \cite{MR3205090}.) The organization of the paper is as follows. In \S\ref{sec:va} we recall some familiar constructions from vertex algebra and use these to construct the super vertex operator algebra $V^X$, and its canonically-twisted modules $V^{X,\pm}_{{\rm tw},a}$, which play the commanding role in this work. We recall the lattice construction of super vertex algebras in \S\ref{sec:va:latva}, modules for lattice super vertex algebras in \S\ref{sec:va:latvamod}, and the Clifford module super vertex algebra construction in \S\ref{sec:va:cliffmod}. New material appears in \S\ref{sec:va:cva}, where we attach a super vertex operator algebra to a cone in an indefinite lattice. Using this, we formulate the construction of $V^X$ and the $V^{X,\pm}_{{\rm tw},a}$ in \S\ref{sec:va:cnstn}. We also equip these spaces with $G^X$-module structure in \S\ref{sec:va:cnstn}, and compute explicit expressions (cf. Proposition \ref{prop:va:cnstn-tracefnexpressions}) for the graded traces of elements of $G^X$. In \S\ref{sec:mcktht} our focus moves from representation theory to number theory, as we seek to determine the properties of the graded traces arising from the action of $G^X$ on the $V^\pm_{{\rm tw},a}$. We recall the relationship between mock modular forms and harmonic Maass forms in \S\ref{sec:mcktht:maass}, and we recall some results on Zwegers' indefinite theta series in \S\ref{sec:mcktht:indtht}. The proofs of our main results, Theorems \ref{thm:intro-maintheorem} and \ref{thm:intro-rammcktht}, are the content of \S\ref{sec:mcktht:um}. We give tables with the first few coefficients of the $H^X_g$ in \S\ref{sec:coeffs}. We frequently employ the notational convention $e(x):=e^{2\pi i x}$. \section{Vertex Algebra}\label{sec:va} This section begins with a review of the lattice (super) vertex algebra construction in \S\ref{sec:va:latva}, and the natural generalization of this which defines lattice vertex algebra modules in \S\ref{sec:va:latvamod}. We review the special case of the Clifford module super vertex algebra construction we require in \S\ref{sec:va:cliffmod}. We introduce cone vertex algebras in \S\ref{sec:va:cva}, and put all of the preceding material together for the construction of $V^X$, and its canonically-twisted modules, in \S\ref{sec:va:cnstn}. \subsection{Lattice Vertex Algebra}\label{sec:va:latva} We briefly recall, following \cite{Bor_PNAS,FLM}, the standard construction which associates a super vertex algebra $V_L$ to a central extension of an integral lattice $L$. We also employ \cite{MR2082709} as a reference. Set $\gt{h}:=L\otimes_{\ZZ}\CC$, and extend the bilinear form on $L$ to a symmetric $\CC$-bilinear form on $\gt{h}$ in the natural way. Set $\hat{\gt{h}}:=\gt{h}[t,t^{-1}]\oplus \CC {\bf c}$, for $t$ a formal variable, and define a Lie algebra structure on $\hat{\gt{h}}$ by declaring that ${\bf c}$ is central, and $[u\otimes t^m,v\otimes t^n]=m\lab u,v\rab\delta_{m+n,0}\,{\bf c}$ for $u,v\in\gt{h}$ and $m,n\in\ZZ$. We follow tradition and write $u(m)$ as a shorthand for $u\otimes t^m$. The Lie algebra $\hat{\gt{h}}$ has a triangular decomposition $\hat{\gt{h}}=\hat{\gt{h}}^-\oplus \hat{\gt{h}}^0\oplus \hat{\gt{h}}^+$ where $\hat{\gt{h}}^{\pm}:=\gt{h}[t^{\pm 1}]t^{\pm 1}$ and $\hat{\gt{h}}^0:=\gt{h}\oplus \CC{\bf c}$. 
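Before proceeding, we record a standard counting fact that is one source of the $(q;q)_\infty$-type prefactors appearing in the trace formulas of \S\ref{sec:va:cnstn}: the symmetric algebra $S(\hat{\gt{h}}^-)$ (which will underlie $V_L$ below) has graded dimension $\prod_{n>0}(1-q^n)^{-d}$ when $\dim\gt{h}=d$, since its monomials in the modes $u_i(-n)$ correspond to partitions with $d$ colours. A small sketch (ours) verifying the first few coefficients for $d=3$:

```python
def fock_dims(d, N):
    # dims[m] = number of degree-m monomials in the modes u_i(-n), i = 1..d, n >= 1,
    # i.e. partitions of m into parts of d colours; Euler-product style recursion.
    dims = [1] + [0] * N
    for n in range(1, N + 1):
        for _ in range(d):            # one pass per colour of the mode u_i(-n)
            for m in range(n, N + 1):
                dims[m] += dims[m - n]
    return dims

print(fock_dims(3, 8))  # [1, 3, 9, 22, 51, 108, 221, 429, 810]
```

The printed values agree with the expansion of $\prod_{n>0}(1-q^n)^{-3}$.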
We require a bilinear function $b:L\times L\to \ZZ/2\ZZ$ with the property that $b(\lambda,\mu)+b(\mu,\lambda)\equiv\lab \lambda,\mu\rab+\lab\lambda,\lambda\rab\lab\mu,\mu\rab \pmod{2}$. If $\{\varepsilon_i\}$ is an ordered $\ZZ$-basis for $L$ then we may take $b$ to be the unique such function for which \begin{gather} b(\le_i,\le_j)=\begin{cases} 0&\text{when $i\leq j$,}\\ 1&\text{when $i>j$.} \end{cases} \end{gather} Set $\beta(\lambda,\mu):=(-1)^{b(\lambda,\mu)}$, and define $\CC_{\beta}[L]$ to be the ring generated by symbols ${\bf v}_{\lambda}$ for $\lambda\in L$ subject to the relations ${\bf v}_{\lambda}{\bf v}_{\mu}=\beta(\lambda,\mu){\bf v}_{\lambda+\mu}$. \begin{rmk} The algebra $\CC_{\beta}[L]$ is isomorphic to the quotient $\CC[\hat{L}]/\lab \kappa+1\rab$, where $\hat{L}$ is the unique (up to isomorphism) central extension of $L$ by $\lab \kappa\rab\simeq\ZZ/2\ZZ$ such that \begin{gather} aa'=\kappa^{\lab \bar{a},\bar{a}'\rab+\lab \bar{a},\bar{a}\rab\lab \bar{a}',\bar{a}'\rab}a'a, \end{gather} for $a,a'\in \hat{L}$ lying above $\bar{a},\bar{a}'\in L$, respectively. (Cf. \cite{FLM}.) \end{rmk} Now define a $\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+$-module structure on $\CC_{\beta}[L]$ by setting ${\bf c}{\bf v}_{\lambda}={\bf v}_{\lambda}$ and $u(m){\bf v}_{\lambda}= \delta_{m,0} \lab u,\lambda\rab{\bf v}_{\lambda}$ for $u\in \gt{h}$ and $\lambda\in L$, and define $V_L$ to be the induced $\hat{\gt{h}}$-module, \begin{gather} V_L:=U(\hat{\gt{h}})\otimes_{U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)}\CC_{\beta}[L]. \end{gather} Then, according to \S5.4.2 of \cite{MR2082709}, for example, $V_L$ admits a unique super vertex algebra structure $Y:V_L\to (\operatorname{End} V_L)[[z,z^{-1}]]$ such that $1\otimes {\bf v}_{0}$ is the vacuum vector, \begin{gather}\label{eqn:va:latva-Yu} Y(u(-1)\otimes {\bf v}_{0},z)=\sum_{n\in\ZZ} u(n)z^{-n-1} \end{gather} for $u\in \gt{h}$, and \begin{gather}\label{eqn:va:latva-Ylambda} Y(1\otimes {\bf v}_{\lambda},z)= \exp\left(-\sum_{n<0}\frac{\lambda(n)}{n}z^{-n}\right) \exp\left(-\sum_{n>0}\frac{\lambda(n)}{n}z^{-n}\right) {\bf v}_{\lambda}z^{\lambda(0)} \end{gather} for $\lambda \in L$. Here ${\bf v}_{\lambda}$ denotes the operator $p\otimes {\bf v}_{\mu}\mapsto \beta(\lambda,\mu)p\otimes {\bf v}_{\lambda+\mu}$, and $z^{\lambda(0)}(p\otimes {\bf v}_{\mu}):=(p\otimes {\bf v}_{\mu}) z^{\lab \lambda,\mu\rab}$. Note that we have \begin{gather} V_L\simeq S(\hat{\gt{h}}^-)\otimes \CC[L] \end{gather} as modules for $\hat{\gt{h}}^-\oplus \hat{\gt{h}}^0$. Given that $\{\le_i\}$ is a basis for $L$, choose $\le_i'\in L\otimes_{\ZZ}\QQ$ such that $\lab \le_i',\le_j\rab=\delta_{i,j}$, and define \begin{gather} \omega:=\frac{1}{2}\sum_{i}\le_i'(-1)\le_i(-1)\otimes {\bf v}_0. \end{gather} Then $\omega$ is a conformal element for $V_L$ with central charge equal to the rank of $L$. If we define $L(n)\in \operatorname{End} V_L$ so that $Y(\omega,z)=\sum_{n\in \ZZ}L(n)z^{-n-2}$ then $[L(0),v(n)]=-nv(n)$ and $1\otimes {\bf v}_{\lambda}$ is an eigenvector for $L(0)$ with eigenvalue $\lab \lambda,\lambda\rab/2$. Note that we do not assume that the bilinear form on $L$ is positive-definite. Vectors of non-positive length in $L$ will give rise to infinite-dimensional eigenspaces for $L(0)$, so in general $(V_L,Y,{\bf v}_0,\omega)$ is a conformal super vertex algebra, but not a super vertex operator algebra. Automorphisms of $L$ can be lifted to automorphisms of $V_L$.
For suppose we are given $g\in \operatorname{Aut}(L)$ and a function $\alpha:L\to \{\pm 1\}$ satisfying \begin{gather}\label{eqn:va:latva-alphacond} \alpha(\lambda+\mu)\beta(\lambda,\mu)=\alpha(\lambda)\alpha(\mu)\beta(g\lambda,g\mu) \end{gather} for $\lambda,\mu\in L$. Then we obtain an automorphism $\hat{g}\in \operatorname{Aut}(V_L)$ by setting \begin{gather}\label{eqn:va:latva-hatg} \hat{g}(p\otimes {\bf v}_\lambda):=\alpha(\lambda) (g\cdot p)\otimes{\bf v}_{g\lambda}, \end{gather} for $p\in S(\hat{\gt{h}}^-)$ and $\lambda\in L$, where $g\cdot p$ denotes the natural extension of the action of $\operatorname{Aut}(L)$ on $\gt{h}=L\otimes_{\ZZ}\CC$ to $S(\hat{\gt{h}}^-)$, determined by $g\cdot u(m)=(gu)(m)$ for $u\in\gt{h}$. For example, take $g$ to be the {\em Kummer involution} of $L$, given by $g\lambda=-\lambda$ for $\lambda\in L$. Then $\beta(\lambda,\mu)=\beta(-\lambda,-\mu)$ for all $\lambda,\mu\in L$, since $\beta$ is bi-multiplicative, so we may take $\alpha\equiv 1$ in (\ref{eqn:va:latva-alphacond}). We denote the corresponding automorphism of $V_L$ by $\theta$, and note that the action of $\theta$ on $V_L$ is given explicitly as follows. If $p\in S(\hat{\gt{h}}^-)$ is a homogeneous polynomial of degree $k$ in variables $u_i(-m_i)$, where $u_i\in\gt{h}$ and the $m_i$ are positive integers, then \begin{gather}\label{eqn:va:latva-theta} \theta(p\otimes {\bf v}_{\lambda})=(-1)^k p\otimes {\bf v}_{-\lambda}. \end{gather} \subsection{Lattice Vertex Algebra Modules}\label{sec:va:latvamod} Let $\gamma$ be an element of the dual lattice $L^*:=\{\lambda\in L\otimes_{\ZZ}\QQ\mid \lab \lambda,L\rab\subset\ZZ\}$. Define $\CC_{\beta}[L+\gamma]$ to be the complex vector space generated by symbols ${\bf v}_{\mu+\gamma}$ for $\mu\in L$, regarded as a $\CC_{\beta}[L]$-module according to the rule ${\bf v}_{\lambda}\cdot{\bf v}_{\mu+\gamma}=\beta(\lambda,\mu){\bf v}_{\lambda+\mu+\gamma}$. Equip $\CC_{\beta}[L+\gamma]$ with a $U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)$-module structure much as before, by letting ${\bf c}{\bf v}_{\mu+\gamma}={\bf v}_{\mu+\gamma}$ and $u(m){\bf v}_{\mu+\gamma}=\delta_{m,0}\lab u,\mu+\gamma\rab {\bf v}_{\mu+\gamma}$ for $u\in\gt{h}$ and $\mu\in L$. Let $V_{L+\gamma}$ be the $\hat{\gt{h}}$-module defined by setting $V_{L+\gamma}:=U(\hat{\gt{h}})\otimes_{U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)}\CC_{\beta}[L+\gamma]$. Then we have an isomorphism \begin{gather} V_{L+\gamma}\simeq S(\hat{\gt{h}}^-)\otimes \CC[L+\gamma] \end{gather} of modules for $\hat{\gt{h}}^-$. Define vertex operators $Y_{\gamma}:V_L\to(\operatorname{End} V_{L+\gamma})[[z,z^{-1}]]$ using the same formulas as before but interpret the operator ${\bf v}_{\lambda}$ in (\ref{eqn:va:latva-Ylambda}) as ${\bf v}_{\lambda}(p\otimes {\bf v}_{\mu+\gamma})=\beta(\lambda,\mu)p\otimes {\bf v}_{\lambda+\mu+\gamma}$, according to the $\CC_{\beta}[L]$-module structure on $\CC_{\beta}[L+\gamma]$ prescribed above. Note that the construction of $V_{L+\gamma}$ depends upon the choice of coset representative $\gamma\in L^*$, so that $V_{L+\gamma}$ might be different from $V_{L+\gamma'}$, as a $\CC_{\beta}[L]$-module, for example, even when $L+\gamma=L+\gamma'$, but different choices of coset representative are guaranteed to define isomorphic $V_L$-modules according to \cite{MR1245855}. The construction just described may be generalized so as to realize certain twisted modules for $V_L$. We give a brief description here, and refer to \S3 of \cite{MR1284796} for more details. Choose a vector $h\in\gt{h}$.
Then for $p\in S(\hat{\gt{h}}^-)$ and $\lambda\in L$ we have $h(0)p\otimes {\bf v}_{\lambda}=\lab h,\lambda\rab p\otimes{\bf v}_{\lambda}$. So if $h$ is chosen to lie in $L\otimes_{\ZZ}\QQ$ then \begin{gather}\label{eqn:va:latvamod-sigmah} g_h:=e^{2\pi i h(0)} \end{gather} is a finite order automorphism of $V_L$, which acts as multiplication by $e^{2\pi i \lab h,\lambda\rab}$ on the vector $p\otimes {\bf v}_{\lambda}$. The kernel of the map $L\otimes_{\ZZ}\QQ\to \operatorname{Aut}(V_L)$ given by $h\mapsto g_h$ is exactly $L^*$, so $(L\otimes_{\ZZ}\QQ)/L^*$ is naturally a group of automorphisms of $V_L$. We may construct all the corresponding twisted modules for $V_L$ explicitly. To do this choose an $h$ in $L\otimes_{\ZZ}\QQ$ and let $\CC[L+h]$ be the complex vector space generated by symbols ${\bf v}_{\lambda+h}$ for $\lambda\in L$. Just as before, we define a $U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)$-module structure on $\CC[L+h]$ by setting ${\bf c}{\bf v}_{\mu}={\bf v}_{\mu}$ and $u(m){\bf v}_{\mu}=\delta_{m,0}\lab u,\mu\rab {\bf v}_{\mu}$ for $u\in\gt{h}$ and $\mu\in L+h$. Let $V_{L+h}$ be the $\hat{\gt{h}}$-module defined by setting $V_{L+h}:=U(\hat{\gt{h}})\otimes_{U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)}\CC[L+h]$, so that we have an isomorphism \begin{gather} V_{L+h}\simeq S(\hat{\gt{h}}^-)\otimes \CC[L+h] \end{gather} of modules for $\hat{\gt{h}}^-$. Taking $M$ to be a positive integer such that $Mh\in L^*$, define vertex operators $Y_h:V_L\to(\operatorname{End} V_{L+h})[[z^{1/M},z^{-1/M}]]$ using the same formulas as before but interpret the operator ${\bf v}_{\lambda}$ in (\ref{eqn:va:latva-Ylambda}) as ${\bf v}_{\lambda}(p\otimes {\bf v}_{\mu+h})=\beta(\lambda,\mu)p\otimes {\bf v}_{\lambda+\mu+h}$. Then $V_{L+h}=(V_{L+h},Y_{h})$ is an irreducible $g_h$-twisted module for $V_L$, and any $g_h$-twisted module for $V_L$ is of the form $V_{L+h'}$ for some $h'\in L\otimes_{\ZZ}\QQ$ that is congruent to $h$ modulo $L^*$. Note that the action of $L\otimes_{\ZZ}\QQ$ on $V_L$, given by $h\mapsto g_{h}$, extends to the $g_{h'}$-twisted module $V_{L+h'}$, for $h'\in L\otimes_{\ZZ}\QQ$. For given $h,h'\in L\otimes_{\ZZ}\QQ$, we may define \begin{gather}\label{eqn:va:latvamod-sigmahtw} g_{h}(p\otimes {\bf v}_{\lambda+h'}):=e^{2\pi i \lab h,\lambda\rab}(p\otimes {\bf v}_{\lambda+h'}) \end{gather} for $p\in S(\hat{\gt{h}}^-)$ and $\lambda\in L$. Then we have $g_h Y_{h'}(a,z)b=Y_{h'}(g_h a,z)g_h b$ for $a\in V_L$ and $b\in V_{L+h'}$. \subsection{Clifford Module Vertex Algebra}\label{sec:va:cliffmod} We also require the standard procedure---see \cite{MR1123265} for a general treatment, and \cite{MR1372717} for the special, one-dimensional case we consider here---which attaches a Clifford module super vertex operator algebra to a vector space equipped with a symmetric bilinear form. So let $\gt{p}$ be a one dimensional complex vector space equipped with a non-degenerate symmetric bilinear form $\lab\cdot\,,\cdot\rab$. Set $\hat{\gt{p}}=\gt{p}[t,t^{-1}]t^{1/2}$ and write $a(r)$ for $a\otimes t^r$. Extend the bilinear form from $\gt{p}$ to $\hat{\gt{p}}$ by requiring that $\lab a(r),b(s)\rab=\lab a,b\rab\delta_{r+s,0}$. Set $\hat{\gt{p}}^{\pm}=\gt{p}[t^{\pm 1}]t^{\pm 1/2}$, write $\lab\hat{\gt{p}}^{\pm}\rab$ for the sub algebra of $\operatorname{Cliff}(\hat{\gt{p}})$ generated by $\hat{\gt{p}}^{\pm}$ and define a one-dimensional $\lab \hat{\gt{p}}^+\rab$-module $\CC{\bf v}$ by requiring that ${\bf 1}{\bf v}={\bf v}$ and $a(r){\bf v}=0$ for $a\in \gt{p}$ and $r>0$. 
Here $\operatorname{Cliff}(\hat{\gt{p}})$ denotes the {\em Clifford algebra} attached to $\hat{\gt{p}}$, which we take to be the quotient of the tensor algebra $T(\hat{\gt{p}})=\CC{\bf 1}\oplus \hat{\gt{p}}\oplus \hat{\gt{p}}^{\otimes 2}\oplus \cdots$ by the ideal generated by expressions of the form $u\otimes u+\frac{1}{2}\lab u,u\rab{\bf 1}$ for $u\in \hat{\gt{p}}$. Observe that the induced $\operatorname{Cliff}(\hat{\gt{p}})$-module, $A(\gt{p})=\operatorname{Cliff}(\hat{\gt{p}})\otimes_{\lab \hat{\gt{p}}^+\rab}\CC{\bf v}$, is isomorphic to $\bigwedge(\hat{\gt{p}}^-){\bf v}$ as a $\lab \hat{\gt{p}}^-\rab$-module. We obtain a super vertex algebra structure on $A(\gt{p})$ by setting \begin{gather} Y(a(-1/2){\bf v},z)=\sum_{n\in\ZZ}a(n+1/2)z^{-n-1} \end{gather} for $a\in \gt{p}$; the reconstruction theorem of \cite{MR2082709} ensures that this rule extends uniquely to a super vertex algebra structure $Y:A(\gt{p})\otimes A(\gt{p})\to A(\gt{p})((z))$ with $Y({\bf v},z)=\operatorname{Id}$. Let $p\in\gt{p}$ be such that $\lab p,p\rab=-2$. We obtain a super vertex operator algebra structure, with central charge $c=1/2$, by taking \begin{gather} \omega=\frac{1}{4}p(-3/2)p(-1/2){\bf v} \end{gather} to be the Virasoro element. To construct canonically-twisted modules for $A(\gt{p})$ set $\hat{\gt{p}}_{{\rm tw}}=\gt{p}[t,t^{-1}]$ and extend the bilinear form from $\gt{p}$ to $\hat{\gt{p}}_{{\rm tw}}$ as before by requiring that $\lab a(r),b(s)\rab=\lab a,b\rab\delta_{r+s,0}$. Set $\hat{\gt{p}}_{{\rm tw}}^{>}=\gt{p}[t]t$ and $\hat{\gt{p}}_{{\rm tw}}^{\leq}=\gt{p}[t^{-1}]$, and define a $1$-dimensional $\lab \hat{\gt{p}}_{{\rm tw}}^>\rab$-module $\CC{\bf v}_{{\rm tw}}$ by requiring, much as before, that ${\bf 1}{\bf v}_{{\rm tw}}={\bf v}_{{\rm tw}}$ and $a(r){\bf v}_{{\rm tw}}=0$ for $a\in \gt{p}$ and $r>0$. Then for the induced $\operatorname{Cliff}(\hat{\gt{p}}_{{\rm tw}})$-module, \begin{gather} A(\gt{p})_{{\rm tw}}:=\operatorname{Cliff}(\hat{\gt{p}}_{{\rm tw}})\otimes_{\lab \hat{\gt{p}}_{{\rm tw}}^>\rab}\CC{\bf v}_{{\rm tw}}, \end{gather} there is a unique linear map $Y_{{\rm tw}}:A(\gt{p})\otimes A(\gt{p})_{{\rm tw}}\to A(\gt{p})_{{\rm tw}}((z^{1/2}))$ such that \begin{gather}\label{eqn:Ytwu} Y_{{\rm tw}}(u(-1/2){\bf v},z)=\sum_{n\in \ZZ}u(n)z^{-n-1/2} \end{gather} for $u\in\gt{p}$, and $(A(\gt{p})_{{\rm tw}},Y_{{\rm tw}})$ is a canonically-twisted module for $A(\gt{p})$. Again one may use (a suitably modified formulation of) the reconstruction theorem of \cite{MR2082709} to see this (cf. \cite{MR2074176}). We refer to \cite{MR1372717} for a concrete and detailed description of $Y_{{\rm tw}}$. Note that $A(\gt{p})_{{\rm tw}}$ is isomorphic to $\bigwedge(\hat{\gt{p}}_{{\rm tw}}^{\leq}){\bf v}_{{\rm tw}}$ as a $\lab \hat{\gt{p}}_{{\rm tw}}^{\leq}\rab$-module. With $p\in\gt{p}$ as above, such that $\lab p,p\rab=-2$, we have $p(0)^2={\bf 1}$ in $\operatorname{Cliff}(\hat{\gt{p}}_{{\rm tw}})$. Set \begin{gather} {\bf v}_{\rm tw}^\pm:=({\bf 1}\pm p(0)){\bf v}_{\rm tw}, \end{gather} so that $p(0){\bf v}_{\rm tw}^\pm=\pm{\bf v}_{\rm tw}^\pm$. Then $A(\gt{p})_{\rm tw}=A(\gt{p})_{\rm tw}^+\oplus A(\gt{p})_{\rm tw}^-$ is a decomposition of $A(\gt{p})_{\rm tw}$ into irreducible canonically-twisted $A(\gt{p})$-modules, where $A(\gt{p})_{\rm tw}^\pm$ denotes the submodule of $A(\gt{p})_{\rm tw}$ generated by ${\bf v}_{\rm tw}^\pm$.
\begin{gather} A(\gt{p})_{{\rm tw}}^\pm:=\operatorname{Cliff}(\hat{\gt{p}}_{{\rm tw}})\otimes_{\lab \hat{\gt{p}}_{{\rm tw}}^>\rab}\CC{\bf v}_{{\rm tw}}^\pm \end{gather} From (\ref{eqn:Ytwu}) we see that the $L(0)$-degree preserving component of $Y_{\rm tw}(p(-1/2){\bf v},z)$ is $p(0)$. Computing the graded trace of $p(0)$ on $A(\gt{p})_{\rm tw}^{\pm}$, we find \begin{gather}\label{eqn:va:cliffmod-trpAtw} \operatorname{{tr}}_{A(\gt{p})_{\rm tw}^{\pm}}p(0)q^{L(0)-c/24}=\pm q^{1/24}\prod_{n>0}(1-q^n), \end{gather} where the factor $q^{1/24}$ appears because $L(0){\bf v}_{\rm tw}^{\pm}=\frac{1}{16}{\bf v}_{\rm tw}^{\pm}$ and $c=1/2$. \subsection{Cone Vertex Algebra}\label{sec:va:cva} Let $L$ be an integral lattice as before, and suppose $\{\le_i\}$ is a $\ZZ$-basis for $L$. Define $P$ to be the monoid of non-negative rational combinations of the chosen basis vectors $\le_i$, \begin{gather} P:=\left\{\sum_i \alpha_i\le_i\in L\otimes_{\ZZ}\QQ\mid \alpha_i\geq 0,\,\forall i\right\}, \end{gather} and define $N$ to be the semigroup of strictly negative rational combinations of the $\le_i$, \begin{gather} N:=\left\{\sum_i \alpha_i\le_i\in L\otimes_{\ZZ}\QQ\mid \alpha_i< 0,\,\forall i \right\}. \end{gather} Define $D:=P\cup N$ to be the union of $P$ and $N$. Our goal in this section is to attach a vertex algebra structure to the intersection $D\cap L$. For convenience we use the abbreviated notation $D(L):=D\cap L$, and more generally \begin{gather}\label{eqn:va:cva-DLgamma} D(L+\gamma):=D\cap (L+\gamma) \end{gather} for $\gamma\in L\otimes_{\ZZ}\QQ$. We interpret the notations $P(L+\gamma)$ and $N(L+\gamma)$ similarly. Given $K\subset L$ write $V_K$ for the $\hat{\gt{h}}$-submodule of $V_L$ generated by the ${\bf v}_{\lambda}$ for $\lambda\in K$, \begin{gather} V_K\simeq S(\hat{\gt{h}}^-)\otimes \CC[K]. \end{gather} Observe that if $K\subset L$ is closed under addition and contains $0$---i.e. if $K$ is a submonoid of $L$---then $V_K$ is a sub super vertex algebra of $V_L$, and $\omega$ is a conformal element for $V_K$. Furthermore, if $K'\subset L$ satisfies $K+K'\subset K'$ then the restriction of the vertex operators $a\otimes b\mapsto Y(a,z)b$ to $V_{K}\otimes V_{K'}<V_L\otimes V_L$ equips $V_{K'}$ with a module structure over $V_K$. So in particular, $V_{P(L)}$ (cf. (\ref{eqn:va:cva-DLgamma})) is a conformal super vertex algebra. If the basis $\{\le_i\}$ is chosen so that $P$ has no non-trivial vectors with non-positive length squared, then the eigenspaces for the action of $L(0)$ on $V_{P(L)}$ are finite-dimensional, the eigenvalues of $L(0)$ are contained in $\frac{1}{2}\ZZ$ and bounded from below, and thus $V_{P(L)}$ is a super vertex operator algebra. We will now show that the super vertex algebra structure on $V_{P(L)}$ extends naturally to $V_{D(L)}=V_{P(L)}\oplus V_{N(L)}$. For this we require a $V_{P(L)}$-module structure on $V_{N(L)}$, which we achieve by implementing the following standard method (cf. e.g. \S2 of \cite{MR1284796}). Suppose that $g$ is an automorphism of a super vertex algebra $V=(V,Y,{\bf v})$ and $g_M\in \GL(M)$ is a linear automorphism of a $V$-module $M=(M,Y_M)$ satisfying $g_MY_M(a,z)m=Y_M(ga,z)g_Mm$ for $a\in V$ and $m\in M$. Observe then that we obtain a new $V$-module structure $M^g:=(M,Y_M^g)$ on the vector space underlying $M$ by setting \begin{gather}\label{eqn:va:cva-Ymg} Y_M^g(a,z)m:=g_MY_M(a,z)g_M^{-1}m \end{gather} for $a\in V$ and $m\in M$. Indeed, we have $Y_M^g(a,z)=Y_M(ga,z)$.
Now take $M=V=V_L$ and $g=g_M=\theta$ in (\ref{eqn:va:cva-Ymg}), where $\theta\in \operatorname{Aut}(V_L)$ is the involution defined in \S\ref{sec:va:latva}, determined by requiring that $\theta(1\otimes {\bf v}_{\lambda})=1\otimes {\bf v}_{-\lambda}$ for $\lambda\in L$, and $\theta u(m)\theta^{-1}=-u(m)$ for $u\in \gt{h}$ (cf. (\ref{eqn:va:latva-theta})). Observe that $\theta$ maps $V_{N(L)}$ to $V_{(-N)\cap L}$ which is a subspace of $V_{P(L)}$. Since \begin{gather} P+(-N)=\{\lambda+\mu\mid \lambda\in P,\,\mu\in -N\} \end{gather} is a subset of $-N$, the space $V_{(-N)\cap L}$ is even a $V_{P(L)}$-submodule of $V_{P(L)}$, so we obtain a $V_{P(L)}$-module structure on $V_{N(L)}$ by restricting the map $a\otimes b\mapsto Y^{\theta}(a,z)b$ to $V_{P(L)}\otimes V_{N(L)}$. Note that $Y^{\theta}(a,z)b=\theta Y(a,z)\theta b=Y(\theta a,z)b$. For a vertex algebra structure on $V_{D(L)}$ we must also identify maps $V_{N(L)}\otimes V_{P(L)}\to V_{N(L)}((z))$ and $V_{N(L)}\otimes V_{N(L)}\to V_{P(L)}((z))$. For the first of these we use $\widetilde{Y}(a,z)b:=Y(a,z)\theta b$. For the second we set $\widetilde{Y}(a,z)b:=\theta Y(a,z)b=Y(\theta a, z)\theta b$. To summarize, we define a vertex operator correspondence $\widetilde{Y}:V_{D(L)}\otimes V_{D(L)}\to V_{D(L)}((z))$, by setting \begin{gather}\label{eqn:va:cva-vdvops} \widetilde{Y}(a,z)b :=\begin{cases} Y(a,z)b,&\text{ for $a,b\in V_{P(L)}$,}\\ Y(\theta a,z)b,&\text{ for $a\in V_{P(L)}$ and $b\in V_{N(L)}$,}\\ Y(a,z)\theta b,&\text{ for $a\in V_{N(L)}$ and $b\in V_{P(L)}$,}\\ \theta Y(a,z)b,&\text{ for $a,b\in V_{N(L)}$,}\\ \end{cases} \end{gather} where $Y$ denotes the usual vertex operator correspondence on $V_L$, determined by (\ref{eqn:va:latva-Yu}) and (\ref{eqn:va:latva-Ylambda}). \begin{thm}\label{thm:va:cva-VD} The four-tuple $(V_{D(L)},\widetilde{Y},{\bf v},\omega)$ is a conformal super vertex algebra. It is a super vertex operator algebra if $D$ has no non-trivial vectors of non-positive length squared. \end{thm} \begin{proof} The proof is a standard exercise in lattice vertex algebra computations. The fundamental reason that the construction works is the fact that we obtain a commutative monoid structure $\tilde{+}$ on $D$ when we define \begin{gather} \lambda\tilde{+}\mu :=\begin{cases} \lambda+\mu,&\text{ for $\lambda,\mu\in P$,}\\ -\lambda+\mu,&\text{ for $\lambda\in P$ and $\mu\in N$,}\\ \lambda-\mu,&\text{ for $\lambda\in N$ and $\mu\in P$,}\\ -\lambda-\mu,&\text{ for $\lambda,\mu\in N$.}\\ \end{cases} \end{gather} The remaining details are left to the reader. \end{proof} Observe that the decomposition $V_{D(L)}=V_{P(L)}\oplus V_{N(L)}$ determines a $\ZZ/2$-grading on $V_{D(L)}$. We call this the {\em sign grading}, and we define the {\em sign automorphism} of $V_{D(L)}$ to be the linear map $s:V_{D(L)}\to V_{D(L)}$ determined by setting \begin{gather}\label{eqn:va:cva-signaut} s(a):=\begin{cases} a,&\text{ when $a\in V_{P(L)}$,}\\ -a,&\text{ when $a\in V_{N(L)}$.} \end{cases} \end{gather} It follows easily from the definition (\ref{eqn:va:cva-vdvops}) of the super vertex algebra structure on $V_{D(L)}$ that $s$ is indeed an automorphism of $V_{D(L)}$. We can construct certain twisted (and untwisted) modules for $V_{D(L)}$, by suitably modifying the constructions recalled in \S\ref{sec:va:latvamod}.
Namely, for $h\in L\otimes_{\ZZ}\QQ$ take $V_{D(L+h)}$ to be the $\hat{\gt{h}}$-module defined by setting $V_{D(L+h)}:=U(\hat{\gt{h}})\otimes_{U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)}\CC[D(L+h)]$, where $\CC[D(L+h)]$ is the complex vector space generated by symbols ${\bf v}_{\mu}$ for $\mu\in D\cap(L+h)$, regarded as a $U(\hat{\gt{h}}^0\oplus \hat{\gt{h}}^+)$-module by setting ${\bf c}{\bf v}_{\mu}={\bf v}_{\mu}$ and $u(m){\bf v}_{\mu}=\delta_{m,0}\lab u,\mu\rab {\bf v}_{\mu}$ for $u\in\gt{h}$ and $\mu\in D\cap(L+h)$. As usual, we have an isomorphism \begin{gather} V_{D(L+h)}\simeq S(\hat{\gt{h}}^-)\otimes \CC[D(L+h)] \end{gather} of modules for $\hat{\gt{h}}^-$. Taking $M$ to be a positive integer such that $Mh\in L^*$, define vertex operators $\widetilde{Y}_h:V_{D(L)}\to(\operatorname{End} V_{D(L+h)})[[z^{1/M},z^{-1/M}]]$ using (\ref{eqn:va:latva-Yu}), (\ref{eqn:va:latva-Ylambda}) and (\ref{eqn:va:cva-vdvops}), but interpret the operator ${\bf v}_{\lambda}$ in (\ref{eqn:va:latva-Ylambda}) as ${\bf v}_{\lambda}(p\otimes {\bf v}_{\mu+h})=\beta(\lambda,\mu)p\otimes {\bf v}_{\lambda+\mu+h}$. \begin{thm}\label{thm:va:cva-VDh} Let $h\in L\otimes_{\ZZ}\QQ$. Then the pair $(V_{D(L+h)},\widetilde{Y}_h)$ is a $g_h$-twisted module for $V_{D(L)}$. In particular, $(V_{D(L+h)},\widetilde{Y}_h)$ is a $V_{D(L)}$-module when $h\in L^*$. \end{thm} \subsection{Main Construction}\label{sec:va:cnstn} We now take $L=\ZZ \le_1+\ZZ \le_2+\ZZ \le_3$ to be the rank $3$ lattice with bilinear form $\lab\cdot\,,\cdot\rab$ determined by \begin{gather}\label{eqn:va:cnstn-bilinearform} \lab \le_i,\le_j\rab=2-\delta_{i,j}. \end{gather} Then $L$ is an integral, non-even lattice with signature $(1,2)$. Set $\rho:=(\le_1+\le_2+\le_3)/5$ and observe that \begin{gather}\label{eqn:va:cnstn-iplambdarho} \lab \lambda,\rho\rab=k+l+m \end{gather} for $\lambda=k\le_1+l\le_2+m\le_3$, so $\rho$ belongs to the dual $L^*$ of $L$. In fact, $L^*/L$ is cyclic of order $5$, and $\rho+L$ is a generator. If we set \begin{gather}\label{eqn:va:cnstn-Lj} L^j:=\{\lambda\in L\mid \lab\lambda,\rho\rab=j\jmod{2}\}, \end{gather} then $L=L^0\cup L^1$ is the decomposition of $L$ into its even and odd parts, by which we mean that $\lab\lambda,\lambda\rab$ is even or odd according as $\lambda$ lies in $L^0$ or $L^1$. Let $V_L$ be the conformal super vertex algebra attached to $L$ via the construction of \S\ref{sec:va:latva}, where the bilinear function $b:L\times L\to \ZZ/2\ZZ$ is determined by setting \begin{gather}\label{eqn:va:cnstn-b} b(\le_i,\le_j):=\begin{cases} 0&\text{when $i\leq j$,}\\ 1&\text{when $i>j$.} \end{cases} \end{gather} There is an obvious action of the symmetric group $S_3$ on $L$, by permutations of the basis vectors $\le_i$. We lift this action to $V_L$ in the following way. Recall from \S\ref{sec:va:latva} that a lift $\hat{g}\in \operatorname{Aut}(V_L)$ of an automorphism $g\in \operatorname{Aut}(L)$ is determined by a choice of function $\alpha:L\to \{\pm 1\}$ satisfying (\ref{eqn:va:latva-alphacond}). Observe that any such automorphism $\hat{g}$ restricts to an automorphism of $V_{D(L)}$, so long as $g$ preserves the subset $D(L)\subset L$.
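Before constructing the lifts explicitly, we pause to record a quick numerical check - ours, using numpy, and not part of the original argument - of the elementary facts about $L$ stated above:

```python
import numpy as np

# Gram matrix <e_i, e_j> = 2 - delta_{ij} of the rank 3 lattice L.
G = np.array([[2 - (i == j) for j in range(3)] for i in range(3)])

print(np.linalg.eigvalsh(G))          # eigenvalues -1, -1, 5: signature (1,2)
print(round(np.linalg.det(G)))        # discriminant 5, consistent with |L*/L| = 5

rho = np.ones(3) / 5                  # rho = (e_1 + e_2 + e_3)/5 in e-coordinates
print(rho @ G @ rho)                  # <rho, rho> = 3/5
print(np.array([1, 2, 3]) @ G @ rho)  # <lambda, rho> = k + l + m = 6 for (k,l,m) = (1,2,3)
```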
Taking $\mu=k\lambda$ in (\ref{eqn:va:latva-alphacond}) we have $\alpha((k+1)\lambda)=\alpha(\lambda)\alpha(k\lambda)\beta(\lambda,\lambda)^k\beta(g\lambda, g\lambda)^k$, since $\beta$ is bi-multiplicative, so given the prescription (\ref{eqn:va:cnstn-b}) we see that $\beta(\lambda,\lambda)=(-1)^{k_1k_2+k_2k_3+k_3k_1}$ for $\lambda=k_1\le_1+k_2\le_2+k_3\le_3$, which is invariant under the action of $S_3$. So actually $\beta(\lambda,\lambda)=\beta(g\lambda,g\lambda)$, and thus we may assume $\alpha(k\lambda)=\alpha(\lambda)^k$ in (\ref{eqn:va:latva-alphacond}) for $\lambda\in L$ and $k$ a positive integer, when $g$ acts by permuting the $\le_i$. Observe also that for $\lambda,\mu,\nu\in L$ we have \begin{gather} \alpha(\lambda+\mu+\nu)\beta(\lambda,\mu)\beta(\mu,\nu)\beta(\nu,\lambda) =\alpha(\lambda)\alpha(\mu)\alpha(\nu)\beta(g\lambda,g\mu)\beta(g\mu,g\nu)\beta(g\nu,g\lambda) \end{gather} according to (\ref{eqn:va:latva-alphacond}), which specializes to \begin{gather} \begin{split}\label{eqn:va:cnstn-alphavareps} &\alpha(\lambda)\beta(\le_1,\le_2)^{k_1k_2}\beta(\le_2,\le_3)^{k_2k_3}\beta(\le_3,\le_1)^{k_3k_1}\\ &=\alpha(\le_1)^{k_1}\alpha(\le_2)^{k_2}\alpha(\le_3)^{k_3} \beta(g\le_1,g\le_2)^{k_1k_2}\beta(g\le_2,g\le_3)^{k_2k_3}\beta(g\le_3,g\le_1)^{k_3k_1} \end{split} \end{gather} for $\lambda=k_1\le_1+k_2\le_2+k_3\le_3$. Consider the case that $g=\sigma$ is the cyclic permutation $(123)$. From (\ref{eqn:va:cnstn-alphavareps}) we see that we may lift $\sigma$ to $\operatorname{Aut}(V_L)$ by taking $\alpha(\le_i)=1$ for $i\in\{1,2,3\}$, and more generally $\alpha(k_1\le_1+k_2\le_2+k_3\le_3)=(-1)^{k_2k_3+k_3k_1}$, in the construction of \S\ref{sec:va:latva}. We denote the corresponding automorphism of $V_L$ by $\hat{\sigma}_0$; explicitly, \begin{gather}\label{eqn:va:cnstn-sigma} \hat{\sigma}_0(p\otimes {\bf v}_{k_1\le_1+k_2\le_2+k_3\le_3}) :=(-1)^{k_2k_3+k_3k_1}(\sigma\cdot p)\otimes{\bf v}_{k_3\le_1+k_1\le_2+k_2\le_3}. \end{gather} Next consider $g=\tau:=(12)$. Applying (\ref{eqn:va:cnstn-alphavareps}) we see that we may lift $\tau$ to $\operatorname{Aut}(V_L)$ by taking $\alpha(\le_i)=1$ as before, and more generally $\alpha(k_1\le_1+k_2\le_2+k_3\le_3)=(-1)^{k_1k_2}$, in the construction of \S\ref{sec:va:latva}. We denote the corresponding automorphism of $V_L$ by $\hat{\tau}_0$; explicitly, \begin{gather}\label{eqn:va:cnstn-tau} \hat{\tau}_0(p\otimes {\bf v}_{k_1\le_1+k_2\le_2+k_3\le_3}) :=(-1)^{k_1k_2}(\tau\cdot p)\otimes{\bf v}_{k_2\le_1+k_1\le_2+k_3\le_3}. \end{gather} Using (\ref{eqn:va:cnstn-sigma}) and (\ref{eqn:va:cnstn-tau}) one can check that $\hat{\sigma}_0^3=\hat{\tau}_0^2=(\hat{\tau}_0\hat{\sigma}_0)^2=\operatorname{Id}$ in $\operatorname{Aut}(V_L)$, so $\hat{\sigma}_0$ and $\hat{\tau}_0$ do indeed generate a copy of $S_3$ in $\operatorname{Aut}(V_L)$. Observe that $V_L=V_{L^0}\oplus V_{L^1}$ is the decomposition of $V_L$ into its even and odd parity subspaces, where $L^j$ is defined by (\ref{eqn:va:cnstn-Lj}). Recall the automorphisms $g_h$ of $V_L$, defined for $h\in L\otimes_{\ZZ}\QQ$ by (\ref{eqn:va:latvamod-sigmah}). Then we see from (\ref{eqn:va:cnstn-Lj}) that the canonical involution of $V_L$, acting as $+1$ on the even subspace $V_{L^0}$, and $-1$ on the odd subspace $V_{L^1}$, is realized by $g_{\rho/2}$. So the canonically-twisted modules for $V_L$ are exactly the $V_{L+a\rho/2}$, for $a\in\{1,3,5,7,9\}$ (cf. \S\ref{sec:va:latvamod}). The prescription (\ref{eqn:va:latvamod-sigmahtw}) furnishes an extension of the action of the canonical involution $g_{\rho/2}$, from $V_L$ to $V_{L+a\rho/2}$.
Since $\rho$ is $S_3$-invariant we may also extend the actions of $\hat\sigma_0$ and $\hat\tau_0$ to $V_{L+a\rho/2}$, by setting \begin{gather} \begin{split}\label{eqn:va:cnstn-sigtaunoughtVLtw} \hat\sigma_0(p\otimes {\bf v}_{\lambda+a\rho/2})&:=(-1)^{k_2k_3+k_3k_1}(\sigma\cdot p)\otimes{\bf v}_{\sigma\lambda+a\rho/2},\\ \hat\tau_0(p\otimes {\bf v}_{\lambda+a\rho/2})&:=(-1)^{k_1k_2}(\tau\cdot p)\otimes{\bf v}_{\tau\lambda+a\rho/2}, \end{split} \end{gather} for $p\in S(\hat{\gt{h}}^-)$ and $\lambda=k_1\le_1+k_2\le_2+k_3\le_3$. Now consider $V_{D(L)}=(V_{D(L)},\widetilde{Y},{\bf v}_0,\omega)$, where $D$ is the cone determined by the basis $\varepsilon_i$, \begin{gather} D=\left\{\sum_{i=1}^3 \alpha_i\varepsilon_i\in L\otimes_{\ZZ}\QQ \mid \alpha_i\geq 0,\,\forall i, \text{ or } \alpha_i<0\,,\forall i\right\}, \end{gather} and $\widetilde{Y}$ is the vertex operator correspondence defined by (\ref{eqn:va:cva-vdvops}) in \S\ref{sec:va:cva}. Observe that if we set \begin{gather}\label{eqn:va:cnstn-leprime} \le_i':=2\rho-\le_i \end{gather} for $i\in\{1,2,3\}$ then $\lab \le_i',\le_j\rab=\delta_{i,j}$. Since the values $\lab \varepsilon_i,\varepsilon_j\rab$ are all positive, there are no non-trivial vectors $\lambda\in D$ with $\lab \lambda,\lambda\rab\leq 0$. So, by virtue of Theorem \ref{thm:va:cva-VD}, the super vertex algebra $V_{D(L)}$ becomes a super vertex operator algebra, with central charge $c=3$, when equipped with the conformal element \begin{gather} \omega=\frac{1}{2}\sum_{i=1}^3\le_i'(-1)\le_i(-1)\otimes {\bf v}_0. \end{gather} Observe that the actions (\ref{eqn:va:cnstn-sigma}) and (\ref{eqn:va:cnstn-tau}), of $\hat\sigma_0$ and $\hat\tau_0$, respectively, restrict from $V_L$ to $V_{D(L)}$, since $D$ is invariant under coordinate permutations. We define automorphisms $\hat\sigma$ and $\hat\tau$ for $V_{D(L)}$, by taking $\hat\sigma:=\hat\sigma_0$ and $\hat\tau:=\hat\tau_0\circ s$, where $s$ is the sign automorphism of $V_{D(L)}$, defined in \S\ref{sec:va:cva}. Since $s$ has order two and commutes with $\hat\tau_0$ we see that $\hat\sigma$ and $\hat\tau$ generate a copy of $S_3$ in $\operatorname{Aut}(V_{D(L)})$, and we denote this group by $\hat{G}$: \begin{gather} \hat{G}:=\lab \hat\sigma,\hat\tau\rab<\operatorname{Aut}(V_{D(L)}). \end{gather} Theorem \ref{thm:va:cva-VDh} and the discussion above furnish us with canonically-twisted $V_{D(L)}$-modules $V_{D(L+a\rho/2)}$ for $a$ an odd integer. Note that this furnishes five distinct canonically-twisted $V_{D(L)}$-modules, since the isomorphism type of $V_{D(L+a\rho/2)}$ is determined by $a\pmod{10}$; indeed, $k=10$ is the minimal positive integer such that $k\rho/2\in L$. We extend the action of the canonical involution $g_{\rho/2}$ from $V_{D(L)}$ to $V_{D(L+a\rho/2)}$ just as we do for $V_L$-modules (cf. (\ref{eqn:va:latvamod-sigmahtw})), by setting \begin{gather}\label{eqn:sigmaVxtn} g_{\rho/2}(p\otimes {\bf v}_{\lambda+a\rho/2}):=(-1)^{\lab \rho,\lambda\rab}p\otimes {\bf v}_{\lambda+a\rho/2} \end{gather} for $p\in S(\hat{\gt{h}}^-)$ and $\lambda+a\rho/2\in D(L+a\rho/2)$.
Similarly, we extend the actions of $\hat\sigma$ and $\hat\tau$ from $V_{D(L)}$ to $V_{D(L+a\rho/2)}$, by setting \begin{gather} \begin{split}\label{eqn:va:cnstn-sigtauVDLtw} \hat\sigma(p\otimes {\bf v}_{\lambda+a\rho/2})&:=(-1)^{k_2k_3+k_3k_1}(\sigma\cdot p)\otimes{\bf v}_{\sigma\lambda+a\rho/2},\\ \hat\tau(p\otimes {\bf v}_{\lambda+a\rho/2})&:= \begin{cases} (-1)^{k_1k_2}(\tau\cdot p)\otimes{\bf v}_{\tau\lambda+a\rho/2},&\text{ if $\lambda+a\rho/2\in P$,}\\ (-1)^{k_1k_2+1}(\tau\cdot p)\otimes{\bf v}_{\tau\lambda+a\rho/2},&\text{ if $\lambda+a\rho/2\in N$,} \end{cases} \end{split} \end{gather} and thus obtain actions of $\hat{G}$ on the canonically-twisted $V_{D(L)}$-modules, $V_{D(L+a\rho/2)}$. In (\ref{eqn:va:cnstn-sigtauVDLtw}) we write $p$ for an element of $S(\hat{\gt{h}}^-)$, and assume $\lambda=k_1\le_1+k_2\le_2+k_3\le_3$. We now let $V^X$ denote the tensor product super vertex operator algebra \begin{gather}\label{eqn:va:cnstn-VX} V^X:=A(\gt{p})\otimes V_{D(L)}. \end{gather} We write $V_{{\rm tw},a}^\pm$ for the canonically-twisted $V^X$-module, \begin{gather}\label{eqn:va:cnstn-Vtw} V^{\pm}_{{\rm tw},a}:=A(\gt{p})_{\rm tw}^{\pm}\otimes V_{D(L+a\rho/2)}. \end{gather} We extend the action of $\hat{G}\simeq S_3$ from $V_{D(L)}$ to $V^X$, and from $V_{D(L+a\rho/2)}$ to $V^{\pm}_{{\rm tw},a}$, by letting $\hat{G}$ act trivially on the Clifford module factors, setting \begin{gather} \hat\sigma(u\otimes v):=u\otimes \hat\sigma(v),\quad \hat\tau(u\otimes v):=u\otimes \hat\tau(v), \end{gather} for $u\in A(\gt{p})$ and $v\in V_{D(L)}$, and for $u\in A(\gt{p})_{\rm tw}^\pm$ and $v\in V_{D(L+a\rho/2)}$. Given $g\in \hat{G}$ and $a$ an odd integer, we now define $T^{\pm}_{g,a}$ to be the trace of the operator $gg_{\rho/2}p(0)q^{L(0)-c/24}$ on the canonically-twisted $V^X$-module $V^{\pm}_{{\rm tw},a}$, \begin{gather}\label{eqn:va:cnstn-Tpmga} T^{\pm}_{g,a}:= \operatorname{{tr}}_{V^{\pm}_{{\rm tw},a}}gg_{\rho/2}p(0)q^{L(0)-c/24}. \end{gather} Recall that $(q;q)_\infty=\prod_{n>0}(1-q^n)$ (cf. (\ref{eqn:intro-poch})). Our concrete construction allows us to compute explicit formulas for the trace functions $T^{\pm}_{g,a}$. \begin{prop}\label{prop:va:cnstn-tracefnexpressions} The trace functions $T^{\pm}_{g,a}$ admit the following expressions, for $a\in\{1,3,5,7,9\}$. \begin{align} T^{\pm}_{e,a}&=\pm\frac{q^{-1/12}} {(q;q)^2_{\infty}} \left(\sum_{k,l,m\geq 0}+\sum_{k,l,m<0}\right) \label{eqn:va:cnstn-Tpmeaexplicit} (-1)^{k+l+m}q^{(k^2+l^2+m^2)/2+2(kl+lm+mk)+a(k+l+m)/2+3a^2/40}\\ T^{\pm}_{\hat\tau,a}&=\pm\frac{q^{-1/12}} {(q^2;q^2)_{\infty}} \left(\sum_{k,m\geq 0}-\sum_{k,m<0}\right)\label{eqn:va:cnstn-Tpmtauaexplicit} (-1)^{k+m}q^{3k^2+m^2/2+4km+a(2k+m)/2+3a^2/40}\\ T^\pm_{\hat\sigma,a}&=\pm q^{-1/12}\frac{(q;q)_{\infty}}{(q^3;q^3)_{\infty}}\label{eqn:va:cnstn-Tpmsigmaaexplicit} \sum_{k\in\ZZ}(-1)^kq^{15k^2/2+3ak/2+3a^2/40} \end{align} \end{prop} \begin{proof} First consider the case that $g=e$ is the identity. From the definition (\ref{eqn:va:cnstn-Tpmga}) of $T^{\pm}_{e,a}$ we derive \begin{gather}\label{eqn:va:cnstn-Tpmeadirect} T^{\pm}_{e,a} =\pm\frac{1} {(q;q)_{\infty}^2} \sum_{\mu\in D(L+a\rho/2)}(-1)^{\lab \mu-a\rho/2,\rho\rab}q^{\lab \mu,\mu\rab/2-1/12}, \end{gather} for any odd integer $a$. If also $0<a<10$ then $D(L+a\rho/2)=D(L)+a\rho/2$, and so in this situation we may replace $\mu$ with $k\le_1+l\le_2+m\le_3+a\rho/2$ in the summation, where either $k,l,m\geq 0$ or $k,l,m<0$.
This leads to (\ref{eqn:va:cnstn-Tpmeaexplicit}) directly, according to the definition (\ref{eqn:va:cnstn-bilinearform}) of $\lab\cdot\,,\cdot\rab$, and the identity (\ref{eqn:va:cnstn-iplambdarho}). The term $3a^2/40$ appears because $\lab\rho,\rho\rab=3/5$. Next take $g=\hat\tau$. We compute \begin{gather}\label{eqn:va:cnstn-Tpmtauadirect} T^{\pm}_{\hat\tau,a}(q)=\pm\frac{1}{(q^2;q^2)_{\infty}} \left(\sum_{\substack{\mu\in P(L+a\rho/2)\\ \tau\mu=\mu}}- \sum_{\substack{\mu\in N(L+a\rho/2)\\ \tau\mu=\mu}}\right) (-1)^{\lab \mu-a\rho/2,\rho+\le_1'\rab}q^{\lab \mu,\mu\rab/2-1/12} \end{gather} using the definition (\ref{eqn:va:cnstn-Tpmga}) of $T^{\pm}_{g,a}$, and the formula (\ref{eqn:va:cnstn-sigtauVDLtw}) for the action of $\hat\tau$. (See also (\ref{eqn:va:cnstn-leprime}).) Note that the sign change for summands with $\mu\in N(L+a\rho/2)$ is a consequence of the fact that the action of $\hat\tau$ is defined by composing $\hat\tau_0$ (cf. (\ref{eqn:va:cnstn-sigtaunoughtVLtw})) with the sign automorphism $s$ (cf. (\ref{eqn:va:cva-signaut})). Restricting to $0<a<10$, we obtain (\ref{eqn:va:cnstn-Tpmtauaexplicit}) from (\ref{eqn:va:cnstn-Tpmtauadirect}) in much the same way as above, by taking $\mu=k\le_1+k\le_2+m\le_3+a\rho/2$ in the summations, with $k,m\geq 0$ in the first of these, and $k,m<0$ in the second. The factor $(-1)^k$ in $(-1)^{k+m}$, corresponding to $(-1)^{\lab \mu-a\rho/2,\le_1'\rab}$ in (\ref{eqn:va:cnstn-Tpmtauadirect}), arises from the factor $(-1)^{k_1k_2}=(-1)^{k^2}=(-1)^k$ in (\ref{eqn:va:cnstn-sigtauVDLtw}). Finally we consider $g=\hat\sigma$ (cf. (\ref{eqn:va:cnstn-sigtauVDLtw})). Then the appropriate analogue of (\ref{eqn:va:cnstn-Tpmeadirect}) and (\ref{eqn:va:cnstn-Tpmtauadirect}) is \begin{gather}\label{eqn:va:cnstn-Tpmsigmaadirect} T^{\pm}_{\hat\sigma,a}(q)=\pm\frac{(q;q)_{\infty}}{(q^3;q^3)_{\infty}} \sum_{\substack{\mu\in D(L+a\rho/2)\\ \sigma\mu=\mu}} (-1)^{\lab \mu-a\rho/2,\rho\rab}q^{\lab \mu,\mu\rab/2-1/12}. \end{gather} We obtain (\ref{eqn:va:cnstn-Tpmsigmaaexplicit}) from (\ref{eqn:va:cnstn-Tpmsigmaadirect}), by restricting to $0<a<10$, and substituting $\mu=k\le_1+k\le_2+k\le_3+a\rho/2=(5k+a/2)\rho$ in the summation. This completes the proof of the proposition. \end{proof} \section{Mock Theta Functions}\label{sec:mcktht} In this section we consider the modular properties of the trace functions defined in \S\ref{sec:va:cnstn}, computed explicitly in Proposition \ref{prop:va:cnstn-tracefnexpressions}. We recall some basic facts about Maass forms in \S\ref{sec:mcktht:maass}, including their relationship to mock modular forms. We require some facts about theta series of cones in indefinite lattices due to Zwegers \cite{Zwegers}, which we recall in \S\ref{sec:mcktht:indtht}. The proof of our main result, Theorem \ref{thm:intro-maintheorem}, appears in \S\ref{sec:mcktht:um}. In particular, we identify the umbral McKay--Thompson series attached to $X=E_8^3$ as trace functions arising from the action of $G^X$ on canonically-twisted modules for $V^X$ in \S\ref{sec:mcktht:um}. \subsection{Harmonic Maass Forms}\label{sec:mcktht:maass} Define the weight $1/2$ {\em Casimir operator} $\Omega_{\tfrac12}$, a differential operator on smooth functions $H:\HH\to\CC$, by setting \begin{gather}\label{eqn:mcktht:um-cas} (\Omega_{\frac12}H)(\tau):= -4\Im(\tau)^2\frac{\partial^2H}{\partial\tau\partial\overline{\tau}}(\tau) +i\Im(\tau)\frac{\partial H}{\partial\overline{\tau}}(\tau) +\frac{3}{16}H(\t).
\end{gather} Note that $\Omega_{\tfrac12}=\Delta_{\tfrac 12}+\tfrac{3}{16}$, where $\Delta_{k}$ is the hyperbolic Laplace operator in weight $k$. Following the work \cite{BruFun} of Bruinier--Funke (cf. \cite{ono_unearthing,zagier_mock}), a {\em harmonic weak Maass form} of weight $1/2$ for $\Gamma<\SL_2(\ZZ)$ is defined to be a smooth function $H:\HH\to \CC$ that transforms as a (not necessarily holomorphic) modular form of weight $1/2$ for $\Gamma$, is an eigenfunction for $\Omega_{\frac12}$ with eigenvalue $3/16$, and has at most exponential growth as $\tau$ approaches cusps of $\Gamma$. Define $\beta(x)$ for $ x \in \RR_{ \ge 0}$ by setting \begin{gather}\label{eqn:mcktht:indtht-beta} \beta(x) := \int_x^\infty u^{-1/2} e^{- \pi u} {\rm d}u. \end{gather} Note that $\beta$ is related to the incomplete Gamma function by $\sqrt{\pi}\beta(x)=\Gamma(1/2,\pi x)$. If $H$ is a harmonic weak Maass form of weight $1/2$ then we can canonically decompose $H$ into its {\em holomorphic} and {\em non-holomorphic} parts, $H=H^++H^-$, where \begin{align} H^+(\tau)& =\sum_{n\gg -\infty}c_H^+(n)q^n,\label{eqn:mcktht:um-Hp}\\ H^-(\tau)& =2ic_H^-(0)\sqrt{2\Im(\tau)}-i\sum_{n>0}c_H^-(n){\frac{1}{\sqrt{2n}}}\beta(4n\Im(\tau))q^{-n},\label{eqn:mcktht:um-Hm} \end{align} for some uniquely determined values $c_H^{\pm}(n)\in \CC$. (Cf. \S3 of \cite{BruFun}. See also \S5 of \cite{zagier_mock} and \S7.1 of \cite{Dabholkar:2012nd}.) Note that $n$ should be allowed to range over rational values in (\ref{eqn:mcktht:um-Hp}) and (\ref{eqn:mcktht:um-Hm}). We may define the {\em mock modular forms} of weight $1/2$ to be those holomorphic functions $H^+:\HH\to\CC$ which arise as the holomorphic parts of harmonic weak Maass forms of weight $1/2$. For $H^\pm$ as above, the {\em shadow} of $H^+$ is defined, up to a choice of scaling factor $C$, by \begin{gather}\label{eqn:mcktht:um-Hpshadow} g(\tau):=C{\sqrt{2\Im(\tau)}}\overline{\frac{\partial H^-}{\partial\overline{\tau}}}=C\sum_{n\geq 0}c^-_H(n)q^n.\end{gather} Then so long as $c_H^-(0)=0$ (i.e. $g$ is a cusp form), the function $H^-$ is the {\em Eichler integral} of $g$, \begin{gather}\label{eqn:mcktht:um-Hmshadow} H^-(\tau)= \frac{e(-\tfrac18)}{C}\int_{-\overline{\tau}}^\infty \frac{\overline{g(-\overline{z})}}{\sqrt{z+\tau}}{\rm d} z. \end{gather} In this setting, the harmonic weak Maass form $H=H^++H^-$ is called the {\em completion} of $H^+$. Various choices for $C$ can be found in the literature. In \cite{MUM} we find $C=\sqrt{2m}$ in the case that $H=(H_r)$ is a $2m$-vector-valued Maass form for some $\Gamma_0(N)$, such that \begin{gather} (H\cdot\theta)(\t,z):=\sum_r H_r(\tau)\theta_{m,r}(\tau,z) \end{gather} transforms like a (not necessarily holomorphic in $\tau$) Jacobi form of weight $1$ and index $m$ for $\Gamma_0(N)$, where \begin{gather}\label{eqn:mcktht:maass-tht} \theta_{m,r}(\tau,z):=\sum_{k\in\ZZ}q^{(2km+r)^2/4m}e^{2\pi i z(2km+r) }. \end{gather} The cases of relevance to us here all have $m=30$, so we take $C=\sqrt{60}$ henceforth in (\ref{eqn:mcktht:um-Hpshadow}) and (\ref{eqn:mcktht:um-Hmshadow}). All the shadows arising in this work will be linear combinations of the unary theta functions \begin{gather}\label{eqn:mcktht-Smr} S_{m,r}(\tau):=\left.\frac{1}{2\pi i}\frac{\partial}{\partial z}\theta_{m,r}(\tau,z)\right|_{z=0}=\sum_{k\in\ZZ}(2km+r)q^{(2km+r)^2/4m}, \end{gather} where $m=30$ and $r\neq 0 \pmod{30}$. In particular, we will not encounter any examples for which the shadow $g$ (cf. (\ref{eqn:mcktht:um-Hpshadow})) is not a cusp form.
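For later use we record two elementary properties of the functions (\ref{eqn:mcktht-Smr}), both immediate from the defining sum: shifting $k$ by one shows that $S_{m,r}$ depends only on $r$ modulo $2m$, while replacing $k$ with $-k$ shows that $S_{m,r}$ is odd in $r$. That is, \begin{gather} S_{m,r+2m}(\tau)=S_{m,r}(\tau),\qquad S_{m,-r}(\tau)=-S_{m,r}(\tau). \end{gather} In particular, $S_{30,31}=-S_{30,29}$, $S_{30,41}=-S_{30,19}$, $S_{30,43}=-S_{30,17}$ and $S_{30,53}=-S_{30,7}$; we use such identities silently when assembling shadows below.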
\subsection{Indefinite Theta Series}\label{sec:mcktht:indtht} We will be concerned with quadratic forms of signature $(1,1)$, and so take $r=2$ in the notation of \cite{Zwegers}. (Even though our main construction uses a lattice of signature $(1,2)$, it will develop in \S\ref{sec:mcktht:um} that the trace functions (\ref{eqn:va:cnstn-Tpmeaexplicit}) and (\ref{eqn:va:cnstn-Tpmtauaexplicit}) can be analyzed in terms of theta series of indefinite lattices with signature $(1,1)$. The remaining trace function (\ref{eqn:va:cnstn-Tpmsigmaaexplicit}) is essentially a theta series of rank $1$, and consequently can be handled by classical methods.) Given a symmetric $2\times 2$ matrix $A$, we define a quadratic form $Q: \RR^2 \to \RR$, by setting \begin{equation} Q(x):= \frac{1}{2} ( x,A x), \end{equation} where $(\cdot\,,\cdot)$ denotes the usual Euclidean inner product on $\RR^2$. The associated bilinear form is \begin{equation} B(x,y):= ( x, A y) = Q(x+y)-Q(x)-Q(y) \, . \end{equation} Henceforth assume that $A$ has signature $(1,1)$. Then the set of vectors $c \in \RR^2$ with $Q(c)<0$ is non-empty and has two components. Let $C_Q$ be one of these components. Two vectors $c^{(1)},c^{(2)}$ belong to the same component if and only if $B(c^{(1)},c^{(2)})<0$. Thus, picking a vector $c_{0}$ in $C_Q$ we may identify \begin{equation} C_Q= \left\{ c \in \RR^2 \mid Q(c)<0, ~B(c,c_0)<0 \right\} \, . \end{equation} Zwegers also defines a set of representatives of {\em cusps}, \begin{equation} S_Q:= \left\{ c \in \ZZ^2 \mid \text{$c$ primitive, $Q(c)=0$, $B(c,c_0)<0$} \right\} \, . \end{equation} Define the {\em indefinite theta function} with characteristics $a, b \in \RR^2$, with respect to $c^{(1)}, c^{(2)} \in C_Q$, by setting \begin{gather} \begin{split}\label{eqn:mcktht:indtht-vartheta} &\vartheta^{c^{(1)},c^{(2)}}_{a,b}(\tau) :=\\ & \sum_{\nu \in a + \ZZ^2} \left( E \left( \frac{B(c^{(1)},\nu)}{\sqrt{-Q(c^{(1)})}} \sqrt{\Im(\tau)} \right) -E \left( \frac{B(c^{(2)},\nu)}{\sqrt{-Q(c^{(2)})}} \sqrt{\Im(\tau)} \right) \right) q^{Q(\nu)} e^{2 \pi i B(\nu,b)}, \end{split} \end{gather} where $E(z) := \operatorname{sgn} (z) (1-\beta(z^2))$. Corollary 2.9 of \cite{Zwegers} (cf. also Theorem 3.1 of \cite{zagier_mock}) shows that $\vartheta^{c^{(1)},c^{(2)}}_{a,b}(\tau)$ is a non-holomorphic modular form of weight $1$. Presently we will see that these indefinite theta functions can be used to define harmonic Maass forms whose non-holomorphic parts can be written in terms of the functions \begin{equation}\label{eqn:mcktht-Rab} R_{a,b}(\tau) := \sum_{\nu \in a+\ZZ} \operatorname{sgn} (\nu) \beta(2 \nu^2 \Im(\tau)) q^{-\nu^2/2} e^{- 2 \pi i \nu b}. \end{equation} Note that the $R_{a,b}$ are Eichler integrals (cf. (\ref{eqn:mcktht:um-Hmshadow})) of unary theta functions of weight $3/2$. Indeed, we have \begin{equation}\label{eqn:mcktht-Rabgab} R_{a,b}(\tau) =e(-\tfrac18) \int_{- \bar \tau}^{i \infty} \frac{g_{a,-b}(z)}{\sqrt{z+\tau}}{\rm d}z, \end{equation} for \begin{equation}\label{eqn:mcktht-gab} g_{a,b}(\tau) :=\sum_{\nu \in a+\ZZ} \nu q^{\nu^2/2} e^{2 \pi i \nu b}. \end{equation} Observe also that \begin{equation}\label{eqn:mcktht-gabSmr} g_{\frac{r}{2m},0}(2m \tau) = \frac{1}{2m} S_{m,r}(\tau) \end{equation} (cf. (\ref{eqn:mcktht-Smr})), which is useful for comparing the results of \cite{Zwegers} to those of \cite{MUM}. Define $\langle c \rangle_\ZZ^\perp:=\{ \xi \in \ZZ^2 \mid B(c, \xi)=0 \}$. For future use we quote the $r=2$ case of Proposition 4.3 from \cite{Zwegers}.
\begin{prop}[Zwegers]\label{prop:mcktht-Zwegersprop} Let $c \in C_Q \cap \ZZ^2$ be primitive. Let $P_0\subset \RR^2$ be the finite set determined by requiring that \begin{gather} \left\{ \mu \in a+\ZZ^2 \mid 0 \leq \frac{B(c,\mu)}{2 Q(c)} <1 \right\} = \bigsqcup_{\mu_0 \in P_0} \left( \mu_0 + \langle c \rangle_\ZZ^\perp \right). \end{gather} Then we have \begin{gather} \begin{split}\label{eqn:mcktht-Zwegersprop} \sum_{\nu \in a + \ZZ^2} &\operatorname{sgn} \left( B(c,\nu) \right) \beta \left( - \frac{B(c,\nu)^2}{Q(c)} \Im(\tau) \right) e^{2 \pi i Q(\nu) \tau + 2 \pi i B(\nu,b)} \\ & = - \sum_{\mu_0 \in P_0} R_{\frac{B(c,\mu_0)}{2Q(c)},B(c,b)} (-2 Q(c) \tau) \cdot \sum_{\xi \in \mu_0^\perp + \langle c \rangle_\ZZ^\perp} e^{2 \pi i Q(\xi) \tau + 2 \pi i B(\xi,b^\perp)}, \end{split} \end{gather} where $\mu_0^\perp= \mu_0 - \frac{B(c,\mu_0)}{2 Q(c)} c $ and $b^\perp= b - \frac{B(c,b)}{2 Q(c)} c$. \end{prop} Note that the term \begin{equation} \sum_{\xi \in \mu_0^\perp + \langle c \rangle_\ZZ^\perp} e^{2 \pi i Q(\xi) \tau + 2 \pi i B(\xi,b^\perp)} \end{equation} is a classical (positive-definite) theta function of weight $1/2$. The indefinite theta function construction (\ref{eqn:mcktht:indtht-vartheta}) is applied to mock theta functions of Ramanujan (other than $\chi_0$ and $\chi_1$, which are treated in \cite{MR2558702}) in \cite{Zwegers}. Amongst those appearing are the four functions $F_0$, $F_1$, $\phi_0$ and $\phi_1$, where $\phi_0$ and $\phi_1$ are defined in (\ref{eqn:intro-phi01}), and \begin{gather} \begin{split}\label{eqn:mcktht:um-F01} F_0(q)&:=\sum_{n\geq 0}\frac{q^{2n^2}}{(q;q^2)_n},\\ F_1(q)&:=\sum_{n\geq 0}\frac{q^{2n(n+1)}}{(q;q^2)_{n+1}}. \end{split} \end{gather} These are amongst the fifth order mock theta functions introduced by Ramanujan in his last letter to Hardy. To study these functions Zwegers introduces $6$-vector-valued mock modular forms \begin{gather} F_{5,1}(\tau)=(F_{5,1,r}(\tau)),\quad F_{5,2}(\tau)=(F_{5,2,r}(\tau)), \end{gather} on pages 74 and 79, respectively, of \cite{Zwegers}. Inspecting their definitions, and substituting $2\tau$ for $\tau$, we find that \begin{align} F_{5,1,3}(2\tau)&=q^{-1/120}(F_0(q)-1), &F_{5,2,3}(2\t)&=q^{-1/120}\phi_0(-q),\label{eqn:mcktht:indtht-F5Fphi0}\\ F_{5,1,4}(2\tau)&=q^{71/120}F_1(q), &F_{5,2,4}(2\t)&=-q^{-49/120}\phi_1(-q).\label{eqn:mcktht:indtht-F5Fphi1} \end{align} The content of Proposition 4.10 of \cite{Zwegers} is that \begin{gather}\label{eqn:mcktht:um-HFG1} H_{5,1}(\tau)=F_{5,1}(\tau)-G_{5,1}(\tau), \end{gather} where the vector-valued functions $H_{5,1}$ and $G_{5,1}$ are such that the components of $2\eta(\tau) H_{5,1}(\tau)$ are non-holomorphic indefinite theta functions of the form $\vartheta_{a,b}^{c^{(1)},c^{(2)}}(\tau)$ (cf. (\ref{eqn:mcktht:indtht-vartheta})), and the third and fourth components of $G_{5,1}$ satisfy \begin{gather}\label{eqn:mcktht:um-G3RRRR} G_{5,1,3}(2\tau)=-\frac{1}{2}\left(R_{\frac{19}{60},0}+R_{\frac{29}{60},0}-R_{\frac{49}{60},0}-R_{\frac{59}{60},0}\right)(60\tau), \\ G_{5,1,4}(2\tau)=-\frac{1}{2}\left(R_{\frac{13}{60},0}+R_{\frac{23}{60},0}-R_{\frac{43}{60},0}-R_{\frac{53}{60},0}\right)(60\tau). \label{eqn:mcktht:um-G4RRRR} \end{gather} (Cf. (\ref{eqn:mcktht-Rab}) for $R_{a,b}$.) Moreover, $H_{5,1}(\tau)$ is an eigenfunction for $\Omega_{\frac12}$ with eigenvalue $3/16$ (cf. (\ref{eqn:mcktht:um-cas})). In other words, the components of $H_{5,1}=(H_{5,1,r})$ are harmonic weak Maass forms of weight $1/2$ (cf. \S\ref{sec:mcktht:maass}). 
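Shortly we will convert such combinations of the $R_{a,b}$ into linear combinations of the unary theta functions $S_{m,r}$. To this end we note, for the reader's convenience, that the identity (\ref{eqn:mcktht-gabSmr}) follows by direct substitution in (\ref{eqn:mcktht-gab}): writing $\nu=k+\tfrac{r}{2m}=\tfrac{2km+r}{2m}$ for $k\in\ZZ$, we have $e^{2\pi i(2m\tau)\nu^2/2}=q^{(2km+r)^2/4m}$, and therefore \begin{gather} g_{\frac{r}{2m},0}(2m\tau)=\frac{1}{2m}\sum_{k\in\ZZ}(2km+r)q^{(2km+r)^2/4m}=\frac{1}{2m}S_{m,r}(\tau). \end{gather}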
Proposition 4.13 of \cite{Zwegers} establishes a similar result for $F_{5,2}$, namely \begin{gather}\label{eqn:mcktht:um-HFG2} H_{5,2}(\tau)=F_{5,2}(\tau)-G_{5,2}(\tau), \end{gather} where $H_{5,2}$ is again a harmonic weak Maass form of weight $1/2$, and $G_{5,2}=-G_{5,1}$. The left hand sides of (\ref{eqn:mcktht:um-HFG1}) and (\ref{eqn:mcktht:um-HFG2}) are harmonic weak Maass forms of weight $1/2$, so they admit canonical decompositions into holomorphic (cf. (\ref{eqn:mcktht:um-Hp})) and non-holomorphic (cf. (\ref{eqn:mcktht:um-Hm})) parts. The summands $F_{5,1}$ and $F_{5,2}$ on the right hand sides are holomorphic by construction, and the $R_{a,b}$ are of the same form as (\ref{eqn:mcktht:um-Hm}) (cf. (\ref{eqn:mcktht-Rab})), so the right hand sides of (\ref{eqn:mcktht:um-HFG1}) and (\ref{eqn:mcktht:um-HFG2}) are precisely the decompositions of $H_{5,1}$ and $H_{5,2}$ into their holomorphic and non-holomorphic parts. Equivalently, the four functions $F_{5,j,r}$ are mock modular forms of weight $1/2$ with completions given by the $H_{5,j,r}$, and the $G_{5,j,r}$ are the Eichler integrals of their shadows. Thus we can describe their shadows explicitly. Applying (\ref{eqn:mcktht-Rabgab}), (\ref{eqn:mcktht-gab}) and (\ref{eqn:mcktht-gabSmr}), and the identities $g_{1-a,0}=g_{-a,0}=-g_{a,0}$, we see that $F_{5,1,3}(2\tau)$ and $-F_{5,2,3}(2\tau)$ have the same shadow \begin{gather}\label{eqn:mcktht:indtht-F5123shadow} \frac12(S_{30,1}+S_{30,11}+S_{30,19}+S_{30,29})(\tau), \end{gather} while $F_{5,1,4}(2\tau)$ and $-F_{5,2,4}(2\tau)$ both have shadow given by \begin{gather}\label{eqn:mcktht:indtht-F5124shadow} \frac12(S_{30,7}+S_{30,13}+S_{30,17}+S_{30,23})(\tau). \end{gather} \subsection{McKay--Thompson Series}\label{sec:mcktht:um} In this section we prove our main result, Theorem \ref{thm:intro-maintheorem}, that the trace functions arising from the action of $G^X$ on the $V^\pm_{{\rm tw},a}$ recover the Fourier expansions of the mock modular forms $H^X_g$ attached to $g\in G^X\simeq S_3$ by umbral moonshine at $X=E_8^3$. To formulate this precisely, let $T^X_{g}=(T^X_{g,r})$ be the vector of Laurent series in (rational powers of) $q$, with components indexed by $\ZZ/60\ZZ$, such that \begin{gather}\label{eqn:mcktht-TXg} T^X_{g,r}:=\begin{cases} T^{\mp}_{g,1},&\text{ for $r=\pm 1,\pm 11,\pm 19,\pm 29\pmod{60}$,}\\ T^{\mp}_{g,7},&\text{ for $r=\pm 7,\pm 13,\pm 17,\pm 23\pmod{60}$,}\\ 0,&\text{ else,} \end{cases} \end{gather} and define the {\em polar part at infinity} of $T^X_{g}$ to be the vector of polynomials in (rational powers of) $q^{-1}$ obtained by removing all non-negative powers of $q$ in each component $T^X_{g,r}$. Let $g\mapsto\bar{\chi}_g^X$ be the natural permutation character of ${G^X}$, so that $\bar{\chi}^X_g$ is $3$, $1$ or $0$, according as $g$ has order $1$, $2$ or $3$, and define a vector $S^X_g=(S^X_{g,r})$ of theta series, with components indexed by $\ZZ/60\ZZ$, by setting \begin{gather}\label{eqn:mcktht-SXg} S^X_{g,r}:=\begin{cases} \pm\bar{\chi}^X_g(S_{30,1}+S_{30,11}+S_{30,19}+S_{30,29}),&\text{ if $r=\pm1,\pm11,\pm19,\pm 29\pmod{60}$,}\\ \pm\bar{\chi}^X_g(S_{30,7}+S_{30,13}+S_{30,17}+S_{30,23}),&\text{ if $r=\pm7,\pm13,\pm17,\pm 23\pmod{60}$,}\\ 0&\text{ else.} \end{cases} \end{gather} (Cf. (\ref{eqn:mcktht-Smr}).)
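To illustrate these conventions, take $r=1$ and $r=59\equiv -1\pmod{60}$: then $T^X_{g,1}=T^{-}_{g,1}$ and $T^X_{g,59}=T^{+}_{g,1}$, while \begin{gather} S^X_{g,1}=\bar{\chi}^X_g(S_{30,1}+S_{30,11}+S_{30,19}+S_{30,29}),\qquad S^X_{g,59}=-S^X_{g,1}, \end{gather} the sign in the latter being consistent with the oddness $S_{m,-r}=-S_{m,r}$ noted in \S\ref{sec:mcktht:maass}.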
Set $S^X:=S^X_e$, and let $\sigma^X:\SL_2(\ZZ)\to\GL_{60}(\CC)$ denote the multiplier system of $S^X$, so that \begin{gather}\label{eqn:mcktht:um-sigmaX} \sigma^X(\gamma)S^X(\gamma\tau)(c\tau+d)^{-3/2}=S^X(\tau) \end{gather} for $\tau \in\HH$ and $\gamma\in \SL_2(\ZZ)$, when $(c,d)$ is the lower row of $\gamma$. Our next goal (to be realized in Proposition \ref{prop:mcktht:um-TXg}) is to show that $2T^X_g$ is a mock modular form with shadow $S^X_g$ for $g\in G^X$. This condition tells us what the multiplier system of $T^X_g$ must be, at least when $o(g)$ is $1$ or $2$ (as $S^X_g$ is identically zero when $o(g)=3$). For the convenience of the reader we describe this multiplier system in more detail now. It is cumbersome to work with matrices in $\GL_{60}(\CC)$, but we can avoid this since any non-zero component of $T^X_g$ is $\pm1$ times $T^X_{g,1}$ or $T^X_{g,7}$. That is, we can work with the $2$-vector-valued functions $\check T^X_g:=(T^X_{g,1},T^X_{g,7})$ and $\check S^X_{g}:=(S^X_{g,1},S^X_{g,7})$. If $h=(h_r)$ is a modular form of weight $1/2$ with multiplier system conjugate to that of $S^X$, and satisfying \begin{gather}\label{eqn:mcktht-hr} h_{r}=\begin{cases} h_{1},&\text{ for $r=\pm 1,\pm 11,\pm 19,\pm 29\pmod{60}$,}\\ h_{7},&\text{ for $r=\pm 7,\pm 13,\pm 17,\pm 23\pmod{60}$,}\\ 0,&\text{ else,} \end{cases} \end{gather} then, setting $\check h=(h_1,h_7)$, we have \begin{gather} \check{h}\left(\frac{a\t+b}{c\t+d}\right)\check\nu(\gamma) (c\tau+d)^{-1/2}=\check{h}(\t) \end{gather} for $\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\SL_2(\ZZ)$, where $\check\nu:\SL_2(\ZZ)\to\GL_2(\CC)$ is determined by the rules \begin{gather} \begin{split}\label{eqn:mcktht-checknu} \check\nu \begin{pmatrix} 1&1\\ 0&1 \end{pmatrix} &= \begin{pmatrix} e(-\tfrac{1}{120})&0\\ 0&e(-\tfrac{49}{120}) \end{pmatrix},\\ \check\nu \begin{pmatrix} 0&-1\\ 1&0 \end{pmatrix} &=\frac{2e(\frac{3}{8})}{\sqrt{15}} \begin{pmatrix} \sin(\pi\tfrac{1}{30})+\sin(\pi\tfrac{11}{30})&\sin(\pi\frac{7}{30})+\sin(\pi\frac{13}{30})\\ \sin(\pi\tfrac{7}{30})+\sin(\pi\frac{13}{30})&-\sin(\pi\frac{1}{30})-\sin(\pi\frac{11}{30}) \end{pmatrix}. \end{split} \end{gather} We now return to our main objective: the determination of the modularity of $T^X_g$ for $g\in {G^X}$. To describe the multiplier system for $T^X_g$ when $o(g)=3$ we require the function $\rho_{3|3}:\Gamma_0(3)\to \CC^\times$, defined by setting \begin{gather}\label{eqn:mcktht:um-rho33} \rho_{3|3}\left(\begin{matrix}a&b\\c&d\end{matrix}\right):=e\left(\frac{cd}{9}\right). \end{gather} Evidently $\rho_{3|3}$ has order $3$, and restricts to the identity on $\Gamma_0(9)$. \begin{prop}\label{prop:mcktht:um-TXg} Let $g\in G^X$. Then $2T^X_g$ is the Fourier series of a mock modular form for $\Gamma_0(o(g))$ whose shadow is $S^X_g$. The polar part at infinity of $2T^X_g$ is given by \begin{gather} 2T^X_{g,r}=\begin{cases} \mp 2q^{-1/120}+O(1),&\text{ if $r=\pm 1,\pm11,\pm19,\pm29\pmod{60}$,}\\ O(1),&\text{ otherwise,} \end{cases} \end{gather} and $2T^X_g$ has vanishing polar part at all non-infinite cusps of $\Gamma_0(o(g))$. If $o(g)=3$ then the multiplier system of $2T^X_g$ is given by $\gamma\mapsto \rho_{3|3}(\gamma)\overline{\sigma^X(\gamma)}$. \end{prop} \begin{proof} According to our definition (\ref{eqn:mcktht-TXg}), the components of $T^X_g$ are $T^{\pm}_{g,1}$ or $T^{\pm}_{g,7}$. In practice it is more convenient to work with $T^{\pm}_{g,3}$ than $T^{\pm}_{g,7}$, and we may do so because these functions coincide up to a sign (depending upon $g$).
To see this, observe that $D(L+a\rho/2)=-D(L-a\rho/2)$ for $a$ an odd integer. Then comparing with the expressions (\ref{eqn:va:cnstn-Tpmeadirect}), (\ref{eqn:va:cnstn-Tpmtauadirect}) and (\ref{eqn:va:cnstn-Tpmsigmaadirect}), we see that $T^{\pm}_{g,a}=T^{\pm}_{g,-a}$ when $o(g)=1$ or $3$, and $T^{\pm}_{g,a}=-T^\pm_{g,-a}$ when $o(g)=2$. We also have $T^\pm_{g,a}=-T^\pm_{g,a+10}$ for all $g$, so in particular, \begin{gather}\label{eqn:mcktht-37equiv} \begin{split} T^{\pm}_{e,7}&=-T^{\pm}_{e,3},\\ T^{\pm}_{\hat\tau,7}&=T^{\pm}_{\hat\tau,3},\\ T^{\pm}_{\hat\sigma,7}&=-T^{\pm}_{\hat\sigma,3}. \end{split} \end{gather} We will now verify that the series $T^X_g$ are Fourier expansions of vector-valued mock modular forms, and we will determine their shadows. For the case that $g=e$ we compute $3/40-1/12=-1/120$ and $27/40-1/12=71/120$, and see, upon comparison of (\ref{eqn:va:cnstn-Tpmeaexplicit}) with (\ref{eqn:intro:zwegers}), that $T^{\pm}_{e,1}(q)=\pm q^{-1/120}(2-\chi_0(q))$ and $T^{\pm}_{e,3}=\pm q^{71/120}\chi_1(q)$. In particular, \begin{gather} \begin{split}\label{eqn:mcktht:um-Tchi} 2T^-_{e,1}&=2q^{-1/120}(\chi_0(q)-2),\\ 2T^-_{e,7}&=2q^{71/120}\chi_1(q) \end{split} \end{gather} (cf. (\ref{eqn:mcktht-37equiv})). Note that the identities $H^X_{e,1}=2q^{-1/120}(\chi_0(q)-2)$ and $H^X_{e,7}=2q^{71/120}\chi_1(q)$ are predicted in \S5.4 of \cite{MUM}, but it is not verified there that this specification yields a mock modular form with shadow $S^X=S^X_e$. We will determine the modular properties of $2T^-_{e,1}$ and $2T^-_{e,7}$ by applying the results of Zwegers on $F_0$, $F_1$, $\phi_0$ and $\phi_1$ that we summarized in \S\ref{sec:mcktht:indtht}. To apply these results we first recall the expressions \begin{gather} \begin{split}\label{eqn:mcktht:um-chiFphi} \chi_0(q) &= 2 F_0(q) - \phi_0(-q), \\ \chi_1(q) &= 2 F_1(q) + q^{-1} \phi_1(-q), \end{split} \end{gather} which are proven in \S3 of \cite{MR1577032}. (The first of these was given by Ramanujan in his last letter to Hardy, where he also mentioned the existence of a similar formula relating $\chi_1$, $F_1$ and $\phi_1$.) Thus we obtain \begin{gather}\label{eqn:mcktht:um-TFF} 2T^-_{e,1}=4F_{5,1,3}(2\tau)-2F_{5,2,3}(2\tau),\\ 2T^-_{e,7}=4F_{5,1,4}(2\tau)-2F_{5,2,4}(2\tau), \end{gather} upon comparison of (\ref{eqn:mcktht:indtht-F5Fphi0}), (\ref{eqn:mcktht:indtht-F5Fphi1}), (\ref{eqn:mcktht:um-Tchi}) and (\ref{eqn:mcktht:um-chiFphi}). Applying the results of Zwegers on $F_{5,1}$ and $F_{5,2}$ recalled in \S\ref{sec:mcktht:indtht}, and the equations (\ref{eqn:mcktht:indtht-F5123shadow}) and (\ref{eqn:mcktht:indtht-F5124shadow}) in particular, we conclude that $2T^-_{e,1}$ and $2T^-_{e,7}$ are mock modular forms of weight $1/2$, with respective shadows given by \begin{gather} 3(S_{30,1}+S_{30,11}+S_{30,19}+S_{30,29})(\tau),\\ 3(S_{30,7}+S_{30,13}+S_{30,17}+S_{30,23})(\tau). \end{gather} In other words, the shadow of $2T^X_{e}$ is precisely $S^X_e$, as we required to show. The modular transformation formulas for $H_{5,1}(\tau)$ and $H_{5,2}(\tau)$ given in Propositions 4.10 and 4.13 of \cite{Zwegers}, respectively, show that $T^X_e$ transforms in the desired way under $\SL_2(\ZZ)$. We now consider the case that $o(g)=2$. We may take $g=\hat\tau$. We again begin by using the results recalled in \S\ref{sec:mcktht:indtht} to analyze the components $T^-_{\hat\tau,1}$ and $T^-_{\hat\tau,7}$ separately.
For $T^-_{\hat\tau,1}$ let \begin{equation} A = \begin{pmatrix} 6 & 4 \\ 4 & 1 \end{pmatrix}, ~ a= \begin{pmatrix} 1/10 \\ 1/10 \end{pmatrix}, ~ b=\begin{pmatrix} 3/20 \\ -2/20 \end{pmatrix}, ~ c^{(1)}= \begin{pmatrix} -1 \\ 4 \end{pmatrix}, ~ c^{(2)}= \begin{pmatrix} -2 \\ 3 \end{pmatrix}. \end{equation} Then a direct computation using \begin{gather} \nu=\begin{pmatrix} k+\frac{1}{10} \\ m+\frac{1}{10} \end{pmatrix}, \; Q(\nu)= 3 k^2+\frac{m^2}{2} + 4km+k+\frac{m}{2}+\frac3{40},\; B(\nu,b)= \frac{k+m}{2} + \frac{1}{10},\\ \quad \operatorname{sgn} \left( B(c^{(1)},\nu) \right) =\operatorname{sgn} \left(k+\frac{1}{10}\right),\; \operatorname{sgn} \left( B(c^{(2)},\nu) \right)= \operatorname{sgn} \left(-m-\frac{1}{10}\right), \end{gather} gives \begin{equation} 2T^-_{\hat\tau,1}= - \frac{e(-\frac{1}{10}) } {\eta(2 \tau)} \sum_{\nu \in a +\ZZ^2} \left( \operatorname{sgn} \left( B(c^{(1)},\nu) \right) - \operatorname{sgn} \left( B(c^{(2)},\nu) \right) \right) e^{2 \pi i Q(\nu) \tau+ 2 \pi i B(\nu,b)}. \end{equation} Comparing this to the indefinite theta function construction (\ref{eqn:mcktht:indtht-vartheta}) we find that \begin{gather} \label{theta} \begin{split} & \vartheta_{a,b}^{c^{(1)},c^{(2)}}(\tau) = - e(\tfrac{1}{10})\eta(2 \tau) 2T^-_{\hat\tau,1}(\tau) \\ & +\sum_{ \nu \in a + \ZZ^2} \left(\sum_{k=1}^2(-1)^k \operatorname{sgn} (B(c^{(k)},\nu)) \beta \left( - \frac{B(c^{(k)},\nu)^2 \Im (\tau)}{Q(c^{(k)})} \right) \right) q^{Q(\nu)}e^{2 \pi i B(\nu, b)}. \end{split} \end{gather} We now use Proposition \ref{prop:mcktht-Zwegersprop} to rewrite the terms involving $c^{(1)}$ and $c^{(2)}$ in the second line of (\ref{theta}). For the term with $c^{(1)}$ the set $P_0$ of Proposition \ref{prop:mcktht-Zwegersprop} has one element, $\mu_0=\frac{1}{10} \left(\begin{smallmatrix} -9 \\ 1 \end{smallmatrix}\right)$, and we find $\langle c^{(1)} \rangle_\ZZ^\perp= \left\{\left(\begin{smallmatrix} 0 \\ m \end{smallmatrix}\right)\mid m \in \ZZ\right\}$, $b^\perp= \frac12\left(\begin{smallmatrix} 0 \\ {1} \end{smallmatrix}\right)$ and $\mu_0^\perp=\frac12\left( \begin{smallmatrix} 0 \\- {7} \end{smallmatrix}\right)$. Thus \begin{equation} \sum_{\xi \in \mu_0^\perp + \langle c \rangle_\ZZ^\perp} e^{2 \pi i Q(\xi) \tau + 2 \pi i B(\xi,b^\perp)} = e(-\tfrac{1}{4}) \sum_{m \in \ZZ} (-1)^m q^{(m-1/2)^2/2}=0, \end{equation} so this term vanishes. For the term with $c^{(2)}$ the set $P_0$ consists of three elements, $\mu_0=\frac{1}{10}\binom{1}{1},\frac{1}{10}\binom{1}{11},\frac{1}{10}\binom{1}{21}$, and we have $B(c^{(2)},\mu_0)/2 Q(c^{(2)})= \frac1{30}, \frac{11}{30}, \frac{21}{30}$, in the respective cases. The last value of $\mu_0$ also leads to a vanishing contribution, while the other two values lead to \begin{equation} - e(\tfrac{1}{12})R_{\frac{1}{30},-\frac{1}{2}}(15 \tau) \eta(2 \tau) - e(-\tfrac{1}{12})R_{\frac{11}{30},-\frac{1}{2}}(15 \tau) \eta(2 \tau), \end{equation} which we see by applying Euler's identity \begin{equation} q^{1/12} \sum_{k \in \ZZ} (-1)^k q^{3k^2+k}= \eta(2 \tau) . \end{equation} We thus have \begin{equation}\label{eqn:mcktht:um-tau1} - e(-\tfrac{1}{10})\frac{\vartheta^{c^{(1)},c^{(2)}}_{a,b}(\tau)}{\eta(2 \tau)} = 2T^-_{\hat\tau,1} - e(-\tfrac{1}{60})R_{\frac{1}{30},-\frac{1}{2}}(15 \tau) - e(-\tfrac{11}{60}) R_{\frac{11}{30},-\frac{1}{2}}(15 \tau). \end{equation} In particular, $T^-_{\hat\tau,1}$ is the Fourier expansion of a holomorphic function on $\HH$, which we henceforth denote $T^-_{\hat\tau,1}(\tau)$.
Since $T^-_{\hat\tau,1}(\tau)$ is holomorphic, the function (\ref{eqn:mcktht:um-tau1}) is a harmonic weak Maass form of weight $1/2$, according to Proposition 4.2 of \cite{Zwegers}. (Cf. also \S\ref{sec:mcktht:maass}.) Thus we are in a directly similar situation to that encountered at the end of \S\ref{sec:mcktht:indtht}. Namely, we have that $T^-_{\hat\tau,1}(\tau)$ is a mock modular form of weight $1/2$ (for some congruence subgroup of $\SL_2(\ZZ)$), and the second and third summands of the right hand side of (\ref{eqn:mcktht:um-tau1}) comprise the Eichler integral of its shadow. Applying (\ref{eqn:mcktht-Rabgab}), (\ref{eqn:mcktht-gab}) and (\ref{eqn:mcktht-gabSmr}), and also \begin{gather} e(-\tfrac{1}{60})g_{\frac{1}{30},\frac{1}{2}}(15 \tau)+e(-\tfrac{11}{60})g_{\frac{11}{30},\frac{1}{2}}(15 \tau) =\frac{1}{30}\left( S_{30,1}+S_{30,11}+S_{30,19}+S_{30,29} \right)(\tau), \end{gather} we conclude that the shadow of $2T^-_{\hat\tau,1}(\tau)$ is indeed $S^X_{\hat\tau,1}(\tau)$ (cf. (\ref{eqn:mcktht-SXg})). For $T^-_{\hat\tau,7}$ we take $A$, $b$, $c^{(1)}$, $c^{(2)}$ as before but set $a=\frac{1}{10}\binom{3}{3}$. We now have \begin{gather} \nu=\begin{pmatrix} k+\frac{3}{10} \\ m+\frac{3}{10} \end{pmatrix},\; Q(\nu)= 3 k^2 + \frac{m^2}{2}+ 4km+3k+\frac{3 m}{2}+\frac{27}{40},\; B(\nu,b)= \frac{k+m}{2} + \frac{3}{10},\\ \operatorname{sgn} \left( B(c^{(1)},\nu) \right) =\operatorname{sgn} (k+3/10),\;\operatorname{sgn} \left( B(c^{(2)},\nu) \right)= \operatorname{sgn} (-m-3/10). \end{gather} Proceeding as we did for $T^-_{\hat\tau,1}$, we find that the contribution from the $c^{(1)}$ term vanishes again. For the $c^{(2)}$ term we find that $P_0$ consists of the three values $\mu_0=\frac{1}{10}\binom{3}{3},\frac{1}{10}\binom{3}{13},\frac{1}{10}\binom{3}{23}$, and we have $B(c^{(2)},\mu_0)/2 Q(c^{(2)})= \frac{3}{30}, \frac{13}{30}, \frac{23}{30}$, respectively. The first value of $\mu_0$ leads to a vanishing contribution, while the other two lead, just as before, to \begin{equation} - e(-\tfrac{3}{10})\frac{\vartheta^{c^{(1)},c^{(2)}}_{a,b}(\tau)}{\eta(2 \tau)} = 2T^-_{\hat\tau,7} - e(-\tfrac{13}{60})R_{\frac{13}{30},-\frac{1}{2}}(15 \tau) - e(-\tfrac{23}{60})R_{\frac{23}{30},-\frac{1}{2}}(15 \tau). \end{equation} We thus conclude that $T^-_{\hat\tau,7}$ is the Fourier expansion of a mock modular form of weight $1/2$, and using \begin{gather} e(-\tfrac{13}{60})g_{\frac{13}{30},\frac{1}{2}}(15 \tau)+e(-\tfrac{23}{60})g_{\frac{23}{30},\frac{1}{2}}(15 \tau) =\frac{1}{30}\left( S_{30,7}+S_{30,13}+S_{30,17}+S_{30,23} \right)(\tau) \end{gather} we see that the shadow of $2T^-_{\hat\tau,7}(\tau)$ is $S^X_{\hat\tau,7}(\tau)$ (cf. (\ref{eqn:mcktht-SXg})). So we have verified that the shadow of $2T^-_{g}=(2T^-_{g,r})$ is $S^X_g=(S^X_{g,r})$ for $o(g)=2$. Corollary 2.9 of \cite{Zwegers} details the modular transformation properties of the indefinite theta functions $\vartheta^{c^{(1)},c^{(2)}}_{a,b}(\tau)$. Applying these formulas, much as in the proofs of Propositions 4.10 and 4.13 in \cite{Zwegers}, we see that $2T^-_{\hat\tau}$ transforms in the desired way under the action of $\Gamma_0(2)$. Corollary 2.9 also enables us to compute the expansion of $2T^-_{\hat\tau}$ at the cusp of $\Gamma_0(2)$ represented by $0$. We ultimately find that both $T^-_{\hat\tau,1}(\tau)$ and $T^-_{\hat\tau,7}(\tau)$ vanish as $\tau\to 0$. Thus $2T^-_{\hat\tau}$ has no poles away from the infinite cusp.
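In preparation for the remaining case, observe that $(q;q)_\infty=q^{-1/24}\eta(\tau)$ and $(q^3;q^3)_\infty=q^{-1/8}\eta(3\tau)$, so that the prefactor $q^{-1/12}$ in (\ref{eqn:va:cnstn-Tpmsigmaaexplicit}) is exactly absorbed, while $\tfrac{15k^2}{2}+\tfrac{3ak}{2}+\tfrac{3a^2}{40}=\tfrac{3}{40}(10k+a)^2$. Thus (\ref{eqn:va:cnstn-Tpmsigmaaexplicit}) may be rewritten as \begin{gather} T^{\pm}_{\hat\sigma,a}=\pm\frac{\eta(\tau)}{\eta(3\tau)}\sum_{k\in\ZZ}(-1)^kq^{3(10k+a)^2/40}, \end{gather} an eta quotient times the theta series of a rank one lattice.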
It remains to consider the case $o(g)=3$, but this can be handled by applying classical results on positive-definite theta functions, since, as noted above, the formula (\ref{eqn:va:cnstn-Tpmsigmaaexplicit}) gives $T^-_{\hat\sigma,1}$ and $T^-_{\hat\sigma,7}$ explicitly in terms of the Dedekind eta function and the theta series of a rank one lattice. We easily check that these functions transform in the desired way under $\Gamma_0(3)$, and have no poles away from the infinite cusp of $\Gamma_0(3)$. In particular, $2T^-_{\hat\sigma}$ is modular, and has vanishing shadow. \end{proof} We are now ready to prove our main results. \begin{proof}[Proof of Theorem \ref{thm:intro-maintheorem}] Proposition \ref{prop:mcktht:um-TXg} demonstrates that the functions $2T^X_g$ are mock modular forms of weight $1/2$ with the claimed shadows, multiplier systems, and polar parts. It remains to verify that they are the unique such functions. The uniqueness in the case $g=e$ is shown in Corollary 4.2 of \cite{MUM}, using the fact (see Theorem 9.7 in \cite{Dabholkar:2012nd}) that there are no weak Jacobi forms of weight 1. We will give a different (but certainly related) argument here. Consider first the case that $o(g)$ is $1$ or $2$. It suffices to show that if $h=(h_r)$ is a modular form of weight $1/2$, transforming with the same multiplier system as $H^X$ under $\Gamma_0(2)$, with $h_r$ vanishing whenever $r$ does not belong to \begin{gather}\label{eqn:mcktht:um-rrestriction} \{\pm 1,\pm 7,\pm 11,\pm 13,\pm 17,\pm 19,\pm 23, \pm 29 \}, \end{gather} then $h$ vanishes identically. The multiplier system for $H^X$ is trivial when restricted to $\Gamma(120)$, so the components $h_r$ are modular forms for $\Gamma_0(2)\cap\Gamma(120)=\Gamma(120)$. Satz 5.2 of \cite{Sko_Thesis} is an effective version of the celebrated theorem of Serre--Stark \cite{MR0472707} on modular forms of weight $1/2$ for congruence subgroups of $\SL_2(\ZZ)$. It tells us that the space of modular forms of weight $1/2$ for $\Gamma(120)$ is spanned by certain linear combinations of the {\em thetanullwerte} $\theta^0_{n,r}(\tau):=\theta_{n,r}(\tau,0)$, and the only $n$ that can appear are those that divide $30$. On the other hand, the restriction (\ref{eqn:mcktht:um-rrestriction}) implies that any non-zero component $h_r$ must belong to one of $q^{-1/120}\CC[[q]]$ or $q^{71/120}\CC[[q]]$. We conclude that all the $h_r$ are necessarily zero by checking, using \begin{gather}\label{eqn:mcktht:um-thetanullwerte} \theta_{n,r}^0(\tau)=\sum_{k\in\ZZ}q^{(2kn+r)^2/4n}, \end{gather} that none of the $\theta^0_{n,r}$ belong to either space, for $n$ a divisor of $30$. The case that $o(g)=3$ is very similar, except that the $h_r$ are now modular forms on $\Gamma_0(9)\cap \Gamma(120)$, which contains $\Gamma(360)$, and the relevant thetanullwerte are those $\theta_{n,r}^0$ with $n$ a divisor of $90$. We easily check using (\ref{eqn:mcktht:um-thetanullwerte}) that there are no non-zero possibilities for $h_r$, and this completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:intro-rammcktht}] Taking now (\ref{eqn:intro-HXg}) as the definition of $H^X_g$, the identities (\ref{eqn:intro-rammcktht1}) follow directly from the definition (\ref{eqn:mcktht-TXg}) of $T^X_g$, and the explicit expressions (\ref{eqn:va:cnstn-Tpmeaexplicit}) for the components of $T^X_e$. The identities (\ref{eqn:intro-rammcktht2}) follow from the characterization of $H^X_g$ for $o(g)=2$ that is entailed in Theorem \ref{thm:intro-maintheorem}.
Indeed, using Zwegers' results (viz., Propositions 4.10 and 4.13 in \cite{Zwegers}) on the modularity of $\phi_0(-q)$ and $\phi_1(-q)$, we see that the function defined by the right hand side of (\ref{eqn:intro-rammcktht2}) is a vector-valued mock modular form with exactly the same shadow as $2T^X_{\hat\tau}$, transforming with the same multiplier system under $\Gamma_0(2)$, and having the same polar parts at both the infinite and non-infinite cusps of $\Gamma_0(2)$. So it must coincide with $H^X_{2A,1}=2T^X_{\hat\tau}$ according to Theorem \ref{thm:intro-maintheorem}. This completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:intro-qseriesid}] Andrews established Hecke-type ``double sum'' identities for $\phi_0$ and $\phi_1$ in \cite{MR814916}. Rewriting these slightly, we find \begin{gather} \phi_0(-q)=\frac{(q;q)_{\infty}}{(q^2;q^2)_\infty^2}\label{eqn:mcktht:um-phi0Hecke} \left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right)_{\text{$k=m$ mod $2$}} (-1)^m q^{k^2/2+m^2/2+4km+k/2+3m/2},\\ -q^{-1}\phi_1(-q)=\frac{(q;q)_{\infty}}{(q^2;q^2)_\infty^2}\label{eqn:mcktht:um-phi1Hecke} \left( \sum_{k,m \ge 0} - \sum_{k,m <0} \right)_{\text{$k=m$ mod $2$}} (-1)^m q^{k^2/2+m^2/2+4km+3k/2+5m/2}. \end{gather} Armed with the identities (\ref{eqn:intro-rammcktht2}), we obtain (\ref{eqn:intro-qseriesid1}) and (\ref{eqn:intro-qseriesid7}) by comparing (\ref{eqn:mcktht:um-phi0Hecke}) and (\ref{eqn:mcktht:um-phi1Hecke}) with the explicit expression (\ref{eqn:va:cnstn-Tpmtauaexplicit}) for the components of $T^X_{\hat\tau}$. \end{proof} \section*{Acknowledgement} We thank Miranda Cheng for particularly helpful discussions and advice that took place in the early stages of this work. We also thank Ching Hung Lam for discussions on the vertex operator algebra structure here employed. The research of J.D. was supported in part by the Simons Foundation (\#316779). Both authors gratefully acknowledge support from the U.S. National Science Foundation (grants 1203162 and 1214409). \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A transformative change for robotics is enabling robots to effectively improvise tools. Tools can extend the physical capabilities of robots and make them more useful, by enabling them to go beyond small, fixed sets of interchangeable end-effectors often found in industrial settings. However, a major problem with the philosophy of emphasizing tool use is that the right tool is not always accessible, and robots may have to improvise with what is available. Humans, chimpanzees and certain species of birds have all been known to accomplish tasks by creatively utilizing objects available to them, such as sticks and stones \cite{stout2011stone, toth1993pan, jones1973tool}. In the Apollo 13 incident of 1970, a carbon dioxide filter creatively constructed out of a sock, a plastic bag, book covers, and duct tape helped save the lives of the three astronauts on board \cite{cass2005apollo}. Solving problems inventively by using available objects is colloquially referred to as Macgyvering, and is featured extensively in TV shows \cite{Macgyver}, books \cite{Martian}, inventions \cite{RubeGoldberg}, and even cultural traditions (e.g., the Indian tradition of Jugaad \cite{Jugaad}). However, similar tool improvisation and macgyvering capabilities remain beyond the scope of today's robots, limiting them to predefined tools and tasks. The ability to improvise and invent appropriate tools from available resources can greatly increase robot adaptability, enabling robots to handle any uncertainties or equipment failures that may arise. These capabilities will be particularly useful for robots that explore, as well as work in space, underwater, and other locations where required tools may not be easily available. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{imgs/algo_overview.png} \captionsetup{width=\linewidth} \caption{Tool Macgyvering: Given an action or task, and available objects, the robot either substitutes for the missing tool (e.g., using a metal can), or constructs a tool for performing the action (e.g., making a hammer by joining two objects). Highlighted in green are the key contributions of this paper.} \label{fig:algo_overview} \end{figure} Our goal in this work is to enhance the adaptability of robots beyond predefined or prototypical tools. We seek to enable robots to improvise or \textit{Macgyver} tool-based solutions, either by directly selecting an available substitute for a missing tool through tool substitution, or building an appropriate tool from available objects through tool construction. Specifically, we define \textit{Tool Macgyvering} as a subset of macgyvering problems involving tool substitution (e.g., a metal can is used as a substitute tool for hammering a nail), or tool construction (e.g., a hammer is constructed from wooden pieces). In this paper, we contribute a novel Tool Macgyvering framework that takes in a set of available objects along with a desired action to be performed (e.g., ``hit/hammer''), and outputs either a tool substitute or tool construction for performing the action. An overview of our Tool Macgyvering framework is shown in Figure \ref{fig:algo_overview}. \textbf{Tool substitution} identifies the most appropriate object for performing the action, by reasoning about the shape and material of the available objects.
\textbf{Tool construction} identifies the most appropriate object combination (construction) for performing the action, by reasoning about shape, material and the different ways of attaching the objects. Finally, \textbf{arbitration} selects between the object substitutes and constructions to output the most appropriate Tool Macgyvering solution. For performing tool substitution, we directly apply our prior work that has been shown to effectively identify substitute tools from partial point clouds \cite{shrivatsav2020}. For tool construction, our prior work introduced an approach that only reasoned about shape and potential ways of attaching objects together to construct tools \cite{nair2019toolconstr, nair2019autonomous}. However, this resulted in tools made of inappropriate materials, e.g., hammers made out of foam. In this paper, we extend our tool construction approach by incorporating material reasoning, to enable robots to reason about material properties when constructing tools. We further evaluate our current approach against our prior work, with an expanded test set of objects of varied shapes and materials, allowing for the construction of more diverse tools than before. As we show in our experiments, the presented framework results in significant improvement over prior results, both in terms of performance and the quality of output constructions. Our key contributions in this paper are as follows: \begin{enumerate} \item Introduction of a novel, unified Tool Macgyvering framework that combines tool substitution and construction, using arbitration to decide between substitution and construction as the appropriate macgyvering solution; \item Incorporation of material reasoning for tool construction, that significantly improves performance over previous tool construction approaches; \item Introduction of arbitration strategies for selecting between tool substitution and tool construction. \end{enumerate} We validate the effectiveness of our tool construction approach on a 7-DOF robotic arm, through autonomous construction of six different tool types. We also demonstrate the efficiency of our arbitration approaches in deciding the most appropriate macgyvering solution for a specified action. \section{Related Work} In this section, we summarize existing work that is closely related to Tool Macgyvering. \subsection{Tool Construction} Existing research in robotics has primarily focused on tool use (\cite{sinapov2007learning, sinapov2008detecting}), with little prior work in tool construction. Some recent work has explored Macgyvering and the inventive use of available objects for problem solving \cite{sarathy2017macgyver, sarathy2018macgyver}. They propose a theoretical formulation of Macgyvering problems as scenarios that require the initial domain to be transformed (e.g., by adding a state or action), for the goal state to be reachable. They further introduce the Macgyver Test as an alternative to the Turing test, to measure the resourcefulness of robots. Our work differs from theirs in that we explicitly reason about visual and physical properties of objects, and different ways of attaching objects for Tool Macgyvering. Additional research in macgyvering has also focused on the construction of environmental structures, such as techniques for Automated Design of Functional Structures (ADFS), involving construction of navigational structures, e.g., stairs or bridges \cite{erdogan2013planning}. 
They introduce a framework for effectively partitioning the solution space by inducing constraints on the design of the structures. Further, \cite{tosun2018perception} has looked at planning for construction of functional structures by modular robots, focusing on identifying features that enable modification of the environment in order to make it traversable. In similar work, \cite{saboia2018autonomous} has looked at modification of unstructured environments using objects, to create ramps that enhance navigability. More recently, \cite{choi2018creating} extended the cognitive architecture ICARUS to support the creation and use of functional structures such as ramps, in abstract planning scenarios. The formulation of the problem specifically conforms to the cognitive architecture, limiting its generalization. More broadly, these approaches are primarily focused on improving robot navigation through environment modification as opposed to construction of tools. Some existing research has also explored the construction of simple machines such as levers and bridges \cite{stilman2014robots, levihn2014using}. Their work formulates the construction of simple machines as a constraint satisfaction problem where the constraints represent the relationships between the design components. The constraints in their work limit the variability of the simple machines that can be constructed, focusing only on the placement of components relative to one another, e.g., placing a plank over a stone to create a lever. Additionally, \cite{wicaksono17towards} focused on using 3D printing to fabricate tools from polymers. However, these approaches do not address the problem of tool construction using environmental objects. \subsection{Tool Substitution} Prior work in tool substitution has explored the use of large-scale semantic networks \cite{boteanu2015towards}, or visual similarities between tools (\cite{abelha2016model, schoeler2016bootstrapping}), to identify good substitutes. In \cite{abelha2016model}, the authors use Superquadrics (SQs) to model objects for tool substitution. SQs are geometric shapes that include quadrics, but allow for arbitrary powers instead of just powers of two. In their approach, the candidate tools are represented using SQ parameters, and compared to the desired parameters of the tool for which a replacement is sought. In \cite{schoeler2016bootstrapping}, they learn function-to-shape correspondence of objects using supervised learning to identify substitutes for a given tool using part-based shape matching. To model the tools, they use existing point cloud shape representations, such as Ensemble of Shape Functions (ESF) \cite{wohlkinger2011ensemble}. ESF is a descriptor consisting of ten 64-bin histograms (a 640-D vector) describing the shape of a point cloud, with much success in representing partial point clouds \cite{wohlkinger2011ensemble, nair2019autonomous}. However, these approaches do not reason about the material of the objects when evaluating the substitutes.
Two arbitration strategies have been commonly explored to accomplish this, namely, Action Selection and Behavioral Fusion \cite{velayudhan2017sloth}. In action selection, each behavior is associated with a value function that dictates the behavior chosen at any given instant. Thus, only one of the input behaviors is selected. In contrast, behavioral fusion generates a weighted summation of the input behaviors, and is often used in navigational tasks. Given the nature of our problem, we use action selection to arbitrate between tool substitution and construction, developing appropriate value functions to select the desired behavior for the specified action. \subsection{Material Reasoning} Material properties play an important role when detecting appropriate objects for tool construction and substitution, e.g., for hammering, wooden or metallic objects are preferred over foam. In \cite{perlow2017raw}, the authors describe an approach for detecting appropriate raw materials for object construction, and demonstrate their work in the simulated world of Minecraft. Their work uses neural networks to classify materials from object images. Several other vision-based approaches to material recognition have been previously explored \cite{schwartz2018recognizing, hu2011toward, bell2015material}. These approaches focus on the visual appearance of objects to decipher their material properties. In contrast to visual reasoning, \cite{erickson2019classification} has explored the use of spectral reasoning for material classification. Spectral reasoning uses a handheld spectrometer to measure the reflected intensities of different wavelengths, in order to profile and classify object materials. Their work has shown promising results with a validation accuracy of 94.6\%. However, generalization posed a greater challenge, with an accuracy of 79\% on previously unseen objects. Nevertheless, spectral data helps offset some critical deficiencies of vision-based approaches, such as sensitivity to light and viewing angle. In this work, we focus on using spectral data to reason about materials of the objects. \subsection{Dual Neural Networks} Dual neural networks\footnote{Also known as Siamese neural networks. We avoid using the term ``Siamese'', instead referring to such networks as Dual Neural Networks in our paper.} consist of two identical networks, each accepting a different input, combined at the end with a distance metric. The parameters of the twin networks are tied, and the distance metric computes the difference between the final layers of the twin networks. Prior work has successfully used dual networks for matching images \cite{koch2015siamese, bromley1994signature, schroff2015facenet}. In this work, we use dual networks to perform shape and material scoring for tool construction and substitution. Our approach is similar to FaceNet \cite{schroff2015facenet}, in that we learn an embedding from the training data, which is then used to match a query input by computing a similarity score. However, in contrast to prior work, the inputs to our dual networks use ESF features for shape scoring, and spectral data for material scoring. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{imgs/tool_macgyver_pipeline.jpg} \captionsetup{width=\linewidth} \caption{Overview of our Tool Macgyvering framework highlighting the different steps involved. The tool construction and substitution pipelines are followed by arbitration, to output a combined ranking of the different strategies that the robot then validates.
Arbitration essentially combines substitution and construction within the framework.} \label{fig:framework} \end{figure*} \section{Tool Macgyvering} In this section we introduce our Tool Macgyvering framework. We begin by formulating our primary research problem as follows: \textit{``Given an action, and a set $C$ of $n$ candidate objects, how can we generate an output ranking of macgyvered solutions for accomplishing the specified action?''} \smallskip We denote the set of all candidate objects as $C = \{c_1, c_2, ..., c_n\}$. Thus, the problem of identifying tool substitutes involves a search space of size $n$, where each $c_i$ is a potential substitute. However, tool construction presents a more challenging combinatorial state space of size $^nP_m$, assuming that we wish to construct a tool with $m$ objects. We denote the set of all permutations of the $m$ objects as $T = \{T_1, T_2, ...\}$, where $T_i = (c_1, ..., c_m)$ is a tuple representing a specific permutation of $m$ objects. We denote the combined space of tool substitutions and constructions as $S = C \cup T$, where $|S| = n + {^nP_m}$. The goal of our approach is to evaluate the states in $S$, to identify the best Tool Macgyvering solution. In order to identify the most suitable set of objects, we develop objective functions that effectively score the appropriateness of the substitutes and constructions for performing the specified action. A general objective function can be expressed as a weighted sum over a set of $k$ features of the candidate objects in $s_i \in S$, denoted by $\phi_1, ..., \phi_k$, as follows: \[\Phi(s_i) = \lambda_1\phi_1(s_i) + \lambda_2\phi_2(s_i) + \cdots + \lambda_k\phi_k(s_i)\] We show that reasoning about three features of the candidate objects, namely, \textbf{\textit{shape}} ($\phi_{shape}$), \textbf{\textit{materials}} ($\phi_{mat}$), and \textbf{\textit{attachments}} ($\phi_{att}$) in the case of tool constructions, enables the robot to effectively explore the state space. We define \textit{attachments} as locations at which objects can be attached together. Our work introduces a learning-based framework for computing the objective function that is computationally scalable as the number of objects increases. \begin{algorithm}[t] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{action; $T=permute(C,m)$} \Output{$T^*$, $Att$, $Type$} \BlankLine $E = [], Att = [], Type = []$ $S = C \cup T$ \For{$i\gets1$ \KwTo $|S|$}{ $\phi_{shape}(s_i) = ShapeFit(s_i, action)$ $\phi_{mat}(s_i) = MaterialFit(s_i, action)$ \uIf{$|s_i| > 1$}{ \tcp{Construction with $T_i$} $t_{att} = AttachType(s_i)$ $\phi_{att}(s_i), A_{close}(s_i) = AttachmentFit(s_i, t_{att})$ $\Phi(s_i) = \phi_{shape}(s_i) + \phi_{mat}(s_i) + \phi_{att}(s_i)$ } \Else{ \tcp{Substitution with $c_i$} $t_{att} = \varnothing$ $A_{close}(s_i) = \varnothing$ $\Phi(s_i) = \phi_{shape}(s_i) + \phi_{mat}(s_i)$ } $E.append(\Phi(s_i))$ $Att.append(A_{close}(s_i))$ $Type.append(t_{att})$ } \tcp{Arbitrate based on value functions} $V = Arbitrate(E, S)$ $S^* = sort(S, V)$ \tcp{Sort $S$ based on $V$} \Return $S^*, Att, Type$ \caption{Tool Macgyvering} \end{algorithm} Our complete Tool Macgyvering framework is shown in Figure \ref{fig:framework}, and the complete Tool Macgyvering algorithm is shown in Algorithm 1.
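To make the search space and objective concrete, the enumeration and scoring loop of Algorithm 1 can be sketched as follows. This is a minimal illustration in Python rather than our released implementation: the $\phi$ functions are placeholders for the learned scorers developed in the following subsections, and all identifiers here are ours. \begin{verbatim}
from itertools import permutations

def build_search_space(candidates, m=2):
    """S = C union T: single-object substitutes plus the
    m-permutations (constructions), so |S| = n + nPm."""
    substitutes = [(c,) for c in candidates]
    constructions = list(permutations(candidates, m))
    return substitutes + constructions

def objective(s, phi_shape, phi_mat, phi_att,
              weights=(1.0, 1.0, 1.0)):
    """Additive objective Phi(s); attachment scoring applies
    only to constructions (|s| > 1), as in Algorithm 1."""
    l_shape, l_mat, l_att = weights
    score = l_shape * phi_shape(s) + l_mat * phi_mat(s)
    if len(s) > 1:
        score += l_att * phi_att(s)
    return score

# Rank all substitutes and constructions, best first:
#   S = build_search_space(C, m)
#   ranked = sorted(S, key=lambda s: objective(s, fs, fm, fa),
#                   reverse=True)
\end{verbatim}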
The pipeline begins with \textbf{workspace segmentation}, which enables the system to identify the candidate objects in the robot's workspace. We use plane subtraction and Sample Consensus Segmentation (SAC)\footnote{The implementation was provided by the PCL library.} on RGB-D data from a camera mounted over the table to identify the candidate objects available to the robot. The \textbf{shape scoring} algorithm ($ShapeFit()$, Algorithm 1, line 4) evaluates the visual appropriateness of the candidate objects and assigns a corresponding shape score ($\phi_{shape}$ or $\phi'_{shape}$). In this paper, we present two ways of computing the shape score, detailed in the following sections. Following shape scoring, the \textbf{material scoring} algorithm ($MaterialFit()$, Algorithm 1, line 5) evaluates the material fitness of the candidate objects, and assigns a corresponding material score ($\phi_{mat}$). The shape and material scores are combined for tool substitution, in a final objective function $\Phi^{subs}$. For tool construction, the scores discussed above do not indicate whether the objects can be attached. Hence, our \textbf{attachment scoring} algorithm ($AttachmentFit()$, Algorithm 2) evaluates whether the candidate objects can be attached. The algorithm outputs an attachment score, which is combined with the shape and material scores to compute the final objective function $\Phi^{cons}$. The final objectives are then used for computing value functions for \textbf{arbitration}. Arbitration uses the value functions to generate a combined \textbf{ranking} of the tool substitutes and constructions (ranked from highest to lowest values). Finally, the robot \textbf{validates} each construction/substitute for its task suitability, by applying the desired action with the object. In the case of construction, the robot first constructs the tool, and then validates it by applying the desired action on the tool. In this work, we assume that the robot can observe whether the tool succeeded, and that the action trajectory is pre-specified. Alternatively, the action trajectory could be learned from demonstration \cite{rana2017towards}, including, if necessary, adapting the original action to fit the dimensions of the new tool \cite{fitzgerald2014representing,gajewski2018adapting}. If the object fails at performing the action or cannot be constructed, the robot iterates through the ranks until a solution is found. In the following sections we describe material, shape, and attachment scoring, followed by the final objective computation for tool substitution and tool construction. Finally, we present three different arbitration strategies for ranking the substitutes and constructions. \subsection{Material Scoring ($\phi_{mat}$)} Given an action and the spectral reading of an object as inputs, material scoring seeks to predict the degree to which the spectral reading is similar to that of canonical tools used for the action. Our previous work has shown that supervised learning using dual neural networks is able to effectively predict material similarity between objects (\cite{shrivatsav2020}), and we follow a similar approach to compute $\phi_{mat}$. Dual neural networks consist of two identical networks, each accepting a different input, combined at the end with a distance metric. The parameters of the twin networks are tied, and the distance metric computes the difference between the final layers of the twin networks. The networks are trained on pairs of inputs that are of the same/different classes, to discriminate between the class identity of the input pairs.
Once the network weights are learned, we use positive examples (i.e., canonical materials) from the training data to learn an \textit{embedding}. $\phi_{mat}$ is then computed as the similarity of the query spectral reading to the embedding. This enables us to match the input spectral reading to the variety of canonical materials that facilitate an action, rather than conforming to the materials of a specific tool. Here, we assume that the material of the action part of the tool is most critical to performing the action. As a result, we simplify our model by only considering the material of the action part, e.g., we model a knife consisting of a metal blade and a plastic handle as metal. This assumption holds for the vast majority of household tools, but could be relaxed in future work. \begin{table}[t] \centering \includegraphics[width=0.38\textwidth]{imgs/materials_table.png} \captionsetup{width=\linewidth} \caption{Appropriate materials for performing each action, used to generate training pairs for the dual networks.} \label{tbl:materials} \end{table} \subsubsection{Feature Representation} We use the SCiO, a commercially available handheld spectrometer (shown in Figure \ref{fig:framework}), to extract spectral readings for the objects. The SCiO scans objects to return a 331-D vector of real-valued spectral readings. \subsubsection{Network Architecture} Our model consists of three hidden layers of 426, 284, and 128 units, respectively. We apply a tanh activation and a dropout of 0.5 after each layer. The final layer is a sigmoid computation over the element-wise $L_1$ difference between the third layers of the two networks. We use the Adam optimizer with a learning rate of 0.001. \subsubsection{Training} To train the dual neural network, we use the SMM50 dataset\footnote{Dataset available at https://github.com/Healthcare-Robotics/smm50}, which contains spectrometer readings for five classes of materials: plastic, paper, wood, metal, and foam. For our work, we manually identified the most appropriate material classes for the different actions, shown in Table \ref{tbl:materials}. We create random pairings of spectral readings, where either both materials in the pair are appropriate for the action, or at least one is not. Given a set $N$ of training samples, $y(x_i, x_j) = 1$ if both materials are appropriate for a given action (as indicated by Table \ref{tbl:materials}), and $y(x_i, x_j) = 0$ if either $x_i$ or $x_j$ corresponds to an inappropriate material. That is, for ``Hit'', (metal, metal) and (metal, wood) pairings are both positive examples, whereas (metal, foam) is a negative example. Note that the two readings in a pair need not belong to the same material class. The reason is that we would like all appropriate material classes for a given action, such as metal and wood for ``Hit'', to be mapped closer together in the embedding space than, say, metal and foam. This allows us to overcome the variance across material classes, learning an embedding space where the desired material classes are closer in distance.
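A minimal sketch of this pair-generation step is shown below; \texttt{samples} maps each material class to its spectral readings, \texttt{appropriate} encodes Table \ref{tbl:materials}, and all names are our own.

\begin{verbatim}
import random

def make_pairs(samples, appropriate, action, n_pairs):
    # samples: dict mapping material class -> list of spectral readings
    # appropriate: dict mapping action -> set of suitable material classes
    classes = list(samples)
    pairs = []
    for _ in range(n_pairs):
        m_i, m_j = random.choice(classes), random.choice(classes)
        x_i = random.choice(samples[m_i])
        x_j = random.choice(samples[m_j])
        # y = 1 iff both materials suit the action, e.g., for "hit":
        # (metal, wood) -> 1, (metal, foam) -> 0
        y = int(m_i in appropriate[action] and m_j in appropriate[action])
        pairs.append((x_i, x_j, y))
    return pairs
\end{verbatim}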
Our training minimizes the standard regularized binary cross-entropy loss function: \begin{align*} \mathcal{L}(x_i, x_j) = -\,y(x_i, x_j)\log(\mathbf{p}(x_i, x_j)) \, - \\ (1-y(x_i,x_j))\log(1-\mathbf{p}(x_i, x_j)) + \lambda \|\mathbf{w}\|^2 \end{align*} The output prediction of the final layer $L$ is given as: \begin{align*} \mathbf{p} = \sigma (\mathbf{w}^T (|h_{1, L-1} - h_{2, L-1}|) + \beta) \end{align*} where $\sigma$ denotes the sigmoidal activation function, $\beta$ denotes the bias term learned during training, and $h_{1, L-1}$ and $h_{2, L-1}$ denote the final hidden layers of the two twin networks, respectively. The element-wise $L_1$ norm of the final hidden layers is passed to the sigmoid function. In essence, the sigmoid function computes a similarity between the output features of the final hidden layers of the two twin networks. Once the network is trained, we learn an embedding using the positive examples (not pairings) from our training set, $x^p_i \in N$, where $x^p_i$ is an appropriate spectral reading for the action. We denote the output of the final hidden layer, for a given input $x$, as $f(x) = h_{1, L-1}(x)$. We pass each $x^p_i$ through one of the twin networks (since both networks are identical and their weights tied), to map each input into a $d$-dimensional Euclidean space, denoted by $f(x^p_i) \in \mathbb{R}^d$. We then compute the embedding as an average over $f(x^p_i)$ for all the positive examples $x^p_i$, where $N_p$ is the total number of positive examples in the training set: \begin{align*} \mathcal{D}^p_{action} = \frac{1}{N_p} \sum_{i=1}^{N_p} f(x^p_i), \qquad x^p_i \in N \end{align*} The computed $d$-dimensional embedding $\mathcal{D}^p_{action}$ represents an aggregation of the most appropriate spectral readings in the training set for a specific action. \subsubsection{Prediction} Given the spectral reading corresponding to a candidate object $c_j$, we compute $f(c_j)$ using our pre-trained model. Then, $\phi_{mat}$ is computed by $MaterialFit()$ (Algorithm 1, line 5) as follows: \begin{align*} \phi_{mat}(c_j) = \sigma (\mathbf{w}^T |\mathcal{D}^p_{action} - f(c_j)| + \beta) \end{align*} This score represents the similarity between the material of the candidate object and the embedding, $\mathcal{D}^p_{action}$, representative of all the positive examples within the training data. For tool construction, the score is computed for the objects $c_j \in T_i$. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/shape_scoring.jpg} \captionsetup{width=\linewidth} \caption{The two types of shape scoring. For independent shape scoring, the candidate parts are scored independently and combined into a single score as the product of their individual scores. For joint shape scoring, the composite object is scored.} \label{fig:shape_scoring} \end{figure} \subsection{Shape Scoring ($\phi_{shape}$ or $\phi'_{shape}$)} Given an action, e.g., ``scoop'', and an object point cloud as inputs, shape scoring seeks to predict the shape fitness of the object for performing the action, by learning the shape-to-function correspondence of objects.
In this paper, we present two ways of computing the shape score in the context of tool construction (also shown in Figure \ref{fig:shape_scoring}): \begin{itemize} \item \textbf{Independent shape scoring}: This approach separately scores each object used for the tool construction. The final shape score is then computed as the product of their independent shape scores; \item \textbf{Joint shape scoring}: This approach scores the \textit{combination} of the different objects in terms of the shape appropriateness of their overall configuration. \end{itemize} Note that we use only joint shape scoring for tool substitution, since substitution does not involve attaching different objects together; instead, the overall shape of the object is scored. We now describe each scoring method. \subsubsection{Independent shape scoring ($\phi_{shape}$)} Since independent shape scoring evaluates tools on a per-part basis, we consider tools to have action parts and grasp parts\footnote{This covers the vast majority of tools \cite{myers2015affordance, abelha2017learning}}. We then train independent neural networks that can learn the correspondence between the shape and function of specific tool parts. Hence, we train separate networks for the tools' action parts, and one for a supporting function, ``\textit{Handle}'', which refers to the tools' grasp part. We represent the shape of the input object point clouds using the Ensemble of Shape Functions (ESF) \cite{wohlkinger2011ensemble}, a 640-D vector. Each neural network takes the ESF feature for an object as input, and outputs a binary label indicating whether the object is suitable for the function. For more information on training the networks, we refer the reader to \cite{nair2019autonomous}. For score prediction, given an action and a tuple of candidate objects $T_i$, we can compute a shape score for $T_i$ using the trained networks. The ordering of the objects within the tuple indicates their correspondence to action or grasp parts. Let $\mathcal{K}$ denote the set of objects in $T_i$ that are candidates for the action parts of the final tool, and let $T_i - \mathcal{K}$ be the set of candidate grasp parts. Then the shape score $\phi_{shape}(T_i)$ is computed using the trained networks as follows: \begin{align*} \phi_{shape}(T_i) = \prod_{c_j \in \mathcal{K}}p(action|c_j) \prod_{c_j \in T_i-\mathcal{K}}p(handle|c_j) \end{align*} where $p$ is the prediction confidence of the corresponding network. Thus, we combine the prediction confidences for all action parts and grasp parts. For example, if the specified action is ``hit'' and $T_i$ consists of two objects $(c_1, c_2)$, then $\phi_{shape}(T_i) = p(hit|c_1) \cdot p(handle|c_2)$ (a short sketch of this computation is given below). \subsubsection{Joint shape scoring ($\phi'_{shape}$)} For joint shape scoring, our goal is to learn the correspondence between the full tool shape and functionality, rather than taking a part-based approach. Here, we train independent dual neural networks on full tool point clouds corresponding to different actions. As before, we represent the input point clouds using ESF features. Each dual neural network takes as input the ESF feature for an object (or object combination), and outputs a binary label indicating whether the input is suitable for a particular function.
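The sketch below illustrates the independent shape score referenced above; the per-part predictors \texttt{p\_action} and \texttt{p\_handle} are hypothetical wrappers around the trained part networks.

\begin{verbatim}
def independent_shape_score(T_i, K, p_action, p_handle):
    # T_i: ordered tuple of candidate parts; K: subset of T_i acting
    # as action parts. p_action / p_handle map an object's ESF feature
    # to the corresponding network's prediction confidence.
    score = 1.0
    for c_j in T_i:
        score *= p_action(c_j) if c_j in K else p_handle(c_j)
    return score
\end{verbatim}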
The training procedure for these joint networks is similar to material scoring (additional details in \cite{shrivatsav2020}), and is used to learn an embedding space, $\mathcal{E}^p_{action}$, representative of the positive training examples (i.e., canonical tools for performing the action, obtained from tool databases such as ToolWeb \cite{abelha2016model}). In the case of tool substitution, each candidate object point cloud $c_i$ is passed as input to the trained model and the shape score is computed as follows: \begin{align*} \phi_{shape}'(c_i) = \sigma (\mathbf{w}^T |\mathcal{E}^p_{action} - f(c_i)|^2 + \beta) \end{align*} where $f$ denotes the output of the final hidden layer of the dual neural network. This score represents the similarity between the ESF feature of the input and the embedding. For tool construction, given an input set of objects $T_i$, our joint shape scoring approach begins by aligning the components in $T_i$ in a configuration consistent with prototypical tools used for the specified action. In order to retrieve this configuration, we sample one random tool corresponding to the specified action from the ToolWeb dataset used for training the shape scoring model. Further, we use Principal Component Analysis (PCA) to orient the object point clouds in $T_i$ with respect to the example tool. The aligned point cloud is then passed as input to the dual network to compute a shape score as above. Figure \ref{fig:shape_scoring} shows an example of the aligned point cloud. The joint shape scoring method effectively treats constructions as substitutes. \begin{algorithm}[t] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{candidate tool parts $T_i$, attachment type $t_{att}$} \Output{$\phi_{att}(T_i)$, $A_{close}(T_i)$} \BlankLine $\phi_{att}(T_i) = 0,$ $A_{close}(T_i) = []$ $T_i' = Align(T_i)$ $P = ComputeIntersections(T_i')$ \tcp{Compute attachment points based on attachment type} \uIf{$t_{att} = `pierce'$}{ \uIf{$isPierceable(T_i)$}{ $A^{T_i} = P$ $\alpha = 0.5$ } \Else{ $A^{T_i} = \varnothing$ } } \uElseIf{$t_{att} = `grasp'$}{ $A^{T_i} = GraspSample(T_i)$ $\alpha = 0$ } \uElseIf{$t_{att} = `magnetic'$}{ \tcp{Predefined magnet location} $A^{T_i} = userInput(T_i)$ $\alpha = 0$ } \Else{ $A^{T_i} = \varnothing$ } \uIf{$A^{T_i} \neq \varnothing$}{ \ForEach{$t_i \in T_i', c_k \in t_i$}{ $A^{T_i}(c_k) = ClosestAttachment(P, c_k, A^{T_i})$ $\phi_{att}(T_i) \stackrel{+}{=} \|P - A^{T_i}(c_k)\|$ \tcp{Dist to P} $A_{close}(T_i).append(A^{T_i}(c_k))$ } } \Else{ $\phi_{att}(T_i) = \infty$ \Return $\phi_{att}(T_i), P$ } $\phi_{att}(T_i) \stackrel{+}{=} \alpha$ \tcp{Add cost} $\gamma = -max(\phi_{att}(T_i))$ \tcp{normalizer} \Return $\phi_{att}(T_i)/\gamma, A_{close}(T_i)$ \caption{Attachment Fit} \end{algorithm} \subsection{Attachment Scoring ($\phi_{att}$)} Given an action and a set of objects, we seek to predict whether the objects can be attached to perform the specified action. Hence, attachment scoring is specific to tool construction. The degree to which the objects facilitate the desired attachment is indicated by the attachment score. In order to attach the objects, we consider three attachment types, namely, \textit{pierce attachment} (piercing one object with another, e.g., foam pierced with a screwdriver), \textit{grasp attachment} (grasping one object with another, e.g., a coin grasped with pliers), and \textit{magnetic attachment} (attaching objects via magnets on them). The attachment scoring algorithm is shown in Algorithm 2 ($AttachmentFit()$).
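Both joint shape scoring and attachment scoring rely on this PCA-based alignment of candidate parts to a sampled prototype tool. The following numpy sketch illustrates the idea under simplifying assumptions; eigenvector sign ambiguities, which a full implementation must resolve, are ignored here.

\begin{verbatim}
import numpy as np

def pca_align(points, reference):
    # points, reference: (N, 3) and (M, 3) point clouds.
    # Rotate `points` so its principal axes match those of the
    # sampled `reference` tool, then move it to the reference centroid.
    def principal_axes(p):
        w, v = np.linalg.eigh(np.cov((p - p.mean(axis=0)).T))
        return v[:, np.argsort(w)[::-1]]  # columns, descending variance
    R = principal_axes(reference) @ principal_axes(points).T
    return (points - points.mean(axis=0)) @ R.T + reference.mean(axis=0)
\end{verbatim}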
Attachment scoring begins by aligning the components of the candidate tool $T_i$ in a configuration consistent with prototypical tools used for the specified action ($Align()$, line 2). This is similar to the aligned point cloud generation process followed in our joint shape scoring approach. This results in a set of alignments $T_i'$. We then approximate the intersections of the point clouds in each alignment by calculating the centroid of the closest points between the point clouds ($ComputeIntersections()$, line 3). The resultant set of centroids, $P$, is the candidate list of attachments we want to make, i.e., the target attachment locations. The attachment score $\phi_{att}(T_i)$ is then computed as the Euclidean proximity between the target attachment locations, $P$, and the closest attachments facilitated by the candidate objects (denoted $A_{close}(T_i)$), depending on the attachment type $t_{att}$. This is computed for each object $c_j$ in each alignment $t_i \in T_i'$ ($ClosestAttachment()$, lines 21-25). The resulting score, $\phi_{att}$, is normalized by $\gamma$ (Algorithm 2, line 31); the negative normalizer ensures that smaller distances yield better values. If an object $c_j \in T_i$ is known to have no attachment points, $\phi_{att}(T_i) = \infty$, since the objects cannot be attached to construct the tool. Thus, given the set of closest attachment points $A^{T_i}(c_j)$ on the objects $c_j \in T_i$, the attachment score is computed as (Algorithm 2, line 23): \begin{align*} \phi_{att}(T_i) = \begin{cases} \alpha + \sum\limits_{c_j \in T_i} \left\Vert P - A^{T_i}(c_j)\right\Vert, & \text{if attachable} \\ \infty, & \text{otherwise} \end{cases} \end{align*} The term $\alpha$ denotes a fixed cost of attaching the objects and varies for each attachment type, depending on whether some attachments are costlier than others. For example, piercing an object may damage it, so a high cost associated with piercing can encourage other alternatives where available. Thus, the set of attachment points $A^{T_i}$ is required to compute $\phi_{att}$. In the case of pierce and grasp attachments, we assume that the capabilities of the acting tool are known ($t_{att}$ is known). That is, objects with pierce capability (screwdrivers and sharp pointed objects) and objects with grasp capability (pliers, tongs) are known a priori. However, these can also be identified using existing affordance learning approaches \cite{AffordanceNet18}. Below, we describe how the attachments are computed for each attachment type (Algorithm 2, lines 4-19). \subsubsection{Pierce Attachment} Similar to material reasoning, we use the SCiO sensor to reason about material pierceability. We train a neural network to output a binary label indicating the pierceability of the input spectral reading. We assume homogeneity of materials, i.e., if an object is pierceable, it is uniformly pierceable throughout the object. For our model, we use a neural network with a single hidden layer of 256 units and a binary output layer. We use the Adam optimizer, with a ReLU activation in the hidden layer and a sigmoid in the final layer. To train our model, we used the same dataset used for material reasoning, namely SMM50, with spectrometer readings for five classes of materials: plastic, wood, metal, paper, and foam. Of these classes, we consider paper and foam objects to be pierceable, and we label each sample accordingly. For each material class, 12 different objects were used, with 50 samples collected per object from different locations on the object.
This results in a total of 600 spectrometer readings per class. To determine the attachment score during tool construction for the input $T_i$, the SCiO sensor is used to scan the objects, and the corresponding spectral reading is passed to the classifier. The attachment score $\phi_{att}(T_i)$ is then computed based on the classifier label. If the output label is zero (Algorithm 2, line 5, $isPierceable(T_i) = 0$), $A^{T_i} = \varnothing$, since pierce attachment is not possible. If pierceable, $A^{T_i} = P$, assuming homogeneity of material properties allowing the objects to be configured at the desired location, and $\alpha = 0.5$, indicating a fixed cost of performing the pierce attachment. \subsubsection{Grasp Attachment} Grasp attachment is defined as using one object to grasp/hold another object to extend the robot's reach (e.g., grasping a bowl with pliers). We model the grasping tool (pliers or tongs) as an extended robot gripper, allowing the use of existing robot grasp sampling approaches \cite{ten2017grasp, levine2018learning, zech2016grasp} for computing locations where the tool can grasp objects. In particular, we use the approach discussed by \cite{ten2017grasp}, which outputs a set of grasp locations given input parameters reflecting the attributes of the pliers/tongs used for grasping. We cluster the grasp locations (using a Euclidean metric) to identify unique grasps. As described in their work, without any additional training, the geometry-based grasp sampling approach achieves an accuracy of 73\%. To further improve accuracy, it is possible to train an object-specific model to identify valid grasps. However, a key challenge with using a pre-trained model is the need to re-train it for every newly encountered pliers/tongs with differing parameters, which can be inefficient in terms of computational resources. Hence, we use the geometry-based grasp sampling approach without any object-specific refinement. To compute the attachment score for the input $T_i$, grasps are sampled for the objects (line 12, $GraspSample(T_i)$) using the existing grasp sampling algorithm\footnote{Implementation at https://github.com/atenpas/gpg based on \cite{ten2017grasp}}. Once sampled, the resultant grasp locations are returned as potential attachment points $A^{T_i}$. The grasp locations are used to compute $\phi_{att}$ based on their Euclidean proximity to $P$. We set $\alpha = 0$ since there is no explicit cost associated with performing grasp attachments. \subsubsection{Magnetic Attachment} We assume the locations of magnets to be provided or predefined, i.e., $A^{T_i}$ is known, and we compute $\phi_{att}$ based on their Euclidean proximity to $P$. We set $\alpha = 0$ since there is no explicit cost associated with performing magnetic attachments. If magnets are absent, $\phi_{att} = \infty$. However, as described in \cite{nair2019toolconstr}, it is also possible to perform magnetic attachments via exploration if they are not predefined. This process uses the desired target locations $P$ to explore attachments of the objects. If magnets are present proximal to $P$, enabling the desired configuration, then the objects are attached during the exploration process. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{imgs/construction_example.PNG} \captionsetup{width=\linewidth} \caption{The robot setup and steps involved in a typical tool construction cycle.
In the case of pierce attachment, the robot uses the SCiO sensor to sense material properties, and in the case of grasp attachment, the robot samples valid grasps for the object. The robot then builds the tool and tests it by performing the action with the tool \cite{nair2019autonomous}.} \label{fig:construction_example} \end{figure*} \subsection{Final Score Computation} Given the shape, material, and attachment scores, we compute the final scores for tool substitution and construction. \subsubsection{Tool Substitution} The final score for tool substitutes is computed as a weighted sum of the shape and material scores. We empirically determined uniform weights of $\lambda_1 = 1$ and $\lambda_2 = 1$ to work best. Our final score for tool substitutes, $\Phi^{subs}$, is computed as follows. Note that we use the joint shape scoring method to compute shape scores for tool substitutes: \[\Phi^{subs}(c_i) = \phi_{shape}'(c_i) + \phi_{mat}(c_i)\] Each $c_i \in C$ denotes a candidate object, which is a potential substitute tool. \subsubsection{Tool Construction} For tool constructions, the final score is computed as a weighted sum of the shape, material, and attachment scores. Similar to substitution, we found uniform weights of $\lambda_1 = 1$, $\lambda_2 = 1$, and $\lambda_3 = 1$ to work best for tool constructions. Our final score, $\Phi^{cons}$, is computed as: \[\Phi^{cons}(T_i) = \phi_{shape}(T_i) + \phi_{mat}(T_i) + \phi_{att}(T_i)\] Each $T_i \in T$ denotes a permutation of the candidate objects used for tool construction. Note that either the independent or the joint shape scoring approach can be used for the final score computation for tool construction. If joint shape scoring is used, the final score is computed as $\Phi^{subs}(T_i) + \phi_{att}(T_i)$. The final score can optionally be used to generate a ranking of tool constructions. The robot can then iterate through the ranking until a successful construction is found \cite{nair2019autonomous}. \subsection{Arbitration of Tool Substitution and Tool Construction} Arbitration combines tool substitution and tool construction within our pipeline, and in this section we present different arbitration strategies for deciding between the two. We formulate the problem as follows: \textit{``Given an action, and a set $C$ of $n$ candidate objects, how can we arbitrate between tool substitution and tool construction for accomplishing the specified action?''} \smallskip Inspired by existing research in behavioral robotics, each strategy (substitution or construction) is associated with a value function, $\Psi$, that dictates the strategy chosen at a given instant \cite{velayudhan2017sloth, arkin2003ethological, likhachev2000robotic}. The value functions in our work account for the overall fitness of the substitutes and constructions for performing the specified action. We generate a combined ranking of the strategies (highest to lowest value) that the robot iterates through, validating each strategy until a solution is found. Our set of states $S = C \cup T$ represents the union of the set of all individual objects $c_i$ and the set of all permutations of $m$ objects $T_i$ for tool construction. We now introduce three different value functions for arbitration, each of which uses the final scores computed in the previous section; the sketch below summarizes all three ahead of their formal definitions.
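As a hedged summary (the score functions are passed in as callables, and the threshold matches the rule-based definition that follows):

\begin{verbatim}
import math

def psi_rule(s_i, subs, cons, thresh=1.0):
    # Fixed values favoring substitutes whose score clears a threshold.
    if len(s_i) == 1:
        return 10 if subs(s_i) > thresh else -math.inf
    return 0 if cons(s_i) > thresh else -math.inf

def psi_obj(s_i, subs, cons):
    # Direct comparison of the two final objectives.
    return subs(s_i) if len(s_i) == 1 else cons(s_i)

def psi_subs(s_i, subs, att):
    # Joint shape scoring for constructions, plus attachment cost.
    return subs(s_i) if len(s_i) == 1 else subs(s_i) + att(s_i)
\end{verbatim}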
First, we present a \textbf{rule-based approach} that assigns a fixed value to constructions, as follows: \begin{align*} \Psi_{rule}(s_i) = \begin{cases} 10, & \text{if} \ |s_i| = 1, \Phi^{subs}(s_i) > 1.0 \\ 0, & \text{if} \ |s_i| > 1, \Phi^{cons}(s_i) > 1.0 \\ -\infty, & \text{otherwise} \end{cases} \end{align*} where $|s_i|$ denotes the cardinality of $s_i \in S$, indicating whether a single object is being evaluated (substitute, $c_i$) or a combination of objects (construction, $T_i$). This approach prefers substitutions over constructions, provided the substitutions score higher than a threshold. We empirically set our threshold to 1.0. A fixed value is also assigned to constructions that exceed the threshold in terms of the construction objective. Second, we present a \textbf{direct comparison} approach that compares objectives and assigns values to states in $S$ as: \begin{align*} \Psi_{obj}(s_i) = \begin{cases} \Phi^{subs}(s_i), & \text{if} \ |s_i| = 1 \\ \Phi^{cons}(s_i), & \text{if} \ |s_i| > 1 \end{cases} \end{align*} Note that the tool construction objective $\Phi^{cons}(s_i)$ automatically incorporates a cost associated with attachments, namely the attachment score $\phi_{att}$, which penalizes constructions relative to substitutions. Here, tool construction uses the independent shape scoring approach. Third, we present a \textbf{substitution-based} approach that uses joint shape scoring for tool constructions, in effect treating the constructions as substitute objects. Hence, the values for states in $S$ are assigned as follows: \begin{align*} \Psi_{subs}(s_i) = \begin{cases} \Phi^{subs}(s_i), & \text{if} \ |s_i| = 1 \\ \Phi^{subs}(s_i) + \phi_{att}(s_i), & \text{if} \ |s_i| > 1 \end{cases} \end{align*} Here, the final attachment score is added to account for the cost of attachment for tool constructions. This enables the tool constructions and substitutions to be compared directly in terms of the shape scoring objective. In the following sections, we evaluate each component of our Tool Macgyvering pipeline, namely, tool construction, tool substitution, and arbitration. \section{Tool Construction Evaluation} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/Full_dataset.jpg} \captionsetup{width=\linewidth} \caption{The 58 objects used for experimental validation.} \label{fig:all_obj} \end{figure} \begin{table*}[t] \centering \includegraphics[width=0.94\textwidth]{imgs/final_pipeline.png} \captionsetup{width=\linewidth} \caption{Results of our ablation studies. Combined shape, material, and attachment reasoning (in bold) performs best. Arrows indicate whether lower or higher values are preferred, e.g., lower ranks are preferred.} \label{fig:final_pipeline} \end{table*} In this section, we describe our experimental setup and present the results for our tool construction approach. We validate our approach on the construction of tools for six different actions, encoded as textual inputs: `hit', `scoop/contain', `flip', `screw', `rake', and `squeegee'. Each tool consists of two components ($m = 2$) corresponding to the action part (`hit', `scoop/contain', `flip', `screw', `rake', `squeegee') and the grasp part (`handle'). The performance of our tool construction approach is evaluated in terms of the final ranking output by our algorithm. We use the final score $\Phi^{cons}$ to rank the different constructions. The tool models used to compute the desired attachment location $P$ are acquired from the ToolWeb dataset \cite{abelha2017learning}.
Our experiments seek to validate two aspects of our work: \begin{enumerate} \item \textit{Final tool ranking evaluation}: Performance of our tool construction approach in terms of final ranking, with ablation studies. We use the final score $\Phi^{cons}$ to rank the object constructions. \item \textit{Comparison to prior tool construction approaches}: Performance of our current tool construction approach against our prior work, namely, \cite{nair2019toolconstr, nair2019autonomous}\footnote{As discussed in the Related Work, we are not aware of any other prior work that demonstrates tool construction using environmental objects.}. \end{enumerate} For all our experiments, we use a test set consisting of 58 previously unseen candidate objects for tool construction (shown in Figure \ref{fig:all_obj}). The set consists of metal (11/58), wood (12/58), plastic (19/58), paper (2/58), and foam (14/58) objects. Only the foam and paper objects are pierceable. Figure \ref{fig:construction_example} shows a sample experimental setup and the steps involved in robot tool construction. During tool construction, the robot begins by scanning the materials of the objects for attachment scoring, followed by ranking and construction of the tools. The robot then tests the tool by using it to perform the desired action, iterating through the ranks until a successful construction is found. To overcome manipulation and perception challenges that are beyond the scope of this work, the available objects were spaced apart and oriented to facilitate grasping. For the evaluation, we create 10 different sets of 10 objects (chosen from the 58) for each of the six tools, and report the average results (total $10 \times 6$ cases with 10 candidate objects per case). We create each set by choosing a random set of objects, ensuring that only one ``correct'' combination of objects exists per set. The correct combinations are determined based on human assessment of the objects. \begin{table*}[t] \centering \includegraphics[width=0.9\textwidth]{imgs/indi_tools.PNG} \captionsetup{width=\linewidth} \caption{Tool-wise breakdown of the combined shape, material, and attachment reasoning approach, along with the example tools used for computing the target attachment locations.} \label{fig:indi_tools} \end{table*} \subsection{Final Tool Ranking} \label{subsec:final_ranking} We evaluate our overall approach in terms of the final output ranking generated on the sets of objects described in the previous section. We perform ablation studies to compare the performance of shape, material, and attachment reasoning for tool construction. For shape scoring, we use the independent shape scoring approach owing to its success in our previous work \cite{nair2019autonomous}. The metrics used in this evaluation consider i) the final ranking of the correct combinations, and ii) the computation time. We would like the correct combination to be ranked as high as possible, ideally at rank 1, indicating that it would be the first object combination the robot attempts to construct. We report the average rank of the correct combination for each tool (average of 10 builds), the number of builds for which the correct combination was ranked within the top 5 ranks (hits@5), the average number of possible configurations of objects, and the average total computation time. The number of object configurations highlights the complexity of the state space and is also used to compute rank\%, the fraction of the rank over the total configuration space.
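For clarity, a small sketch of how these ranking metrics can be computed from per-set results (all names are our own):

\begin{verbatim}
def ranking_metrics(ranked_sets, correct):
    # ranked_sets: one ranked candidate list per test set (best first)
    # correct: the ground-truth ("correct") candidate for each set
    ranks = [rs.index(c) + 1 for rs, c in zip(ranked_sets, correct)]
    avg_rank = sum(ranks) / len(ranks)
    hits_at_5 = sum(r <= 5 for r in ranks) / len(ranks)
    # rank%: rank as a fraction of the total configuration space
    rank_pct = 100 * sum(r / len(rs) for r, rs in
                         zip(ranks, ranked_sets)) / len(ranks)
    return avg_rank, hits_at_5, rank_pct
\end{verbatim}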
Table \ref{fig:final_pipeline} shows the overall performance of our approach, and Table \ref{fig:indi_tools} shows a tool-wise breakdown. From Table \ref{fig:final_pipeline}, we see that our final approach, combining shape, material, and attachment scoring, yields a rank of 5.84, with 67\% hits@5 and a rank\% of 5.72\%. Hence, there is a significant benefit to combining shape, attachment, and material reasoning in terms of final ranking, rank\%, and hits@5. Using only shape and attachment also performs well, with a rank of 8.43 and a rank\% of 8.26\%, in comparison to the other baselines. All approaches significantly outperform random ranking, which explores roughly half of the entire configuration space (rank\% of 49.9\%). In Table \ref{fig:indi_tools}, we show the performance of combined shape, material, and attachment reasoning for each action. Also shown are some example tools used for the computation of the target attachment locations, $P$. Overall, our approach achieved an average rank of 5.84 across all tool types. Note that the total configuration space for each tool is large (avg. $\approx 100$ configurations), indicating the complexity of the problem space and the effectiveness of our combined reasoning approach, which ranks the correct tool construction with a rank\% of 5\%. Thus, only a small fraction of the total configuration space is explored by our approach. We also note that the approach performed relatively poorly on ``squeegee'', with a rank of 10.33, primarily because none of the available object combinations closely resemble an actual squeegee, making it a challenging problem for tool construction. Our approach achieves an average rank of 5.03 across the remaining tool types. \textit{\textbf{Summary:}} Combining shape, material, and attachment reasoning leads to significantly improved performance for tool construction compared to reasoning over any subset of these features. \subsection{Comparison to tool construction approaches} \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{imgs/tool_collage.png} \captionsetup{width=\linewidth} \caption{A collage of the complete set of 60 tool constructions in our test set, constructed for six different actions. Note that a small number of experiments (8/60) led to the creation of similar tools due to the availability of objects that could be connected. A symbol on the bottom left of each image indicates that a given approach failed to find the correct construction in that case: $\circ$: current work, $\square$: \cite{nair2019autonomous}, and $\triangle$: \cite{nair2019toolconstr}.} \label{fig:tool_table} \end{figure*} We compare our final tool construction approach, which incorporates material reasoning, to our prior work, namely, \cite{nair2019toolconstr} and \cite{nair2019autonomous}. We use the same set of objects and evaluation metrics as in the previous section, additionally adding the completion rate metric to indicate how many of the total 60 constructions were successfully found. We mark a tool construction attempt as a failure if either 1) the correct combination was assigned a score of $-\infty$, e.g., due to incorrect attachment/material predictions, or 2) the approach returned a tool that did not match in terms of material, e.g., hammers constructed of foam.
\begin{table}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/baselines_final.png} \captionsetup{width=\linewidth} \caption{Performance of our current approach against our previous tool construction work (\cite{nair2019toolconstr, nair2019autonomous}).} \label{fig:baselines_final} \end{table} Our results are shown in Table \ref{fig:baselines_final} and Figure \ref{fig:tool_table}. As shown in Table \ref{fig:baselines_final}, our current approach outperforms our prior work, with a high completion rate of 96.67\%, a rank of 5.84, and hits@5 of 67\%. Hence, there is an improvement in the tool construction pipeline with the introduction of material reasoning, reflected by the lower completion rates of the other approaches (27\% for \cite{nair2019toolconstr} and 60\% for \cite{nair2019autonomous}). Our approach fails at some constructions owing to incorrect pierceability and graspability predictions. Figure \ref{fig:tool_table} shows the diversity of tool constructions output by our approach, including several interesting combinations, e.g., combining pliers and a coin to create a screwdriver (Construction \#10). The symbols at the lower left corner indicate \textit{failed} constructions for each approach. Note that 91\% of the failure cases in our prior approaches were due to incorrect materials of the constructed tools. Overall, our current approach is able to effectively reason about materials, resulting in improved quality of constructions over prior work. Additionally, our results demonstrate the capability of our approach to construct a diverse set of tools. \textit{\textbf{Summary:}} Incorporating material reasoning significantly improves the performance of tool construction over prior approaches, with improved quality of constructions. \section{Tool Substitution Evaluation} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/subs_dataset.png} \captionsetup{width=\linewidth} \caption{The 30 objects used for evaluating tool substitution.} \label{fig:subs_dataset} \end{figure} In this section, we briefly summarize results from our prior tool substitution work \cite{shrivatsav2020} for six actions: ``Hit'', ``Cut'', ``Scoop'', ``Flip'', ``Poke'', and ``Rake'', with five material classes: metal, wood, plastic, paper, and foam. Our experiment validated the performance of combined shape and material reasoning for tool substitution on a set of partial point clouds and spectral readings of real-world objects. The 30 objects used in our experiments are shown in Figure \ref{fig:subs_dataset}. For validation, we created six sets of 10 objects per action (36 sets in total). Each set consisted of one ``correct'' substitute for the given action and nine incorrect ones, which serve as our ground truth\footnote{The correct substitute was determined by three independent evaluators (with a Cronbach's alpha of 0.93).}. We used the final score $\Phi^{subs}$ to rank the different tool substitutes and evaluate our approach. Our metrics included hit@1, indicating the proportion of sets for which the correct tool was ranked at 1; average rank, which is the average rank of the correct tool across the test sets; and hits@5, indicating the number of times the correct tool was ranked within the top five ranks of our output. Our results in Table \ref{fig:results_final} show that, overall, our approach combining shape and material outperformed the other conditions, with an average rank of 2 across all the sets.
In particular, we note that combining shape and material significantly improved hits@5 (86\%, vs. 67\% for shape only and 58\% for material only) and hit@1 (53\%, vs. 28\% for shape only and 22\% for material only). All three approaches performed significantly better than random ranking of the objects (hit@1 of 5\% and hits@5 of 14\%). \begin{table}[t] \centering \includegraphics[width=0.42\textwidth]{imgs/Results_final_table.png} \captionsetup{width=\linewidth} \caption{Ablation results for tool substitution. Combined shape and material scoring performs better overall (bold). Arrows indicate whether higher or lower values are preferred \cite{shrivatsav2020}.} \label{fig:results_final} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/Collage_subs.png} \captionsetup{width=\linewidth} \caption{The first row shows examples of some canonical tools for each action. The following rows show the top three ranked objects for some of the sets. Check marks indicate the ground truths. The actual materials of the objects are also noted \cite{shrivatsav2020}.} \label{fig:collage} \end{figure} Figure \ref{fig:collage} shows some of the ranked substitutes returned by combined shape and material reasoning for some of the test sets. The results highlight the challenges of working with partial RGB-D data and material scans. For example, the (closed) metal can was ranked as the \#2 substitute tool for scooping because its reflective surface resulted in a point cloud that resembled a concave bowl. Further, an incorrect material prediction for the metal mug resulted in it being ranked as the \#2 substitute for hitting. \textit{\textbf{Summary:}} Combined shape and material reasoning leads to significantly improved performance for tool substitution when compared to reasoning about material only or shape only. \section{Evaluation of Arbitration Strategies} Given the independent evaluations of tool construction and substitution presented above, we now evaluate how these two capabilities can be combined using different arbitration strategies. As before, we validate our strategies on six different actions: `hit', `scoop', `flip', `screw', `rake', and `squeegee'. We created five different sets of objects per action, for a total of 30 different cases. In each set, we included one ``correct'' substitute object and one ``correct'' constructed object (a substitution/construction pair), both of which are capable of performing the action; the remainder of the objects were randomly chosen incorrect candidates. The ``correct'' substitutes and constructions for each pair were selected from the substitution and construction test sets used in the experiments described previously. In each case, we asked three independent evaluators (with a Cronbach's alpha of 0.93) to evaluate which among the substitute/construction pair would be a better alternative for performing the specified action. For each object in the test set, the final scores were computed ($\Phi^{subs}$ or $\Phi^{cons}$) and used in the value functions for arbitration. Thus, the final ranking generated by the value functions is a combined ranking of tool substitutes and constructions. We evaluated our arbitration strategies both in terms of the overall ranking of the ground truth and in terms of the specific option chosen between the two alternatives (i.e., either substitution or construction). Our evaluation metrics include average rank, rank\%, and hits@5, as before.
Additionally, we include a metric that indicates the percentage of times the correct option was chosen (\% correct). We compute this by evaluating whether the arbitration strategy correctly chose between the substitution/construction pair, i.e., whether it scored the ground-truth option higher. \begin{table}[t] \centering \includegraphics[width=0.47\textwidth]{imgs/arb_compare_table.png} \captionsetup{width=\linewidth} \caption{The percentage of times the correct option was chosen by each arbitration approach, along with the other metrics. Bold highlights the best approach, and arrows indicate whether higher or lower values are preferred.} \label{fig:arb_compare} \end{table} Our results in Table \ref{fig:arb_compare} show that direct comparison of scores outperforms the other approaches. In terms of ranking (rank, rank\%, and hits@5), we note that the direct and substitution-based approaches perform comparably. However, in terms of the percentage of times the correct strategy was chosen, direct comparison (83.33\%) outperformed the substitution-based approach (60\%). In our observations, the substitution-based approach was more likely to rank substitutes as better than constructions. However, both the direct comparison and substitution-based approaches outperformed random selection (36.67\%). Another observation concerns the inferior performance of the rule-based approach (20\%) compared to random selection in terms of \% correct. This is because the rule-based approach almost always ranked substitutes above constructions, which did not always conform to the ground-truth labels. However, it performed better in terms of average rank, rank\%, and hits@5. This is because the ground-truth substitutes were consistently ranked well, resulting in better average ranking performance. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{imgs/arbitration_collage.jpg} \captionsetup{width=\linewidth} \caption{The arbitration results for the direct and substitution-based approaches for six substitution/construction pairs. Check marks indicate the human-evaluated ground truth. Subs $\rightarrow$ substitution, const $\rightarrow$ construction.} \label{fig:arb_collage} \end{figure} Figure \ref{fig:arb_collage} highlights some of the substitution/construction pairs in our test set, along with the selections of the direct and substitution-based strategies. As shown in the figure, the substitution-based approach was more inclined towards selecting the substitute tools over the constructions. Overall, the direct comparison approach conformed more closely to the ground-truth assessments made by the human evaluators. \textit{\textbf{Summary:}} Direct comparison of the scores outperformed the other baseline approaches in arbitrating between construction and substitution. Further, the best design choices for our final Tool Macgyvering framework involve using independent shape scoring (combined with material and attachments) for tool construction; joint shape scoring with material reasoning for tool substitution; and direct comparison for arbitration. \section{Discussion and Future Work} In this work, we presented a novel Tool Macgyvering framework that combines tool substitution and tool construction using arbitration to output macgyvered solutions for performing an action. We extended our prior work on tool construction by incorporating material reasoning, resulting in significantly improved performance and quality of output constructions.
Our approach effectively discovered 96.67\% of working object combinations (as opposed to 27\% and 60\% in prior work), while exploring only a small percentage of the total configuration space (5.72\%). We also introduced arbitration strategies for deciding between tool substitution and construction for performing an action. Our arbitration strategy involving direct comparison of scores correctly selected between substitution and construction in 83.33\% of the test cases, outperforming the other approaches. In summary, the key findings of this work are as follows: \begin{enumerate} \item Combining material reasoning with shape and attachment reasoning significantly improves the quality of output constructions, with superior performance over previous tool construction approaches in terms of completion rate (96.67\% completion); \item Combined material, shape, and attachment reasoning enables the efficient construction of a wide range of tools, as shown in Figure \ref{fig:tool_table}; \item Arbitration by direct comparison correctly selected between substitution and construction with an accuracy of 83.33\%, performing better than the other arbitration strategies; \item The best performing design for our final Tool Macgyvering framework includes: a) tool construction utilizing \textit{independent shape scoring}, \textit{material scoring}, and \textit{attachment scoring}, b) tool substitution utilizing \textit{joint shape scoring} and \textit{material scoring}, combined with c) \textit{direct comparison} for arbitration. \end{enumerate} In future work, a number of changes can be made to further improve the performance of the system. For example, we observed cases in which shape scoring produced incorrect rankings because the RGB-D sensor captured only a partial point cloud of an object. Future work can address such problems through active perception. Additionally, our future work will address a key limitation of our current approach, namely that the number of objects utilized for constructing a tool equals the number of tool parts, i.e., there is a one-to-one correspondence between candidate objects and tool parts. Finally, additional physical attributes, such as mass and density, can be incorporated into the reasoning framework to further improve performance. In terms of arbitration, a key limitation of our existing approach is that it only considers the physical attributes of the objects. However, other factors, such as effort, risk, and task constraints, can influence the decision. In our future work, we will expand our arbitration strategies to consider a wider range of factors within a multi-objective function. \ifCLASSOPTIONcaptionsoff \newpage \fi \section*{Acknowledgments} This work is supported in part by NSF IIS 1564080 and ONR N000141612835. \bibliographystyle{./IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Debris disks are the leftovers of stellar (and planetary) formation process(es). As the primordial gas-rich, massive circumstellar disk evolves, the small dust grains will grow and form planetesimals, which may become the building blocks of future planets. With a half-life time of a few million years (\citealp{Hernandez2007}), this primordial disk will rapidly transition towards its debris disk phase, losing the vast majority of its gaseous content. In this debris disk phase, the planetesimals leftover from the planet formation process collisionally erode to produce the small dust grains that are observed. Recent advances in high contrast imaging instruments such as the \textit{Gemini Planet Imager} (GPI, \citealp{Perrin2015}) or the VLT/\textit{Spectro-Polarimetric High-contrast Exoplanet REsearch} (SPHERE, \citealp{Beuzit2019}) provide us with new avenues to investigate debris disks at high angular resolution. For instance, \citet{Olofsson2016} and \citet{Milli2017} studied the scattering phase function over a wide range of scattering angles for HD\,61005 and HR\,4796\,A, respectively, using the SPHERE instrument. Such studies help better constrain the nature of the small dust grains in young debris disks. \citet{Lee2016} presented numerical simulations that can explain a variety of morphologies of debris disks. Their code, which can account for the effect of stellar radiation pressure, was used by \citet{Esposito2016} to model GPI observations of the debris disk around HD\,61005. Overall, thanks to the exquisite spatial resolution provided by this new generation of instruments, we are able to perform in-depth studies of young and bright debris disks, trying to constrain the collisional activity responsible for the production of small dust grains.
Since \citet{Jura1991} reported the detection of mid- and far-infrared (IR) excess emission with the \textit{Infrared Astronomical Satellite}, the debris disk around \object{HR\,4796\,A} has been intensely studied, from the ground and from space, with ever increasing image quality. HR\,4796\,A is young ($8\pm2$\,Myr, \citealp{Stauffer1995}) and nearby ($71.9 \pm 0.7$\,pc, \citealp{Gaia2018}). The A-type star, hosting one of the brightest debris disks (fractional luminosity of about $5 \times 10^{-3}$, \citealp{Moor2006}), is in a visual binary system (see Section\,\ref{sec:HR4796B}), with the secondary being an M-type star at a separation of $7.7\arcsec$ (\citealp{Jura1993}). Scattered light and thermal emission observations have revealed a narrow ring of dust at about 77\,au from the central A-type star (e.g., \citealp{Jayawardhana1998,Wyatt1999,Augereau1999,Wahhaj2005,Schneider2009,Thalmann2011,Moerchen2011,Lagrange2012,Rodigas2015,Milli2015,Perrin2015,Milli2017,Schneider2018,Kennedy2018,Milli2019}). Several studies have reported that the disk displays a brightness asymmetry along its major axis, which is most likely related to the non-zero eccentricity of the disk. It has been postulated that planets could be shepherding the parent planetesimals, inducing the observed azimuthal asymmetries via secular interactions (e.g., \citealp{Wyatt1999}). In this paper, we present SPHERE/ZIMPOL observations of HR\,4796\,A and investigate the collisional activity and the production of small dust grains by constraining the azimuthal and radial distribution of the dust. \section{Observations and data reduction}\label{sec:obs_data_red} The data were obtained with SPHERE/ZIMPOL (\citealp{Schmid2018}) in its polarimetric mode P2, corresponding to field-stabilized observations. They are part of the SPHERE Guaranteed Time Observations\footnote{ESO program 097.C-0523(A)}. The target HR\,4796\,A was observed on the night of 2016-05-24 during a sequence alternating between two short unsaturated polarimetric cycles in Fast Polarimetry (called Fast) and two deeper saturated polarimetric cycles in Slow Polarimetry (called Slow). We repeated the pattern Fast Slow Fast three times, with the derotator position angle set to $0^\circ$, $30^\circ$, and $60^\circ$, respectively. Two more cycles were actually observed, with position angles of $120^{\circ}$ and $150^{\circ}$, but they are not used in this paper. For those last two cycles, columns of deeply saturated pixels fall right along the semi-major axis of the disk, and excluding them leads to a cleaner final reduction (though noisier in the regions close to the semi-minor axis). Given that the purpose of this paper is to study the radial profiles along the semi-major axes of the disk, we opted to exclude those additional cycles. This strategy enables us to obtain unsaturated frames bracketing the deep saturated exposures in order to calibrate the photometry, while the different derotator position angles introduce an additional diversity parameter to smooth out any residual pattern. In this paper, we focus only on the polarized image of the disk, without any absolute flux calibration; therefore only the deep saturated Slow Polarimetry images are used for the image discussed later. The absolute polarized flux of the disk and the polarized fraction are treated in a separate paper (\citealp{Milli2019}).
The extreme adaptive optics system (SAXO; \citealp{Fusco2006}) yielded a point-spread function (PSF) with a full width at half maximum of $30-40$\,mas, larger than the expected diffraction limit because of the low-wind effect (see also \citealp{Milli2019}). The images were reduced with custom Python routines. No calibration frames (such as dark, bias, or flat field) are applied, to avoid introducing additional sources of noise that might degrade the polarized sensitivity. The ZIMPOL instrument is indeed designed to beat atmospheric and instrumental speckles by using the concept of polarimetric differential imaging with a masked CCD synchronized with a ferroelectric liquid crystal modulated at a rate of 27\,Hz in Slow Polarimetry (\citealp{Schmid2012}). This unique design allows the signal coming from the two orthogonal polarization directions to be captured by the exact same pixels, self-calibrating any differential and pixel-to-pixel effects after applying the polarimetric subtraction. A half-wave plate (HWP) is introduced very early in the optical train to calibrate the instrumental polarization. Each polarimetric cycle includes four positions of the HWP: $0^\circ$, $22.5^\circ$, $45^\circ$, and $67.5^\circ$. We applied the double difference technique to obtain the two Stokes parameters $Q$ and $U$ out of these four HWP positions (\citealp{Avenhaus2014}). Additional instrumental polarization coming from upstream of the HWP remains, and is removed by subtracting from the Stokes $Q$ and $U$ a scaled version of the intensity image $I$, as described in \citet{Ginski2016} and \citet{Canovas2011}. We then constructed the local Stokes vectors $Q_\phi$ (shown in Fig.~\ref{fig:cuts}) and $U_\phi$, containing the astrophysical signal and an estimate of the noise, respectively (\citealp{Benisty2015,Olofsson2016}, see also \citealp{Canovas2015} for a discussion on the effect of optical depth). \section{Grain-size dependent dust distribution}\label{sec:rp_code} Scattered light observations in the near-infrared are sensitive to the small end of the grain size distribution (e.g., \citealp{Mulders2013}). Therefore, to properly characterize the dust production and the collisional activity, one needs an appropriate prescription for the behavior of the $\mu$m-sized dust grains. Besides gravity, the dominant effect that can alter the orbital parameters of small dust grains is stellar radiation pressure (\citealp{Wyatt2005} showed that Poynting-Robertson drag does not play a significant role in massive debris disks, but see \citealp{Kennedy2015} for a discussion about detecting dust in the inner regions using nulling interferometry). Consequently, in this paper, we try to constrain the radial and azimuthal distribution of the dust, taking into account the effect of radiation pressure. \subsection{Description of the code}\label{sec:model} The code used in this study is inspired by the work presented in \citet{Lee2016}. We start with an analytical description of the belt of planetesimal-sized parent bodies that produce the observed dust. The ring is defined by six parameters: a reference radius $r_0$, eccentricity $e$, position angle on the sky $\phi$ (positive from north to east), inclination $i$, argument of periapsis $\omega$, and width of the belt $\delta_{\mathrm{r}}$. The radial distribution of the parent bodies follows a normal distribution centered at $r_0$ with a standard deviation $\delta_{\mathrm{r}}$.
All the dust grains produced in the collisional cascade originate from those parent bodies, whose sizes, numbers, or masses do not need to be explicitly defined in the code. However, the number of grains of different sizes released from those unseen parent bodies is defined by a size distribution between a minimum and a maximum size. Within this range of sizes, the grains are affected by radiation pressure from the star, and their dynamical evolution is size-dependent. One should note that all parent bodies have a ``forced eccentricity'', as they all share the same $e$ and the same $\omega$. As input parameters, we consider a grain size distribution, which initially follows the ``ideal'' prescription of \citet[][a differential power-law of the form d$n(s) \propto s^{-3.5}$d$s$, where $s$ is the grain size]{Dohnanyi1969}. The distribution is divided into $n_{\mathrm{g}}$ intervals between the minimum and maximum grain sizes ($s_{\mathrm{min}}$ and $s_{\mathrm{max}}$, respectively). The number of grains in each interval is computed following Eq.\,2 of \citet{Dullemond2008}, \begin{equation}\label{eqn:ndens} n(s) = \left( \frac{s}{s_{\mathrm{min}}} \right) ^{p} \times s \times \Delta \mathrm{log}(s), \end{equation} where $\Delta \mathrm{log}(s)$ is the width of each bin in logarithmic space. Since the $n_{\mathrm{g}}$ grain sizes are logarithmically spaced, each $\Delta \mathrm{log}(s)$ is the same, except the first and last ones, which are half of that value (so that the grain size distribution is exactly bounded by $s_{\mathrm{min}}$ and $s_{\mathrm{max}}$). For each grain size, we then compute the dimensionless ratio $\beta$ between radiation pressure and gravitational forces (\citealp{Burns1979}). For a given $s$, the value of $\beta$ depends on the dust properties (optical constants and density) and the stellar properties (mass and luminosity), and is evaluated as \begin{equation} \beta(s) = \frac{3 L_\star}{16 \pi G c^2 M_\star} \times \frac{Q_{\mathrm{pr}}(s)}{\rho s}, \end{equation} where $L_\star$ and $M_\star$ are the stellar luminosity and mass, $G$ the gravitational constant, and $\rho$ the dust density. The radiation pressure efficiency $Q_\mathrm{pr}(s)$ is equal to $Q_\mathrm{ext}(s, \lambda) - g_\mathrm{sca}(s) \times Q_\mathrm{sca}(s, \lambda)$ averaged over the stellar spectrum, with $Q_\mathrm{ext}$ and $Q_\mathrm{sca}$ the extinction and scattering efficiencies (computed using the Mie theory), and $g_\mathrm{sca}$ the asymmetry parameter (the integral over $4\pi$ steradians of the phase function times the cosine of the scattering angle). To decide where the collision releasing a dust grain takes place, we use a prior distribution on the mean anomaly. This ``collisional distribution'' can either be uniform or a normal distribution (centered either at the pericenter or the apocenter). The standard deviation when using the normal distribution is denoted $\delta_{\omega}$. This implies that dust grains are not necessarily released uniformly in the disk, but that they can be released preferentially in a localized region of the disk, depending on the azimuth. The mean anomaly is then converted to the true anomaly $\nu$ by solving Kepler's equation. The effect of radiation pressure on the orbital parameters of a single dust grain is parametrized as in \citet{Wyatt1999,Wyatt2006,Lee2016}.
Assuming that the parameters of the dust grain, upon its release, are $a$ (drawn from the normal distribution of width $\delta_{\mathrm{r}}$ centered at $r_0$), $e$, $\beta$, and $\omega$, its ``updated'' orbital parameters ($a_{\mathrm{n}}$, $e_{\mathrm{n}}$, and $\omega_{\mathrm{n}}$) are computed as \begin{equation}\label{eqn:orbit} \begin{aligned} a_{\mathrm{n}} = \cfrac{a \times (1 - \beta)}{1 - 2\beta \cfrac{1 + e \mathrm{cos}(\nu)}{1 - e^2} }, \\ e_{\mathrm{n}} = \frac{1}{1 - \beta} \times \sqrt{e^2 + 2\beta e \mathrm{cos}(\nu) + \beta^2}, \\ \omega_{\mathrm{n}} = \omega + \mathrm{arctan}\left[\frac{\beta \mathrm{sin}(\nu)}{\beta\mathrm{cos}(\nu) + e}\right]. \end{aligned} \end{equation} For a given $\beta$, we make $3\,000$ realizations of the prior collisional distribution. For each realization (giving a set of orbital parameters $a_{\mathrm{n}}$, $e_{\mathrm{n}}$, and $\omega_{\mathrm{n}}$), we check that the updated eccentricity $e_{\mathrm{n}}$ is greater than or equal to zero and strictly smaller than unity, to avoid hyperbolic orbits. As in \citet{Lohne2017}, the blow-out size therefore depends on where the grains are launched from (e.g., pericenter or apocenter). If the orbit is bound, the code then populates it with $500$ dust particles, uniformly distributed in mean anomaly. Finally, we draw from a normal distribution of standard deviation $h/r = 0.04$ to account for the vertical dispersion of the disk; the opening angle is set to $0.04$ following \citet{Thebault2009}. The $(x, y, z)$ positions of each particle are registered, and we then find the pixel of the image that is closest to the $(x, y, z)$ values, depending on the inclination and position angle of the disk (with the same pixel scale as the observations being modeled). We thereby produce number density maps for each value of $\beta$. The modeling strategy described above does not take into account an important effect discussed in \citet{Strubbe2006} and \citet{Thebault2008}, namely that small grains produced inside the belt on high-eccentricity (bound) orbits will spend most of their time in the collision-free outer regions, where they cannot be collisionally destroyed. This significantly enhances their collisional lifetimes, and thus their number density, as compared to what would be obtained by simply spreading the number density obtained with Eq.\,\ref{eqn:ndens} over their whole orbit. To first order, \citet{Strubbe2006} and \citet{Thebault2008} found that a correcting ``enhancement'' factor should be applied to the high-$\beta$ grain number density, roughly proportional to their total orbital period divided by the time spent within the birth ring. We take here the simplified expression for this correction factor given in \citet{Lee2016} and apply the following strategy. For each of the $3\,000$ particles at a given $\beta$ that are not sent on hyperbolic orbits, we compute this correcting factor as $(1 - \beta)^{\alpha} / [1 - e^2 - 2 \beta \times (1 + e\mathrm{cos}(\nu))]^{\alpha}$, with $\alpha = 3/2$ ($e$ being the eccentricity of the parent belt, \citealp{Lee2016}), and $\nu$ the true anomaly at the moment of the collision. This correction should naturally produce a surface brightness profile in $r^{-3.5}$. However, such a profile is an asymptotic behavior that is reached relatively far away from the parent planetesimal belt, not immediately outside the birth ring (\citealp{Thebault2012}).
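These expressions, together with the correction factor, translate almost directly into code. The following sketch is our own transcription (hypothetical function names, \texttt{numpy} only), and uses \texttt{arctan2} rather than \texttt{arctan} to remain quadrant-safe:

\begin{verbatim}
import numpy as np

def update_orbit(a, e, omega, nu, beta):
    """New orbital elements of a grain of given beta, released at true
    anomaly nu from a parent body with elements (a, e, omega),
    following the orbit-update equations above."""
    a_n = a * (1. - beta) / (1. - 2. * beta * (1. + e * np.cos(nu)) / (1. - e**2))
    e_n = np.sqrt(e**2 + 2. * beta * e * np.cos(nu) + beta**2) / (1. - beta)
    omega_n = omega + np.arctan2(beta * np.sin(nu), beta * np.cos(nu) + e)
    return a_n, e_n, omega_n

def enhancement_factor(e, nu, beta, alpha=1.5):
    """Collisional-lifetime correction for high-beta grains (simplified
    expression of Lee & Chiang 2016); e is the eccentricity of the
    parent belt and nu the true anomaly at the moment of the collision."""
    return (1. - beta)**alpha / (1. - e**2 - 2. * beta * (1. + e * np.cos(nu)))**alpha

# A grain is kept (bound orbit) only if 0 <= e_n < 1.
\end{verbatim}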
When computing the number density maps for each value of $\beta$, the contribution of each particle is multiplied by this correction factor. One should also note that we do not take grain-grain collisions into account, which, as demonstrated in \citet{Lohne2017}, can have an impact on the radial extent of the disk. Once the $3\,000 \times 500$ particles have been launched, the scattering angle between the central star and the observer is computed for each pixel in the image. The code then computes one image per grain size bin by multiplying the number density of each pixel by $S_{12} \times \pi s^2 \times Q_{\mathrm{sca}}/(4 \pi r^2)$, where $r$ is the distance to the star, $Q_{\mathrm{sca}}$ the scattering efficiency, and $S_{12}$ the polarized phase function (which can be computed using the Mie theory, or by other means). By using the $S_{12}$ element, we are effectively computing the $Q_{\phi}$ image directly, and not the $Q$ and $U$ images. The code can compute total intensity scattered light images by replacing $S_{12}$ with the $S_{11}$ element of the Mueller matrix. It can also compute thermal emission images by multiplying the number density map by $4 \pi s^2 Q_{\mathrm{abs}} \pi B_{\nu}(T_{\mathrm{dust}})$ (the dust temperature being evaluated by equating the energy received and emitted by a dust grain of size $s$ at the distance $r$ from the star, Eq.\,5 of \citealp{Wolf2005}). The final image is the collapse of all the individual images, weighted by $n(s)$.

\subsection{Stellar parameters} To derive the stellar parameters, we first gathered optical and near-IR photometry using the {\it VO SED Analyzer} (VOSA) tool\footnote{http://svo2.cab.inta-csic.es/theory/vosa50/} \citep{Bayo2008}. The stellar effective temperature and surface gravity were retrieved using the VO-DUNES discovery tool\footnote{http://sdc.cab.inta-csic.es/dunes/searchform.jsp} \citep{Eiroa2013}, which explores VizieR catalogs. We find a value of $T_{\star} = 9\,700$\,K and $\mathrm{log} (g) = 4.05$. Fitting a Kurucz model (\citealp{Castelli1997}) to the optical and near-IR photometry, we find a stellar luminosity of $L_{\star} = 25.75$\,L$_{\odot}$. The distance of $71.9$\,pc combined with the dilution factor used to scale the Kurucz model leads to a stellar radius of $R_{\star} = 1.79$\,R$_{\odot}$. The stellar mass is determined assuming that the star has finished contracting, using the following relation between the stellar mass, radius, and surface gravity: \begin{equation} \mathrm{log} (g) = 4.44 + \mathrm{log} \left( \frac{M_{\star}}{M_{\odot}} \right) - 2 \mathrm{log} \left( \frac{R_{\star}}{R_{\odot}} \right). \end{equation} We obtain a mass of $1.31$\,M$_{\odot}$.

\subsection{Disk parameters} \citet{Milli2017} and \citet{Kennedy2018} presented comprehensive studies of the debris disk as seen in scattered light with VLT/SPHERE IRDIS and in thermal emission with ALMA, respectively. They constrained the main parameters of the disk, and because of our modeling strategy (see next subsection), we used their results to set the inclination $i$ to $76.6^{\circ}$. \citet{Kennedy2018} reported a value of $76.6^{\circ}\pm0.2^{\circ}$; \citet{Milli2017} a value of $76.45^{\circ}\pm0.7^{\circ}$; \citet{Schneider2018} a value of $75.9^{\circ}\pm0.14^{\circ}$; \citet{Schneider2009} a value of $75.88^{\circ}\pm0.16^{\circ}$; and \citet{Thalmann2011} a value of $76.7^{\circ}\pm0.5^{\circ}$. Overall, our choice for the inclination is consistent with previous studies, especially given that it was derived from high signal-to-noise observations at high angular resolution.
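As an aside, the stellar mass adopted in the previous subsection can be recovered by directly inverting the surface-gravity relation given there; a trivial sketch:

\begin{verbatim}
import numpy as np

logg, R_star = 4.05, 1.79   # surface gravity [dex] and radius [R_sun]

# Invert log(g) = 4.44 + log(M/Msun) - 2 log(R/Rsun)
M_star = 10.**(logg - 4.44 + 2. * np.log10(R_star))
print(M_star)               # ~1.31 [M_sun]
\end{verbatim}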
For the position angle $\phi$, we maximized a merit function as in \citet[][see \citealp{Olofsson2016} for details on how the elliptic mask is defined]{Thalmann2011}. We find that a value of $\phi = -151.6^{\circ}$ provides the best fit to the observations, but this value will be re-evaluated during the modeling of the observations. \citet{Milli2017} found $\phi = -152.9^{\circ}$, but one should note that we use a different convention for the position angle from the one used in \citet{Milli2017}. For $\phi = 0^{\circ}$ we assume that the major axis is along the north-south axis, with the near side of the disk being towards the east, hence the $180^{\circ}$ difference with the value of $+27.1^{\circ}$ reported in \citet{Milli2017}. One should also note that we do not constrain the direction in which the dust particles orbit around the star, and that the problem is symmetric.

\subsection{Modeling strategy} \begin{figure} \centering \includegraphics[width=\hsize]{PDF/data_axis.pdf} \caption{Reduced ZIMPOL image of HR\,4796\,A. The superimposed lines A to C show the locations where we measure the radial profiles (the width of the lines does not correspond to the width of the slits used to measure the radial profiles).} \label{fig:cuts} \end{figure} Choosing an adequate scattering theory to compute the full (polarization and scattering) phase function when modeling debris disk observations remains a challenge. The Mie theory (commonly used in the literature as it is computationally fast) seems insufficient to reproduce most of the spatially resolved observations (\citealp{Lebreton2012,Rodigas2015,Olofsson2016,Milli2017}). Other alternatives exist, such as the discrete dipole approximation (\citealp{Purcell1973}), but they can be computationally costly. Therefore, to alleviate this challenge, in this study we primarily focus on the radial profiles along the major axis of the disk. By doing so, and assuming that the disk is flat enough, we are probing the exact same scattering angles on both sides of the disk (close to $90^{\circ}$). Consequently, the exact value of the polarized phase function $S_{12}$ (integrated over all sizes) at this scattering angle does not matter when comparing the radial profiles along the north-east and south-west sides. The dependence of $S_{12}(90^{\circ})$ on the grain size remains, but given that observations in the optical are mostly sensitive to the small dust grains, which dominate in number, we consider this effect to be of second order. Nonetheless, modeling the radial cuts along the semi-major axis only does not allow us to really constrain the morphology of the disk. Preliminary results using solely radial cuts along the major axis led to clearly wrong results; the best-fitting model would have a very large eccentricity ($e \sim 0.2-0.3$) and a large reference radius ($r_0 \sim 100-120$\,au), so that the major axis of the model would not be the same as in the observations, and the radial cuts would intercept the disk at other azimuthal angles. Therefore, we considered two additional radial profiles at $\phi \pm 10^{\circ}$ from the major axis, to constrain the peak position of the disk and obtain a more reliable determination of $r_0$, $e$, and $\omega$. We chose not to consider additional cuts closer to the semi-minor axis, as the signal-to-noise ratio degrades quite significantly there. Moreover, our study focuses on determining the radial density profiles, which are best constrained close to the semi-major axis.
Figure\,\ref{fig:cuts} shows the location of those radial cuts. The observations are de-rotated to align each of the different axes with the vertical direction. The radial profiles are then computed as the average of the polarized flux (from the $Q_{\phi}$ image) over a vertical slit centered on the central pixel, with a width of $\pm\,5$\,pixels. For the uncertainties, we compute the standard deviation over the same vertical slit in the $U_{\phi}$ image. For a given synthetic image, because of the finite number of particles used to generate the image, in some cases there can be some ``shot noise'', with a given pixel being much brighter than its neighbors. This may lead to artificial local minima when trying to find the best-fit model. To circumvent this issue, the model image is first smoothed by performing a median clipping over $3 \times 3$ neighboring pixels (the central pixel, whose value is being estimated, is not included when computing the median). It is then convolved with a 2D Gaussian of standard deviation $1.22 \times \lambda/D$, where $D = 8$\,m is the diameter of the telescope and $\lambda = 0.735$\,$\mu$m. We then proceed as for the observations to extract the radial profiles, and scale them by finding the scaling factor that minimizes the difference between the model and the observations on the north-east and south-west sides simultaneously. For the profile along the major axis (A$_{\mathrm{N}}$ and A$_{\mathrm{S}}$ in Fig.\,\ref{fig:cuts}), the scaling factor is the same for both sides, as the polarized phase function should be the same. This allows us to test whether the best-fit model can successfully reproduce the brightness asymmetry between the north and south sides. For the radial cuts along B$_{\mathrm{N}}$, B$_{\mathrm{S}}$, C$_{\mathrm{N}}$, and C$_{\mathrm{S}}$, each scaling factor is determined independently, as the polarized phase function is sampled at different angles (in principle, the phase function should be similar for the pairs C$_{\mathrm{N}}$-B$_{\mathrm{S}}$ and B$_{\mathrm{N}}$-C$_{\mathrm{S}}$, but we left the scaling factors unconstrained). Since the central region of the observations is contaminated by the instrumental PSF, we compute the goodness of fit between the ranges $\left[-1\farcs4, -0\farcs95\right]$ and $\left[0\farcs95, 1\farcs4\right]$ along the major axis, and $\left[-1\farcs3, -0\farcs7\right]$ and $\left[0\farcs7, 1\farcs3\right]$ for the other axes (these regions are highlighted by the wider white solid lines in Fig.\,\ref{fig:cuts}). The free parameters of the model are the reference radius $r_0$, the standard deviation of the radial distribution $\delta_{\mathrm{r}}$, the argument of periapsis $\omega$, the standard deviation of the azimuthal collision probability at the pericenter $\delta_{\omega}$, the eccentricity $e$, and the position angle $\phi$. The location of the star is fixed and is not allowed to vary in our modeling approach. As mentioned before, the inclination is not a free parameter and is set to $76.6^{\circ}$; the radial cuts are all close to the major axis of the disk, which is the least favorable direction for properly determining the inclination $i$. The grain size distribution exponent $p$ is set to $-3.5$, and the distribution is defined between $s_{\mathrm{min}} = 6$\,$\mu$m (small enough that the grains in the first few bins are set on hyperbolic orbits) and $s_{\mathrm{max}} = 1.3$\,mm.
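The logarithmic size grid and the bin weights of Eq.\,\ref{eqn:ndens} can then be constructed as follows (a minimal sketch with the values quoted in this section; the number of bins, $n_{\mathrm{g}} = 200$, is given in the next paragraph):

\begin{verbatim}
import numpy as np

s_min, s_max = 6e-6, 1.3e-3   # minimum and maximum grain sizes [m]
n_g, p = 200, -3.5            # number of size bins, size distribution exponent

# Logarithmically spaced grain sizes
s = np.logspace(np.log10(s_min), np.log10(s_max), n_g)

# Bin widths in log space; first and last bins are half-width so that
# the distribution is exactly bounded by s_min and s_max
dlogs = np.full(n_g, np.log10(s_max / s_min) / (n_g - 1.))
dlogs[0] *= 0.5
dlogs[-1] *= 0.5

# Number of grains per bin, following the n(s) prescription above
n_s = (s / s_min)**p * s * dlogs
\end{verbatim}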
The value of $s_{\mathrm{max}}$ is chosen so that the minimum value of $\beta$ is $5 \times 10^{-3}$ (optical constants of astrosilicates from \citealp{Draine2003}, with a density of $3.5$\,g.cm$^{-3}$) and so that the grain size distribution is sampled over a significant range of sizes (we set $n_{\mathrm{g}} = 200$). For the polarized phase function, we use the analytical Henyey-Greenstein expression, \begin{equation} S_{12,\mathrm{HG}} = \frac{1 - \mathrm{cos}^2(\theta)}{1 + \mathrm{cos}^2(\theta)} \frac{1}{4\pi} \frac{1-g^2}{(1+g^2 - 2g\mathrm{cos}(\theta))^{3/2}}, \end{equation} where $\theta$ is the scattering angle and $g$ the anisotropic scattering factor ($-1 \leq g \leq 1$). Similar approaches are commonly used for modeling polarimetric images of debris disks (e.g., \citealp{Engler2017}, who also included a term for the diffraction peak caused by grains larger than the wavelength). In preliminary versions of \citet{Milli2019}, the authors found that the polarized phase function at scattering angles close to $90^{\circ}$ is overall well reproduced using $g \sim 0.3$\footnote{Since then, \citet{Milli2019} revised this value to $g = 0.4$, but with our modeling strategy the exact value of $g$ is not really relevant. As mentioned before, we fit both sides of the semi-major axis simultaneously (as they probe the same scattering angle, the shape of the phase function does not matter), while for all the other radial cuts (B$_{\mathrm{N}}$, B$_{\mathrm{S}}$, C$_{\mathrm{N}}$, and C$_{\mathrm{S}}$), each profile is scaled up or down separately.} (their final best fit being obtained using a combination of two Henyey-Greenstein functions, to best match the brightness close to the semi-minor axis, which we do not try to reproduce here). We therefore use this value of $g = 0.3$ throughout the paper, to limit the number of free parameters. Discussing the polarized phase function and its implications for the dust properties is beyond the scope of this paper, and we refer the reader to \citet{Milli2019}. The choice of an analytical form for the polarized phase function also removes some of the free parameters related to the dust properties (such as the porosity; see \citealp{Arnold2019}). Indeed, now that the phase function is parametrized for all sizes, the absolute value of $\beta$ associated with each grain size matters less; what matters is the global shape of the $\beta(s)$ curve, as it determines the radial distribution of the dust grains, which we are modeling. With our approach we can therefore ignore most of the dust properties\footnote{The only remaining quantity that still depends on the choice of the scattering theory is the value of $Q_\mathrm{sca}$.} and focus on the impact that radiation pressure has on the spatial distribution of the small dust grains in the disk, regardless of their true sizes and properties. To determine the best-fitting parameters and estimate their uncertainties, we then used an affine-invariant ensemble sampler for Markov chain Monte Carlo (\texttt{emcee} package, \citealp{Foreman-Mackey2013}). The initial conditions are set to be close to the disk parameters reported in previous studies (for $r_0$, $e$, $\omega$, and $\phi$). We used $30$ walkers, ran a short burn-in phase (length of $400$ models), and then ran chains of $1\,000$ models for each walker. The uniform priors are reported in Table\,\ref{tab:grid}.
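To make this setup explicit, the polarized Henyey-Greenstein function and the skeleton of the MCMC exploration can be sketched as follows. This is a schematic of the procedure rather than the production code: \texttt{chi2\_profiles} and \texttt{initial\_guess} are hypothetical stand-ins for the goodness-of-fit computation on the radial profiles and for the starting point, respectively.

\begin{verbatim}
import numpy as np
import emcee

def s12_hg(theta, g=0.3):
    """Polarized Henyey-Greenstein phase function (see the expression
    above); theta is the scattering angle [rad]."""
    rayleigh = (1. - np.cos(theta)**2) / (1. + np.cos(theta)**2)
    hg = (1. - g**2) / (4. * np.pi * (1. + g**2 - 2. * g * np.cos(theta))**1.5)
    return rayleigh * hg

def log_probability(params):
    """Uniform top-hat priors quoted in the text, plus the chi^2
    computed on the radial profiles (omitted in this sketch)."""
    r0, dr, omega, d_omega, e, phi = params
    if not (70. < r0 < 85. and 1. < dr < 5. and -270. < omega < -90.
            and 20. < d_omega < 120. and 0. < e < 0.2
            and -158. < phi < -148.):
        return -np.inf
    return -0.5 * chi2_profiles(params)   # hypothetical helper

n_walkers, n_dim = 30, 6
p0 = initial_guess + 1e-3 * np.random.randn(n_walkers, n_dim)
sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_probability)
state = sampler.run_mcmc(p0, 400)   # burn-in phase
sampler.reset()
sampler.run_mcmc(state, 1000)       # production chains
\end{verbatim}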
At the end, the mean acceptance fraction is $0.27$ (indicative of convergence and stability, \citealp{Gelman1992}), and the maximum auto-correlation length among all the free parameters is $68$ steps.

\subsection{Results}\label{sec:results} \begin{table} \caption{Details for the modeling of the observations and best-fit results.} \label{tab:grid} \centering \begin{tabular}{lcc} \hline\hline Parameters & Prior & Best-fit \\ \hline $r_0$ [au] & [$70$, $85$] & $76.4_{-0.3}^{+0.4}$ \\ $\delta_{\mathrm{r}}$ [au] & [$1$, $5$] & $3.6_{-0.2}^{+0.2}$ \\ $\omega$ [$^{\circ}$] & [$-270$, $-90$] & $-254.3_{-1.6}^{+1.8}$ \\ $\delta_{\omega}$ [$^{\circ}$] & [$20$, $120$] & $63.9_{-11.3}^{+14.4}$ \\ $e$ & [$0.0$, $0.2$] & $0.076_{-0.010}^{+0.016}$ \\ $\phi$ [$^{\circ}$] & [$-158$, $-148$] & $-152.1_{-0.1}^{+0.1}$ \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\hsize]{PDF/radial_cut.pdf} \caption{Radial profiles of the disk along the three cuts highlighted in Figure\,\ref{fig:cuts} (error bars are $1\sigma$), with the best-fit model over-plotted in red. The gray shaded areas show where the goodness of fit is \textit{not} estimated.} \label{fig:results} \end{figure} \begin{figure*} \centering \includegraphics[width=\hsize]{PDF/data_models.pdf} \caption{SPHERE/ZIMPOL observations of HR\,4796\,A (left) and best-fit model (right), shown with the same linear stretch. For the model, the blue circle marks the location of the pericenter.} \label{fig:images} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{PDF/emcee_combined.pdf} \caption{Projected probability density distributions for the different parameters of the model, along with the derived uncertainties and two-dimensional density plots. The contours correspond to the $[0.12$, $0.39$, $0.68$, $0.86]$ density percentiles.} \label{fig:corner} \end{figure*} The radial profiles are displayed in Fig.\,\ref{fig:results} and the observations and best-fit model are shown in Fig.\,\ref{fig:images}, with the same linear stretch (the difference in surface brightness closer to the semi-minor axis being due to the choice of the polarized phase function, see \citealp{Milli2019}). The location of the pericenter of the disk is marked by a blue circle on the right panel of Fig.\,\ref{fig:images}. While the best-fit model slightly underestimates the polarized intensity at separations larger than $1\arcsec$ along the B$_{\mathrm{N}}$ and C$_{\mathrm{N}}$ axes (still within $3\sigma$), and the peak positions along the B$_{\mathrm{S}}$ and C$_{\mathrm{S}}$ axes are not a perfect match to the data (even though the signal-to-noise is lower in those regions), overall the radial profiles along the major axis (A$_{\mathrm{N}}$ and A$_{\mathrm{S}}$) are well reproduced, both for the peak positions and the slopes, up to $1\farcs4$. Figure\,\ref{fig:density} shows a top view of the weighted cross section of the best-fit model. The most probable parameters are summarized in Table\,\ref{tab:grid}, and the projected posterior probability distributions are shown in Fig.\,\ref{fig:corner}. The uncertainties on the MCMC results are estimated from the $0.16$ and $0.84$ quantiles using the \texttt{corner} package (\citealp{ForemanMackey2016}). We find that the pericenter should be located on the front side of the disk, with $\omega = -254^{\circ}$$^{+2}_{-2}$.
The reference radius of the disk is $r_0 = 76.4^{+0.4}_{-0.3}$\,au, and the standard deviation of the radial distribution of the parent belt is $\delta_{\mathrm{r}} = 3.6^{+0.2}_{-0.2}$\,au, while the standard deviation of the collisional distribution is $\delta_{\omega} = 63.9^{\circ}$$^{+14.4}_{-11.3}$. We find that the eccentricity is $e = 0.076^{+0.016}_{-0.010}$, and the position angle is $-152.1^{\circ}$$^{+0.1}_{-0.1}$, close to the value of $-151.6^{\circ}$ we previously found to define the location of the major axis.

\section{Discussion}\label{sec:discussion} The debris disk around HR\,4796\,A has been resolved at high angular resolution on several occasions, with different instruments (e.g., \citealp{Lagrange2012,Rodigas2015,Milli2017,Schneider2018,Kennedy2018}). All these observations showed that the disk appears as a narrow ring. With the new ZIMPOL observations presented here, we also find the disk to be very narrow. To reproduce the observations, our modeling results suggest that the parent planetesimal belt follows a normal distribution with a standard deviation that can be as narrow as $3.6$\,au. One should note that the radial extent of the parent belt is of the order of the vertical extent of the disk, suggesting it is shaped as a thin torus. From the results of \citet{Rodigas2014}, \citet{Milli2017} concluded that the observed width of the disk around HR\,4796\,A could be explained by a planet lighter than Saturn, inwards of the ring, shepherding the debris disk (or even lighter if the planet is migrating, e.g., \citealp{Perez2019}). Given the comparable angular resolution provided by the SPHERE/IRDIS and ZIMPOL instruments, we do not revise the values reported in \citet{Milli2017} to explain the narrowness and eccentricity of the parent planetesimal belt. But the ZIMPOL observations provide new insights into the azimuthal distribution of the dust grains, as well as into the production of small dust grains in this young debris disk.

\subsection{The pericenter glow of HR 4796 A} \begin{figure*} \centering \includegraphics[width=\hsize]{PDF/alma.pdf} \caption{\textit{Left:} ALMA $880$\,$\mu$m Briggs-weighted image of HR\,4796\,A. \textit{Right:} best-fit model at the same wavelength, convolved with a similar beam (displayed in the bottom left corner of both images). No noise was added to the model.} \label{fig:alma} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{PDF/mid_ir.pdf} \caption{Mock observations with the Michelle instrument ($18.1$\,$\mu$m, \textit{left} panel) and T-ReCS ($24.5$\,$\mu$m, \textit{right} panel).} \label{fig:midir} \end{figure*} The pericenter glow effect was originally proposed by \citet{Wyatt1999}, who modeled Keck\,II observations of HR\,4796\,A at $18.2\,\mu$m (also presented in \citealp{Telesco2000}). The asymmetry observed in thermal emission could be explained by the fact that the dust grains in the direction of the forced pericenter of the disk are closer to the star, and hence warmer, as they receive more stellar light. The pericenter glow can also be observed in scattered light observations, the dust grains at the pericenter receiving, and therefore scattering, more light than those at the apocenter. \citet{Wyatt1999} reported an argument of periapsis of $26^{\circ}$ for an eccentricity of $0.02$, meaning that the pericenter is located close to the projected semi-major axis of the disk.
Nonetheless, from their Fig.\,7, showing an ensemble of possible solutions, a forced eccentricity of $0.07$ would lead to an argument of periapsis of about $75^{\circ}$, which, despite the different reference system, is compatible with our results. However, \citet{Moerchen2011} modeled Michelle and T-ReCS mid-IR observations, and found that the pericenter should be located along the projected semi-major axis of the disk (perpendicular to the line of sight), but with an even larger eccentricity of $0.06$ compared to \citet[][$0.02$ when the pericenter is located near the projected semi-major axis]{Wyatt1999}. They also found a family of solutions, with increasing eccentricity as the pericenter is moved closer to the projected semi-\textit{minor} axis. This is, however, not compatible with several studies that have since consistently found the pericenter to be located closer to the projected semi-\textit{minor} axis of the disk, without increasing $e$ too much (e.g., \citealp{Thalmann2011}; \citealp{Rodigas2014}; \citealp{Milli2017}, and this work). In Figure\,\ref{fig:dw0} we show the radial cuts along the semi-major axis of the disk for a model in which particles are released uniformly in mean anomaly (all the other parameters being the same as for the best model obtained before). One can note that in this model the brightness asymmetry is not well reproduced, justifying why our best-fit model requires more particles to be released near the pericenter. To further check the validity of our model, we computed mock observations at $18.1$ and $24.5$\,$\mu$m, to be compared with the observations presented in \citet{Moerchen2011}. These mock observations are shown in Figure\,\ref{fig:midir}; we used the same code as before to compute images at the two wavelengths, only considering thermal emission from the dust grains (using the same parameters as for the best-fit model to the optical observations), convolved them with 2D Gaussians with full widths at half maximum of $0\farcs52$ and $0\farcs72$, and added noise to the images to mimic the real observations. While we did not aim at fitting those observations, visual inspection suggests that we obtain very comparable results, the brightness asymmetry being more pronounced at $18.1$\,$\mu$m than at $24.5$\,$\mu$m (where it is only marginally detected). We additionally visually compared the ALMA observations presented in \citet{Kennedy2018} with a model computed using the same code at the same wavelength as the ALMA observations (thermal emission only). Figure\,\ref{fig:alma} shows the Briggs-weighted Band\,7 image (left) and the thermal emission of our best-fit model, convolved with a 2D Gaussian similar to the beam of the observations. Our best-fit model can reproduce the overall shape of the disk, the width of the ring, and the brightness distribution over all azimuthal angles. Since ALMA $880$\,$\mu$m observations are sensitive to larger grains, which are not subjected to strong radiation pressure, bound dust grains close to the cut-off size do not contribute significantly to the thermal emission at those wavelengths, and the model traces the distribution of the parent planetesimal belt. For those large grains, even though they are preferentially released near the pericenter, the eccentricities are rather small (close to the eccentricity of the parent belt), and therefore, when populating the orbits, dust particles are distributed almost uniformly in the azimuthal direction.
The point is that, in our model, the larger the grains, the less significant the value of $\delta_{\omega}$ becomes; ALMA observations can hardly constrain whether dust grains are released at a preferential location in the disk (e.g., the pericenter in our case). Overall, this brings confidence to our best-fit model of the dust production and distribution in the disk around HR\,4796\,A, as it can reproduce (at least to first order) observations from optical to millimeter wavelengths. In that model, the pericenter glow effect plays a very minor role in explaining the brightness asymmetry.

\subsection{Dust production around HR 4796 A} The azimuthal number density of an eccentric ring should naturally peak at the location of the apocenter (\citealp{Pan2016}), and not at the location of the pericenter, as is the case for HR\,4796\,A according to our best-fit model. As discussed in \citet{Pearce2014}, if a planet is interacting with a debris disk less massive than the planet, the number density should be higher at the apocenter. Such a model would therefore not be applicable to the disk around HR\,4796\,A (around which very stringent upper limits on the presence of companions have been placed, \citealp{Milli2017}). On the other hand, \citet{Pearce2015} investigated the case of the interactions between an eccentric planet and a debris disk of comparable mass. They conclude that the end result of those interactions is usually a double-ringed debris disk, which is not observed for the disk around HR\,4796\,A. A possible mechanism that could help explain our results would be the violent collision of large bodies at $76.4$\,au from the star (e.g., \citealp{Kral2015}). The numerical simulations presented in \citet{Jackson2014} indicate that all the bodies released from the original collision point have to pass through the same location at each orbit, which locally increases the collision probabilities. This collision point would therefore become the main production site for any secondary dust (or gas) in the system. \citet{Jackson2014} studied the case of eccentric progenitors, and concluded that the resulting brightness asymmetry highly depends on where the collision took place. If the collision happened near the apocenter of the progenitors, then the brightness asymmetry is ``constructive'', due to the increased density at the collision point and the fact that particles spend more time near the apocenter (Fig.\,11-D of \citealp{Jackson2014}). On the other hand, if the collision took place close to the pericenter of the progenitors, then there is a competition\footnote{One should note however that the spatial resolution of the observations has to be taken into account here. If the disk is radially spatially resolved, the flux is distributed over different areas (the apocenter being more extended), making the comparison less straightforward than for unresolved observations.} between the over-densities due to the ``pinch-point'' at the pericenter and the longer time spent by particles at the apocenter (Fig.\,11-C of \citealp{Jackson2014}). Nonetheless, the apocenter being more spread out than the pericenter (especially for the bound grains with the highest $\beta$ values), the latter may still appear brighter than the former. Overall, the observations of the disk around HR\,4796\,A could be explained if the disk is the outcome of a single, massive collision of initially eccentric progenitors (and the collision should have taken place close to the pericenter of the progenitors).
Because the velocity dispersion is larger at the pericenter than at the apocenter, a collision near the former could, in principle, generate a larger amount of small dust grains. Nonetheless, given how bright the disk is ($f_\mathrm{disk} = L_\mathrm{disk}/L_\star \sim 5 \times 10^{-3}$), the collision would have to have been an extremely rare event. If this is indeed the case, the implications for the formation of large planetesimals at distances larger than $70$\,au are strong (see \citealp{Kenyon2005}, as well as the discussion about the collisional status of the disk in \citealp{Kennedy2018}). One would need to form at least two very massive oligarchs in the outermost regions, on a timescale of a few Myr. Indeed, the mass of the body whose breakup is able to produce a disk of debris of fractional luminosity $5\times10^{-3}$ is extremely large. Taking Eqs.\,$2$-$5$ of \citet{Wyatt2007}, linking $f_\mathrm{disk}$ to the mass of the collisional cascade producing it, we find that, even under the very optimistic hypothesis that all the mass is contained in $\leq1$\,m bodies, one needs at least a few Earth masses of material for the disk to be as bright as $f_\mathrm{disk} = 5\times10^{-3}$ at $75$\,au from its central star (\citealp{Augereau1999} had already found a similar estimate for the amount of $\leq1$\,m bodies). As a consequence, the catastrophic breakup scenario requires the breakup of a planetary object, probably at least in the super-Earth range. One may furthermore wonder whether such a collision should not also have released a significant amount of gas (as postulated for $\beta$\,Pictoris for instance, \citealp{Dent2014}). Despite sensitive observations, no gas has been detected in the outermost regions of HR\,4796\,A (\citealp{Kennedy2018}), but \citet{Iglesias2018} reported possible ``falling evaporating bodies'' by detecting variable extra absorption lines in optical spectroscopic observations.

\subsection{The effect of HR 4796 B on the debris disk}\label{sec:HR4796B} \begin{table} \caption{Stellar properties for HR\,4796\,A and HR\,4796\,B.} \label{tab:gaia} \centering \begin{tabular}{lcc} \hline\hline Parameters & HR\,4796\,A & HR\,4796\,B \\ \hline $\alpha$ & $189.0039\pm0.0977$ & $189.0019\pm0.0472$ \\ $\delta$ & $-39.8696\pm0.1004$ & $-39.8711\pm0.0518$ \\ $\pi$ [mas] & $13.9064\pm0.1349$ & $14.1030\pm0.0625$ \\ $\mu_{\alpha}$ [mas.yr$^{-1}$] & $-55.653\pm0.181$ & $-59.236\pm0.096$ \\ $\mu_{\delta}$ [mas.yr$^{-1}$] & $-23.740\pm0.230$ & $-29.867\pm0.125$ \\ R$_{\mathrm{V}}$ [km.s$^{-1}$] & $7.10\pm1.10$ & $7.63\pm0.70$ \\ \hline \end{tabular} \end{table} To estimate whether the M-type star HR\,4796\,B can reasonably have an effect on the disk around HR\,4796\,A (projected separation of $568.3$\,au), through radiation pressure, stellar winds, or gravitational interactions (e.g., \citealp{Thebault2010}, \citealp{Cuello2019}), we first check the separation between the two stars. We used their Gaia DR2 measurements (\citealp{Gaia2018}), which are reported in Table\,\ref{tab:gaia}. The radial velocity of HR\,4796\,A was taken from \citet{Iglesias2018}, and we estimated that of HR\,4796\,B from UVES observations (program IDs 082.C-0218 and 089.C-0207). The first noteworthy difference is in the parallaxes of the two stars, which translate to distances of $71.9$ and $70.9$\,pc for HR\,4796\,A and B, respectively. However, the measurements for HR\,4796\,A have large uncertainties, due to its brightness, and the star is flagged for possible astrometric errors.
Therefore, we here assume that the B star has the same parallax as the A star and, to evaluate whether the former can have an impact on the disk around the A star, we checked whether the system is bound. To that end, we estimated the escape velocity of B with respect to A as $\sqrt{2 G M_{\star, \mathrm{A}} / r}$, where $G$ is the gravitational constant, $M_{\star, \mathrm{A}}$ is the mass of HR\,4796\,A ($1.3$\,M$_{\odot}$), and $r$ the separation between the two stars. With the positions and velocities of both stars, we find an escape velocity of $2.01$\,km.s$^{-1}$, compared to a relative velocity of $2.47$\,km.s$^{-1}$ (estimated from the proper motions and radial velocities of both stars). As we assumed the same parallax for both stars, this is the most favorable case, with the smallest three-dimensional separation. Therefore, with the available astrometric measurements, it seems that HR\,4796\,A and B are not bound to each other, most likely minimizing the possible impact that the B component can have on the debris disk; this may have to be revisited with more reliable astrometry for HR\,4796\,A in the near future.

\subsection{An asymmetric halo of small dust grains} \citet{Schneider2018} presented deep HST observations of the disk around HR\,4796 and revealed an extremely extended halo outside of the birth ring. They detected this ``exo-ring'' material up to $\sim 10\arcsec$ along the north-east side, while the south-west side appears more compact. As a matter of fact, there seems to be an ``arc-like'' feature along the south-west side, which could be due to interactions either with the ISM gas or with HR\,4796\,B. Based on the discussion above, the latter scenario is unlikely if HR\,4796\,A and B are indeed not gravitationally bound. The authors mention the possibility that the dust grains in the halo \textit{may} be unbound from the central star, but such dust grains should be evacuated from the system extremely quickly (see e.g., \citealp{Thebault2008}), and therefore should hardly be detected. On the other hand, bound grains with large $\beta$ ratios can have their apocenters at $10-20$ times the separation of the birth ring. Therefore, in the following, we attempt to reproduce the general shape of the exo-ring (extended emission along the north-east with an arc-like shape along the south-west), based on our current best-fit model and considering bound dust grains with high $\beta$ ratios. This subsection is meant to be speculative, given the complexity of the problem, and we simply aim at providing two scenarios to explain the HST observations. First of all, out of the box, our current best-fit model cannot reproduce the HST observations. We find that the pericenter is located along the north side; therefore, the disk will naturally be more extended in the direction toward the apocenter, i.e., along the south side, while the HST observations show otherwise. We therefore consider here two alternative explanations: \textit{(i)} the dust grains set on high-eccentricity orbits are interacting with the local interstellar medium, and \textit{(ii)} a slow precession of the pericenter of the debris disk is causing the apparent asymmetry.

\subsubsection{Description of the code} When considering additional forces, such as gas drag, the analytical prescriptions given in Eq.\,\ref{eqn:orbit} are no longer sufficient to estimate the orbital parameters of the dust grains.
Therefore, we opted for a simple fourth-order Runge–Kutta integrator to estimate the positions and velocities of the particles. The forces considered here are the gravitational attraction of the central star (weighted by $1-\beta$ when considering the effect of radiation pressure) and the gas drag on each dust grain. We followed the work of \citet{Marzari2011} (see also \citealp{Pastor2017}) for the implementation, and the drag force felt by a dust grain is estimated as \begin{equation}\label{eqn:ism} \boldsymbol{f} = -C_{\mathrm{D}} n_{\mathrm{H}} m_{\mathrm{H}} A_{\mathrm{s}} \lvert \boldsymbol{v} - \boldsymbol{v_{\mathrm{H}}}\rvert (\boldsymbol{v} - \boldsymbol{v_{\mathrm{H}}}), \end{equation} where $\boldsymbol{v}$ is the velocity of the dust grain, $\boldsymbol{v_{\mathrm{H}}}$ the velocity of the ISM gas, $n_{\mathrm{H}}$ and $m_{\mathrm{H}}$ are the density and mass of the hydrogen atoms in the ISM, and $C_{\mathrm{D}}$ is a drag coefficient (set to $2.5$ as in \citealp{Marzari2011}). The cross section of a dust grain is given by $A_{\mathrm{s}} = \pi s^2$. To produce an image, we initially release $10\,000$ particles (uniformly distributed in mean anomaly) within the birth ring (described by its semi-major axis, eccentricity, and argument of pericenter, which are the same as for the best-fit model) and follow their trajectories in time. For the sake of simplicity, in those simulations there are no preferential collisions near the pericenter. At the initialization of the simulation, we first check whether each particle is indeed bound to the system, by estimating its initial velocity and comparing it to the escape velocity of the system. Only particles that are bound are kept in the simulation. At each time step (usually $4$\,years) we save the $(x,\,y,\,z)$ positions of each of the particles, and then project those values onto the sky plane, depending on the inclination, position angle, and opening angle of the disk. For all the simulations presented in the following, we assume the Henyey-Greenstein approximation for the phase function, with $g = 0$ (isotropic scattering), and compute the surface brightness by multiplying the phase function by $Q_\mathrm{sca} \pi s^2 / (4\pi r^2)$. Also, we only consider a single grain size of $s = 7.15$\,$\mu$m, as this is the smallest size for which all the particles remain bound to the star, while still having a high $\beta$ ratio of $\sim 0.46$.

\subsubsection{Interaction with local ISM gas} \begin{figure*} \centering \includegraphics[width=\hsize]{PDF/ISM.pdf} \caption{HST observations, as published in \citet{Schneider2018} (\textit{left} panel), and mock HST observations calculated from the N-body simulations of small, bound, dust grains around HR\,4796\,A, when considering interactions with the local ISM (clockwise and counter-clockwise rotations on the \textit{middle} and \textit{right} panels, respectively). The images are in log-scale.} \label{fig:ism} \end{figure*} The simulations presented in this section last for $100\,000$\,years ($25\,000$ steps of $4$\,years), and the free parameters that we investigate are the density of hydrogen atoms in the ISM $n_{\mathrm{H}}$, the direction and velocity of the ISM gas (all encompassed in the $\boldsymbol{v_{\mathrm{H}}}$ vector), and the direction of rotation of the dust grains in the disk.
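A schematic transcription of the integration scheme (stellar gravity diluted by radiation pressure, plus the drag term of Eq.\,\ref{eqn:ism}) is given below. Note that we divide the drag term by the grain mass to obtain an acceleration, which is our reading of the equation above; variable names and unit handling are ours, not those of the actual integrator:

\begin{verbatim}
import numpy as np

GM_STAR = 1.327e20 * 1.31   # G * M_star [m^3 s^-2] for a 1.31 Msun star

def acceleration(x, v, beta, m_grain, C_D, n_H, m_H, A_s, v_H):
    """Gravity (weighted by 1 - beta) plus ISM gas drag, per unit mass."""
    r = np.linalg.norm(x)
    grav = -GM_STAR * (1. - beta) * x / r**3
    dv = v - v_H
    drag = -C_D * n_H * m_H * A_s * np.linalg.norm(dv) * dv / m_grain
    return grav + drag

def rk4_step(x, v, dt, acc, *args):
    """One fourth-order Runge-Kutta step for the (position, velocity) pair."""
    k1x, k1v = v, acc(x, v, *args)
    k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, *args)
    k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, *args)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v, *args)
    x_new = x + dt / 6. * (k1x + 2. * k2x + 2. * k3x + k4x)
    v_new = v + dt / 6. * (k1v + 2. * k2v + 2. * k3v + k4v)
    return x_new, v_new
\end{verbatim}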
To simplify the definition of the problem, we assume that the $\boldsymbol{v}$ vector only represents the orbital velocity of the particles, while the $\boldsymbol{v_{\mathrm{H}}}$ vector contains information about both the proper motion of the star and the direction of motion of the ISM gas. As mentioned before, a full exploration of the parameter space is beyond the scope of this paper, and we simply aim at providing a qualitative assessment of how the HST observations can be explained. Figure\,\ref{fig:ism} shows two simulations, which differ only in the rotation direction of the dust grains in the disk (clockwise and counter-clockwise for the middle and right panels, respectively), as well as the HST observations published in \citet{Schneider2018} (left panel). The images have been convolved with a 2D Gaussian with a standard deviation of $1$\,pixel, and are shown with a log stretch to highlight the faint outer regions (as a consequence, the birth ring appears broader than in the ZIMPOL simulations, which are shown with a linear scale). For those simulations, we tried to fix as many free parameters as possible. We therefore assumed that the radial velocity of the ISM gas matches that of the central star (\citealp{Iglesias2018} detected several absorption lines at different velocities, but we cannot know the relative distances of those clouds, only that they are between the star and the observer). Furthermore, given that the arc-like shape is in the east-west direction, we assumed that the ISM gas has a null velocity in the north-south direction, and that the $\delta$ component of the $\boldsymbol{v_{\mathrm{H}}}$ vector is equal to the proper motion of the star (see Table\,\ref{tab:gaia}). This leaves us with $\alpha$, $n_{\mathrm{H}}$, and the direction of rotation of the dust grains in the disk as free parameters. Similarly to \citet[][studying the disk around HD\,61005]{Pastor2017}, we find that the ISM density has to be quite significant to produce the arc-like feature in the south-west direction (but overall, the density and the amplitude of the velocity are degenerate parameters). For the simulations presented in Figure\,\ref{fig:ism}, we set $n_{\mathrm{H}} = 125$\,cm$^{-3}$ and $\alpha = -100$\,mas.yr$^{-1}$. This means that in those simulations, the star is moving at $55$\,mas.yr$^{-1}$ towards the west (following its proper motion in right ascension) and the ISM gas is moving at $45$\,mas.yr$^{-1}$ towards the east. Overall, both simulations can produce the arc-shaped structure as seen with HST, roughly at the same separation ($\sim 5\arcsec$), but when the dust grains are moving in the clockwise direction on the sky plane, the north-east side of the disk is much fainter, and the south side of the birth ring appears brighter than the north side. This suggests that \textit{if} the extended halo is indeed the consequence of interactions with the ISM gas, then the dust grains would most likely be orbiting counter-clockwise around HR\,4796\,A. The main caveat of this scenario is the rather large density of the ISM gas required, compared to the surroundings of the solar system, similarly to what was found for HD\,61005 by \citet{Pastor2017}. However, the volume density of the cold, dense interstellar medium can vary between $5-100$\,cm$^{-3}$ (e.g., \citealp{Nguyen2019} and references therein). Therefore, our results, while on the higher end of that range, remain overall compatible with studies of the interstellar medium.
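To give a sense of the velocities these angular rates correspond to, one can use the standard conversion $v\,[\mathrm{km\,s^{-1}}] = 4.74 \times \mu\,[\mathrm{arcsec\,yr^{-1}}] \times d\,[\mathrm{pc}]$; a two-line check with the numbers above:

\begin{verbatim}
d = 71.9                     # distance of HR 4796 A [pc]
for mu in (55., 45., 100.):  # star, ISM gas, and relative drift [mas/yr]
    print(mu, 4.74e-3 * mu * d)   # ~18.7, ~15.3, and ~34.1 km/s
\end{verbatim}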
\subsubsection{Precession of the pericenter} \begin{figure} \centering \includegraphics[width=\hsize]{PDF/precession.pdf} \caption{Mock HST observations calculated from the N-body simulations of small, bound, dust grains around HR\,4796\,A, when considering that the pericenter is precessing over time. The image is in log-scale.} \label{fig:precession} \end{figure} The second scenario we investigate to explain the HST observations is based on the fact that an eccentric disk is more radially extended towards the apocenter. Several studies of the disk around HR\,4796\,A consistently found that the pericenter of the disk is located close to the (projected) semi-minor axis of the disk, on the north side. But, if the disk is precessing (see also \citealp{Lohne2017}), in the past the pericenter might have been located on the south side, resulting in a more extended disk along the north side. To test this hypothesis, we run similar N-body simulations, for a single grain size, without any additional forces (only gravitational and radiation pressure forces). The two free parameters of the simulations are the total duration $t_\mathrm{sim}$ and the precession rate $\Omega$ of the pericenter (i.e., $(\omega_{\mathrm{end}} - \omega_{\mathrm{start}}) / t_\mathrm{sim}$). In this case, we assume that the pericenter of the disk is rotating counter-clockwise, from the south-west towards the north-west (just past the semi-minor axis of the disk). Therefore, we only consider dust grains that are also rotating counter-clockwise around the star. We chose $\omega_{\mathrm{start}} = -320^{\circ}$, $\omega_{\mathrm{end}} = -254^{\circ}$, and a total duration $t_\mathrm{sim} = 200\,000$\,years, divided into $n_\mathrm{sim} = 500$ different steps. At $t = 0$, the pericenter is located at $\omega_{\mathrm{start}}$; the N-body simulation lasts for $t_\mathrm{sim}$, with a time step of $4$\,years, and we save the positions of all the particles for the last $20$ iterations (hence the last $80$\,years). The $i^{\mathrm{th}}$ N-body simulation (out of the $500$) lasts slightly less time ($t_\mathrm{sim} - i \times t_\mathrm{sim}/n_\mathrm{sim}$), the location of the pericenter having moved slightly ($\omega_{\mathrm{start}} + i \times (\omega_{\mathrm{end}} - \omega_{\mathrm{start}})/n_\mathrm{sim}$), and we still save the last $80$\,years of the simulation. At the end, we simply collapse all the $n_\mathrm{sim}$ images together (see \citealp{Thebault2012b} for a similar approach). Figure\,\ref{fig:precession} shows the result of this simulation (on an aggressive log stretch to reveal the outermost regions, also rendering the birth ring thicker than in other figures). One can see that the south-west side is indeed slightly dimmer than the north-east side and less extended in the radial direction, and that arc-like structures start to develop on the south-west side. The concentric ellipses close to the birth ring are due to the fact that we save the positions of the particles over the last $80$\,years of the simulations, while the time step between consecutive N-body simulations is larger than $80$\,years ($400$\,years). It also appears that we cannot reproduce the scale of the HST image in this simulation (the arc shape is located at about $5\arcsec$ in the HST observations, while the disk still appears bright even at $10\arcsec$). This would most likely suggest that we are observing grains with a $\beta$ value slightly smaller than $0.46$.
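The bookkeeping of this procedure can be summarized as follows, where \texttt{run\_nbody} and \texttt{make\_image} are hypothetical stand-ins for the N-body integration and the projection onto the sky plane:

\begin{verbatim}
import numpy as np

t_sim, n_sim, dt = 200000., 500, 4.    # total duration [yr], sub-runs, step [yr]
omega_start, omega_end = -320., -254.  # pericenter at t = 0 and today [deg]
snapshots = []

for i in range(n_sim):
    duration = t_sim - i * t_sim / n_sim   # i-th run is slightly shorter
    omega_i = omega_start + i * (omega_end - omega_start) / n_sim
    # keep_last=20: save the last 20 iterations (i.e., the last 80 years)
    snapshots.append(run_nbody(omega_i, duration, dt, keep_last=20))

# Final image: collapse of all n_sim sub-images
image = np.sum([make_image(s) for s in snapshots], axis=0)
\end{verbatim}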
This approach is extremely simplistic, and heavily depends on the initial conditions and the duration of the simulation (i.e., when we decide to stop the simulation). The main caveat is that we assume that the dust particles can actually survive over $t_\mathrm{sim}$ without being destroyed. If we assume $\beta = 0.45$, $e = 0.07$, $a = 76.4$\,au, and draw uniformly $1\,000$ values for the true anomaly between $[0, 2\pi)$, we can estimate the orbital parameters of the dust grains following Eq.\,\ref{eqn:orbit}. We can then estimate the orbital period of those grains as $T = 2\pi \sqrt{a_{\mathrm{n}}^3/[G M_\star (1 - \beta)]}$. We find a mean value of $18\,500$\,years with a standard deviation of $15\,800$\,years (the distribution is strongly peaked at $\sim 5\,500$\,years). This means that, over $t_\mathrm{sim} = 200\,000$\,years, the dust grains would, on average, pass through the birth ring $10-40$ times (this criterion, combined with the CPU time for the simulation, drove the choices for the precession rate and the total duration of this simple exercise). Depending on the optical depth of the birth ring, those grains may get destroyed before that, which may work in our favor. Indeed, to produce an arc-like feature at large distances from the star, one needs to break the symmetry. If the disk has been precessing for a long time, and none of the particles are destroyed, then the result would most likely be symmetric. However, if after several orbits the oldest dust grains are destroyed in the birth ring, this can generate a possible asymmetry in the surface brightness of the disk, as we ``lose memory'' of past events. Overall, our simulation does not properly match the observations, and remains speculative, but it seems to be going in the right direction to explain the HST observations presented in \citet{Schneider2018}. Those observed images could well be the result of a combination of both effects: precession of the disk and interactions with the local interstellar medium. Furthermore, the scenario that we invoked earlier to explain the morphology of the disk (a collision at $76.4$\,au, following the work of \citealp{Jackson2014}) is not incompatible with precession of the disk. \citet{Jackson2014} mention that even though the original collision point remains static on orbital time-scales, it should still be able to precess, due for instance to the presence of other massive bodies in the vicinity of the birth ring. Further investigation of the lifetime of the dust grains, the precession rate, and the reason for the precession of the disk is beyond the scope of this paper, the intent here being simply to propose a scenario that can reconcile all the available observations of the disk.

\section{Conclusion} We presented high angular resolution SPHERE/ZIMPOL observations that reveal the bright debris disk around HR\,4796\,A in exquisite detail (see also \citealp{Milli2019} for a discussion about the polarized phase function). At optical wavelengths, the north-east side of the disk is brighter than the south-west side, which we aimed at reproducing in this paper. We modeled the radial profiles along (and close to) the semi-major axis of the disk with a code that includes a simple prescription of the effect of radiation pressure.
With this code, which is faster (but also much simpler) than models including a more accurate treatment of the collisional and dynamical evolution of debris disks (e.g., \citealp{Kral2013,Kral2015,Lohne2017}), we are able to reproduce the observed profiles on both sides of the disk. We can reproduce the outer edge of the disk without invoking the presence of an outer planet truncating the disk (similarly to \citealp{Thebault2012}), and we find that the underlying planetesimal belt can be as narrow as a few au. As previously stated in the literature, this could be related to the presence of an unseen planet, inwards of the planetesimal belt, shepherding the debris disk. We find, similarly to other studies in the past years, that the pericenter of the disk is located close to the projected semi-minor axis of the disk. We show that with such a configuration, the pericenter glow, which had been postulated to explain marginally resolved observations, has in fact very little impact on the azimuthal brightness distribution of the disk. To reproduce the observed brightness asymmetry, we find that small dust grains must be preferentially released, as a result of collisions between larger bodies, close to the pericenter of the disk. Finally, our best-fit model can self-consistently reproduce most of the available observations of the disk, from optical to millimeter wavelengths. The only dataset that remains challenging to explain is the recently published HST observations that reveal an extended halo at large separations from the star. After concluding that HR\,4796\,B \textit{may} not be bound to HR\,4796\,A, thus minimizing its possible effect on the disk, we propose two possible scenarios that could help explain the HST dataset: interactions with the local interstellar medium and precession of the disk. Even though the results remain speculative, those hypotheses (or a combination of the two) could help reproduce the very faint halo of small dust grains. Further investigation is however required to confirm our findings, for instance by performing more refined dynamical simulations (e.g., accounting for grain-grain collisions during the evolution of an eccentric disk, \citealp{Lohne2017}). \begin{acknowledgements} We thank Glenn Schneider for kindly providing the HST observations. We thank the anonymous referee for providing useful comments that helped improve the paper, especially the description of the code. This research has made use of the SIMBAD database (operated at CDS, Strasbourg, France) and of the Spanish Virtual Observatory (http://svo.cab.inta-csic.es, supported from the Spanish MICINN / MINECO through grants AyA2008-02156, AyA2011-24052). This research made use of Astropy, a community-developed core Python package for Astronomy (\citealp{Astropy}), as well as the TOPCAT software (\citealp{Taylor2005}). SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF–Osservatorio di Padova (Italy), Observatoire de Gen\`eve (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France) and ASTRON (Netherlands) in collaboration with ESO. SPHERE was funded by ESO, with additional contributions from CNRS (France), MPIA (Germany), INAF (Italy), FINES (Switzerland) and NOVA (Netherlands).
SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004–2008), grant number 226604 for FP7 (2009–2012) and grant number 312430 for FP7 (2013–2016). We also acknowledge financial support from the Programme National de Plan\'etologie (PNP) and the Programme National de Physique Stellaire (PNPS) of CNRS-INSU in France. This work has also been supported by a grant from the French Labex OSUG@2020 (Investissements d'avenir – ANR10 LABX56). The project is supported by CNRS, by the Agence Nationale de la Recherche (ANR-14-CE33-0018). It has also been carried out within the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). MRM, HMS, and SD are pleased to acknowledge this financial support of the SNSF. Finally, this work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice) and Observatoire de Paris/LESIA (Paris). We thank P. Delorme and E. Lagadec (SPHERE Data Centre) for their efficient help during the data reduction process. J.~O., A.~B., J.\,C.~B., D.~I., M.~M., M.\,R.~S., and C.~Z. acknowledge support from the ICM (Iniciativa Cient\'ifica Milenio) via the Nucleo Milenio de Formación planetaria grant. J.~O. acknowledges support from the Universidad de Valpara\'iso and from Fondecyt (grant 1180395). A.~B. acknowledges support from Fondecyt (grant 1190748). M.~M. acknowledges financial support from the Chinese Academy of Sciences (CAS) through a CAS-CONICYT Postdoctoral Fellowship administered by the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. FMe acknowledges funding from ANR of France under contract number ANR-16-CE31-0013. J.\,C.~B. acknowledges support from Proyecto FONDECYT postdoctorado 2018 nro. 3180716. G.~M.~K. is supported by the Royal Society as a Royal Society University Research Fellow. A.~Z. acknowledges support from the CONICYT + PAI/ Convocatoria nacional subvenci\'on a la instalaci\'on en la academia, convocatoria 2017 + Folio PAI77170087. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} \hspace*{2em} Short-lived gluinos could be defined to be those which decay before interacting hadronically in a detector or beam-dump. Their decay produces the lightest neutralino (lsp)\footnote{Generally, a superposition of the SUSY partners of the photon, $Z^0$ and neutral Higgses, not considering effects of a possible light gravitino.} which escapes the detector or dump without interacting, carrying with it much of the gluino's energy and momentum. Short-lived gluinos are excluded for masses $\lesssim 160$ GeV\cite{cdf:gluinolim2} by the absence of characteristic missing energy events in the FNAL collider. Thus to be lighter than this, gluinos must be long-lived\footnote{The long-lived gluino window was first pointed out in ref. \cite{f:51}. It was subsequently discussed in the work of ref. \cite{deq}.}. It is natural for gluinos to be much lighter than squarks if their masses are entirely radiative in origin. In that case, if the SUSY and electroweak symmetry breaking scales are less than $\sim10$ TeV, gluino and lsp masses will range from of order 100 MeV to of order 30 GeV\cite{bgm,f:96}. This is the mass range explored here. A gluino in the mass range $\sim 1.5 - 3.5$ GeV is excluded, whatever its lifetime, from the absence of a peak in the photon energy spectrum in radiative Upsilon decay. This is because two gluinos with mass in that range would form a pseudoscalar bound state, the $\eta_{\tilde{g}}$, whose branching fraction in $\Upsilon \rightarrow \gamma \eta_{\tilde{g}}$ can be reliably computed using perturbative QCD and is predicted\cite{keung_khare,kuhn_ono,goldman_haber} to be greater than the experimental upper bound\cite{tutsmunich,cusb}\footnote{The range excluded by the CUSB experiment is incorrectly claimed to extend to lower gluino masses, by using the pQCD results of refs. \cite{keung_khare,kuhn_ono,goldman_haber} out of their range of validity. A detailed analysis of the actual excluded range is given in ref. \cite{f:93}. The lower limit for validity of a pQCD, non-relativistic potential model description of an $\eta_{\tilde{g}}$ was taken to be $\sim 3$ GeV, mainly by analogy with the success of the same description of charmonium. However since the effective value of the coupling is so much stronger due to the larger color charge of the gluino in comparison to a quark, even a 3 GeV $\eta_{\tilde{g}}$ may not be in the perturbative regime, in which case the range of validity of the CUSB procedure may not be even this large. Note that any gluino whose lifetime is longer than the strong interaction disintegration time of the $\eta_{\tilde{g}}$, i.e., $\tau \gtrsim 10^{-22}$ sec, will produce the requisite bump in the photon energy spectrum, and thus be excluded by CUSB.}. In this paper I address the question of whether long-lived gluinos having mass less than $\sim 1.5$ or greater than $3.5$ GeV are excluded on other grounds. Many experiments which are commonly cited as ruling out gluinos of this mass range actually provide only weak limits when one takes account of the gluino lifetime. These experiments, as well as the most powerful indirect constraints, which are also presently unable to exclude this mass range, will be reviewed below. My purpose here is to propose tests which will {\it unambiguously} demonstrate or exclude the existence of light gluinos.
An inevitable consequence of the existence of a long-lived gluino is the existence of neutral hadrons containing them. Generically, hadrons containing a single gluino are called $R$-hadrons\cite{f:23}. The lightest of these would be the neutral, flavor singlet $g \tilde{g}$ ``glueballino'', called $R^0$. There would also be $R$-mesons, $\bar{q}q \tilde{g}$, and $R$-baryons, $qqq \tilde{g}$, with the $\bar{q}q$ or $qqq$ in a color octet. Unlike ordinary baryons, which are unable on account of Fermi statistics to be in a flavor singlet state, there is a neutral flavor-singlet $R$-baryon, $uds \tilde{g}$, called $S^0$ below. It should be particularly strongly bound by QCD hyperfine interactions, and probably is the lightest of the $R$-baryons\cite{f:51,f:52}, even lighter than the $R$-nucleons. The strategy pursued here is to identify production and detection mechanisms for the $R^0$ for which {\it reliable} rate estimates can be made, so that searches which are sufficiently sensitive will definitively rule them out or find them. First, we use theoretical arguments to estimate $R$-hadron masses as a function of gluino mass. Then experiments are proposed to settle the question. \section{$R$-hadron mass estimates} \label{mass} \hspace*{2em} If the gluino is heavier than $\sim 3.5$ GeV, then the $R^0$ and $S^0$ will have masses approximately equal to the mass of the gluino. For the window $m_{\tilde{g}} \lesssim 1.6$ GeV, lattice gauge theory (lgt) should be used to determine the hadron spectrum, and hopefully the necessary calculations will be done soon. However we can get a rough idea without it, as follows. Let us begin by estimating hadron masses if the gluino is as light as possible. If the gluino were massless, the spectrum would be expected to contain an unacceptably light\cite{ev,sv} flavor-singlet Goldstone boson associated with the spontaneous breaking of the non-anomalous linear combination of quark and gluino chiral U(1) symmetries. For three light flavors of quarks the non-anomalous axial current is\footnote{The fields appearing in this expression are left-handed Weyl spinors and a sum over indices is understood. $i$ labels the three light quark flavors and $j$ and $a$ label the 3 quark and 8 gluino color degrees of freedom.}: \begin{equation} J^5_{\mu} = \frac{1}{\sqrt{26}} \left\{\bar{q}^{i,j}_L \gamma_{\mu} q^{i,j}_L - \bar{q^c}^{i,j}_L \gamma_{\mu} {q^c}^{i,j}_L - \bar{\lambda^a} \gamma_{\mu} \lambda^a \right\}. \label{j5nonanom} \end{equation} We can obtain a theoretical lower bound on the gluino mass by identifying the $\eta'$ with this pseudogoldstone boson\footnote{This possibility was suggested in ref. \cite{f:51} but not developed in quantitative detail as is done here.}. The flavor singlet pseudoscalar which gets its mass from the anomaly would then be identified with a more massive state, which will be discussed below. If this were the correct description of the $\eta'$, its quark content would be reduced by a factor of $\frac{18}{26} \approx 0.7$ in comparison to the usual picture (the current (\ref{j5nonanom}) contains 18 quark and 8 gluino bilinears, whence the normalization $1/\sqrt{26}$). Interestingly, this seems not to be ruled out by existing constraints. Sound predictions for the $\eta'$, avoiding model dependent assumptions such as the relation between $F_1$ and $F_8$, are for ratios of branching fractions to final states which couple to the quark component\cite{chanowitz:etaprime}. These ratios are insensitive to the presence of a gluino or gluonic component.
Absolute predictions are highly sensitive to theoretically incalculable hadronic effects, due to the very restricted phase space for the $\eta'$ to decay through strong interactions. This means that rates which could potentially determine whether the $\eta'$ has a $30\%$ gluino component cannot in practice be predicted reliably enough to be useful.\footnote{A possible way to discriminate is to study the production of the various pseudoscalars in $J/\Psi$ decay. G. Farrar and G. Gabadadze, in preparation.} Assuming that the $\eta'$ is the pseudogoldstone boson connected to the spontaneous breaking of the conserved axial current (\ref{j5nonanom}), like the $K^+$ for $J^5_K = \frac{1}{\sqrt{6}} \left\{\bar{u}^j_L \gamma_{\mu} s^j_L - \bar{u^c}^j_L \gamma_{\mu} {s^c}^j_L \right\}$, standard current algebra manipulations lead to predictions for $m_{\eta'}^2 f_{\eta'}^2$ and $m_{K}^2 f_{K}^2$. Taking their ratio, neglecting $m_u$ and $m_d$ in comparison to $m_s$, and solving for $m_{\tilde{g}}$ leads to: \begin{equation} m_{\tilde{g}} \approx \frac{m_s}{4}\frac{<\bar{s} s>}{<\bar{\lambda} \lambda>}\left[13 \left( \frac{m_{\eta'}^2 f_{\eta'}^2}{m_{K}^2 f_{K}^2 }\right) - 3 \right]. \label{mgluino} \end{equation} With $f_{\eta'} \approx f_K$ this gives $m_{\tilde{g}} \sim 11 \frac{<\bar{s} s> }{<\bar{\lambda} \lambda>} m_s$. Since the QCD attractive force between color octets is greater than that between triplet and antitriplet, $<\bar{\lambda} \lambda>$ is presumably larger than $<\bar{s}s>$. Most-attractive-channel arguments\cite{drs} suggest that the condensates depend exponentially on the Casimirs of the condensing fermions, so that, since $C_8/C_3 = 9/4$, $<\bar{\lambda} \lambda>$ could be an order of magnitude or more larger than $<\bar{s}s>$. Thus, pending lattice calculations of $<\bar{\lambda} \lambda>$ or $m(\eta')$ as a function of gluino mass and without gluinos, the phenomenological analysis should be general enough to include a gluino as light as $\sim 100$ MeV or less. In this case the $R$-hadron properties are about the same as they would be for a massless gluino. If the gluino were massless, the mass of the $R^0$ should be $1440 \pm 375$ MeV, i.e., about $1 \frac{1}{2}$ GeV, as follows. Consider supersymmetric SU(3) Yang Mills theory. Since supersymmetry in this theory does not break dynamically\cite{witten}, hadrons must fall into degenerate supermultiplets. The massive chiral supermultiplet containing the $0^{++}$ glueball also contains a $0^{-+}$ (the lowest $\tilde{g} \tilde{g}$ bound state) and two spin-$\frac{1}{2}$ states, namely the two helicities of the $R^0$ (the $g \tilde{g}$ bound state). At the classical level this theory has a chiral U(1) phase invariance since the gluinos are massless, like the chiral U(1) of ordinary QCD with massless quarks. This symmetry is clearly not realized in the hadron spectrum, since the $R^0$ is degenerate with the massive glueball. Nor is there a Goldstone boson associated with the breaking of this U(1) symmetry, since the pseudoscalar $\tilde{g} \tilde{g}$ bound state is also degenerate with the glueball. This is not paradoxical, for the same reason that in ordinary QCD we can accommodate the $\eta'$ mass. Namely, the axial U(1) current has an anomaly, so that non-perturbative effects give the pseudoscalar $\tilde{g} \tilde{g}$ bound state a mass.
The chiral U(1) symmetry is explicitly broken by quantum effects so that Goldstone's theorem is circumvented\footnote{It is interesting that supersymmetry relates the mass produced by non-perturbative effects through the anomaly to the mass-gap for the glueball coming from confinement, suggesting that confinement is essential to the understanding of the mass of the $\eta'$ even in ordinary non-susy QCD.}. Now consider hadron masses in QCD with three light quarks and a massless gluino. The mass of the $0^{++}$ glueball is predicted\cite{weingarten:glueballs} using lattice QCD in the quenched approximation (i.e., QCD with only gluons and no quarks or gluinos) to be $1440\pm 110$ MeV. In ordinary lattice QCD, with three light quarks but no gluinos, the quenched approximation is commonly taken to be valid at the $10-15 \%$ level for hadron masses\footnote{See ref. \cite{quenched} for a critical discussion of the quenched approximation.}. Since the 1-loop beta function for QCD with no light quarks but an octet of gluinos is the same as for QCD with three light quarks, one can expect that the error of the quenched approximation in the supersymmetric Yang Mills theory is also $10-15 \%$, and that the quenching error with both quarks and gluinos is $\sim 15-25 \%$. If this is so, then a full lattice calculation in QCD with 3 light quarks and a massless octet of gluinos would give a mass for the $R^0$ of $\sim 1440 \pm 375$ MeV, where the lattice error of ref. \cite{weingarten:glueballs} on the glueball mass was combined in quadrature with the estimated error from the quenched approximation, taken to be $25\%$ of $1440$ MeV to be conservative. As we shall see below, it is much more difficult to detect an $R^0$ with mass $\sim 1065$ MeV than one with mass $\sim 1800$ MeV. Thus a lgt calculation which reduced the range of uncertainty on the mass of the $R^0$ would be very helpful, especially if it showed we could ignore the region close to 1 GeV. In QCD extended by gluinos, the flavor singlet pseudoscalar which gets mass from the anomaly is orthogonal to the anomaly-free current (\ref{j5nonanom}); thus it is $70\%$ $\tilde{g} \tilde{g}$ and $30\% ~ u \bar{u} + d \bar{d} + s \bar{s}$. In the supersymmetric Yang Mills theory discussed above, the pseudoscalar $\tilde{g} \tilde{g}$ state which gets mass from the anomaly and is degenerate with the $0^{++}$ glueball would have a mass of $1440 \pm 240$ MeV, adding the error in ref. \cite{weingarten:glueballs} in quadrature with a $15\%$ error for unquenching. There is evidence for an ``extra'' flavor singlet pseudoscalar present in the meson spectrum in the 1410-1490 MeV region\cite{PDG,mark3,dm2}, which has a large coupling to gluons\cite{f:93}. If confirmed, it is an excellent candidate to be the pseudoscalar whose mass comes from the anomaly, in the very light gluino scenario. To recapitulate, we have seen above that from purely theoretical considerations we can at present only rule out $R^0$ and $S^0$ masses below about 1100 MeV. Having the lightest possible masses requires both the $\eta'$ and the extra pseudoscalar meson in the 1410-1490 MeV region to have large gluino components, but increasing the gluino mass to $\sim 700$ MeV allows one to return to the conventional phenomenology for the $\eta'$ and interpret the extra state as a simple $\eta_{\tilde{g}}$, the lowest-lying $\tilde{g} \tilde{g}$ bound state. If gluinos are much heavier than this, one needs another explanation for the extra state in the 1410-1490 MeV region.
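Numerically (a check, using the physical masses $m_{\eta'} \simeq 958$ MeV and $m_K \simeq 494$ MeV and, as above, $f_{\eta'} \approx f_K$), the coefficient in the gluino mass estimate of this section is $\frac{1}{4}\left[13\,(958/494)^2 - 3\right] \approx 11.5$, and the quoted mass uncertainties are simply quadrature combinations:
\begin{displaymath}
\sqrt{110^2 + (0.25 \times 1440)^2} \approx 375~{\rm MeV}~, \qquad
\sqrt{110^2 + (0.15 \times 1440)^2} \approx 240~{\rm MeV}~.
\end{displaymath}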
\section{Existing Experimental Limits} \label{exptlims} \hspace*{2em} {}From the CUSB experiment, we infer\footnote{See footnote on the first page of the present paper.} that the $\eta_{\tilde{g}}$ does not lie in the 3-7 GeV range, so that the gluino would not be in the $\sim 1.5-3.5$ GeV range. In order to compare to limits from other experiments searching for $R^0$'s, we shall convert this limit to an effective gluino mass using the relation \begin{equation} m(R^0) = 0.72 (1 + e^{-\frac{m_{\tilde{g}}}{2}}) + m_{\tilde{g}} (1 - e^{-m_{\tilde{g}}}), \label{mRmg} \end{equation} with all masses in GeV. This is actually just a convention for making the figure, but it is physically reasonable: it yields the $m_{\tilde{g}}=0$ result of the previous section and, in analogy with mesons made of one light and one heavy quark, associates an additive confinement energy of about half the mass of a light-quark meson (here, of the $0^{++}$ glueball whose mass is $\sim 1.44$ GeV) with the light constituent (here, the gluon) of a light-heavy composite. In another quarkonium decay experiment, the ARGUS group\cite{argus} looked for events in which $\Upsilon' \rightarrow \gamma + \chi_b(1^3P_1)$, followed by $\chi_b(1^3P_1) \rightarrow g \tilde{g} \tilde{g}$, with one of the final $R$-hadrons decaying at a distance of 1-60 cm from the $e^+ e^-$ interaction point. From the absence of such events at the level predicted by pQCD they concluded that gluinos in the mass range 1-4.5 GeV do not exist in the lifetime range to which they were sensitive. However perturbative QCD overestimates the branching fraction $\chi_b(1^3P_1) \rightarrow g \tilde{g} \tilde{g}$ for very light gluinos, since it fails to include the effect of the substantial reduction in phase space arising from the minimum invariant mass of a pair of $R^0$'s being about 3 GeV, even when the gluino is massless (see section \ref{mass}). To determine whether the experimental sensitivity extends to a gluino mass as low as 1 GeV as stated in ref. \cite{argus}, the experiment should be reanalyzed using a more realistic model of the branching fraction for $\chi_b(1^3P_1) \rightarrow g \tilde{g} \tilde{g}$ in the non-perturbative portion of phase space. The ARGUS results, taken from Fig. 4a of ref. \cite{argus}, are plotted on the figure using the above function to convert from their quoted gluino masses to a common $R^0$ mass. For the largest masses no conversion is used, in order not to make the nonsensical claim that they can exclude $R^0$'s which cannot be kinematically produced. The best constraints beyond CUSB and ARGUS for long-lived gluinos in the radiatively-generated range of up to $O(30)$ GeV come from searches for new neutral particles. Gustafson et al.\cite{gustafson} searched for new hadrons with lifetimes greater than $10^{-7}$ sec, using time-of-flight in a 590 m long neutral beam at FNAL. On account of timing and energy resolution limitations, they were capable of distinguishing a particle from a neutron only if its mass was greater than 2 GeV. From the limits of Gustafson et al., Dawson, Eichten and Quigg\cite{deq} (DEQ) concluded that gluino masses in the 2-4 GeV range could be excluded. This experiment is therefore consistent with CUSB and Bernstein et al. (see below), and for $\tau_{\tilde{g}} > 10^{-7}$ sec extends the lower end of the excluded mass range to 2 GeV as shown in the figure.
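Eq. (\ref{mRmg}) is trivial to evaluate; the following minimal Python sketch (an illustration, not part of the original analysis) reproduces the limiting behaviors just described:
\begin{verbatim}
import math

def m_R0(m_gluino):
    # Effective R0 mass in GeV for a gluino mass in GeV, eq. (mRmg).
    return (0.72 * (1.0 + math.exp(-m_gluino / 2.0))
            + m_gluino * (1.0 - math.exp(-m_gluino)))

print(m_R0(0.0))    # 1.44 GeV: the massless-gluino R0 mass of section 2
print(m_R0(10.0))   # ~10.72 GeV: m_gluino plus ~0.72 GeV confinement energy
\end{verbatim}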
The experiment of Bernstein et al.\cite{bernstein} places an upper bound on the production cross-section of a neutral hadron produced in 400 GeV proton collisions, with mass in the range $1.5-7.5$ GeV, which decays with a lifetime $(10^{-8} - 2 \times 10^{-6})$ sec to a 2- or 3-body final state containing a charged hadron. They find $E \frac{d \sigma}{d^3p}|_{90^\circ} \lesssim 5 \times 10^{-35} \frac{\rm cm^2}{\rm(GeV^2/c^3)}$ for a mass of 1.5 GeV, and $\lesssim 3 \times 10^{-32} \frac{\rm cm^2}{\rm(GeV^2/c^3)}$ for 7.5 GeV, taking the most sensitive lifetime value of $3 \times 10^{-8}$ sec. Typical decays would be $R^0 \rightarrow {\rm lsp} + \pi$('s) and $S^0 \rightarrow {\rm lsp} + \Lambda^0 + \pi$('s) or $S^0 \rightarrow {\rm lsp} + N + K + \pi$('s). Since the $S^0$ has baryon number $+1$, it would be expected to be produced mainly in the forward direction rather than at $90^\circ$ where the experiment was done, so it is not directly constrained by this experiment. However this experiment does constrain the possibility of $R^0$'s. For the light end of the mass range, a reasonably good analog process, which should be even more OZI-suppressed, is $p p \rightarrow \bar{p} X$, whose invariant cross section is $\sim 10^{-27} \frac{\rm cm^2}{\rm(GeV^2/c^3)}$\cite{bourquin_gaillard} for similar kinematics. For a gluino mass of 3.5 GeV or larger, it is legitimate to use perturbative QCD (pQCD) to compute the expected rate, as a function of gluino mass. This was done in tree approximation by DEQ\cite{deq} for $m_{\tilde{g}} = 3$ GeV. They predicted an invariant cross section of $\sim 10^{-28} \frac{\rm cm^2}{\rm(GeV^2/c^3)}$ for $p_{\perp} = 0$. We can very crudely estimate the cross section for production of a gluino of higher mass but $p_{\perp}=0$ by noting that the cross section is mainly dependent on the combination $m^2 + p_{\perp}^2$. The DEQ prediction for $m=3$ GeV and $p_{\perp} = 4$ GeV is $\sim 3 \times 10^{-34} \frac{\rm cm^2}{\rm(GeV^2/c^3)}$ (see Fig. 44), which is the same as the Bernstein et al. limit for $m=5$ GeV and $p_{\perp} = 0$. Thus the Bernstein et al. limit very roughly rules out $R^0$'s with mass less than 5 GeV. The range of lifetime sensitivity corresponding to the cross section limits of Fig. 4a is shown in Fig. 4b; for $m=3$ GeV it is $\sim 2 \times 10^{-8} - 2 \times 10^{-7}$ sec. Since for a fixed production rate the detector sensitivity depends mainly on $\gamma \beta \tau$, and $\gamma \sim m^{-1}$, the range of maximal sensitivity will shift upward, roughly in proportion to $m$, for $m>3$ GeV. The range excluded by Bernstein et al. is shown in the figure. It is the upper elongated region ending at 5 GeV. The limits could in principle be somewhat tightened if there are charged $R$-hadrons which decay only weakly to the $R^0$ or $S^0$, e.g., $R_{K^+} \rightarrow R^0 + \pi^+$. This will be the case if the mass gap between charged $R$-pions and $R$-kaons and the $R^0$, and between charged $R$-baryons and the $S^0$, is greater than the mass of the corresponding kaon or pion. Lattice calculations of the $R$-hadron mass splittings as a function of gluino mass are badly needed here. Bag model predictions for $R$-hadrons cannot be trusted since parameters fixed to fit the ordinary hadrons may not be applicable to $R$-hadrons, and furthermore bag model estimates have not been reliable for the glueball spectrum.
Nonetheless old bag model estimates\cite{chanowitz_sharpe,f:51,f:52} suggest that for some parameters there may not be enough phase space for $R_K \rightarrow K + R^0$ or $R_N \rightarrow K + S^0$. Thus a search for charged $R$-hadrons is worthwhile even though a null result would not exclude gluinos. Note that there is no relation between the lifetimes of the $R^0$ and $S^0$ and the lifetimes of charged $R$-hadrons, since the latter decay to the $R^0$ and $S^0$ through conventional weak interactions and would be expected to have a lifetime comparable to that of weakly decaying hadrons of similar mass, i.e., $10^{-10} - 10^{-13}$ sec for masses in the range 1 - 5 GeV. Briefly, the experimental constraints are: \begin{itemize} \item Cutts et al.\cite{cutts} use time of flight to exclude lifetimes greater than $\sim(2-5) \times 10^{-8}$ sec, for charged particles with masses in the 4-10 GeV range. \item Bourquin et al.\cite{bourquin} search for decaying particles in the CERN hyperon beam, extending the excluded range for new charged particles to cover the 2-4 GeV mass range, for lifetimes of order $10^{-9}-10^{-8}$ sec. \item Charged $R$-hadrons having mass of the same order of magnitude as the $D$ or $B$ mesons must have a lifetime too short or too long to decay in the vertex detectors used to measure $D$ and $B$ lifetimes. \item There is a CDF limit on the existence of charged hadrons having $\gamma \tau \gtrsim 10^{-7}$ sec\cite{cdf:stablechlim}, but it only addresses masses greater than 50 GeV because the present detector has time resolution at the nanosecond level. \end{itemize} Otherwise the constraints on charged $R$-hadrons are poor and the coverage is surprisingly spotty. It must be reemphasized, however, that even if strong-interaction-stable charged $R$-hadrons exist, one cannot immediately apply these experimental constraints on the allowed regions for their mass and lifetime to the limits in the figure, because there is no direct relation between the lifetime of a charged $R$-hadron and that of the $R^0$. If the gluino lifetime is long because the squark mass is much larger than $m_W$, then beam dump experiments\cite{f:23,charm,ball,bebc,akesson}, which look for the reinteraction of the lsp in a neutrino detector, become ineffective because the lsp cross section falls as $\left( \frac{m_W}{M_{sq}} \right)^4$. Even if the lsp cross section is not too small, the gluino must decay before losing energy in the dump, e.g., in 10 cm in the Ball et al.\ FNAL beam dump experiment\cite{ball,f:55}, i.e., requiring a lifetime $\lesssim 5 \times 10^{-11} \left(\frac{m_{\tilde{g}}}{1\,{\rm GeV}}\right)$ sec. Likewise, the BEBC experiment\cite{bebc} observes that if $\tau_{\tilde{g}} \gtrsim 5 \times 10^{-11}$ sec the gluino decay does not occur before interaction, ``severely degrading the photino flux reaching our detector.'' For a massless photino they model this effect, but in general beam dump experiments need to be analyzed in terms of the three parameters $m_{\tilde{g}},~\sigma_{\rm lsp}$ and $\tau_{\tilde{g}}$. Beam dump experiments cannot be used to exclude regions of the gluino mass-lifetime plane without further assumptions which are not in general appropriate to our case, except for gluinos with lifetimes shorter than about $5 \times 10^{-11}$ sec.
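The scaling of the beam-dump requirement is just the decay-length condition (a rough check, not taken from the original references): demanding a decay length $\gamma\beta c\tau_{\tilde{g}} \lesssim L$ for a gluino of momentum $p$ gives
\begin{displaymath}
\tau_{\tilde{g}} \lesssim \frac{L\, m_{\tilde{g}}}{p\, c} \approx 5 \times 10^{-11} \left(\frac{m_{\tilde{g}}}{1\ {\rm GeV}}\right)\ {\rm sec} \qquad {\rm for}~L = 10~{\rm cm},
\end{displaymath}
so the quoted coefficient corresponds to a typical produced gluino momentum of order 5-10 GeV.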
The HELIOS experiment\cite{akesson} explicitly addresses direct production of WINP's, and not long-lived gluinos, since it requires that no energy degradation occur in the dump. The possibility of large gluino mass is at present only addressed by collider missing energy searches that detect the existence of a gluino which decays inside the apparatus with a substantial portion of its energy going to the lsp, which is very weakly interacting and escapes. Indeed, this is the classic gluino signal\cite{f:24}. The CDF missing energy search\cite{cdf:gluinolim2} is sensitive to gluinos which decay within about 1 meter of their production, i.e., having $\frac {E_{\tilde{g}}}{m_{\tilde{g}}} \tau_{\tilde{g}} \lesssim 3 \times 10^{-9}$ sec. They require the missing transverse energy to be greater than 40 GeV. To get a very rough idea of their regime of sensitivity (which could be determined more accurately by modeling the energy spectrum of the produced gluinos) we can take as a typical event the case in which the gluino is emitted at $45^\circ$ and assume it decays giving 1/3 of its energy and momentum to the lsp, which escapes with the minimal transverse energy to satisfy their cuts. In this case, the actual energy of the decaying gluino would be $3 \cdot \sqrt{2} \cdot 40 \approx 170$ GeV, ignoring gluino, quark and lsp masses. Thus gluinos with lifetimes longer than about $2 \times 10^{-11} \left(\frac{m_{\tilde{g}}}{1 {\rm GeV}}\right)$ sec (i.e., $3 \times 10^{-9}~{\rm sec} \times m_{\tilde{g}}/170~{\rm GeV}$) would not be efficiently detected in the CDF search. They do not investigate masses lower than 20 GeV, where they lose efficiency on account of the acoplanarity and missing $E_T$ cuts. The UA1 missing energy search\cite{ua1} claims to be sensitive enough to exclude masses as low as 4 GeV. Although a gluino lifetime is not included in their efficiency Monte Carlo, they state that they believe they are fully sensitive to gluinos whose lifetime is shorter than $10^{-10}$ sec. This agrees with the crude estimate given above for the CDF experiment, for $m_{\tilde{g}} = 5$ GeV. Nonetheless, especially for the high mass end of the UA1 experiment, a Monte Carlo is needed to know the lifetime sensitivity as a function of mass. For simplicity, and to be conservative, we will use the estimate $2 \times 10^{-11} \left(\frac{m_{\tilde{g}}}{1 {\rm GeV}}\right)$ sec for both CDF and UA1. To summarize this section, gluinos in the mass range $\sim 1.5-3.5$ GeV are absolutely excluded (CUSB). Lighter gluinos are allowed, as long as the $R^0$ lifetime is not in the range $2 \times 10^{-6} - 10^{-8}$ sec if the $R^0$ mass is greater than 1.5 GeV (Bernstein et al.), or the range $> 10^{-7}$ sec if its mass is greater than 2 GeV (Gustafson et al.). Gluinos with mass around 4 GeV or above must have a lifetime longer than about $\sim 2 \times 10^{-11} \left( \frac{m_{\tilde{g}}}{1 {\rm GeV}}\right)$ sec (UA1, CDF), with the ranges $> 10^{-7}$ sec (Gustafson), $2 \times 10^{-6}-10^{-8}$ sec (Bernstein et al.) and $\sim 10^{-10}$ sec (ARGUS) ruled out for masses in the vicinity of 4-5 GeV. The figure is an attempt to summarize these results, combining experiments which report results directly in terms of $m(R^0)$ with those characterized by limits on $m_{\tilde{g}}$ by use of eqn. (\ref{mRmg}). Given the primitive nature of eqn.
(\ref{mRmg}) and the $\pm 375$ MeV uncertainty on the $R^0$ mass when the gluino is massless (section \ref{mass}), as well as the very rough methods used to extract the ranges of mass and lifetime sensitivity for the various experiments, a $\gtrsim 20 \%$ uncertainty should be attached to all the boundaries shown in this figure. \section{Theoretical Comments: Gluino Lifetime and Production Estimates} \label{lifetime} \hspace*{2em} How natural is it from a theoretical point of view for an $R^0$ in the mass range $1.5 - 2.5$ GeV to have a lifetime longer than $2 \times 10^{-6}$ sec, or for an $R^0$ with mass $\gtrsim 5$ GeV to have a lifetime longer than $\sim 2 \times 10^{-11} \left( \frac{m_{\tilde{g}}}{1 {\rm GeV}}\right)$ sec? For the higher mass range the $R^0$ and gluino lifetimes can be taken to be approximately the same, since for a relatively massive state one can ignore the effects of confinement on the overlap of the initial and final states, and the modifications to phase space from the hadron masses. For the low end of the range, if the lsp mass is low compared to the gluino mass, one could either argue by analogy to known hadron decays\cite{f:23} or, following Franco\cite{franco}, take the $R^0$ lifetime to be that of a gluino of $\sim \frac{3}{4}$ of its mass. For the interesting case that the lsp mass is a significant fraction of $m(R^0)$, tools have not yet been developed which allow us to reliably estimate the resultant suppression in the decay rate. The decay rate for an unconfined gluino to decay to the lsp and a $\bar{q} q$ pair can be obtained as follows. In general, the lsp is a superposition of the fermionic partners of the neutral SU(2) and U(1) gauge bosons (w3-ino and bino) and of the neutral Higgs bosons (higgsinos). However it is shown in ref. \cite{f:96} that when gaugino masses are all radiatively generated the higgsino component of the lsp is in fact less than $1\%$ in amplitude. Thus we can approximate the lsp wavefunction as $\cos\theta\, |\tilde{b}\rangle + \sin\theta\, |\tilde{w}_3\rangle$. The decay rate of the gluino assuming the lsp to be a photino was given in ref. \cite{hk:taugluino}, so we need only replace $e_q^2$ appearing in their expression by $\left(\frac{\sin \theta}{\sin \theta_W}\right)^2 [I_z + z \frac{Y}{2}]^2$, where $z=\frac{\tan \theta_W}{\tan\theta}$, and average over left and right handed contributions. Thus the total rate for the gluino to decay to the lsp and a $u \bar{u}$, $d \bar{d}$, or $s \bar{s}$ pair, ignoring the quark masses, is: \begin{eqnarray} \Gamma_{\tilde{g}} & = & \frac{\alpha_s \alpha_{em} \left(1 - \frac{2}{9}z + z^2\right) m_{\tilde{g}}^5}{128 \pi M_{sq}^4} \left(\frac{\sin \theta}{\sin \theta_W}\right)^2 \times \label{taugluino} \\ & & \left[(1-y^2)(1 + 2y - 7y^2 + 20y^3 - 7 y^4 + 2 y^5 + y^6) + 24y^3(1 - y + y^2)\log(y)\right], \nonumber \end{eqnarray} where $y=\frac{m_{\rm lsp}}{m_{\tilde{g}}}$. We have taken $M_L^{sq} = M_R^{sq} = M_{sq}$ for simplicity. The $\theta$-dependent factor ranges from 1, for a light neutralino in the low-$\mu$ region where $\theta \approx \theta_W$, to $(\cos \theta_W)^{-2}$ for a heavy neutralino in the high-$\mu$ region where $\cos \theta \approx 1$. Thus for a rough estimate we take this factor to be 1. We also take $\alpha_s \sim 0.1$ and $\alpha_{em} = 1/128$, since the relevant scale is the squark mass.
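The squark-mass statements below then follow by setting $\tau_{\tilde{g}} = \hbar/\Gamma_{\tilde{g}}$; a minimal Python sketch (under the assumptions just stated: $\theta$-factor equal to 1, $z=1$, $\alpha_s = 0.1$, $\alpha_{em} = 1/128$, and $\hbar = 6.58 \times 10^{-25}$ GeV\,s) is:
\begin{verbatim}
import math

HBAR = 6.582e-25                      # GeV s
ALPHA_S, ALPHA_EM = 0.1, 1.0 / 128.0  # as chosen in the text

def gamma_gluino(m_gluino, m_squark, y, z=1.0):
    # Decay rate (GeV) for gluino -> lsp + q qbar summed over u, d, s,
    # eq. (taugluino) with the theta-dependent factor set to 1;
    # y = m_lsp / m_gluino, masses in GeV.
    if y > 0.0:
        phase = ((1 - y**2) * (1 + 2*y - 7*y**2 + 20*y**3
                               - 7*y**4 + 2*y**5 + y**6)
                 + 24 * y**3 * (1 - y + y**2) * math.log(y))
    else:
        phase = 1.0
    return (ALPHA_S * ALPHA_EM * (1 - 2*z/9 + z**2) * m_gluino**5
            / (128 * math.pi * m_squark**4) * phase)

def tau_gluino(m_gluino, m_squark, y):
    return HBAR / gamma_gluino(m_gluino, m_squark, y)

print(tau_gluino(1.25, 2400.0, 0.0))  # ~2e-6 s: cf. the ~2 TeV bound below
\end{verbatim}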
Then, for instance with a massless lsp, the squark mass must be greater than $\sim 2$ TeV for a gluino with effective mass in the 1-1.5 GeV range to have $\tau_{\tilde{g}} \geq 2 \times 10^{-6}$ sec. If instead the lsp mass is 90\% of the gluino effective mass, the squark mass must only be greater than about 200 GeV. For a gluino of mass 5 GeV, the UA1 bound is most relevant. For an lsp mass of zero or $0.9 m_{\tilde{g}}$ one finds that the squark mass must be greater than 1 TeV or $\sim 130$ GeV, respectively. These squark masses increase to 6 TeV or 670 GeV for a 15 GeV gluino. As shown in ref. \cite{f:96}, when gaugino masses arise radiatively, these conditions are naturally accommodated in much of parameter space. It is also worth noting that absolute stability is a real possibility for the $S^0$, since the mass difference between it and the lsp must be greater than the sum of the proton and electron masses for it to decay. If it binds to nuclei, this would be ruled out experimentally by the sensitive searches for exotic isotopes, at least for some mass regions\cite{muller}. However one would expect a repulsive, not attractive, interaction between a nucleus and the flavor-singlet $R^0$ or $S^0$, since the intermediate state created when they exchange mesons with a nucleon has a much higher energy\footnote{Unlike the binding of a nucleus, where exchange of mesons between pairs of nucleons, each of which can absorb or emit an $I=1$ meson and remain a nucleon, leads to intermediate states close in energy to the original state.}. Anomalous signals in extensive air showers and underground muons seemingly coming from Cygnus X-3 are consistent with the intermediate particle being a neutron, except that the neutron decays too quickly to make the long trip\footnote{See, e.g., ref. \cite{bei} for a summary.}. Long-lived $R^0$'s were investigated\cite{bi}, but discarded\cite{ov} on account of the mistaken belief that they would imply a long-lived charged $R$-proton, which is ruled out by, e.g., ref. \cite{muller}. If the present quiet of Cygnus X-3 is only a cyclical phenomenon and such events are observed again in the future, an $S^0$ interpretation should be seriously considered. Turning now to cross section calculations, I am not aware of any recent pQCD calculations of gluino production at a hadron collider, except for very massive gluinos. The old analyses\cite{deq,bhk} should be updated, making an attempt to estimate the uncertainty in the gluino distribution, as well as including 1-loop corrections, which have proved very important for ordinary pQCD predictions. From deep inelastic and Drell-Yan experiments, the quark and antiquark distributions are reasonably well fixed. Direct photon production gives information on the gluon distribution function, so the momentum sum rule then provides some constraint on the gluino distribution. The naive argument\cite{f:9} which leads to the behavior $(1-x)^7$ for the sea-quark distribution functions at large $x$ leads to the same behavior for the gluino distribution function. Since the $R^0$, $\eta'$, and $\eta_{\tilde{g}}$ masses are so much larger than pion masses, one would expect that the low-$Q^2$ gluino distribution functions are smaller than those of the sea-quarks. However since the 1-loop beta-function for gluinos is the same as for 3 flavors of light quarks, the gluino distribution function evolves as rapidly as all three quarks together, so a light gluino would become an important component of the nucleon at larger $Q^2$.
Although the gluon and gluino distribution functions are individually difficult to determine well, without assumptions as to their functional form for the entire $x$ range, their sum is much better determined\footnote{Comparably to the determination of the gluon distribution function, when the gluino possibility is ignored.}. Since both gluons and gluinos give rise to gluino jets, the actual prediction for $R^0$ production is relatively stable. If the existence of gluinos were established, the ratio of events with 1 and 2 $R^0$'s would allow the ratio of gluino and gluon distributions to be constrained. Demanding consistency of pQCD predictions with observed jet production may also allow the gluino distribution function to be further constrained, since the amplitudes for gluinos to produce jets differ from those for quarks or gluons to produce jets. \section{Indirect Evidence Regarding Light Gluinos} \label{indirect} \hspace*{2em} For years it has been recognized that in principle the running of $\alpha_s$ is sensitive to the presence of gluinos\footnote{For the first discussion of this, see ref. \cite{f:32}.}. In deep inelastic scattering experiments the ambiguity introduced by higher twist contributions is too large to allow one to decide between QCD with and without gluinos\footnote{Thus comparison of the values of $\alpha_s$ from deep-inelastic scattering and $Z^0$ decay is inconclusive, although suggestive\cite{jk}.}. Gluinos modify the $e^+ e^-$ annihilation cross sections only in order $\alpha_s^2$, by providing an additional source of 4-jet events and making virtual corrections to 2-jet events. The possibility of inferring or excluding gluinos directly from LEP event characteristics was discussed in ref. \cite{f:82}, where the sensitivity to the as-yet-uncalculated 1-loop corrections was shown to be too great to allow one to decide between ordinary QCD and QCD with massless gluinos\footnote{More recent articles on this subject have come to the same conclusion\cite{m-ts,opal4jet}.}. The reason that it is generally difficult to discriminate between QCD with and without gluinos is that adding gluinos to the theory modifies it in competing ways which tend to cancel. For instance the value of $\alpha_s$ at LEP is obtained by fitting QCD predictions for various aspects of event shapes and extracting the value of $\alpha_s$ which gives the best fit. Gluinos are an additional source of 4-jet events, but at the same time $\alpha_s$ runs more slowly when there are gluinos. This means that, for a given value of $\alpha_s(M_Z)$, the typical value of $\alpha_s(Q_{eff})$ in multi-jet events is lower than it would be for QCD without gluinos, which tends to reduce the number of multi-jet events\footnote{Ref. \cite{enr} correctly emphasized the need to extract $\alpha_s$ from data with and without gluinos before evaluating the consistency of the running between different energy scales, with and without gluinos. However that analysis only includes the virtual corrections to the running of $\alpha_s$ in LEP events and not the effect of real gluino jet production, which is of the same order, so it is incomplete. Bryan Webber and I (unpublished) tried to see if we could find some systematic preference of the LEP data for QCD with and without gluinos by looking at the entire menagerie of quantities from which $\alpha_s$ is extracted.
We found that the only region in which there was a significant difference in predictions with and without gluinos is precisely the region in which hadronization is most important, and for which the 1-loop corrections to the 4-jet cross section (which are not yet available) are crucial.}. Just as the effects of gluinos tend to cancel at LEP, one cannot simply say that the number of jets predicted at the Tevatron will be increased by such-and-such an amount, since if there are light gluinos they will be present in the hadron structure functions and will use some of the ``room'' for gluons, reducing the production of conventional jets to some extent\footnote{See ref. \cite{f:82} for a discussion of the difficulty of using hadron collider jet cross-section characteristics to infer or exclude the existence of long-lived gluinos given the present level of theoretical precision.}. To address the possibility of light gluinos by their effects on jets or the running of $\alpha_s$, one must a) compare predictions for actual experimental observables with and without gluinos and not try to compare derived quantities such as $\alpha_s$, and b) {\it fully} incorporate gluinos into the analysis, including their effects on distribution functions. Recently, there have been a number of attempts to make the kind of careful analysis which would be necessary to obtain reliable indirect information on the possibility of light gluinos. Ref. \cite{RtauRZ} used theoretical predictions for the hadronic branching fractions $R_{\tau}$ and $R_Z$ with and without gluinos, to extract $\alpha_s(m_{\tau})$ and $\alpha_s(m_{Z})$ with and without gluinos, then checked whether the running of $\alpha_s$ between these values was consistent with what QCD predicts, with and without gluinos. The main difficulty with this approach is the issue of how to treat the effect of a light or massless gluino on $\tau$ decay, and also the question of the validity of neglecting non-perturbative contributions of order $\frac{1}{m^2}$ as is done in ref. \cite{bnp}. The latter issue is discussed in ref. \cite{altarelli:hanoi}, where it is argued that unless the validity of neglecting $\frac{1}{m^2}$ corrections is established, the error should be taken to be twice that assigned in the ``nominal'' case of ref. \cite{RtauRZ}, i.e., that $R_{TH} = 2$ is appropriate for the ref. \cite{RtauRZ} analysis. With respect to the former issue, since the invariant mass of an $R^0$ pair and of the $\eta_{\tilde{g}}$ is too large to contribute significantly to $\tau$ decay, independent of the gluino mass, the gluino contribution should be neglected when determining $\alpha_s(m_{\tau})$, as is done for the charm quark. This can be implemented\footnote{M. Schmelling and R. St.Denis, private communication.} by using an ``effective'' gluino mass $\gtrsim m_{\tau}/2$ when using their fig. 2. One then finds that even at the 90\% confidence level there is no excluded region of gluino mass from this analysis when $R_{TH} = 2$. For other attempts to study indirect evidence for light gluinos see, e.g., refs. \cite{clavelli,clavetal}. \section{Proposals for Experiments} \hspace*{2em} Now let us turn to the question of how to establish or rule out the existence of new light hadrons, $R^0$ or $S^0$. One method, proposed years ago\cite{f:51}, is to look for exclusive reactions such as $K^-p \rightarrow R^0 S^0$, followed by elastic scattering of the $R^0$ and $S^0$ off protons.
With accurate measurements of the $R^0$ and $S^0$ production angles, and measurement of the recoil proton momenta in the secondary $R^0 p \rightarrow R^0 p$ and $S^0 p \rightarrow S^0 p$ scatterings, there is in principle one more equation than unknowns and the masses of the $R^0$ and $S^0$ can both be determined. Using a hydrogen bubble chamber would seem to work nicely for observing the initial and secondary scatterings, but a high efficiency for identifying $K^0_L$'s and neutrons would be desirable to reduce background, so this may not be the optimal approach. The interaction lengths of the $R^0$ and $S^0$ are probably somewhat shorter than for ordinary mesons and baryons, on account of the greater color charge of the gluinos as compared to quarks and on account of the $S^0$ having 4 constituents rather than 3 for a normal baryon. The candidate events should show a threshold behavior consistent with the measured $R^0$ and $S^0$ masses, which would corroborate the validity of the overall picture. Note that this experiment is sensitive to gluinos with any lifetime long enough that the $R^0$ and $S^0$ rescatter before decaying, so that it is complementary to the experiment of Bernstein et al. and sensitive to lower masses than Gustafson et al. However this method has two important weaknesses: First, the cross section may be very small, since one is asking for a very exotic final state to be produced in an exclusive mode. Second, it is not possible to reliably calculate the cross section so that one cannot establish a level of sensitivity adequate to definitively exclude the phenomenon. Unfortunately it is also a demanding, single-purpose experiment and theoretical prejudice has favored heavy gluinos, so that experimenters have not looked just to see if something might be there\footnote{Recently, Carlson and Sher\cite{cs} proposed searching for the decays of gluinos following their photoproduction at CEBAF. This is an excellent experiment, since something may be found. However it does not satisfy the present criterion of being useful for excluding a light gluino, since the relatively low invariant mass range which can be probed at CEBAF means that the non-perturbative effects of $R^0$ and $\eta_{\tilde{g}}$ masses will suppress the signal and the calculations of the production rates are therefore not sufficiently reliable to allow exclusion. They report results for the effective gluino mass being taken to be 1 and 1.5 GeV, and the dramatic rate of decrease with effective gluino mass reflects the sensitivity to this effect. To obtain reliable inclusive cross sections for production of light particles from pQCD, one must impose a $p_{\perp}^{min}$ cut. The event rates they quote are so large that this may be possible, but as long as their signal is the decay of the gluino, the proposed experiment can only be used to examine a limited lifetime region.}. Here I propose other experiments which also do not rely on observing the decay of the $R^0$ or $S^0$ and are thus able to rule out or observe long-lived gluinos, but which do not have the difficulties of the one discussed above. Except in the forward direction, we expect that $S^0$ production is much smaller than $R^0$ production, so let us ignore $S^0$'s for simplicity. By working at high energy, exotics can be produced relatively easily and inclusive cross-sections can be reliably computed from perturbative QCD in appropriate kinematical regions. 
The cross section for producing $R^0$'s is essentially just the gluino jet cross section, since all gluino jets end in an $R^0$ (or, rarely, an $S^0$) because other $R$-hadrons eventually decay to these\footnote{Or {\it very} rarely, gluinos from independent jets can annihilate, but at this order one must also consider jet evolution which produces gluinos.}. The gluino-jet cross-section is approximately 10\%\cite{f:82} of the total jet cross section, so that it is actually quite common for a Tevatron collider or fixed target event to contain an $R^0$ pair. $p_t$ cuts can be imposed to ensure that perturbative QCD event generators can be reliably used to compute the expected rate, even for light gluinos. Showing that there are no such events at a level of $4\sigma$ below the prediction would then convincingly rule out the existence of these gluinos\footnote{Care must be taken to realistically estimate the theoretical uncertainty, including that from the distribution functions and the neglect of higher order corrections to the partonic scattering amplitudes, which in ordinary QCD have proven to be larger than originally estimated.}. Basically the idea is an outgrowth of the suggestion of ref. \cite{f:51}, but sacrificing the additional constraints of exclusive production in favor of the higher rate and reliable calculability of high energy inclusive production. A high energy beam from an accelerator is incident on the primary target. This produces a neutral beam containing neutrons, kaons, hyperons, and possibly $R^0$'s and $S^0$'s. This beam illuminates a secondary target in which an elastic scattering $R^0 p \rightarrow R^0 p$ may occur. Measuring the momentum of the recoil proton and the angle of the produced $R^0$ (by observing {\it its} interaction, which need not be elastic) gives enough constraints to solve for $m_R$, if indeed the reaction is elastic. Knowing the visible energy of the final particles in the secondary scattering of the produced $R^0$ can help choose between multiple solutions and help discard events in which the primary scattering is not elastic.\footnote{I am grateful to T. Devlin for making these points.} Of course the background due to other reactions, especially $n~p \rightarrow n~ p$ or $K^0_L p \rightarrow K^0_L p$, or inelastic scattering, will be quite severe even after vetoing on extra charged particles and $\pi^0$'s, so excellent resolution is crucial. Timing could be used to measure $p/E$ of the incident neutral. With this information, one would have an over-constrained system of equations without relying on the secondary scattering being elastic, and one could verify that the initial reaction was indeed $R^0 p \rightarrow R^0 p$ as well as determine the $R^0$ mass. If the $R^0$ is sufficiently heavy, one can get adequate resolution with nsec accuracy using the beam buckets, without being forced to put the secondary target so far away that the loss of solid angle would be intolerable\footnote{Keeping the distance between the two targets as small as possible is also desirable from the standpoint of being sensitive to relatively short-lived $R^0$'s as well.}. Modern $O(10)$ psec timing could allow the lower mass regions to be investigated, except that it requires tagging the initial $R^0$ production event, so it entails a reduction in rate. Detailed Monte Carlo simulation is needed to determine whether it is possible to cover the very-light gluino regime, where the $R^0$ may be difficult to distinguish from a neutron.
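To make the constraint counting above explicit (a bookkeeping sketch, under the elasticity hypothesis): with the beam direction known, the unknowns are $|\vec{p}_{\rm in}|$, $|\vec{p}_{\rm out}|$ and $m_R$, three in all, while measurement of the recoil proton 4-momentum and of the outgoing $R^0$ direction turns energy-momentum conservation,
\begin{displaymath}
\sqrt{m_R^2+\vec{p}_{\rm in}^{\;2}} + m_p = \sqrt{m_R^2+\vec{p}_{\rm out}^{\;2}} + E'_p~, \qquad
\vec{p}_{\rm in} = \vec{p}_{\rm out} + \vec{p}\,'_p~,
\end{displaymath}
into four equations for three unknowns. The one extra relation tests that the reaction was indeed elastic, and an independent time-of-flight measurement of $p/E$ over-constrains the system further, as noted above.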
With many events, a discrepancy between the observed and expected event characteristics such as angular distribution and rates would be a useful diagnostic. Another handle for some range of $R^0$ lifetimes would be a distance dependence of the anomalous events. In the above discussion I focused on the process $R^0 p \rightarrow R^0 p$ for identifying the $R^0$. It is the most attractive option from a theoretical point of view since its cross section is easiest to estimate\footnote{The optical theorem relates the forward elastic cross section to the total cross section. Above the resonance region one would expect $\sigma(R^0p)\sim \sigma (\pi p) \sim \sigma(pp)$, since the confinement scale, rather than the color charge of the valence constituents, seems most important in determining the size of a system of light, relativistic quarks or gluons or gluinos. Using lattice gauge theory, it might be possible to measure the color charge radius of the $R^0$, or at least its ratio to that of the pion or nucleon, to improve upon this crudest possible estimate. Or one could use information from lgt on glueball masses to try to constrain a bag model for color octet constituents, and then determine their radius.}. If one's goal is to try to unambiguously exclude light gluinos, then one must use reactions which can be estimated with some confidence both to produce and to detect them. However if one wants the most effective way to discover light gluinos if they exist, one can consider other detection reactions such as $R^0 p \rightarrow R_p \eta'$ or $R^0 p \rightarrow K^+ S^0$, whose signature may be much more distinctive\footnote{I am indebted to W. Willis for emphasizing this point.}. In the resonance region, such cross sections can be very large. Further work is needed to try to estimate them. A setup such as KTeV, where the distance between primary and secondary targets (the regenerator) is 120 m and the typical energy of the long-lived neutrals is about 100 GeV, would be mainly sensitive to lifetimes longer than $\sim 4 \times 10^{-9}$ sec. Thus if it can be used for this purpose, it will be able to probe a large part of the interesting lifetime range. In a collider experiment, pair-produced heavy gluinos would radiate gluons and light quarks to produce jets containing ordinary hadrons and an $R^0$. For a sufficiently heavy $R^0$ and good timing capabilities, one could in principle detect the time delay $p/E$ for the late-arriving neutral particles to deposit energy in the calorimeter. Assuming each of them to be an $R^0$ which stopped in the calorimeter, producing very light particles, the energy it deposited in the calorimeter would be roughly of the same magnitude as the $p$ of the $R^0$. Knowing $p$ and $p/E$, one could solve for $m_R$. A detailed study of the conversion of an $R^0$'s momentum to the energy deposition in the calorimeter (in particular the extent of the fluctuations to be expected) is needed to see if this method is feasible in practice. Another way that the production of a pair of heavy long-lived gluinos might be inferred in principle would be to search for events in collider experiments in which fitting energy and momentum conservation at the jet level requires two of the jets to be given a large mass. \section{Summary} \hspace*{2em} As is shown in ref.
\cite{f:96}, if gaugino masses are generated by loop effects, the gluino and lsp masses will be in the range from $\sim 100$ MeV to $\lesssim 30$ GeV if the SUSY and electroweak symmetry breaking scales are $\lesssim 10$ TeV. Furthermore, in a substantial part of parameter space the lsp is near in mass to, or heavier than, the gluino, so that long gluino lifetimes are natural. The phenomenology of such light, long-lived gluinos is the subject of the present paper. Some aspects of the phenomenology of the associated lsp are discussed in ref. \cite{f:96}. A very light gluino (mass of order a few hundred MeV or less) is particularly attractive since it emerges naturally when dimension-3 SUSY breaking operators are absent from the low-energy theory, as is the case in hidden sector dynamical SUSY breaking with no gauge singlets\cite{bkn}. Consideration of the pseudoscalar spectrum is shown to imply that the gluino mass must be greater than $\sim 100$ MeV. A very light gluino would lead to new hadrons, the $R^0 ~ (g \tilde{g})$ and $S^0 ~ (uds\tilde{g})$, with masses around $1 \frac{1}{2}$ GeV. Experiments to definitively rule out or discover them are possible but very challenging. Existing direct and indirect experimental constraints are reviewed and found not to address the most interesting scenarios. Experiments directed at the higher mass range are also mentioned. \section{Acknowledgements} I have benefited from discussions with T. Banks, J. Bronzan, N. Christ, J. Conway, T. Devlin, H. Georgi, C. Lovelace, A. Masiero, A. Mueller, M. Schmelling, S. Somalwar, R. St.Denis and W. Willis. \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \begin{figure}[b] \centering \includegraphics[width=8cm]{Figures/AgentPattern.png} \caption{ Patterns evolved by CA agents. } \label{AgentPattern} \end{figure} This paper is an extension of the presentation ``Forming Point Patterns by a Probabilistic Cellular Automata Rule'' given at the Summer Solstice Conference on Complex Systems in Dresden, Germany, July 15 -- 17, 2019 (see Appendix). The novel approach introduced there is methodical and is explained here in more detail. In previous work \cite{2014-Hoffmann-ACRI-Pattern, 2016-Hoffmann-Polonia-PathPattern, 2016-Hoffmann-D-ACRI-LinePattern, 2017-Hoffmann-D-PACT-MaxDomino-Agents} different patterns were generated by moving Cellular Automata agents. Such patterns are depicted in Fig. \ref{AgentPattern}. The behavior of an agent was defined by an embedded finite state machine (FSM). The agent's FSM was trained offline by a genetic algorithm (GA) to form a specific pattern. The patterns were locally defined by small pattern templates. The number of matching templates was used to define the fitness function for the GA. A population of agents with different FSMs was optimized by testing their performance through simulation: agents that best produced the desired global patterns, as measured by counting the local pattern matches, were favored. The effort to train such agents is quite high, especially to find agents that work on any field size. To avoid such computational effort, a novel approach was proposed that constructs the required CA rule directly; it is described here. It has the potential to be applied to more complex pattern formations. Whereas the local matching templates are hidden in the FSMs of the moving agents in the former work, now the local matching templates are directly used in the definition of a classical uniform CA rule. This new approach was successfully applied to place a maximal number of dominoes in a 2D field \cite{2019-pact-domino}, to find a sensor point pattern (to cover an area by active sensors) \cite{2020-HoffmannSeredynski-SensorPoint}, to place a maximal number of dominoes in the diamond \cite{Hoffmann-2021}, and to cover a space with a minimal number of dominoes \cite{Hoffmann-2021-MinimalDominoPact}. The objective is to find a point pattern with a maximal number of points by a CA rule. The cell's state is $\in\{0,1\}$, where `1' represents a point and `0' some material / space between points. Points are not allowed to touch each other; they have to be separated by 0-cells, and every 0-cell finds at least one point in its Moore-neighborhood. To solve this problem we consider it as a tiling problem with overlapping \textit{point tiles}. A point tile is a $3\times3$ pixel array where the center is `1' and the other pixels are `0'.
The task is to cover the space with overlapping point tiles without gaps. Our problem is one of the diverse covering problems \cite{Snyder2011}, and it is related to the NP-complete \textit{vertex cover problem} introduced by Hakimi \cite{Hakimi1965} in 1965. A vertex cover is a set of nodes in a graph such that every edge of the graph has at least one end point in the set. A minimum cover is a vertex cover which has the smallest number of nodes for a given graph. Hakimi proposed a solution method based on Boolean functions; later, integer linear programming \cite{Gomesa2006}, branch-and-bound, genetic algorithms, and local search \cite{Richter2007} were used, among others. Other related problems are the \textit{Location Set Covering Problem} \cite{Church1976} and the \textit{Central Facilities Location Problem} \cite{Mehrez2016}. These problems aim to find the locations of \emph{P} facilities that can be reached within a weighted distance from demand points, minimizing \emph{P}, minimizing the average distance, or maximizing the coverage. Covering problems have many applications, in economics, urban planning, engineering, etc. \begin{figure}[tb] \centering \includegraphics[width=0.32\textwidth]{Figures/Antalya2200.jpg} \includegraphics[width=0.32\textwidth]{Figures/fish2200.jpg} \includegraphics[width=0.32\textwidth]{Figures/bh2022-2000.jpg} \caption{ Everyday point patterns. } \label{Antalya2200} \end{figure} Sample applications for maximal point patterns are: constructing a sieve with a maximum number of holes while keeping a high stiffness, constructing an effective brush, densely packing square parcels, or attracting particles minimizing their total energy. We can observe point patterns every day, like the ones depicted in Fig. \ref{Antalya2200}. \begin{figure}[tb] \centering \includegraphics[width=7cm]{Figures/4-Automaton.png} \caption{ Each cell is an automaton, connected to its neighbors. C: center cell to be modified. N, E, S, W: neighboring cells. } \label{4-Automaton} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=7cm]{Figures/3-GlobalPattern-LocalRule.png} \caption{ A local rule is applied at every site $(x, y)$ of the array (field of cells). The rule is simple but it can induce a complex global pattern. } \label{3-GlobalPattern-LocalRule} \end{figure} Cellular Automata (CA) is a well-known modeling and decentralized computing paradigm \cite{WolframNewKind}. A CA is a field / array of cells placed in a grid with local connections (Fig. \ref{4-Automaton}). The next cell's state is computed by a local rule $f$ taking the neighboring states into account: $C\leftarrow f(C,N,E,S,W)$. Every cell applies the same simple local rule, which may result in a complex global pattern / configuration change (Fig. \ref{3-GlobalPattern-LocalRule}). \section{The Problem and Solutions for \textit{n $\times$ n} Fields} \subsection{The Problem} Given is a field of $n \times n$ cells with cyclic boundary conditions. The cell's state is $s \in \{0,1\}$ where 0 will be represented by green or white and 1 by blue or black. Initially we assume a random configuration. The objective is to find a CA rule that successfully forms a \textit{Point Pattern}. A point pattern consists of \textit{point} cells ($s=1$) and \textit{zero} cells ($s=0$). There are two constraints that have to be fulfilled for any cell at site $(x,y)$ of the field. \begin{enumerate} \item The 8 cells surrounding a point at $(x,y)$ in the Moore-neighborhood have to be zero.
So the first constraint is: $ s(x,y)=1 \wedge s(x,y\pm1)=0 \wedge s(x\pm1,y)=0 \wedge s(x\pm1,y\pm1)=0~. $ \item At least one of the 8 cells surrounding a zero cell at $(x,y)$ in the Moore-neighborhood has to be a point. The second constraint is: $s(x,y)=0 \wedge \exists s(x',y')=1$ where $(x',y') \in \{ (x,y\pm1),(x\pm1,y),(x\pm1,y\pm1) \}$~. \end{enumerate} The two conditions define the allowed positions between zeroes and ones. Simply speaking, points should be near to each other but are not allowed to touch. We call patterns that fulfill these constraints \textit{valid}. We aim at valid patterns with a \textit{maximal} number of possible points $p_{max}$; therefore we call our problem also the \textit{max point pattern problem}: \vspace{10pt} $p \rightarrow p_{max}$ where $p$ is the number of points in the pattern. \vspace{10pt} \noindent In addition we favor a fast CA rule that rapidly produces a valid point pattern. Note that another problem would be to aim at patterns with a \textit{minimal} number of points, the \textit{min point pattern problem}. This problem needs more discussion and will not be treated here further. \subsection{Optimal Solutions for \textit{n} Even} \begin{figure}[htb] \centering \includegraphics[width=7cm]{Figures/SolutionsEven.png} \caption{ Optimal patterns for even $n=2,4,6,8$. } \label{SolutionsEven} \end{figure} First we look at the possible optimal solutions for $n \times n$ fields, where $n$ is even (Fig. \ref{SolutionsEven}). We consider solutions as equivalent if they are equal under cyclic shift, rotation and mirroring. Let $w$ denote the number of different pattern solutions. For $n=2$ there is only one solution with one point. For $n=4$ we find $w=2$ solutions with $p=4$ points. For $n=6$ we have 2 solutions with 9 points. For $n=8$ we get 4 solutions with 16 points. In general the \textit{maximal} number of points is $p_{max}=n^2/4$ because the square cell field can be tiled into small non-overlapping blocks / tiles of size $2 \times 2$, each with one black cell / point. \section{Rule Design Issues} \subsection{Updating Scheme} We may use \textit{synchronous} or \textit{asynchronous} updating and a \textit{deterministic} or a \textit{probabilistic} rule. This makes four options: \begin{enumerate} \item synchronous updating \& deterministic rule \item synchronous updating \& probabilistic rule \item asynchronous updating \& deterministic rule \item asynchronous updating \& probabilistic rule \end{enumerate} Our goal is to find a rule that produces valid point patterns and preferably converges to an optimal solution (\textit{max pattern}), or finds several or even all optimal solutions during its evolution in time. \textit{Option 1:} Until now it was not possible to design such a rule. The problem is that the evolving pattern may get stuck in sub-optimal local fixed or oscillating structures such as we know from the \emph{Game of Life}. It remains an open question whether it is possible to find such a rule. \textit{Options 2 -- 4:} These options are related because the computation of a new configuration is stochastic. It seems that they can be transformed into each other to a certain extent. We decided to use option 4 (\textit{asynchronous updating \& probabilistic rule}). With asynchronous updating we don't need buffered storage elements and a central clock for synchronization, which is closer to the modeling of natural processes.
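A minimal Python sketch of this asynchronous scheme is given below (our own illustration; the selection strategies named here are defined in the next subsection, and \texttt{rule} stands for any of the local rules designed later):
\begin{verbatim}
import numpy as np

def time_step(field, rule, strategy="pure_random", rng=None):
    # One time-step = n*n micro time-steps, updating cells in place.
    rng = np.random.default_rng() if rng is None else rng
    n = field.shape[0]
    N = n * n
    if strategy == "pure_random":        # uniform random cell, repeats possible
        order = rng.integers(0, N, size=N)
    elif strategy == "random_sequence":  # fresh permutation every time-step
        order = rng.permutation(N)
    else:                                # deterministic strict index order
        order = np.arange(N)
    for idx in order:
        x, y = divmod(int(idx), n)
        # immediate in-place write: later micro-steps see the new value
        field[x, y] = rule(field, x, y, rng)
    return field
\end{verbatim}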
Basically asynchronous updating can be considered as a sequential process where only one cell of the array is updated at a certain (micro) time-step. Nevertheless it may be possible to update the cells of a subset in parallel if there is no influence between outputs and inputs within the subset. An example is to update odd and even cells alternately. We want to use the following sequential updating scheme: \begin{itemize} \item A cell is selected by a certain \textit{Selection Strategy}. \item The new cell's state-value is computed by the rule and immediately assigned to the cell's state memory during each micro time-step $\tau \rightarrow \tau+1$, i.e. $s^{\tau+1} \leftarrow \textit{Rule}(s, \textit{neighbors' states})^\tau$. \item Each time-step $t \rightarrow t+1$ consists of $N = n^2$ micro time-steps. This means that $n^2$ updates are performed during one time-step. \item Configurations are considered / observed (logged, sampled) at time-steps. \end{itemize} There are several possible selection strategies: \begin{enumerate} \item \textbf{Pure Random}, also called \textit{random select, pure asynchronous, fully asynchronous, independent random ordering}. For each new micro time-step, a cell is selected uniformly at random and then immediately updated. Thereby a cell may be updated several times during a time-step, or not at all. \item \textbf{Random Sequence}, also called \textit{random new sweep}. The update sequence is given by a random permutation of the cell indexes (defining the order), changed for every new time-step. Thereby each cell is updated exactly once during one time-step. \item \textbf{Deterministic Sequence.} The permutation of cell indexes is fixed. Examples: strict index order, first select odd cells then even cells, or an order in which the distance between cells is large or small on average. \end{enumerate} Experiments with the later defined CA rules showed that any of these selection strategies works well, with a relatively small influence on the performance. Therefore an update strategy can be chosen that is easy to implement or that can work better in parallel. A deeper investigation is necessary to confirm this conclusion. \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/UpdateEvents.png} \caption{ The probability of how often a cell is updated during a time-step with $N$ micro time-steps. $N = 16 \times 16$. } \label{UpdateEvents} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/MicroSteps.png} \caption{ Example of an evolution of a $4 \times 4$ 4-point pattern. The configurations at each micro time-step are shown. A time-step $t \rightarrow t+1$ consists of $n \times n$ micro time-steps $\tau \rightarrow \tau+1$. The update strategy was pure random. Points are colored in blue, state = 1. Colors for state = 0 and for cover levels: 0 (white), 1 (yellow), 2 (light green), 3 (green), 4 (dark green). } \label{MicroSteps} \end{figure} Let us consider the strategy \textit{Pure Random}. The probability $P(k)$ of $k$ update events for a specific cell during altogether $N$ events ($n^2$ micro-steps during one time-step) is given by the binomial distribution \vspace{10pt} $P(k)=(1-q)^{N-k}q^k \binom{N}{k}$ where $q=1/N$. \vspace{10pt} Fig. \ref{UpdateEvents} depicts the probability of how often a cell is updated during a time-step, for $N = 16 \times 16$. For larger fields with $N>16 \times 16$ cells the graph changes only marginally. Notice that the probability of updating a cell never or exactly once is $P(0)\approx P(1)\approx 37\%$, and of updating it twice is $P(2)\approx 18\%$. Fig.
\ref{MicroSteps} shows a sample evolution of a $4 \times 4$ 4-point pattern, micro step by micro step, using pure random updating. \subsection{Tiling Problem} \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/Tile.png} \caption{ (a) The used \textit{point tile} with its hull and kernel. (b) A cyclic $9 \times 9$ field can be tiled by 9 tiles without overlapping, a min pattern. (c) A cyclic $8 \times 8$ field can be tiled by 16 tiles with maximal overlapping, a max pattern. Overlapping tile pixels are colored in green (cover level = 2) or dark green (cover level = 4). } \label{Tile} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/CoverLevel.png} \caption{ Tiles may overlap. The numbers give the cover level $v$, the number of overlapping tiles / pixels. } \label{CoverLevel} \end{figure} The basic idea is to consider the problem as a tiling problem. Given is a certain tile, in our case called the \textit{point tile} (Fig. \ref{Tile}a). The point tile consists of 9 elements; we call them ``pixels'' and not ``cells'' because we reserve the word ``cell'' for cellular automata cells. The tile is partitioned into a hull and a kernel. The hull consists of the 8 surrounding pixels with the value 0, and the kernel is the center pixel with value 1. Now we want to cover a given cell field by (potentially) overlapping tiles and thereby find a solution of our problem. Fig. \ref{Tile}b shows how a $9 \times 9$ field (cyclic boundary condition) can be covered with the minimum of 9 tiles without overlapping. Fig. \ref{Tile}c shows how a cyclic $8 \times 8$ field can be covered with the maximum of 16 tiles with high overlapping. In order to find a valid solution we have to allow that the tiles may overlap (Fig. \ref{CoverLevel}). Gaps (uncovered cells) between tiles are not allowed. The cover level $v$ gives the number of tiles / tile pixels that overlap at a certain site of the cell field. The kernel pixel is not allowed to overlap with any other pixel from another tile, so the cover level for the kernel is fixed to 1. Only hull pixels may overlap with other hull pixels, up to 4. By covering each cell by overlapping tile pixels we obtain a \textit{valid} pattern fulfilling the above stated constraints. We call a pattern \textit{max/min pattern} if the \textit{number of used tiles} (which equals the \textit{number of points} $p$) is maximal resp. minimal. In this paper we focus on the problem of how to produce max patterns, the \textit{max point pattern problem}. Manually a valid pattern can be found by moving tiles around in a given grid field and asserting the constraints. Also a computer program could find valid patterns by moving tiles around, checking the constraints and maximizing or minimizing the number of tiles. But here we want to solve the problem in another way by using so-called \textit{templates}, which are shifted tiles or truncations of them. \subsection{Templates} \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/Templates.png} \caption{ The 9 templates derived from the point tile. For each pixel (encircled in red) a template is defined. The red marked pixel is called \textit{reference pixel}. The reference pixel serves as the center of a template. The value of the reference pixel is \textit{ref}($A_k$). } \label{Templates} \end{figure} Now we want to design CA rules. We define ``templates'', small matching patterns that will be used in the later described CA rules. They are systematically derived from the point tile (Fig. \ref{Templates}).
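In Python, this derivation can be sketched as follows (our own illustration, anticipating the formal definition given next; the enumeration order produced by the loop differs from the numbering $A_1, \ldots, A_9$ used below):
\begin{verbatim}
import numpy as np

DONT_CARE = 2                       # stands for the '#' pixel
TILE = np.zeros((3, 3), dtype=int)
TILE[1, 1] = 1                      # point tile: kernel 1, hull of zeroes

def make_templates():
    # For every tile pixel (i, j), shift the tile inside a 5 x 5 array
    # filled with don't-care pixels so that (i, j) becomes the center
    # (the reference pixel); its tile value is the reference value.
    templates = []
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            A = np.full((5, 5), DONT_CARE, dtype=int)
            A[1 - i:4 - i, 1 - j:4 - j] = TILE   # tile pixel (i,j) -> center
            ref = TILE[i + 1, j + 1]             # 1 for the center, else 0
            templates.append((A, ref))
    return templates
\end{verbatim}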
Let the point tile be stored in a $3\times3$ pixel array $[G(i,j)]$ with $i,j \in \{-1,0,+1\}$ and $G(i,j)\in \{0,1\}$. We reserve template arrays $[A_k(i,j)]$ $(k=1 ~...~ 9)$, with $i,j \in \{-2,\ldots,+2\}$ and $A_k(i,j)\in \{\#,0,1\}$. The pixel value '\#' is used for unfilled ``don't care pixels''. The size of the array has to be large enough to hold all ``real'' tile pixels $G(i,j) \in \{0,1\}$. For each pixel $G(i,j)$ a template $[A_k(i,j)]$ is derived by shifting $G$ in a way that each of the nine tile pixels $G(i,j)$ ends up as the center pixel of the \textit{k}-th template: $(i,j)_{\textit{of}~G}\rightarrow (0,0)_{\textit{of}~A_k}$. The \textit{k}-th pixel $(i,j)$ of $G$ is called the \textit{reference pixel} because it defines the center $(0,0)$ of the \textit{k}-th template $A_k$. Its value $G(i,j)$ is called the \textit{reference value} $\textit{ref}(A_k)=A_k(0,0)$. For a more precise definition we use the shift operator $\textit{shift}_{(\Delta x, \Delta y)}(G)$ that shifts the matrix $G$ in \emph{x}-direction by $\Delta x$ and in \emph{y}-direction (here defined downwards) by $\Delta y$. The 9 templates are generated by iterating over all tile pixels $(i,j)$: \vspace{8pt} \begin{tabular}{lll} $A_1 \leftarrow \textit{shift}_{(0,0)}(G)$ & $A_2 \leftarrow \textit{shift}_{(1, 1)}(G)$ & $A_3 \leftarrow \textit{shift}_{(0, 1)}(G)$\\ $A_4 \leftarrow \textit{shift}_{(-1, 1)}(G)$ & $A_5 \leftarrow \textit{shift}_{(-1, 0)}(G) $ & $A_6 \leftarrow \textit{shift}_{(-1, -1)}(G)$\\ $A_7 \leftarrow \textit{shift}_{(0, -1)}(G)$ & $A_8 \leftarrow \textit{shift}_{(1, -1)}(G) $ & $A_9 \leftarrow \textit{shift}_{(1, 0)}(G)$ \end{tabular} \vspace{8pt} The reference values are `1' for $A_1$, and `0' for $A_2, A_3,\ldots, A_9$. We have to be aware that the array size for storing the templates $A_k$ may be larger than the array size of the point tile because of the shift operation. An array of size $(3+2\,\textit{Xshift}) \times (3+2\,\textit{Yshift})$ is sufficiently large, where \textit{Xshift} / \textit{Yshift} is the maximal needed shift count in $x$- / $y$-direction, i.e. $|i|_{max}$ resp. $|j|_{max}$. Thus we need a $5 \times 5$ array to store any of the point templates. The don't-care symbol '\#' is used for pixels that are not relevant, in order to fill up the whole template array. For example, the templates $A_1, A_2, A_3$ can be encoded and stored as follows \vspace{9pt} \renewcommand{\baselinestretch}{.6}\normalsize \begin{minipage}[h]{.9\textwidth} \begin{verbatim}
        #####            #####            #####
        #000#            #####            #####
   A1 = #010#       A2 = ##000       A3 = #000#
        #000#            ##010            #010#
        #####            ##000            #000# .
\end{verbatim} \end{minipage} \renewcommand{\baselinestretch}{1}\normalsize \vspace{9pt} By rotation, the templates $A_4, A_6, A_8$ can be generated from $A_2$. Similarly $A_5, A_7, A_9$ can be generated from $A_3$. For our later designed CA rules we also use so-called ``\textit{neighborhood templates}''. We obtain the neighborhood template $A_i^*$ from the template $A_i$ by setting the center to '\#'. Note that this pixel is the reference pixel. \vspace{9pt} \renewcommand{\baselinestretch}{.6}\normalsize \begin{minipage}[h]{.9\textwidth} \begin{verbatim}
         #####            #####            #####
         #000#            #####            #####
   A1* = #0#0#      A2* = ###00      A3* = #0#0#
         #000#            ##010            #010#
         #####            ##000            #000#
\end{verbatim} \end{minipage} \renewcommand{\baselinestretch}{1}\normalsize \vspace{9pt} \begin{figure}[htb] \centering \includegraphics[width=8cm]{Figures/MatchingTemplates.png} \caption{ For each cell of a valid point pattern there exists at least one matching template (a hit).
} \label{MatchingTemplates} \end{figure} Note that for each cell of a valid point pattern there exists at least one matching template, a hit; see the example in Fig. \ref{MatchingTemplates}. \subsection{Reduced Templates} It turned out by simulations of the CA rules (following Sect. \ref{RuleDesign}) that the template size of $5 \times 5$ can be reduced to $3 \times 3$. \vspace{9pt} \renewcommand{\baselinestretch}{.6}\normalsize \begin{minipage}[h]{.9\textwidth} \begin{verbatim}
        000          ###          ###
   A1 = 010     A2 = #00     A3 = 000
        000          #01          010
\end{verbatim} \end{minipage} \renewcommand{\baselinestretch}{1}\normalsize \vspace{9pt} It is even possible to further simplify $A_2$ and $A_3$ (as well as their symmetrical ones). The reason is that it is sufficient to test for neighboring zeroes where there is a 1 (constraint 1) and to test for at least one 1-neighbor where there is a 0 (constraint 2). \renewcommand{\baselinestretch}{.6}\normalsize \begin{table}[h!] \begin{verbatim}
        ###        ###          ###        ###
   A2 = #00   ->   #0#     A3 = 000   ->   #0#
        #01        ##1          010        #1#
\end{verbatim} \end{table} \renewcommand{\baselinestretch}{1}\normalsize \vspace{-9pt} \section{The Designed Rules} \label{RuleDesign} We now present four designed rules that can be applied one after the other in the same time slot. \begin{itemize} \item \textbf{Rule A.} The current cell state is adjusted to the reference value if a neighborhood template matches. \item \textbf{Rule B.} Noise is injected if there is no hit (match). This rule ensures that no cells remain uncovered. \item \textbf{Rule C1.} In order to maximize the number of points, noise is injected if there is exactly one hit. This rule drives the evolution to higher cover levels. \item \textbf{Rule C2.} This rule drives the evolution to stable max patterns if $n$ is even. \end{itemize} \subsection{Basic Rule A: Adjust} \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/Adjust.png} \caption{ Example for Rule A. A CA cell is adjusted if a neighborhood template matches. Otherwise the cell's state remains unchanged. } \label{Adjust} \end{figure} The templates can be used to test a pattern for validity. In fact, the templates are defined in such a way that for each cell $(x,y)$ of a valid pattern there exists at least one matching template (Fig. \ref{MatchingTemplates}). The basic idea is to complete partially matching templates. When the tested cells in the neighborhood of cell $(x,y)$ are equal to the template except for the center (the reference pixel of the template), then cell $(x,y)$ is adjusted to the correct value (the reference value). So we may use the neighborhood templates $A_k^*$ for testing against the corresponding CA neighbors and adjust accordingly. This is performed by the following Rule A (case (a)). Otherwise the new state is equal to the old state (case (b)). \[ s'(x,y) = \left \{ \begin{array}{lll} \textit{ref}(A_i) &\textbf{if} ~\exists A_i^*~\textit{that matches with CA neighbors at $(x,y)$} &(a) \\ s(x,y) &\textbf{otherwise} \textit{~no change} &(b) \\ \end{array} \right. . \] This rule evolves stable point patterns very fast. During the process of the CA rule application the pattern may be valid only partially: some tiles are detected through template testing, but there exist some noisy or uncovered cells. The number of template matches at a cell is called its number of ``hits''. \begin{figure}[htb] \centering \includegraphics[width=9cm]{Figures/SimulationRuleA.png} \caption{ The evolution of a $3 \times 3$ field under different initial conditions and strict index update order.
} \label{SimulationRuleA} \end{figure} Fig. \ref{SimulationRuleA} shows simple simulations of a $3 \times 3$ field with strict index update order, with different initial states: (a) all zero, (b) all one, and (c) random. \begin{itemize} \item[(a)] We observe that a point appears already at micro time-step $\tau=1$. This happens because already for the first cell the neighborhood template $A_1^*$ is fulfilled: `0's are detected in the whole Moore-neighborhood (taking the cyclic boundary condition into account). The pattern remains stable, because all other cells match with a template (a `1' is detected in the neighborhood). \item[(b)] 8 micro time-steps are needed to evolve a stable point pattern (with one point). At $\tau=0$ the first cell is checked for matching templates. Since all 8 reduced neighborhood templates $A_{i\geq 2}^*$ match (there exist `1'-neighbors), the active cells are set to `0', step by step in strict index order, until only one `1' (the point) is left over. \item[(c)] 7 micro time-steps are needed to evolve a stable point pattern, starting with a random configuration. The number of time-steps depends on the actual initial configuration. Averaging over 100 random initial configurations we get $\tau_{average}= 6.5$ (min 2, max 8). \end{itemize} \begin{table}[htb] \caption{ Rule A applied on a field of size $3 \times 3$. Micro time-steps needed for different update orders, starting with an all-one configuration. All 100 evolved patterns are stable and contain 1 point.} \begin{center} \begin{tabular}{ |c | c | c | c |} \hline $3\times 3$ \textit{init all one} & $\tau_{average}$ & $\tau_{min} - \tau_{max}$ & \textit{time-steps}= \\ & & & $\tau_{average}/9$ \\ \hline strict index order & 8 & 8 - 8 & 0.89 \\ \hline random sequence & 17.03 & 10 - 32 & 1.89 \\ \hline pure random & 19.90 & 12 - 27 & 2.21 \\ \hline \end{tabular} \end{center} \label{TableRuleA3x3} \end{table} \begin{table}[htb] \caption{ Rule A applied on a field of size $4 \times 4$. Micro time-steps needed for different update orders, starting with different initial configurations. All 100 evolved patterns are stable and contain 4 points.} \begin{center} \begin{tabular}{ |c | c | c | c |} \hline $4\times 4$ \textit{init all zero} & $\tau_{average}$ & $\tau_{min} - \tau_{max}$ & \textit{time-steps}= \\ & & & $\tau_{average}/16$ \\ \hline strict index order & 11 & 11 - 11 & 0.69 \\ \hline random sequence & 25.64 & 7 - 87 & 1.60 \\ \hline pure random & 21.67 & 6 - 52 & 2.17 \\ \hline \hline \hline $4\times 4$ \textit{init all one} & $\tau_{average}$ & $\tau_{min} - \tau_{max}$ & \textit{time-steps}=\\ & & & $\tau_{average}/16$ \\ \hline strict index order & 26 & 26 - 26 & 1.63 \\ \hline random sequence & 50.23 & 16 - 102 & 3.14 \\ \hline pure random & 49.68 & 28 - 81 & 3.11 \\ \hline \hline \hline $4\times 4$ \textit{init random } & $\tau_{average}$ & $\tau_{min} - \tau_{max}$ & \textit{time-steps}=\\ & & & $\tau_{average}/16$ \\ \hline strict index order & 24.06 & 16 - 28 & 1.50 \\ \hline random sequence & 45.29 & 13 - 136 & 2.83 \\ \hline pure random & 43.29 & 7 - 126 & 2.71 \\ \hline \end{tabular} \end{center} \label{TableRuleA4x4} \end{table} Another test was performed on the $3 \times 3$ field in order to assess the influence of the update order. Starting with an initial configuration of all ones, we get the results shown in Table \ref{TableRuleA3x3}. For averaging, 100 runs were performed. A stable point pattern evolved fastest with strict order updating.
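With the reduced templates, Rule A collapses to counting points in the Moore-neighborhood: $A_1^*$ matches iff all 8 neighbors are `0', and one of the simplified $A_{k\geq 2}^*$ matches iff some neighbor is `1'. A minimal sketch (our own illustration, compatible with the \texttt{time\_step} loop given earlier):
\begin{verbatim}
def rule_A(field, x, y, rng=None):
    # Reduced-template Rule A: count the points in the Moore-neighborhood.
    n = field.shape[0]
    ones = sum(int(field[(x + dx) % n, (y + dy) % n])
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))
    return 1 if ones == 0 else 0   # A1* matched / some A_k* (k >= 2) matched
\end{verbatim}
Repeatedly calling \texttt{time\_step(field, rule\_A)} should reproduce the qualitative behavior reported above, e.g. clearing an all-one $3 \times 3$ field cell by cell until a single point remains.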
Though this is only a small example, it shows that strict sequential updating can perform very well, which we can also observe for larger fields. Nevertheless a deeper analysis is necessary to confirm this observation. Another test was performed for a $4 \times 4$ field, varying the initial condition (all zero, all one, random) and the update order (Table \ref{TableRuleA4x4}). 100 runs were performed for averaging. Starting with an initial all-one or a random configuration, random sequence order and pure random show about the same performance. We have to notice that the performance results vary statistically if another set of runs is analyzed, so these results could be evaluated more accurately by increasing the number of runs (e.g., up to 10 000). Nevertheless we have observed that all these different sequential updating orders worked well, where strict index order worked fastest. Now, by simulation of the Cellular Automata Rule A, we want to get an impression of the patterns that can evolve for increasingly large fields. \subsubsection{4 $\times$ 4 Patterns} \begin{figure}[htb] \centering \includegraphics[width=12cm]{Figures/SimulationRuleA4x4.png} \caption{ The evolution of a $4 \times 4$ field under different initial conditions and random sequence update order. } \label{SimulationRuleA4x4} \end{figure} Fig. \ref{SimulationRuleA4x4} shows the evolution of a $4 \times 4$ field under different initial conditions and random sequence update order. Similar 4-point patterns evolved. Notice that also equivalent patterns under reflection, shift and rotation may evolve. \subsubsection{5 $\times$ 5 Patterns} \begin{figure}[htb] \centering \includegraphics[width=12cm]{Figures/RuleA5x5.png} \caption{ Some valid $5 \times 5$ patterns evolved by Rule A. (a) Patterns with $p=4,5$ points. (b) Quad pattern representation: the (a) patterns are doubled in horizontal and vertical direction. Quad patterns allow one to observe the regularities better. } \label{RuleA5x5} \end{figure} Now we can observe from Fig. \ref{RuleA5x5} that patterns with 4 or 5 points may evolve, whereas for the size $4 \times 4$ only patterns with 4 points evolve. We call valid patterns with a minimum number of points \emph{min patterns} and with a maximum number of points \emph{max patterns}. The average cover level is $v_{avrg}(p,n)=9p/n^2$, where $p$ is the number of points. So we get $v_{avrg}(4)=36/25=1.44$ for 4 points and $v_{avrg}(5)=1.8$ for 5 points. There are different 4-point patterns with a different layout of the points. The point layouts of Fig. \ref{RuleA5x5}a show a different distribution of the cover level. The first 4-point pattern has the occurrence/frequency $f(v)$ of $(4=p)+12, 8, 0, 1$ for $v=1,2,3,4$. We may use the equivalent notation $1^{4+12} 2^8 3^0 4^1$. Equivalent patterns under symmetries have the same cover level distribution. The second 4-point pattern has the distribution $1^{4+12} 2^7 3^2 4^0$. The third 4-point pattern has the distribution $1^{4+11} 2^9 3^1 4^0$. The fourth 4-point pattern has the distribution $1^{4+12} 2^7 3^2 4^0$. The second and fourth pattern have the same distribution but are not similar under symmetries. The fifth pattern has 5 points and the distribution $1^{5+0} 2^{20} 3^0 4^0$. It is the most regular: the distance from any point to its four nearest neighbors is equal ($\Delta x=1, \Delta y=2$).
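The cover levels and the occurrence notation $1^{a} 2^{b} 3^{c} 4^{d}$ can be computed directly from a pattern; the following sketch (our own naming) uses the fact that the cover level of a cell equals the number of points in its closed Moore-neighborhood:
\begin{verbatim}
import numpy as np
from collections import Counter

def cover_levels(field):
    # v(x,y) = number of point tiles covering cell (x,y) = number of
    # points within Chebyshev distance 1 (including the cell itself)
    n = field.shape[0]
    v = np.zeros((n, n), dtype=int)
    for x in range(n):
        for y in range(n):
            v[x, y] = sum(int(field[(x + dx) % n, (y + dy) % n])
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    return v

def distribution(field):
    # occurrence f(v) of each cover level, e.g. {1: 16, 2: 8, 4: 1}
    return dict(sorted(Counter(cover_levels(field).ravel().tolist()).items()))
\end{verbatim}
As a consistency check, $\sum_{x,y} v(x,y) = 9p$ for a gap-free pattern with $p$ points, which gives the average cover level $v_{avrg}(p,n)=9p/n^2$ used above.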
In order to better recognize the inherent periodic pattern structure we can use the ``quad representation'' (Fig. \ref{RuleA5x5}b): the pattern is repeated 4 times, twice in horizontal and twice in vertical direction. This representation also allows one to detect equivalent patterns under symmetries. In general we may repeat the pattern horizontally \emph{U} times and vertically \emph{V} times because of the periodic boundary condition. In the following section (Fig. \ref{RuleA6x6}b) we can see a $6 \times 6$ pattern repeated 9 times $(U=V=3)$. \subsubsection{6 $\times$ 6 Patterns} \begin{figure}[htb] \centering \includegraphics[width=8cm]{Figures/RuleA6x6.png} \caption{ Some valid $6 \times 6$ patterns evolved by Rule A. (a) Patterns with $p=4 ~\dots~ 9$ points. (b) An 8-point pattern repeated 3 times in horizontal and in vertical direction. } \label{RuleA6x6} \end{figure} Several evolved $6 \times 6$ patterns are shown in Fig. \ref{RuleA6x6}a. The range of points lies between 4 and 9. 4-point valid patterns consist of tiles that nowhere overlap, the cover level is $v=1$ everywhere, and the horizontal and vertical distance between points is 3. In the most regular 9-point pattern the distance between points is 2, and the cover level distribution is $1^{9+0} 2^{18} 3^0 4^9$. The last two 9-point patterns are similar under rotation and shift, and their cover level distributions are both $1^{9+0} 2^{12} 3^{12} 4^3$. No cover levels are shown in Fig. \ref{RuleA6x6}b (right) in order to expose the pattern's structure. \begin{figure}[t!] \centering \includegraphics[width=9cm]{Figures/RuleA7x7.png} \caption{ Some valid $7 \times 7$ patterns evolved by Rule A. (a) Patterns with $p=7 ~\dots~ 10$ points. (b) The cover levels $v=1$ and $v=2$ were replaced by dots. This representation emphasizes the sites with cover levels 3 or 4, allowing equivalent patterns to be detected more easily; e.g. the last two patterns are similar under symmetry. } \label{RuleA7x7} \end{figure} \subsubsection{7 $\times$ 7 Patterns} \begin{figure}[t!] \centering \includegraphics[width=9cm]{Figures/RuleA8x8.png} \caption{ Some valid $8 \times 8$ patterns evolved by Rule A with $p=9 ~\dots~ 16$ points. } \label{RuleA8x8} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/minmax.png} \caption{ Some patterns evolved by Rule A with a \emph{minimal} or \emph{maximal} number of points, \textit{min patterns} resp. \emph{max patterns}, for $n=2 \dots 8$. } \label{minmax} \end{figure} Some evolved $7 \times 7$ patterns are shown in Fig. \ref{RuleA7x7}a. The number of points ranges from 7 to 10. The first one is the most regular, showing the cover level distribution $1^{7+28} 2^{14} 3^{0} 4^{0} = 1^{p+(n^2-3p)} 2^{2p} 3^{0} 4^{0}$ with $p=n=7$. In Fig. \ref{RuleA7x7}b only cover levels 3 and 4 are shown. This representation can help to detect similar patterns more easily, e.g. the last two ones are similar. In addition, sites or local regions with high levels of overlap (3 and 4) are highlighted. For instance, in the five 9-point patterns the overlap levels vary: $3^4, 3^6, 3^6, 3^5 4^1, 3^2 4^3$; and in the six 10-point max patterns we find $3^7, 3^7, 3^7, 3^6, 3^4 4^1, 3^4 4^1$. \subsubsection{8 $\times$ 8 Patterns} Fig. \ref{RuleA8x8} depicts evolved $8 \times 8$ patterns with 9 to 16 points. Some patterns can easily be constructed. The last one can be composed of 4 rectangles (double lines) of size $2 \times 8$ containing 4 points each.
Patterns with $p=16-M$ points $(M=0...4)$ can be composed of $M$ rectangles with 3 points and $4-M$ rectangles with 4 points. Such patterns with a high regularity of construction by a human designer are more difficult to find by the CA rule. Instead, most probably more irregular patterns are evolved, which usually are difficult to construct by hand. \subsubsection{Min and Max Patterns} Min and max patterns for fields up to $8 \times 8$ are shown in Fig. \ref{minmax}. The min patterns are unique for $n=2,3,4,7$, and the max patterns are unique for $n=2,3,4$. \subsection{Additional Rule B: Deleting Gaps} Rule A also works fine for other tile shapes, like dominoes \cite{2019-pact-domino,Hoffmann-2021,Hoffmann-2021-MinimalDominoPact}. Depending on the shape, Rule A may produce patterns with uncovered cells (gaps). During many simulations it was never observed that the point tile used here produces gaps. So the following rule shall be used in cases where gaps may appear, like for dominoes. The rule injects additional noise at sites where no hits (matching templates) occur. In an implementation it is useful to use an additional temporary variable $hit$ that stores the number of hits (template matches). So we use the cell state $(s, hit)$, where $s\in\{0,1\}$ is the cell's pattern state and $hit\in\{0,1,2,3,4\}$. In our case, using the point tile, the hit value need not be stored between generations, but for other tiles and objectives (e.g. minimizing the number of tiles) it seems to be necessary that the cell rule can use the hits in its neighborhood, either the hits just updated during the actual computation of the new generation, or the hits from the previous generation. As noted before, during the process of CA rule application the pattern may be only partially valid: some tiles are detected, but there exist some noisy or uncovered cells. The hits approximate the cover level and converge to it when the pattern gets valid and stable. The additional Rule B is \[ s''(x,y) = \left \{ \begin{array}{lll} random\in\{0,1\} &with ~\textit{prob.} ~\pi_0 ~~\textbf{if} ~hit(x,y)=0 &(c)\\ s'(x,y) &\textbf{otherwise} \textit{~no change} &(d) \\ \end{array} \right. . \] For our simple point tile problem, the influence of the probability $\pi_0$ on the convergence speed and the resulting patterns is negligible up to about $\pi_0=0.20$. We have used $\pi_0=0.01$ in the further experiments. \subsection{Additional Rule C1: Maximizing the Number of Points} When Rule A is applied, valid point patterns are evolved with a point number that lies between the minimum and the maximum, with a peak approximately at $p=(p_{min}+p_{max})/2$. How can patterns be evolved with a high probability of $p=p_{max}$? The idea is to inject additional noise when the local cover level $v$ is low, in the expectation of yielding a denser solution. So noise is injected where $v=1$ and the cell is a white/green border cell with state zero: \[ s'''(x,y) = \left \{ \begin{array}{lll} 1 &with ~\textit{prob.} ~\pi_1 ~\textbf{if} ~hit(x,y)=1 \wedge s''(x,y) = 0 &(e)\\ s''(x,y) &\textbf{otherwise} \textit{~no change} &(f) \\ \end{array} \right. . \] This yields the combined rule ABC1, which means that rule A, rule B, and rule C1 are executed sequentially at the current site $(x,y)$. \subsubsection{Simulation Results Using Rule ABC1} \begin{table}[htb] \centering \caption{ Optimal or near-optimal max point patterns are evolved by rule ABC1.
The time $t_{stable}$ to reach a stable pattern is short. } \includegraphics[width=10cm]{Figures/TableRuleABC1.png} \label{TableRuleABC1} \end{table} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/Simulation10x10-24points.png} \caption{ The evolution of a near-optimal pattern with 24 points. The pattern is stable for $t\geq8$. Squares 22/22 like the marked one have to be dissolved in order to yield max patterns. Points are colored in blue, state = 1. Colors for state = 0 and for cover levels: 0 (white), 1 (yellow), 2 (light green), 3 (green), 4 (dark green). } \label{Simulation10x10-24points} \end{figure} The performance of rule ABC1 with $\pi_0=0.01$ and $\pi_1=0.25$ was evaluated in 100 simulation runs, and averages were computed for $n$ even (Table \ref{TableRuleABC1}). The evolved patterns show a high number of points; they are max patterns or close to them. For instance, for $10 \times 10$ fields ($n=10$) the maximal number of points is $p_{max}=25$, and the average number of evolved points is $p_{avrg}=24.71$, so in this case 71\% were max patterns. One can see that the speed of convergence is high, and the time $t_{stable}$ to find an optimal or near-optimal max pattern is low. We then analyzed, for $n$ even, why optimal max point patterns do not always evolve. In Fig. \ref{Simulation10x10-24points} we see the evolution of a near-optimal $10\times 10$ pattern with 24 points, not with the maximum of 25 points at which we aim. In such near-optimal stable patterns we can find $2\times 2$ squares with state 0 and cover level 2, surrounded by 4 cells with state 1. Such 22/22 squares have to be dissolved in order to yield max patterns. \subsection{Additional Rule C2 for \textit{n} Even} \noindent This additional rule is able to dissolve the mentioned 22/22-squares. Now the evolution converges to stable max patterns for $n$ even. The rule C2 is: \[ s''''(x,y) = \left \{ \begin{array}{lll} 1 &with ~\textit{prob.} ~\pi_2 ~~\\ &\textbf{if} ~hit(x,y)=2 \wedge \exists ! s'''(x\pm1,y\pm1) = 1 &(g)\\ s'''(x,y) &\textbf{otherwise} \textit{~no change} &(h) \\ \end{array} \right. . \] If the local cover level, approximated by \textit{hit}, is $v=2$, and there exists exactly one diagonal neighbor with state $s=1$ (a point), then this situation corresponds to the one mentioned before (22/22), which will be dissolved by noise injection. For computing the condition in (g) the following equivalence can be used: $\left( \exists ! s'''(x\pm1,y\pm1) = 1 \right) \equiv \left(1= \sum_{i=-1,+1}\sum_{j=-1,+1} s'''(x+ i,y+ j)\right)$. \vspace{5pt} The used rule probabilities are $\pi_0=0.01, \pi_1=0.25, \pi_2=0.02$. \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/Evolution10x10Fast.png} \caption{ The evolution of a found $10\times10$ max pattern using rule ABC1C2. } \label{Evolution10x10Fast} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/TimeStepsMax.png} \caption{ The average number of time-steps to reach a stable max pattern for $n$ even. } \label{TimeStepsMax} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/PointsVSTime.png} \caption{ The average number of points vs time-steps for a $10\times10$ field. The maximum is reached for time-steps in the range 8 ... 175 approximately. } \label{PointsVSTime} \end{figure} The evolution of a stable $10\times10$ max pattern is shown in Fig. \ref{Evolution10x10Fast}. Fig.
\ref{TimeStepsMax} shows the needed number of time-steps $t_{max}$ (average over 100 runs) vs $n$. The update method \textit{pure random} is about 1.6 times slower than \textit{random sequence}. As expected the computational effort increases strongly with the field size; for the given range we can fit the power law $t_{max} \approx 0.0894\, N^{1.336}$. Fig. \ref{PointsVSTime} shows the number of points vs the time-steps for a $10\times10$ field. We can see that the maximum of points is reached after about 80 time-steps on average. The fastest observed evolution took only a few time-steps, whereas the slowest took about 180 time-steps with a lot of fluctuations. \subsection{Does Rule C1 Work Also for \textit{n} Odd?} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/CaseOdd15.png} \caption{ Found $15\times15$ patterns using rule ABC1. The patterns were not stable. The first pattern is a constructed one. The others are evolved samples for a certain number of points. } \label{CaseOdd15} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=12cm]{Figures/Sequence9x9.png} \caption{ Evolving an optimal $9\times9$ 18-point pattern using rule ABC1, random sequence updating, $\pi_0=0.01, \pi_1=0.025$. } \label{Sequence9x9} \end{figure} Now the question arises whether the rule ABC1C2 also works for $n$ odd. The problem is that optimal max patterns may contain sites with cover levels 1 and 2 besides 3 and 4. For instance for $n=7$ (Fig. \ref{RuleA7x7}) there is an optimal pattern with a cover level distribution of $1^{10+5} 2^{27} 3^7$, and we find there also local 22/22 squares. Therefore optimal patterns may appear, but they are transients (not stable) because noise is injected for $v=1$ by rule C1 and for $v=2$ by rule C2. Rule ABC1C2 was checked against ABC1 with respect to achieving a higher number of points. It turned out that C2 is not useful for $n$ odd, because many sites with cover level 2 may be part of optimal patterns and therefore should not be destroyed. Therefore for $n$ odd it is better to use ABC1 only. We have to note that max patterns that may appear during the evolution are usually not stable for $n$ odd. In Fig. \ref{CaseOdd15} we see some $15\times15$ patterns with 49, 50, 51, 52 points that were evolved using rule ABC1. 100 simulation runs were performed with the time limits $T^{max}=200, 400, 800$. The frequency $f$ of the (49--52)-point patterns is also documented in Fig. \ref{CaseOdd15}. One can see that the probability to find a max pattern during the evolution increases with the maximal number of simulated time-steps $T^{max}$. Fig.~\ref{Sequence9x9} depicts a simulation sequence that evolves an optimal $9\times9$ pattern. The applied rule is again ABC1. \section{The Number of Points in Max Patterns and More Patterns} \begin{figure}[htb] \centering \includegraphics[width=8cm]{Figures/Special5x5And7x7.png} \caption{ Evolved max patterns of size $5\times5$ and $7\times7$. } \label{Special5x5And7x7} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/Special9x9And11x11And13x13.png} \caption{ Solutions found for $n=9,11,13$. } \label{Special9x9And11x11And13x13} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/SpecialPatterns.png} \caption{ Solutions found for $n=5,7,9,11,13$. The solutions for $n=5,9,13, ...$ seem to be unique and stable, without cells having cover level 1.
} \label{SpecialPatterns} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=10cm]{Figures/Special17x17.png} \caption{ (R) Points are colored in blue, state = 1. Colors for state = 0 and for cover levels: 0 (white), 1 (yellow), 2 (light green), 3 (green), 4 (dark green).\\ R(a--c) Evolved $17\times17$ patterns by rule ABC1: R(a) unstable with 67 points, R(b, c) stable with the maximum of 68 points. R(d) $21\times 21$ stable max pattern with 105 points.\\ (S) Only points are displayed. (T) Points and cover level 2 (in light red) are shown. \\ (U) R(b) is displayed using different colors for cover levels. } \label{Special17x17} \end{figure} Analyzing the evolved max patterns for $n$ odd, we constructed the following formula, which remains to be confirmed by a formal proof. \[ p_{max}(n) = \left \{ \begin{array}{lll} (n(n-1)-2)/4 &\textbf{if} ~n=3, 7, 11, ... = 3+4k, & k=0, 1, 2, 3 ... \\ n(n-1)/4 &\textbf{if} ~n=5, 9, 13, ... = 5+4k, &k=0, 1, 2, 3 ... \\ n^2/4 &\textbf{if} ~n ~\textit{even} \\ \end{array} \right. . \] The following table displays $p_{max}(n)$ for odd $n=3, \ldots, 21$. \vspace{9pt} \begin{minipage}[h]{.9\textwidth} \begin{verbatim}
n       3   5   7   9  11  13  15  17  19  21
pmax    1   5  10  18  27  39  52  68  85 105
\end{verbatim} \end{minipage} \vspace{9pt} Let us have a look at more evolved patterns. Fig. \ref{Special5x5And7x7}: There is only one very regular solution for $5\times5$ with 5 points. Three solutions were found for $7\times7$ that are not very regular. Fig. \ref{Special9x9And11x11And13x13}: For $9\times9$ and $13\times13$ we observe a similar regular construction principle. The found $11\times11$ patterns show structures that lie between the $9\times9$ and $13\times13$ structures. Fig. \ref{SpecialPatterns}: The patterns were arranged in this way in order to find the above formula for the number of points for $n$ odd. We observe a certain relation between the patterns for $n=5,9,13, ...$ and for $n=7, 11, ...$. Fig. \ref{Special17x17}: R(a) shows an unstable sub-optimal pattern, and R(b, c) show optimal $17\times17$ patterns. Interestingly, the optimal pattern R(b) does not follow the structure scheme observed for $n=5,9,13$, but R(c) does. The $21\times21$ max pattern R(d) shows again the structure scheme of $n=5,9,13$. Only the points are displayed in the patterns (S). Points and cover level 2 (in light red) are shown in (T). The bottom line (U) of patterns shows that different structures can be elucidated from R(b) if different colors for the cover levels are used. In fact we may derive different patterns by interpreting not only the state $s$ but also the cover level $v$. Therefore we may use such CA rules of pattern generation also to produce intentionally certain ``cover level patterns''. Patterns with $n=3+4k$ and near-optimal patterns may have some aesthetic value because their structure is not regular in a simple fashion. \newpage \section{Conclusion and Future Work} The aim was to find a cellular automata rule that can evolve a point pattern with a maximal number of points (max patterns). The problem was considered as a tiling problem using overlapping point tiles. Nine templates, used for local matching, are systematically derived from the point tile. The number of matches defines the number of hits. Four rules were designed that complement each other. Rule A adjusts the cell to the template's center value (reference value) if there is a hit; otherwise the cell's state remains unchanged.
Rule A is able to evolve stable point patterns very fast, but the number of points is seldom maximal. Rule B injects noise if no tile covers the cell (cover level zero) in order to avoid gaps. Rule C1 drives the evolution to a maximal number of points. Rule C2 drives the evolution to a stable optimum if the field length $n$ is even. A formula is presented that defines the maximal number of possible points. The patterns are more interesting if $n$ is odd. Two different structures for optimal patterns were found for $n=17$. It would be interesting to find the number and structure of possibly different optimal patterns for $n>17$, especially if $n$ is odd. The introduced method was already applied to other patterns (domino pattern, sensor point pattern) and offers a way to generate more complex patterns using more complex tiles. Future work can address the following problems \begin{itemize} \item parallel implementation, \item synchronous updating instead of asynchronous, \item using several tiles, more complex tiles, or more colors, \item using different grid dimensions, different regular grids (triangular, hexagonal, ...), or graphs in general, \item optimizing time efficiency using for instance a ``divide and conquer'' approach, \item relating this method to other methods, like constraints satisfaction, \item relating the problem to physical problems, like the attraction and repulsion of particles, \item consider the problem in the context of combinatorics, \item classify the results into the theory and applications of tilings, \item find interesting applications, like the generation of artistic patterns. \end{itemize} \DREIECK{3} \newpage \section{Appendix: Presentation 2019} \begin{figure}[h!] \centering \includegraphics[width=2.3cm]{Figures/s_p1.pdf} \includegraphics[width=2.3cm]{Figures/s_p2.pdf} \includegraphics[width=2.3cm]{Figures/s_p3.pdf} \includegraphics[width=2.3cm]{Figures/s_p4.pdf} \includegraphics[width=2.3cm]{Figures/s_p5.pdf} \includegraphics[width=2.3cm]{Figures/s_p6.pdf} \includegraphics[width=2.3cm]{Figures/s_p7.pdf} \includegraphics[width=2.3cm]{Figures/s_p8.pdf} \includegraphics[width=2.3cm]{Figures/s_p9.pdf} \includegraphics[width=2.3cm]{Figures/s_p10.pdf} \includegraphics[width=2.3cm]{Figures/s_p11.pdf} \includegraphics[width=2.3cm]{Figures/s_p12.pdf} \includegraphics[width=2.3cm]{Figures/s_p13.pdf} \includegraphics[width=2.3cm]{Figures/s_p14.pdf} \includegraphics[width=2.3cm]{Figures/s_p15.pdf} \includegraphics[width=2.3cm]{Figures/s_p16.pdf} \includegraphics[width=2.3cm]{Figures/s_p17.pdf} \includegraphics[width=2.3cm]{Figures/s_p18.pdf} \includegraphics[width=2.3cm]{Figures/s_p19.pdf} \includegraphics[width=2.3cm]{Figures/s_p20.pdf} \includegraphics[width=2.3cm]{Figures/s_p21.pdf} \includegraphics[width=2.3cm]{Figures/s_p22.pdf} \includegraphics[width=2.3cm]{Figures/s_p23.pdf} \includegraphics[width=2.3cm]{Figures/s_p24.pdf} \includegraphics[width=2.3cm]{Figures/s_p25.pdf} \includegraphics[width=2.3cm]{Figures/s_p26.pdf} \includegraphics[width=2.3cm]{Figures/s_p27.pdf} \includegraphics[width=2.3cm]{Figures/s_p28.pdf} \includegraphics[width=2.3cm]{Figures/s_p29.pdf} \includegraphics[width=2.3cm]{Figures/s_p30.pdf} \includegraphics[width=2.3cm]{Figures/s_p31.pdf} \includegraphics[width=2.3cm]{Figures/s_p32.pdf} \includegraphics[width=2.3cm]{Figures/s_p33.pdf} \includegraphics[width=2.3cm]{Figures/s_p34.pdf} \includegraphics[width=2.3cm]{Figures/s_p35.pdf} \includegraphics[width=2.3cm]{Figures/s_p36.pdf} \includegraphics[width=2.3cm]{Figures/s_p37.pdf} 
\includegraphics[width=2.3cm]{Figures/s_p38.pdf} \includegraphics[width=2.3cm]{Figures/s_p39.pdf} \includegraphics[width=2.3cm]{Figures/s_p40.pdf} \includegraphics[width=2.3cm]{Figures/s_p41.pdf} \includegraphics[width=2.3cm]{Figures/s_p42.pdf} \includegraphics[width=2.3cm]{Figures/s_p43.pdf} \includegraphics[width=2.3cm]{Figures/s_p44.pdf} \includegraphics[width=2.3cm]{Figures/s_p45.pdf} \includegraphics[width=2.3cm]{Figures/s_p46.pdf} \includegraphics[width=2.3cm]{Figures/s_p47.pdf} \includegraphics[width=2.3cm]{Figures/s_p48.pdf} \caption{ Slides presented at Summer Solstice Conference on Discrete Models of Complex Systems 2019, Dresden, July 15 - 17, 2019 } \label{presentation} \end{figure} \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} \vspace{-0.1cm} The enhancement of noisy speech in real acoustic scenarios is a challenging task, especially for low signal-to-noise ratios (SNRs) or non-stationary noises~{\cite{loizou2013speech}}. Recently, the advent of deep neural networks (DNNs) has significantly promoted the development of speech enhancement (SE)~{\cite{wang2018supervised}} and has led to large performance leaps over traditional methods. For conventional DNN-based methods, sophisticated network structures are often devised in an end-to-end manner to learn the nonlinear mapping relations between input and output pairs~{\cite{tan2019learning, luo2019conv}}. Despite being feasible and efficient, they lack interpretability as the whole network is designed in a black-box manner. As a solution, some more recent works attempt to decompose the original task into multiple sub-steps and provide intermediate supervision as prior information to boost the subsequent optimization progressively~{\cite{li2020speech, li2021simultaneous, hao2020masking}}. A growing number of results show that, compared with the single-stage paradigm with a blind prior, the introduction of a pre-estimated prior can lead to more accurate target estimation. Nonetheless, it remains unclear how to decompose the mapping task optimally, and current multi-stage strategies tend to be empirical and intuitive. Traditional statistical-signal-processing-based SE methods usually derive optimal complex-spectral or spectral-magnitude estimators~{\cite{gerkmann2014bayesian, gerkmann2012mmse}} following specific optimization criteria, \emph{e.g.}, maximum likelihood (ML), Bayesian maximum a posteriori (MAP), and minimum mean-square error (MMSE). In detail, when specific prior terms and conditional distribution assumptions are provided, the optimal parameter estimator can be obtained using Bayes' theorem. It is evident that the performance of these model-based methods largely hinges on the model accuracy and on the rationality of the prior distribution, which is often manually designed and can heavily degrade in adverse environments. In contrast, existing learning (NN)-based methods are highly encapsulated: they skip the prior estimation and map directly to the target in a data-driven manner. Therefore, it is attractive and meaningful to investigate the integration of both categories and leverage their respective merits. In this paper, we propose a model-driven network, termed \textbf{MDNet}, for monaural speech enhancement. Different from previous blind NN-based works, we devise our method following the MAP criterion, which empowers each module with better interpretability. Concretely, under the MAP guidance, our enhancement task is converted into a joint posterior estimation problem \emph{w.r.t.} speech and noise. Instead of manually designing the prior distributions as in traditional SE algorithms, we propose to learn the speech/noise priors from training data, which can effectively fit the real parameter distributions. Besides, an unfolding strategy is proposed, where we explicitly predict the prior gradient and update the target in each iterative step. To the best of our knowledge, this is the first time a deep prior gradient method is proposed in the speech front-end field, and we expect it to promote the combination of model-based and learning-based methods.
We conduct experiments on the WSJ0-SI84 and DNS-Challenge datasets, and quantitative results show that the proposed approach yields competitive performance over current top-performing SE systems. The rest of the paper is organized as follows. In Section~{\ref{sec:problem-formulation-and-map}}, we formulate the problem and introduce the MAP criterion. In Section~{\ref{sec:proposed-approach}}, the proposed approach is presented. The experimental setup is given in Section~{\ref{sec:experiments-setup}}, and we present the results and analysis in Section~{\ref{sec:results-and-analysis}}. Some conclusions are drawn in Section~{\ref{sec:conclusion}}. \vspace{-0.60cm} \section{Problem formulation and MAP} \label{sec:problem-formulation-and-map} \vspace{-0.1cm} In the short-time Fourier transform (STFT) domain, the observed mixture speech can be modeled as \begin{gather} \label{eq2} X_{k, l} = S_{k, l} + N_{k, l}, \end{gather} where $\left\{X_{k, l}, S_{k, l}, N_{k, l}\right\}$ are the corresponding variables at frequency index $k\in\left\{1,...,K\right\}$ and time index $l\in\left\{1,...,L\right\}$. In conventional DNN-based methods, the network serves as the mapping function to estimate the target from the input mixture, given as \begin{gather} \label{eq3} \widetilde{\mathbf{S}} = \mathcal{F}\left(\mathbf{X}; \mathbf{\Theta}\right), \end{gather} where $\mathcal{F}\left(\cdot;\mathbf{\Theta}\right)$ denotes the network function with parameter set $\mathbf{\Theta}$, and the tilde symbol denotes an estimated variable. However, as the network directly estimates the posterior probability from mixtures, it lacks the prior term and the performance may suffer from heavy degradation under low SNRs. To resolve this problem, we reformulate the problem based on the MAP framework: \begin{gather} \label{eq4} \resizebox{0.99\linewidth}{!}{$ \mathop{\arg\max}_{\mathbf{S}, \mathbf{N}} P\left(\mathbf{S}, \mathbf{N} | \mathbf{X}\right) \propto \mathop{\arg\max}_{\mathbf{S}, \mathbf{N}} P\left(\mathbf{X}| \mathbf{S}, \mathbf{N}\right) P\left(\mathbf{S}\right)P\left(\mathbf{N}\right), $} \end{gather} where $P\left(\mathbf{S}, \mathbf{N}|\mathbf{X}\right)$ denotes the joint posterior probability of $\left\{\mathbf{S}, \mathbf{N}\right\}$, $P\left(\mathbf{X}| \mathbf{S}, \mathbf{N}\right)$ is the conditional probability of $\mathbf{X}$, and $P\left(\mathbf{S}\right)$, $P\left(\mathbf{N}\right)$ denote the prior probabilities of speech and noise. Eqn.~({\ref{eq4}}) holds when speech and noise are assumed to be statistically independent. In traditional SE methods, speech and noise are often assumed to follow certain probability distributions, among which the complex Gaussian distribution is the most widely used~{\cite{ephraim1984speech}}. In contrast, in this study we focus on the reconstruction error $\mathbf{E} = \mathbf{X}-\mathbf{S}-\mathbf{N}$ and assume that it follows the zero-mean multivariate complex Gaussian probability density function (PDF), \emph{i.e.}, $\mathcal{N}_{\mathbb{C}}\left(\mathbf{0}, \mathbf{\Lambda}\right)$.
For modeling convenience, we assume $\mathbf{\Lambda}$ is time-invariant and take a negative logarithmic operation on both sides of Eqn.~({\ref{eq4}}); it can then be rewritten as \begin{gather} \label{eq5} \mathop{\arg\min}_{\mathbf{S}, \mathbf{N}}\left\|\mathbf{X} - \mathbf{S} - \mathbf{N}\right\|_{F}^{2} + \alpha_{S}\Psi_{S}\left(\mathbf{S}\right) + \alpha_{N}\Psi_{N}\left(\mathbf{N}\right), \end{gather} where $\left\{\Psi_{S}\left(\mathbf{S}\right), \Psi_{N}\left(\mathbf{N}\right) \right\}$ are the prior terms of speech and noise with distribution parameters $\left\{\alpha_{S}, \alpha_{N}\right\}$. In~{\cite{wang2021compensation}}, the authors revealed the compensation effect between the magnitude and phase during the complex spectrum recovery process. To dampen this effect, a collaborative complex spectrum reconstruction method was developed, where the complex spectrum recovery is decoupled into magnitude filtering with a range of $\left(0, 1\right)$ and complex residual mapping~{\cite{li2022glance}}. As such, we rewrite the speech and noise as \begin{gather} \label{eq6} \mathbf{S} = \mathbf{G}_{S}\mathbf{X} + \mathbf{R}_{S},\\ \mathbf{N} = \mathbf{G}_{N}\mathbf{X} + \mathbf{R}_{N}, \end{gather} where $\mathbf{G}$ and $\mathbf{R}$ refer to the real-valued gains and the complex residual components, respectively. If we replace the priors of $\left\{\mathbf{S}, \mathbf{N}\right\}$ by joint priors over $\left\{\mathbf{G},\mathbf{R}\right\}$ and further assume that the latter are statistically independent, Eqn.~({\ref{eq5}}) can be rewritten as \begin{equation} \label{eq7} \begin{split} \mathop{\arg\min}_{\mathbf{G}_{S}, \mathbf{G}_{N}, \mathbf{R}_{S}, \mathbf{R}_{N}}&\left\|(\mathbf{1}-\mathbf{G}_{S}-\mathbf{G}_{N})\mathbf{X} - \mathbf{R}_{S} - \mathbf{R}_{N}\right\|_{F}^{2} +\\ &\alpha_{G_{S}}\Psi_{G_{S}}\left(\mathbf{G}_{S}\right) + \alpha_{G_{N}}\Psi_{G_{N}}\left(\mathbf{G}_{N}\right) \\ &+\alpha_{R_{S}}\Psi_{R_{S}}\left(\mathbf{R}_{S}\right)+\alpha_{R_{N}}\Psi_{R_{N}}\left(\mathbf{R}_{N}\right). \end{split} \end{equation} \begin{figure*}[t] \centering \centerline{\includegraphics[width=1.75\columnwidth]{architecture.pdf}} \caption{(a) Overall diagram of the proposed framework. (b) Internal structure of the gradient estimator. (c) Internal structure of the residual gradient calculator. (d) Internal structure of the gain gradient calculator. (e) Internal structure of the recalibration encoding layer. (f) Internal operation of the consistency layer. Different modules are indicated with different colors for better visualization.} \label{fig:architecture} \vspace{-0.6cm} \end{figure*} \vspace{-0.35cm} \section{Proposed approach} \label{sec:proposed-approach} \vspace{-0.1cm} \subsection{Iterative gradient descent optimization} \label{sec:iteractive-gradient-descent-optimization} \vspace{-0.1cm} To solve the multi-target optimization problem in Eqn.~(\ref{eq7}), let us assume the prior terms are differentiable; the problem can then be addressed via the gradient descent method (GDM).
Specifically, in the $(q+1)$th iteration, we update the above four targets as follows: \begin{gather} \label{eq8} \resizebox{0.99\linewidth}{!}{$ \widetilde{\mathbf{G}}_{S}^{(q+1)} = \widetilde{\mathbf{G}}_{S}^{(q)} - \eta_{G_{S}}\left(\nabla_{G_{S}}\mathbf{T}^{(q)} + \alpha_{G_{S}}\nabla_{G_{S}}\Psi_{G_{S}}\left(\widetilde{\mathbf{G}}_{S}^{(q)}\right)\right), $} \end{gather} \begin{equation} \label{eq9} \resizebox{0.99\linewidth}{!}{$ \widetilde{\mathbf{G}}_{N}^{(q+1)} = \widetilde{\mathbf{G}}_{N}^{(q)} - \eta_{G_{N}}\left(\nabla_{G_{N}}\mathbf{T}^{(q)} + \alpha_{G_{N}}\nabla_{G_{N}}\Psi_{G_{N}}\left(\widetilde{\mathbf{G}}_{N}^{(q)}\right)\right), $} \end{equation} \begin{equation} \label{eq10} \resizebox{0.99\linewidth}{!}{$ \widetilde{\mathbf{R}}_{S}^{(q+1)} = \widetilde{\mathbf{R}}_{S}^{(q)} - \eta_{R_{S}}\left(\nabla_{R_{S}}\mathbf{T}^{(q)} + \alpha_{R_{S}}\nabla_{R_{S}}\Psi_{R_{S}}\left(\widetilde{\mathbf{R}}_{S}^{(q)}\right)\right),\\ $} \end{equation} \begin{equation} \label{eq11} \resizebox{0.99\linewidth}{!}{$ \widetilde{\mathbf{R}}_{N}^{(q+1)} = \widetilde{\mathbf{R}}_{N}^{(q)} - \eta_{R_{N}}\left(\nabla_{R_{N}}\mathbf{T}^{(q)} + \alpha_{R_{N}}\nabla_{R_{N}}\Psi_{R_{N}}\left(\widetilde{\mathbf{R}}_{N}^{(q)}\right)\right), $} \end{equation} where $\mathbf{\eta} = \left\{\eta_{G_{S}}, \eta_{G_{N}}, \eta_{R_{S}}, \eta_{R_{N}}\right\}$ denotes the optimization steps of the four parameters, and $\mathbf{T}^{(q)}$ denotes the quadratic term in Eq.~(\ref{eq7}). The total number of iterations is denoted by $Q$. While the gradient of the quadratic term can be easily calculated, it is not obvious how to obtain the gradient representations of the above prior terms. In this regard, we propose to learn the prior gradients with a network, so that they can be directly learned from training data. After the gradient descent, according to Eqn.~(\ref{eq6}), we can reconstruct the speech and noise components as \begin{gather} \label{eq12} \widetilde{\mathbf{S}}^{(q+1)} = \widetilde{\mathbf{G}}_{S}^{(q+1)}\mathbf{X} + \widetilde{\mathbf{R}}_{S}^{(q+1)}, \\ \widetilde{\mathbf{N}}^{(q+1)} = \widetilde{\mathbf{G}}_{N}^{(q+1)}\mathbf{X} + \widetilde{\mathbf{R}}_{N}^{(q+1)}. \end{gather} \vspace{-0.7cm} \subsection{Proposed model-driven framework} \label{sec:proposed-model-driven-framework} \vspace{-0.1cm} \subsubsection{Forward stream} \label{sec:forward-stream}\vspace{-0.1cm} To enable the gradient update iteratively, we devise an unfolding-style framework, whose overall diagram is shown in Fig.~{\ref{fig:architecture}}(a). It has four major parts, namely the feature extractor, the gradient estimator, the target update, and the consistency layer. Given the input noisy complex spectrum $\mathbf{X}\in\mathbb{R}^{2\times K\times L}$, it first passes through multiple 2D convolution (2D-Conv) layers with consecutive frequency downsampling operations to extract the spectral features, say, $\mathbf{F}$. In the $q$th GDM iteration, given the input $\left\{\mathbf{F}, \widehat{\mathbf{S}}^{(q-1)}, \widehat{\mathbf{N}}^{(q-1)} \right\}$, the gradient estimator predicts the prior gradients \emph{w.r.t.} the four parameters in Eqn.({\ref{eq7}}) and implements the gradient descent update; the updated parameters are utilized to reconstruct the speech and noise targets simultaneously. The updated targets are further modified via the consistency layer~{\cite{wisdom2019differentiable}} to serve as the inputs of the next iteration; its calculation process is detailed in Fig.~{\ref{fig:architecture}}(f).
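To make the above updates concrete, a minimal PyTorch-style sketch of one unfolded iteration is given below. It is a sketch under stated assumptions rather than the exact implementation: \texttt{prior\_nets} and \texttt{etas} are hypothetical placeholders standing in for the gradient estimator and the trainable steps $\mathbf{\eta}$, and the prior weights $\alpha$ are assumed to be absorbed into the learned prior gradients.
\begin{verbatim}
import torch

def unfolded_gdm_step(X, G_s, G_n, R_s, R_n, prior_nets, etas):
    """One unfolded gradient-descent update.

    X          : complex noisy spectrum, shape [B, K, L]
    G_s, G_n   : real-valued gains; R_s, R_n: complex residuals
    prior_nets : dict of callables returning the *learned* prior
                 gradients (placeholders for the gradient estimator)
    etas       : dict of (trainable) step sizes
    """
    # Error of the quadratic term T = ||(1 - G_s - G_n) X - R_s - R_n||_F^2.
    E = (1.0 - G_s - G_n) * X - R_s - R_n
    grad_T_G = -2.0 * (E * X.conj()).real   # dT/dG_s = dT/dG_n
    grad_T_R = -2.0 * E                     # dT/dR_s = dT/dR_n

    # Gradient descent with the learned prior gradients.
    G_s = G_s - etas["G_s"] * (grad_T_G + prior_nets["G_s"](G_s))
    G_n = G_n - etas["G_n"] * (grad_T_G + prior_nets["G_n"](G_n))
    R_s = R_s - etas["R_s"] * (grad_T_R + prior_nets["R_s"](R_s))
    R_n = R_n - etas["R_n"] * (grad_T_R + prior_nets["R_n"](R_n))

    # Target update: reconstruct speech and noise.
    S = G_s * X + R_s
    N = G_n * X + R_n
    return G_s, G_n, R_s, R_n, S, N
\end{verbatim}
In the actual framework, the consistency layer is further applied to $\{\mathbf{S}, \mathbf{N}\}$ before they enter the next iteration.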
For practical implementation, we unfold the process $Q$ times, and the whole forward stream is formulated as \begin{gather} \label{eq13} \mathbf{F} = \text{Encoder}\left(\mathbf{X}\right),\\ \mathbf{\Omega}^{(q)} = \text{GradientUpdate}\left(\mathbf{F}, \mathbf{\Omega}^{(q-1)}\right),\\ \left\{\widetilde{\mathbf{S}}^{(q)}, \widetilde{\mathbf{N}}^{(q)}\right\} = \text{TargetUpdate}\left(\mathbf{\Omega}^{(q)}\right),\\ \left\{\widehat{\mathbf{S}}^{(q)}, \widehat{\mathbf{N}}^{(q)}\right\} = \text{ConsistencyLayer}\left(\widetilde{\mathbf{S}}^{(q)}, \widetilde{\mathbf{N}}^{(q)}\right), \end{gather} where $\mathbf{\Omega}^{(q)}$ is the parameter set of the above four parameters and $q\in\left\{1,...,Q\right\}$. Note that since a decent parameter initialization is indispensable for the later gradient updates, we adopt a network with the same structure as the gradient estimator to generate the initial parameter estimates, \emph{i.e.}, $\widetilde{\mathbf{G}}_{S}^{(0)},\widetilde{\mathbf{G}}_{N}^{(0)}, \widetilde{\mathbf{R}}_{S}^{(0)}, \widetilde{\mathbf{R}}_{N}^{(0)}$. \vspace{-0.2cm} \subsubsection{Feature extractor} \label{sec:feature-extractor}\vspace{-0.1cm} As in~{\cite{li2022glance}}, in the feature extractor we utilize recalibration encoding layers (RELs) to gradually downsample the feature map and abstract the features. The internal structure of each REL is shown in Fig.~{\ref{fig:architecture}}(e). Besides the 2D-GLU~{\cite{dauphin2017language}}, a UNet-block with a residual connection~{\cite{qin2020u2}} follows, which is akin to UNet: it takes the current feature map as input and encodes the features further. Compared with a vanilla 2D-Conv, the UNet-block can explore features at multiple scales. Besides, the introduction of the residual connection can effectively recalibrate the feature distribution and preserve the spectral patterns. \vspace{-0.3cm} \subsubsection{Gradient estimator} \label{sec:gradient-estimattor} \vspace{-0.1cm} The internal structure of the proposed gradient estimator (GE) is presented in Fig.~{\ref{fig:architecture}}(b). As stated above, the input includes the extracted feature $\mathbf{F}$ and the modified speech $\widehat{\mathbf{S}}^{(q)}$ and noise $\widehat{\mathbf{N}}^{(q)}$ after the consistency layer. As Fig.~{\ref{fig:architecture}}(b) shows, three gradient calculators are adopted, where the gain gradient calculator (GGC) is used to derive the gain gradients of both speech and noise, and the other two complex residual gradient calculators (RGCs) aim at gradient prediction for the speech and noise residuals, respectively. Note that we share the GGC between speech and noise because the gain function actually serves as the speech presence probability (SPP) in traditional SE algorithms, and the gains of speech and noise are complementary from a statistical perspective~{\cite{yoshioka2015ntt}}. The internal structure of the RGC is presented in Fig.~{\ref{fig:architecture}}(c). Taking the speech branch as an example, we first flatten the complex spectrum into $\widehat{\mathbf{S}}^{(q)}\in\mathbb{R}^{2K\times L}$ and then concatenate it with $\mathbf{F}$ as the network input. The network first compresses the feature with a 1D-GLU and then models the gradient distribution with stacked temporal convolution networks (TCNs)~{\cite{bai2018empirical}}. To alleviate the parameter burden, we adopt the simplified version, termed S-TCNs~{\cite{zhang2020deepmmse}}, which dramatically decreases the number of parameters.
The prior gradients of the complex residual are obtained with two linear layers, namely for the real and imaginary (RI) parts. The GGC has a similar structure, except that the input is the concatenation of $\mathbf{F}$ and the magnitudes of speech and noise, say, $\text{Concat}\left( \mathbf{F}, |\widehat{\mathbf{S}}^{(q)}|, |\widehat{\mathbf{N}}^{(q)}|\right)$. The gain gradients \emph{w.r.t.} speech and noise are obtained via two linear layers. \vspace{-0.3cm} \subsubsection{Target fusion} \label{sec:target-fusion} \vspace{-0.1cm} After repeated target updates, we obtain the estimates of speech and noise, \emph{i.e.}, $\widetilde{\mathbf{S}}^{(Q)}$, $\widetilde{\mathbf{N}}^{(Q)}$. Then another problem arises: \emph{how can we fuse the estimated speech and noise components to obtain the final speech estimate?} A common strategy is to apply a network to estimate time-frequency (T-F) bin-level weights and fuse both components dynamically~{\cite{zheng2021interactive}}, \emph{i.e.}, $\mathbf{M}\widetilde{\mathbf{S}}^{(Q)} + \left(\mathbf{1} - \mathbf{M}\right)\left(\mathbf{X}- \widetilde{\mathbf{N}}^{(Q)}\right)$. However, this is a linear combination within each T-F bin. Motivated by~{\cite{li2020two}}, we propose a residual recalibration module to better fuse the speech and noise parts. Concretely, given the original noisy spectrum and the estimated speech and noise as inputs, a network is employed to estimate the residual structure, which is then added to the estimated speech: \begin{equation} \label{eq14} \widetilde{\mathbf{S}}^{(Q)'} \leftarrow \widetilde{\mathbf{S}}^{(Q)} + \text{FuseNet}\left(\mathbf{X}, \widetilde{\mathbf{S}}^{(Q)}, \widetilde{\mathbf{N}}^{(Q)}\right). \end{equation} Different from the linear combination, it works in a residual-learning manner and yields a nonlinear output, which can better leverage the complementarity between speech and noise. \vspace{-0.35cm} \subsubsection{Loss function} \label{sec:losss-function} \vspace{-0.1cm} As the unfolding structure is utilized in the forward stream, we adopt a weighted loss for network training, given as \begin{gather} \label{eq15} \mathcal{L} = \sum_{q=0}^{Q}\gamma_{q}\mathcal{L}_{q} + \zeta\mathcal{L}_{Q}^{(f)}, \end{gather} where $\gamma_{q}$ and $\zeta$ are the weighting coefficients, and $\mathcal{L}_{Q}^{(f)}$ denotes the loss in the target fusion stage. Here $\gamma_{q}$ and $\zeta$ are empirically set to 0.1 and 1, respectively. For $\mathcal{L}_{q}$ and $\mathcal{L}_{Q}^{(f)}$ we have \begin{equation} \label{eq16} \mathcal{L}_{q} =\frac{1}{2}\left( \mathcal{L}\left(\widetilde{\mathbf{S}}^{(q)\beta}, \mathbf{S}^{\beta}\right)+\mathcal{L}\left(\widetilde{\mathbf{N}}^{(q)\beta},\mathbf{N}^{\beta}\right)\right), \end{equation} \begin{equation} \label{eq17} \mathcal{L}_{Q}^{(f)} = \mathcal{L}\left(\widetilde{\mathbf{S}}^{(Q)'\beta}, \mathbf{S}^{\beta}\right), \end{equation} where the MSE criterion is adopted for network training and $\beta$ is the power compression coefficient, empirically set to 0.5~{\cite{li2021importance}}. Besides, the RI loss with a magnitude constraint is adopted, which can mitigate the compensation effect in the complex spectrum recovery~{\cite{li2020two}}. \vspace{-0.2cm} \renewcommand\arraystretch{0.85} \begin{table}[t] \caption{Ablation study on the proposed MDNet. The values are specified in PESQ/ESTOI(\%)/SISNR(dB) format. \textbf{BOLD} indicates the best score in each case.
All the values are averaged over different SNRs and noises in the test set.} \Large \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{c|ccccccc} \specialrule{0.1em}{0.25pt}{0.25pt} \multirow{2}*{Entry} &Param. &MACs &\multirow{2}*{$Q$} &Fusion &\multirow{2}*{PESQ$\uparrow$} &\multirow{2}*{ESTOI(\%)$\uparrow$} &\multirow{2}*{SISNR(dB)$\uparrow$}\\ &(M) &(G/s) & &type & & &\\ \specialrule{0.1em}{0.25pt}{0.25pt} 1a &\textbf{3.00} &\textbf{2.16} &0 &R &2.71 &74.05 &10.00\\ 1b &4.79 &2.34 &1 &R &2.75 &74.95 &10.50\\ 1c &6.57 &2.52 &2 &R &2.76 &75.44 &10.66\\ 1d &8.36 &2.70 &3 &R &2.79 &76.03 &10.81\\ 1e &10.15 &2.88 &4 &R &2.79 &76.06 &10.78\\ 1f &11.93 &3.06 &5 &R &\textbf{2.82} &\textbf{76.55} &\textbf{10.93}\\ 1g &13.72 &3.24 &6 &R &2.79 &76.21 &10.84\\ \specialrule{0.1em}{0.25pt}{0.25pt} 2a &7.85 &2.38 &3 &A &2.73 &74.41 &10.38\\ 2b &8.33 &2.68 &3 &G &2.77 &75.30 &10.59\\ \specialrule{0.1em}{0.25pt}{0.25pt} \end{tabular}} \label{tbl:ablation-studies} \vspace{-0.6cm} \end{table} \renewcommand\arraystretch{0.75} \begin{table*}[t] \setcounter{table}{2} \caption{Quantitative comparisons with other state-of-the-art systems on the DNS Challenge dataset. ``-'' denotes no published result.} \normalsize \centering \scalebox{0.85}{ \begin{tabular}{cccccccccccc} \specialrule{0.1em}{0.25pt}{0.25pt} \multirow{2}*{Methods} &\multirow{2}*{Year} &\multirow{2}*{Do.} &\multicolumn{4}{c}{w/ Reverberation} & \multicolumn{4}{c}{w/o Reverberation}\\ \cmidrule(lr){4-7}\cmidrule(lr){8-11} & & &WB-PESQ$\uparrow$ &PESQ$\uparrow$ &STOI(\%)$\uparrow$ &SISNR (dB)$\uparrow$ &WB-PESQ$\uparrow$ &PESQ$\uparrow$ &STOI(\%)$\uparrow$ &SISNR(dB)$\uparrow$\\ \specialrule{0.1em}{0.25pt}{0.25pt} Noisy &- &- &1.82 &2.75 &86.62 &9.03 &1.58 &2.45 &91.52 &9.07\\ NSNet~{\cite{reddy2020interspeech}} &2020 &T-F &2.37 &3.08 &90.43 &14.72 &2.15 &2.87 &94.47 &15.61\\ DTLN~{\cite{westhausen2020dual}} &2020 &T-F &- &2.70 &84.68 &10.53 &- &3.04 &94.76 &16.34\\ DCCRN~{\cite{hu2020dccrn}} &2020 &T-F &- &3.32 &- &- &- &3.27 &- &- \\ FullSubNet~{\cite{hao2021fullsubnet}} &2021 &T-F &2.97 &3.47 &92.62 &15.75 &2.78 &3.31 &96.11 &17.29\\ TRU-Net~{\cite{choi2021real}} &2021 &T-F &2.74 &3.35 &91.29 &14.87 &2.86 &3.36 &96.32 &17.55\\ CTS-Net~{\cite{li2020two}} &2021 &T-F &3.02 &3.47 &92.70 &15.58 &2.94 &3.42 &96.66 &17.99\\ GaGNet~{\cite{li2022glance}} &2022 &T-F &3.18 &3.57 &93.22 &16.57 &3.17 &\textbf{3.56} &97.13 &18.91 \\ \textbf{MDNet(Ours)} &2022 &T-F &\textbf{3.24} &\textbf{3.59} &\textbf{93.61} &\textbf{16.94} &\textbf{3.18} &\textbf{3.56} &\textbf{97.20} &\textbf{19.17}\\ \specialrule{0.1em}{0.25pt}{0.25pt} \end{tabular}} \label{tbl:dns1} \vspace{-0.5cm} \end{table*} \vspace{-0.2cm} \section{Experimental setup} \label{sec:experiments-setup} \vspace{-0.2cm} Two datasets are adopted to carry out the experiments. WSJ0-SI84~{\cite{paul1992design}} consists of 7138 utterances by 83 speakers (42 males and 41 females). 5428 and 957 utterances are selected for training and validation, respectively, and 150 utterances spoken by unseen speakers are used for testing. Around 20,000 noise clips are randomly selected from the DNS-Challenge noise set as the training noise set and mixed with the clean utterances under SNRs in $[-5\rm{dB}, 0\rm{dB}]$ with a $1\rm{dB}$ interval. For testing, three challenging unseen noises are chosen, namely babble and factory1 from NOISEX92~{\cite{varga1993assessment}}, and cafeteria from CHiME-3~{\cite{barker2015third}}, under three SNRs, \emph{i.e.}, $\left\{-3, 0, 3\right\}\rm{dB}$.
In total, we create 150,000 and 10,000 noisy-clean pairs for training and validation, respectively. For testing, 150 pairs are created for each SNR. The Interspeech2020 DNS-Challenge\footnote{https://github.com/microsoft/DNS-Challenge} provides 562.72 hours of clean clips from 2150 speakers and 181 hours of noise (60,000 clips from 150 classes). For model evaluation, it provides a non-blind validation set with two categories, namely with and without reverberation, each including 150 noisy-clean pairs. Following the script given by the organizer, we create around 3000 hours of pairs for training, and the input SNR ranges from $-5$ dB to $15$ dB. In the feature extractor, the kernel size and stride of the 2D-Convs are $\left(1, 3\right)$ and $\left(1, 2\right)$ along the time and frequency axes, respectively. For each UNet-block, the kernel size is $\left(2, 3\right)$. The channel number of the 2D-Convs remains 64 by default. Let us denote the number of (de)encoder layers in the $i$th UNet-block as $U_{i}$; then $U = \left\{4,3,2,1,0\right\}$, where $0$ means no UNet-block is used. Two groups of S-TCNs are employed, each of which includes four temporal convolution modules (TCMs), with the kernel size of the dilated Convs and the dilation rates being 3 and $\left\{1,2,5,9\right\}$, respectively. Note that we take causal convolution operations by zero-padding along the past frames. The optimization steps $\mathbf{\eta}$ are set trainable and initialized to 0.01. All the utterances are sampled at 16 kHz. A 20 ms Hanning window is utilized with 50\% overlap between adjacent frames. A 320-point FFT is utilized, leading to 161-D features along the frequency axis. The model is trained with PyTorch on an NVIDIA V100 GPU. The Adam optimizer is adopted for network training ($\beta_{1}=0.9$, $\beta_{2}=0.999$), and training runs for 60 epochs with a batch size of 8. The learning rate is initialized at 5e-4, and we halve it if the validation loss does not decrease for two consecutive epochs. For fusing the speech and noise components, we adopt the same ``Encoder-TCN-Decoder'' structure as~{\cite{li2020two}}, except that we adopt a lightweight version by halving the channel number of the 2D-Convs. \renewcommand\arraystretch{0.85} \begin{table}[t] \setcounter{table}{1} \caption{Quantitative comparisons with other SOTA systems on the WSJ0-SI84 dataset. Scores are averaged over different testing cases. ``Do.'' denotes the transform domain of the method.} \Large \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cccccccc} \specialrule{0.1em}{0.25pt}{0.25pt} \multirow{2}*{Methods} &\multirow{2}*{Year} &\multirow{2}*{Do.} &Param.
&MACs &\multirow{2}*{PESQ$\uparrow$} &ESTOI$\uparrow$ &SISNR$\uparrow$\\ & & &(M) &(G/s) & &(\%) &(dB) \\ \specialrule{0.1em}{0.25pt}{0.25pt} Noisy &- &- &- &- &1.82 &41.97 &0.00\\ DDAEC~{\cite{pandey2020densely}} &2020 &T &4.82 &36.56 &2.76 &74.84 &10.85 \\ DEMUCS~{\cite{defossez2020real}} &2020 &T &18.87 &4.35 &2.67 &76.23 &11.08\\ GCRN~{\cite{tan2019learning}} &2020 &T-F &9.77 &2.42 &2.48 &70.68 &9.21 \\ DCCRN~{\cite{hu2020dccrn}} &2020 &T-F&\textbf{3.67} &11.13 &2.54 &70.58 &9.47 \\ PHASEN~{\cite{yin2020phasen}} &2020 &T-F&8.76 &6.12 &2.73 &71.77 &9.38\\ FullSubNet~{\cite{hao2021fullsubnet}} &2021 &T-F &5.64 &31.35 &2.55 &65.89 &9.16\\ CTSNet~{\cite{li2020two}} &2021 &T-F &4.35 &5.57 &2.86 &76.15 &10.92 \\ GaGNet~{\cite{li2022glance}} &2022 &T-F &5.94 &\textbf{1.63} &2.86 &76.87 &10.93\\ \textbf{MDNet(Ours)} &2022 &T-F &8.36 &2.70 &\textbf{2.88} &\textbf{77.37} &\textbf{11.12} \\ \specialrule{0.1em}{0.25pt}{0.25pt} \end{tabular}} \label{tbl:wsj0-si84-result} \vspace{-0.6cm} \end{table} \renewcommand\arraystretch{0.85} \vspace{-0.4cm} \section{Results and analysis} \label{sec:results-and-analysis} \vspace{-0.2cm} \subsection{Ablation study} \label{sec:ablation-study} \vspace{-0.2cm} Around 100 hours of training data from the WSJ0-SI84 corpus are used to conduct the ablation study in terms of the step number $Q$ and the fusion type. Three evaluation metrics are utilized, namely PESQ~{\cite{rix2001perceptual}}, ESTOI~{\cite{jensen2016algorithm}}, and SISNR~{\cite{le2019sdr}}. Higher values indicate better performance. Quantitative results are shown in Table~{\ref{tbl:ablation-studies}}, and several observations can be made. First, with the increase of steps, one can observe notable performance improvements over the non-update case (entry 1a), which shows that gradient updating yields more accurate parameter estimation and thus better reconstruction results. However, with more steps, the performance tends to saturate and even degrade, \emph{e.g.}, from 1f to 1g. This can be explained as follows: the estimation with GDM tends to converge, and for a non-convex optimization problem GDM cannot guarantee consistent improvement as the number of steps increases. Second, we compare three fusion strategies, namely ``R'' (the proposed fusion method), ``G'' (dynamic weighting), and ``A'' (average), and one can see that our method yields the best performance among the above fusion schemes. It reveals the superiority of the proposed nonlinear residual fusion over dynamic weighting. Note that the average operation is a special case of dynamic weighting, where the weighting coefficients remain 0.5 for all T-F bins. However, as the local SNR varies a lot across frequency bands, it is not reasonable to combine both parts with fixed weights, as evidenced by the notable performance degradation from entry 2b to 2a. \vspace{-0.4cm} \subsection{Comparison with state-of-the-art methods} \label{sec:comparison-with-sota-methods} \vspace{-0.2cm} Based on the analysis in the ablation study, we choose entry 1d as the default network configuration, which balances well between computational complexity and performance, to compare with current top-performing SE systems. Quantitative results on the WSJ0-SI84 dataset are presented in Table~{\ref{tbl:wsj0-si84-result}}. Compared with the other eight baselines, the proposed approach yields the highest scores on all three objective metrics, validating the superiority of our method in speech quality and intelligibility.
Note that although our method has around 8.36 M trainable parameters, it is rather advantageous in MACs, say, 2.70 G/s, which shows that the number of trainable parameters and the computational complexity need not go hand in hand. We also report the quantitative results on the DNS-Challenge non-blind test set, as shown in Table~{\ref{tbl:dns1}}. The wide-band version of PESQ (WB-PESQ)~{\cite{itu862}} and STOI~{\cite{taal2010short}} are also listed for evaluation. From the results, one can see that, again, our method achieves the highest scores in the different metrics over previous top-performing systems, which further attests to the superiority of our method under both reverberant and anechoic environments. Note that, different from previous literature in which the mapping process lacks adequate interpretability, our method follows the MAP criterion and is explicitly optimized with the gradient descent method. Therefore, we think it is a promising direction to leverage the advantages of model-based methods to gradually open the black box of DNNs in the speech enhancement area. \vspace{-0.35cm} \section{Conclusions} \label{sec:conclusion} \vspace{-0.2cm} In this paper, we propose a model-driven approach to tackle single-channel speech enhancement. Specifically, based on the maximum a posteriori criterion, the original problem is formulated as the joint posterior estimation of speech and noise, and the deep prior distributions are learned by networks from the training data. The framework is devised with an unfolding structure, and the gradient descent method is employed to update the parameter estimates and facilitate the target reconstruction progressively. Besides, another network serves as the fusion module to further recover the speech component from the previous estimates. Experimental results on the WSJ0-SI84 and DNS-Challenge datasets show that the proposed approach performs favorably against previous top-performing SE systems. \vfill\pagebreak \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \vspace{5mm} The generalized Tower of Hanoi problem can be formally stated as follows. \begin{defn} Let $n$ and $p\geq3$ be natural numbers. Then the generalized Tower of Hanoi problem is the problem of moving $n$ ordered disks (we may number these disks from 1 to $n$) from an initial peg to another one, subject to the following conditions: 1. No larger disk can be on top of a smaller one. 2. A disk can be moved from one peg to another peg only when no other disks are on top of it. \end{defn} We simply call the generalized Tower of Hanoi problem with $n$ disks and $p$ pegs the $(n,p)-problem$. \begin{defn} For $n$ and $p\geq3$, $M(n,p)$ is the least number of moves needed to solve the $(n,p)-problem$. \end{defn} \begin{thm}[The original Tower of Hanoi problem] For a natural number $n$, $$M(n,3)=2^n-1.$$ \end{thm} \begin{thm}[A.A.K.Majumdar] For $n,p$, there exists a unique natural number $r$ satisfying $$ \binom{p+r-3}{p-2} \leq n < \binom{p+r-2}{p-2} $$ and $$M(n,p) \leq \sum_{t=0}^{r-1}{2^{t} \binom{p+t-3}{p-3}} +2^{r}\left(n-\binom{p+r-3}{p-2}\right).$$ \end{thm} \textbf{Proof.} (See [2], for example.) The existence of $r$ is clear. Define $ K(n,p)=\sum_{t=0}^{r-1}{2^{t} \binom{p+t-3}{p-3}} +2^{r}\left(n-\binom{p+r-3}{p-2}\right) $. We will prove the inequality by explicitly showing that it is possible to solve the $(n,p)-problem$ with exactly $K(n,p)$ moves. (1) We use induction on $p$ and then on $n$. First, for $p=3$, we have $M(n,3)=2^n-1=K(n,3)$ for all $n$. Assume that it is possible to solve the $(n,p)-problem$ with $K(n,p)$ moves for $3 \leq p \leq q-1$. For $p=q$, we use induction on $n$. For $n=1$, $K(1,q)=1$, and it is indeed possible to move a single disk with one move. Assume that (1) holds for $n \leq m-1$. For $n=m$, let $m=\binom{q+r-3}{q-2}+\alpha$ where $0 \leq \alpha < \binom{q+r-3}{q-3} $. Since $\binom{q+r-3}{q-3}=\binom{q+r-4}{q-4}+\binom{q+r-4}{q-3}$, there are non-negative integers $\beta, \gamma$ such that $\alpha=\beta+\gamma$, $\beta < \binom{q+r-4}{q-3}$, and $\gamma < \binom{q+r-4}{q-4}$. We call the peg on which all disks sit at the beginning the initial peg ($I$ for short) and the peg on which all disks sit at the end the final peg ($F$ for short). Also, since $p \geq 3$, we can pick a peg different from $I$ and $F$ and call it the middle peg ($M$ for short). Note that $m=\binom{q+r-3}{q-2}+\alpha = \left(\binom{q+r-4}{q-2}+\beta\right) +\left(\binom{q+r-4}{q-3}+\gamma\right)$. Define $k:=\binom{q+r-4}{q-2}+\beta$, so that $m-k=\binom{q+r-4}{q-3}+\gamma$. Then, we move the $m$ disks through the following process: \vspace{5mm} 1. Move disks 1 to $k$ from $I$ to $M$ with $K(k,q)$ moves. 2. Move disks $k+1$ to $m$ from $I$ to $F$ with $K(m-k,q-1)$ moves. (Note that we do not use the peg $M$ here.) 3. Move disks 1 to $k$ from $M$ to $F$ with $K(k,q)$ moves. \vspace{5mm} So far, we have moved the $m$ disks with $2K(k,q)+K(m-k,q-1)$ moves. Now it is enough to check that $$K(m,q)=2K(k,q)+K(m-k,q-1).$$ This can be shown by calculation. \vspace{5mm} We have $$K(k,q)=\sum_{t=0}^{r-2} 2^{t}\binom{q+t-3}{q-3}+2^{r-1}\beta$$ and $$K(m-k,q-1)=\sum_{t=0}^{r-1} 2^{t}\binom{q+t-4}{q-4}+2^{r}\gamma.$$ Thus, shifting the summation index in the first sum (the shifted $t=0$ term vanishes since $\binom{q-4}{q-3}=0$), $$2K(k,q)+K(m-k,q-1)=\sum_{t=0}^{r-1} 2^{t}\binom{q+t-4}{q-3}+\sum_{t=0}^{r-1} 2^{t}\binom{q+t-4}{q-4}+2^{r}(\beta+\gamma)$$ $$=\sum_{t=0}^{r-1} 2^{t}\left(\binom{q+t-4}{q-3}+\binom{q+t-4}{q-4}\right)+2^{r}\alpha=K(m,q),$$ which finishes the proof.
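For readers who wish to experiment, the closed form of $K(n,p)$ is easy to evaluate; the following Python sketch (the function name is ours) computes it directly and checks it against the classical three-peg value.
\begin{verbatim}
from math import comb

def K(n, p):
    """Frame-Stewart number K(n, p) in the closed form above."""
    if p == 3:
        return 2**n - 1          # the original Tower of Hanoi
    if n == 0:
        return 0
    r = 1                        # unique r with C(p+r-3,p-2) <= n < C(p+r-2,p-2)
    while not comb(p + r - 3, p - 2) <= n < comb(p + r - 2, p - 2):
        r += 1
    return (sum(2**t * comb(p + t - 3, p - 3) for t in range(r))
            + 2**r * (n - comb(p + r - 3, p - 2)))

assert K(5, 3) == 31             # 2^5 - 1
print(K(4, 4))                   # 9, the known minimal number of moves for (4,4)
\end{verbatim}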
Note that the proof works for every possible $\beta$ and $\gamma$ satisfying the conditions, which implies that the minimal solution might not be unique. \section{The Frame-Stewart Conjecture} The Frame-Stewart Conjecture states that the DP algorithm in the proof of the previous theorem is actually optimal, and thus $M(n,p)=K(n,p)$. \begin{con}[Frame-Stewart Conjecture] For $n,p$, $M(n,p)=K(n,p)$ holds. \end{con} The conjecture indeed holds for $p=3$. \section{Preliminary Facts} For a natural number $x$, we define $\bar{x} :=\{1,2,..,x\}$. \begin{defn} Given $n,p$, a \textbf{state} of the $(n,p)-problem$ (an $(n,p)-state$ in short) is an allocation of the $n$ disks on the $p$ pegs. Formally, a state is equivalent to a function $f:\bar{n} \to \bar{p}$. We define the set of all states of the $(n,p)-problem$ as $$X(n,p):=\{f:\bar{n} \to \bar{p}\}.$$ \end{defn} \begin{defn} Given $n,p$ and two states $f,g$ of the $(n,p)-problem$, a \textbf{path} connecting $f$ and $g$ is a finite sequence of $(n,p)-states$ whose initial term is $f$ and whose final term is $g$. If $P=\{P_{0}=f, P_{1}, ..., P_{k}=g\}$ is a path connecting $f$ and $g$, we define the \textbf{length} of the path as $|P|:=k$. \end{defn} \begin{defn} Let $f,g$ be $(n,p)-states$. Define $P(f,g)$ as the set of all paths connecting $f$ and $g$. A path between $f$ and $g$ is a \textbf{shortest path} if its length is minimal among $P(f,g)$. A length-1 path is called a \textbf{move} from $f$ to $g$. We formally write a shortest path between $f$ and $g$ as $f-g$. It is obvious that $|f-g|=|g-f|$ for any given $f$ and $g$. Note that a shortest path between $f$ and $g$ may not be unique, so $f-g$ is not well-defined; still, $|f-g|$ is well-defined. \end{defn} \begin{defn} Let $f,g$ be $(n,p)-states$ and $\psi$ be a path between $f$ and $g$. If $X$ is a subset of $\bar{n}$, we define $|\psi|_{X}$ to be the number of moves of disks in $X$ during $\psi$. \end{defn} \begin{ex} The $(n,p)-problem$ can be formulated as finding a shortest path between two distinct constant states (i.e., constant functions) $f$ and $g$. \end{ex} We introduce a notation due to Roberto Demontis and the notion of a demolishing sequence. The triple $(j,i,t)$ with $1 \leq j < i \leq \infty$ and $j<t \leq \infty$ denotes that disk $j$ moves from lying on disk $i$ to being placed on disk $t$. We write $i=\infty$ when there is no disk under $j$ before it moves onto $t$. Similarly, we write $t=\infty$ when disk $j$ moves to an empty peg. \begin{defn} A path $P$ between $f$ and $g$ is said to be a \textbf{demolishing sequence} if 1. $f$ is a constant state, 2. the last move of $P$ is $(n,\infty,\infty)$, 3. the move $(n,\infty,\infty)$ appears exactly once in $P$. \end{defn} We call the final state of a minimal demolishing sequence a \textbf{middle state}. \begin{defn} Let $P$ and $Q$ be sequences satisfying $P_{|P|}=Q_{0}$. Define $P+Q$ to be the sequence obtained by concatenating $P$ and $Q$; then $|P+Q|=|P|+|Q|$ holds. \end{defn} \begin{thm}[Roberto Demontis] Let $f \equiv I_{f}$ and $g \equiv I_{g}$ be two distinct constant states and $S:=f-g$. Let $f-h$ be a minimal demolishing sequence of moves. Then $|S|=2|f-h|+1$ holds. \end{thm} \textbf{Proof} Since $f$ and $g$ are two distinct constant states, there must be at least one $(n,\infty,\infty)$ move in $S$, which we will call $\psi$. Let $P$ be the subsequence of $S$ from the beginning to the last move before $\psi$; then $P$ is a demolishing sequence. Let $S=P+\psi+Q$. Define $P^{r}$ to be the sequence that reverses $P$ with $I_{f}$ and $I_{g}$ interchanged.
If $|P|>|Q|$, we have $|Q^{r}+\psi+Q| < |P+\psi+Q| = |S|$, which contradicts the minimality of $S$. Similarly, if $|P|<|Q|$, we have $|P+\psi+P^{r}| < |S|$, also a contradiction. Thus, we have $|P|=|Q|$ and $|S|=2|Q|+1=2|f-h|+1$, since both $Q$ (reversed) and $f-h$ are minimal demolishing sequences. By the theorem above, it is enough to find a minimal demolishing sequence instead of solving the whole $(n,p)-problem$. \begin{thm}[Roberto Demontis] Let $S$ be a minimal demolishing sequence of the $(n,p)-problem$. Suppose that the disks have been arranged on $r \leq p-1$ stacks at the end of $S$. Let $n,n-1$ and $j_{1} < ... < j_{r-2}$ be the disks at the bottom of the $r$ stacks at the end of $S$. Then, during the demolishing phase, no disk $y>j_{1}$ has been placed on the peg on which the disk $j_{1}$ will be stacked at the end of $S$. \end{thm} \textbf{Proof} See [1]. \section{Main Results} \begin{defn} Let $\mu$ be a middle state of a solution $S$ of the $(n,p)-problem$. Let $k<n$ be the largest disk that is not stacked on $n-1$ at $\mu$. We define the \textbf{base} of $S$ as $B(S)=n-k-1$. \end{defn} The above definition implies that every disk $j$ with $k+1 \leq j < n-1$ is stacked on $n-1$ at $\mu$. \begin{defn} For a sequence $P$ and a state $\mu \in P$, let $\mu^{+}$ be the next state of $\mu$ and $\mu^{-}$ be the state before $\mu$. \end{defn} \begin{defn} Let $\chi$ be a sequence of the $(n,p)-problem$ and $A \subset \bar{n}$. Let $\chi|_{A}$ be the sequence of the $(A,p)-problem$ obtained by removing the disks of $\bar{n} \backslash A$ from $\chi$. $\chi|_{A}$ is called the restriction of $\chi$ to $A$. \end{defn} \begin{lemma} Let $n,p$ be natural numbers satisfying $\binom{r+p-3}{p-2} \leq n < \binom{r+p-2}{p-2}$. Then, $K(n,p)-K(n-1,p)$ is either $2^r$ or $2^{r-1}$. \end{lemma} \textbf{Proof} If $n=\binom{r+p-3}{p-2}$, then $n-1=\binom{r+p-3}{p-2}-1$. We have $$K(n,p)=\sum_{t=0}^{r-1} 2^{t}\binom{p+t-3}{p-3}$$ and $$K(n-1,p)=\sum_{t=0}^{r-2}2^{t}\binom{p+t-3}{p-3}+2^{r-1}\left(\binom{p+r-3}{p-2}-1-\binom{p+r-4}{p-2}\right).$$ Since $\binom{p+r-3}{p-2}-\binom{p+r-4}{p-2}=\binom{p+r-4}{p-3}$, we get $$K(n,p)-K(n-1,p)=2^{r-1}.$$ Otherwise, $\binom{p+r-3}{p-2}<n<\binom{p+r-2}{p-2}$ holds, and it is obvious that $K(n,p)-K(n-1,p)=2^{r}$. \begin{lemma} For a sequence $\chi$, $|\chi| \geq |\chi_{0}-\chi_{|\chi|}|$ holds. Equality holds when $\chi$ is minimal. \end{lemma} \begin{thm} For a solution $S$ of the $(n,p)-problem$, if $B(S) \geq r$, where $r$ is the unique natural number satisfying $\binom{r+p-3}{p-2} \leq n < \binom{r+p-2}{p-2}$, then $|S| \geq K(n,p)$. In other words, if there is a shorter solution $S$ contradicting the Frame-Stewart conjecture, then it must satisfy $B(S) < r$. \end{thm} \textbf{Proof} Let $j$ be the initial state and $\mu$ be the middle state of $S$. Define $\nu$ to be the state at which the $\bar{n-1} \backslash \bar{k}$ tower is completed, i.e., the state right after the last $(k+1,*,k+2)$ move between $j$ and $\mu$. First, in the case $|j-\nu|_{\bar{k}} \geq \frac{K(n,p)}{2}-K(B(S),3)$, we have $$M(n,p)=|S|=2|j-\mu|+1=2(|j-\mu|_{\bar{k}}+|j-\mu|_{\bar{n-1} \backslash \bar{k}})+1 \geq 2|j-\nu|_{\bar{k}}+2|j-\mu|_{\bar{n-1} \backslash \bar{k}}+1.$$ By \textbf{Theorem 3.2}, during the sequence $j-\mu$, no disk in $\bar{n-1} \backslash \bar{k}$ has been placed on any peg other than the initial peg, $\mu(n-1)$, and $\mu^{+}(n)$, where $\mu^{+}$ is the state right after $\mu$.
Therefore, we have $$|j-\mu|_{\bar{n-1} \backslash \bar{k}} \geq K(B(S),3).$$ Thus, in this case, $2|j-\nu|_{\bar{k}}+2|j-\mu|_{\bar{n-1} \backslash \bar{k}}+1 \geq 2(\frac{K(n,p)}{2}-K(B(S),3))+2K(B(S),3)+1 \geq K(n,p)$, and so $|S| \geq K(n,p)$. Otherwise, $$|j-\nu|_{\bar{k}} < \frac{K(n,p)}{2}-K(B(S),3)$$ holds. Define $T:=j-\mu$, so that $|S|=2|T|+1$. Since $\nu$ is the state right after the $\bar{n-1} \backslash \bar{k}$ tower has been completed, no disk of $\bar{n-1} \backslash \bar{k}$ moves after $\nu$. Let $\chi$ be a sequence such that $\chi_{0}=\nu$, $\chi_{|\chi|}=\mu(n-1)$, and $\chi|_{\bar{k}}=\nu|_{\bar{k}}-\nu(n-1)$ (i.e., $\chi$ is a sequence beginning from $\nu$ and moving the disks of $\bar{k}$ onto $n-1$ minimally, instead of ending up with $\mu$). Note that $|\chi| \leq |j-\nu|_{\bar{k}}$ holds, because $\chi_{0}|_{\bar{k}}=\nu|_{\bar{k}}$, $\chi_{|\chi|}|_{\bar{k}}=j|_{\bar{k}}$, and \textbf{Lemma 4.2} applies. However, we also have the sequence $(j-\nu)+\chi$, which begins with $j$ and ends up with the complete $\bar{n-1}$ tower. This gives the following: $$|(j-\nu)+\chi| \geq K(n-1,p).$$ Thus we get \vspace{5mm} $$|T|+|j-\nu|_{\bar{k}} \geq |j-\nu|+|\chi| \geq K(n-1,p)$$ \vspace{5mm} and $$|S|=2|T|+1 \geq 2(K(n-1,p)-|j-\nu|_{\bar{k}})+1 \geq 2K(n-1,p)-2\left(\frac{K(n,p)}{2}-K(B(S),3)-1\right)+1$$ $$=2K(n-1,p)-K(n,p)+2K(B(S),3)+3.$$ By \textbf{Lemma 4.1}, $2K(n-1,p)-K(n,p) \geq K(n,p) - 2^{r+1}$, and thus $$2K(n-1,p)-K(n,p)+2K(B(S),3)+3 \geq K(n,p)+2K(B(S),3)+3-2^{r+1} =K(n,p)+2\left(2^{B(S)}-2^{r}\right)+1 \geq K(n,p),$$ which finishes the proof. \vspace{5mm} The only case left for proving the Frame-Stewart Conjecture is $B(S) < r$. \begin{con} Let $n,p$ be such that $\binom{r+p-3}{p-2} \leq n < \binom{r+p-2}{p-2}$ and let $S$ be a solution of the $(n,p)-problem$ with $B(S) < r$. Then $|S| \geq K(n,p)$ holds. \end{con} \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The first published paper on noncommutative spacetime was written by Snyder~\cite{HS} with the aim of curing ultraviolet divergences in quantum field theory, although the idea might be traced back earlier. In 1999, Seiberg and Witten proposed~\cite{SW} that some low-energy effective theory of open strings with a nontrivial background can be described by a noncommutative field theory. Subsequently, great progress on noncommutative field theory has been made; see, for instance, the review articles~\cite{Review}. Black holes, originating from Einstein's field equations of general relativity, have played an important role in quantum gravity, and the relevant thermodynamics has made great strides~\cite{RW,SC}. In particular, the introduction of a negative cosmological constant endows black holes with rich thermodynamic behaviors. The variation of the cosmological constant $\Lambda$ in the first law of black hole thermodynamics has been widely accepted. Crucially, the cosmological constant can be interpreted as the thermodynamic pressure $P$ with \begin{equation} P=-\frac{\Lambda}{8\pi}=\frac{(n-1)(n-2)}{16 l^2 \pi},\label{pres} \end{equation} where $n$ stands for the dimension of spacetime and $l$ represents the curvature radius of the AdS spacetime. When the cosmological constant corresponds to the pressure as a thermodynamic variable, the black hole mass $M$ can be identified with the enthalpy rather than the internal energy. Then, the conjugate variable corresponding to the cosmological constant is the {\em thermodynamic volume} with $V=(\partial M/\partial P)_{S}$. In this way, one can describe the black hole thermodynamic behaviors in an {\em extended phase space} with the pressure and volume as thermodynamic variables. In particular, by analogy with the van der Waals fluid, the equation of state of a charged AdS black hole has attracted increasing interest~\cite{BPD,CEJM,KM}. In 1993, Susskind suggested \cite{LS} that stringy effects cannot be neglected in the string/black hole correspondence principle. Now it is known that noncommutative geometry inspired black holes~\cite{NSS} ({\em noncommutative black holes} for short) contain stringy effects, where such an effect is similar in some sense to that of the noncommutative field theory induced by string theory. One way to introduce noncommutative (stringy) effects into black holes, as suggested in ref.~\cite{NSS}, is to modify energy-momentum tensors in terms of smeared matter distributions. Specifically, the point-like $\delta$-function mass density is replaced by the Gaussian smeared matter distribution on the right-hand side of Einstein's field equations, while no changes are made on the left-hand side. In this way, a self-regular black hole solution with noncommutative effects but without curvature singularities is obtained. Since the work of ref.~\cite{NSS}, many developments have been made, such as generalizations to high-dimensional black holes~\cite{TG}, charged black holes~\cite{ANSS}, high-dimensional charged black holes~\cite{SSN}, and to other diverse topics~\cite{PN,SS,MXZ}. Besides the Gaussian smeared matter distribution, non-Gaussian smeared matter distributions have also been considered, such as the Lorentzian smeared mass distribution \cite{NM}, the Rayleigh distribution \cite{MY}, and the ring-type distribution~\cite{Park}.
In fact, the Gaussian smeared matter distribution is not always required~\cite{POS}; for instance, the ring-type smeared matter distribution in $(2+1)$-dimensional spacetime has been found~\cite{Park} to have quite interesting features in the phase transition and soliton-like behaviors of black holes. Naturally, in order to acquire more understanding of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, we apply the non-Gaussian smeared matter distribution proposed in ref.~\cite{Park} to the (ordinary) high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole and study the thermodynamic behaviors, in particular the phase transition, of such a noncommutative black hole. We find that the Maxwell equal area law holds for this noncommutative AdS black hole if the Hawking temperature stays in a specific range. As a byproduct, we indicate that the Gaussian smeared matter distribution is not applicable to 6- and higher-dimensional black holes in accordance with the so-called hoop conjecture\footnote{The matter mean radius of a black hole related to some mass distribution should not be larger than the horizon radius of the relevant extreme black hole in order to ensure the formation of a black hole.}~\cite{HC}. The arrangement of this paper is as follows. In section \ref{sec2}, we give an introduction to the noncommutative high-dimensional Schwarzschild-Tangherlini AdS black hole that is associated with the non-Gaussian smeared matter distribution proposed in ref.~\cite{Park}. Then, the thermodynamic quantities of the noncommutative black hole are calculated and the characteristics of phase transitions are analyzed in detail in section \ref{sec3}. Finally, a brief summary is given in section \ref{sec4}. \section{Noncommutative high-dimensional AdS black hole}\label{sec2} We consider a high-dimensional ($n \geq 4$), neutral, and non-rotating Schwarzschild-Tangherlini anti-de Sitter black hole \cite{ST} with a negative cosmological constant. The metric reads \begin{equation} \text{ds}^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2 d{\Omega}^{2}_{n-2}, \end{equation} where $d\Omega_{n-2}^2$ is the square of the line element on an $(n-2)$-dimensional unit sphere and the function $f(r)$ takes the general form,\footnote{The geometric units, $\hbar=c=k_{_B}=G=1$, are adopted throughout this paper.} \begin{equation} f(r)=1-\frac{16\pi m(r)}{(n-2)\omega r^{n-3}}+\frac{r^2}{l^2},\label{xianyuan} \end{equation} where $\omega$ denotes the area of an $(n-1)$-dimensional unit sphere\footnote{$\omega=\frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}$, where $\Gamma(x)$ is the gamma function.} and $m(r)$ stands for the black hole mass distribution we shall choose. In this $n$-dimensional AdS spacetime we adopt the non-Gaussian mass density of black holes proposed in ref.~\cite{Park}, \begin{equation} \rho(r)=A r^k e^{-\left(\frac{r}{2\sqrt{\theta}}\right)^2},\label{fenbu} \end{equation} where $\sqrt{\theta}$ is the noncommutative parameter with the dimension of length, $k$ is a non-negative integer, $k=0,1,2,\cdots$, and $A$ is a normalization constant that can be fixed by using the constraint $\int_0^{\infty}\rho(r) dV_{n-1}=M$: \begin{equation} A=\frac{M}{\pi^{\frac{n-1}{2}}\left(2\sqrt{\theta}\right)^{n+k-1}} \,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n+k-1}{2}\right)},\label{paraA} \end{equation} where the parameter $M$ is the ADM mass of the black hole, and $dV_{n-1}$ is an $(n-1)$-dimensional volume element.
We note that this kind of non-Gaussian smeared mass density is general, in the sense that it includes the Gaussian distribution ($k=0$) and the Rayleigh distribution ($k=1$) as special cases. Now the corresponding mass distribution can be derived from eqs.~(\ref{fenbu}) and (\ref{paraA}), \begin{equation} m(r)=\int_0^{r}\rho(r^{\prime}) dV_{n-1}=\frac{M}{\Gamma\left(\frac{n+k-1}{2}\right)}\gamma\left(\frac{n+k-1}{2},\left(\frac{r}{2\sqrt{\theta}}\right)^2\right),\label{masfenbu} \end{equation} where $\gamma(a,x)$ is the lower incomplete gamma function. Moreover, using eqs.~(\ref{xianyuan}) and (\ref{masfenbu}) we can get the ADM mass $M$ in terms of the black hole horizon radius $r_h$, \begin{equation} M=\frac{(n-2)\omega \Gamma\left(\frac{n+k-1}{2}\right)}{16\pi \gamma\left(\frac{n+k-1}{2},\left(\frac{r_h}{2\sqrt{\theta}}\right)^2\right)}\left(r_h^{n-3}+\frac{r_h^{n-1}}{l^2}\right), \label{mas} \end{equation} where $r_h$ is the largest real root of $f(r)=0$. Taking the commutative limit $\theta\rightarrow 0$,\footnote{In the noncommutative case, the noncommutative effect can be neglected when the horizon radius $r_h$ becomes large. Only in this sense is the commutative limit equivalent to the large-horizon-radius limit.} we can see that the black hole mass turns out to be the known one \cite{ST}, \begin{eqnarray} M \rightarrow\frac{(n-2)\omega}{16\pi}\left(r_h^{n-3}+\frac{r_h^{n-1}}{l^2}\right),\label{mas10} \end{eqnarray} which shows the consistency of our noncommutative generalization. For the sake of convenience, we introduce two dimensionless parameters $b$ and $x_h$ defined by \begin{equation} b:=\frac{2\sqrt{\theta}}{l}, \qquad x_h:=\frac{r_h}{2\sqrt{\theta}}, \label{daihuan1} \end{equation} and rewrite eq.~(\ref{mas}) as follows, \begin{equation} \frac{M}{\left(2\sqrt{\theta}\right)^{n-3}}=\frac{(n-2)\omega \Gamma\left(\frac{n+k-1}{2}\right)}{16\pi \gamma\left(\frac{n+k-1}{2},x_h^2\right)}\left(x_h^{n-3}+b^2 x_h^{n-1}\right). \label{mas11} \end{equation} The purpose is to express the relation between the black hole mass and the horizon radius in terms of the single parameter $b$. For a noncommutative spacetime with a small but finite value of $\sqrt{\theta}$, the parameter $b$ becomes small when the curvature radius of the AdS spacetime $l$ is large, which corresponds to an asymptotically Minkowski spacetime; $b$ becomes large when $l$ is small, which corresponds to a spacetime with a strong AdS background. As a result, the range of the parameter $b$ is usually from zero to infinity. Because $2\sqrt{\theta}$ can be regarded as the minimal length of the related noncommutative spacetime, $\frac{M}{\left(2\sqrt{\theta}\right)^{n-3}}$ can be understood as the black hole mass in units of the Planck mass and $\frac{r_h}{2\sqrt{\theta}}$ as the horizon radius in units of the Planck length, if the minimal length is taken to be of the order of the Planck length.
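Eq.~(\ref{mas11}) is straightforward to evaluate numerically. The following Python sketch (assuming SciPy; the function names are ours) expresses the dimensionless mass through the regularized lower incomplete gamma function, $\gamma(a,x)/\Gamma(a)$, and locates the minimum of the mass curve, which is nothing but the extremal horizon radius analyzed below.
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gamma   # gammainc: regularized lower gamma

def omega(n):
    # area factor used in the text: 2 pi^{(n-1)/2} / Gamma((n-1)/2)
    return 2.0 * np.pi**((n - 1) / 2) / gamma((n - 1) / 2)

def mass(x_h, n, k, b):
    # dimensionless mass M / (2 sqrt(theta))^{n-3};
    # note Gamma(a)/gamma(a, x^2) = 1 / gammainc(a, x^2)
    a = (n + k - 1) / 2
    pref = (n - 2) * omega(n) / (16.0 * np.pi)
    return pref * (x_h**(n - 3) + b**2 * x_h**(n - 1)) / gammainc(a, x_h**2)

x = np.linspace(1.0, 4.0, 4001)
m = mass(x, n=5, k=3, b=0.0447)
print(x[np.argmin(m)])   # minimum near x_0 = 2.0299, cf. Table 3
\end{verbatim}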
The horizon radius of extreme black holes, $r_0$, satisfies the relation $\partial M /\partial r_h =0$, which can be written with the help of eq.~(\ref{mas}) or eq.~(\ref{mas11}) as follows: \begin{equation} G(n,k;x_0)=n-1-\frac{2}{1+b^2 x_0^2},\label{extreradi} \end{equation} where the parameter $x_0$ is defined by \begin{equation} x_0 :=\frac{r_0}{2\sqrt{\theta}} \label{daihuan2}, \end{equation} i.e., the extremal horizon radius in units of the minimal length, and the function $G(n,k;x)$ is defined by \begin{equation} G(n,k;x):=\frac{2 x^{n+k-1} e^{-x^2}}{\gamma\left(\frac{n+k-1}{2}, x^2\right)},\label{tezheng} \end{equation} where $x:=\frac{r}{2\sqrt{\theta}}$. See the Appendix for a detailed analysis of the function $G(n,k;x)$. Before solving eq.~(\ref{extreradi}), we have to consider the {\em hoop conjecture} condition~\cite{HC}, that is, the matter mean radius $\bar{r}$ of the mass distribution should not be larger than the horizon radius of extreme black holes. The matter mean radius related to the non-Gaussian mass density distribution (eq.~(\ref{fenbu})) can be calculated as \begin{equation} \bar{r}=\int_0^{\infty}r\, \frac{\rho(r)}{M} dV_{n-1}=2\sqrt{\theta}\, \frac{\Gamma(\frac{n+k}{2})}{\Gamma(\frac{n+k-1}{2})}. \end{equation} That is, the {\em hoop conjecture} implies $\bar{r} \leq r_0$, or $\bar{x} \leq x_0$, where $\bar{x} :=\frac{\bar{r}}{2\sqrt{\theta}}$; otherwise, no black hole can be formed. Considering the constraint $ 0 < b < \infty$ and the characteristics of the function $G(n,k;x)$ listed in the Appendix, we obtain from eq.~(\ref{extreradi}) the range of the horizon radius of extreme black holes, $x_{*} < x_0 < \tilde{x}$, where $x_{*}$ is the root of the equation $G(n,k;x_{*})=n-1$ and $\tilde{x}$ is the root of the equation $G(n,k;\tilde{x})=n-3$. Further, due to the {\em hoop conjecture}, the range of $x_0$ reads \begin{equation} \text{Max}\{\bar{x},x_{*}\}< x_0 < \tilde{x}. \label{range} \end{equation} Then, using eqs.~(\ref{extreradi})-(\ref{range}) and keeping in mind that $n$ and $k$ are integers, we can get the allowed values of $k$ at various dimensions\footnote{In general, the dimension $n$ can take any positive integer value. However, we prefer to consider the range of $n$ from four to eleven in physics.} in the following two cases. (i) When $\bar{x} > x_{*}$, we can see that $b$ is further constrained to the small range $b \in \left(0, b|_{x_0=\bar{x}}\right)$, and we obtain the results given in Table \ref{biao1}. \begin{table}[!hbp] \begin{center} \begin{tabular}{|c|*{9}{|c}} \hline \multicolumn{9}{|c|} {For various dimensions $n$, the allowed values of $k$} \\ \hline n & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \hline k & $[0,5]$ & $[0,9]$ & $[4,15]$ & $[8,23]$ & $[14,33]$ & $[22,44]$ & $[32,56]$ & $[43,70]$\\ \hline \end{tabular} \end{center} \caption{When $\bar{x} > x_{*}$, according to eqs.~(\ref{extreradi})-(\ref{range}), we list the allowed values of $k$ for various $n$, where $b$ is further constrained to the small range $b \in \left(0, b|_{x_0=\bar{x}}\right)$.} \label{biao1} \end{table} (ii) When $\bar{x} < x_{*}$, we find that $b$ has no extra constraints, $b \in (0, \infty)$, and we obtain the results given in Table \ref{biao2}.
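Eq.~(\ref{extreradi}) can be solved with any standard root finder; the sketch below (again assuming SciPy, with our own function names) also checks the hoop-conjecture condition $\bar{x} \leq x_0$, reproducing the pattern of allowed and forbidden cases in the tables below.
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gamma
from scipy.optimize import brentq

def G(x, n, k):
    # G(n,k;x) = 2 x^{n+k-1} e^{-x^2} / gamma_lower((n+k-1)/2, x^2)
    a = (n + k - 1) / 2
    return 2.0 * x**(n + k - 1) * np.exp(-x**2) / (gammainc(a, x**2) * gamma(a))

def x0(n, k, b):
    # root of G(n,k;x) = n - 1 - 2/(1 + b^2 x^2); G runs from n+k-1 near the
    # origin to 0 at infinity, so the bracket contains a sign change
    f = lambda x: G(x, n, k) - (n - 1) + 2.0 / (1.0 + b**2 * x**2)
    return brentq(f, 1e-2, 50.0)

def hoop_ok(n, k, b):
    # hoop conjecture: x_bar = Gamma((n+k)/2) / Gamma((n+k-1)/2) <= x_0
    return gamma((n + k) / 2) / gamma((n + k - 1) / 2) <= x0(n, k, b)

print(x0(5, 3, 0.0447))       # ~2.03, cf. Table 3
print(hoop_ok(6, 0, 0.0447))  # False: Gaussian case excluded for n = 6
\end{verbatim}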
\begin{table}[!hbp] \begin{center} \begin{tabular}{|c|*{9}{|c}} \hline \multicolumn{9}{|c|} {For various dimensions $n$, the allowed values of $k$} \\ \hline n & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \hline k & $\geq 6$ & $\geq 10$ & $\geq 16$ & $\geq 24$ & $\geq 34$ & $\geq 45$ & $\geq 57$ & $\geq 71$\\ \hline \end{tabular} \end{center} \caption{When $\bar{x} < x_{*}$, according to eqs.~(\ref{extreradi})-(\ref{range}), we list the allowed values of $k$ for various $n$, where $b$ has no extra constraints, $b \in (0, \infty)$.} \label{biao2} \end{table} It is remarkable that for the high-dimensional Schwarzschild-Tangherlini AdS black hole the Gaussian smeared mass distribution (the case $k=0$) is not applicable to dimensions $n \geq 6$. Now we turn to the extremal horizon radius described by eqs.~(\ref{extreradi})-(\ref{tezheng}). The numerical results\footnote{In all of the tables and figures of this paper, the values of the dimensionless parameter $b$ we set are consistent with the {\em hoop conjecture}. That is to say, these values satisfy eqs.~(\ref{extreradi})-(\ref{range}).} are shown in Table \ref{biao3}. We can see that the extremal horizon radius increases when the power $k$ increases for a fixed $n$, which is obvious because the matter mean radius increases. On the contrary, the extremal horizon radius decreases when the dimension $n$ increases for a fixed $k$, indicating that the higher the dimension is, the smaller the extremal horizon radius is. \begin{table}[!hbp] \begin{center} \begin{tabular}{|c|*{9}{|c}} \hline \multicolumn{9}{|c|} {The extremal horizon radius $x_0=r_0/(2\sqrt{\theta})$} \\ \hline \backslashbox{k}{n} & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ &$10$ &$11$ \\ \hline \hline 0 & $1.5059$ & $1.3361$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\ \hline 1 & $1.7860$ & $1.6099$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 3 & $2.2097$ & $2.0299$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 4 & $2.3838$ & $2.2037$ & $2.1056$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 5 & $2.5419$ & $2.3619$ & $2.2626$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 8 & $2.9502$ & $2.7716$ & $2.6705$ & $2.6027$ & $-$ & $-$ & $-$ & $-$\\ \hline 10 & $3.1849$ & $3.0076$ & $2.9059$ & $2.8371$ & $-$ & $-$ & $-$ & $-$\\ \hline 15 & $3.6899$ & $3.5159$ & $3.4139$ & $3.3437$ & $3.2916$ & $-$ & $-$ & $-$\\ \hline 18 & $3.9542$ & $3.7821$ & $3.6803$ & $3.6097$ & $3.5568$ & $-$ & $-$ & $-$ \\ \hline 19 & $4.0375$ & $3.8660$ & $3.7642$ & $3.6935$ & $3.6405$ & $-$ & $-$ & $-$ \\ \hline 20 & $4.1186$ & $3.9477$ & $3.8461$ & $3.7753$ & $3.7221$ & $-$ & $-$ & $-$ \\ \hline 25 & $4.4973$ & $4.3292$ & $4.2281$ & $4.1571$ & $4.1034$ & $4.0608$ & $-$ & $-$ \\ \hline 35 & $5.1550$ & $4.9919$ & $4.8921$ & $4.8212$ & $4.7669$ & $4.7235$ & $4.6878$ & $-$ \\ \hline 39 & $5.3911$ & $5.2298$ & $5.1305$ & $5.0597$ & $5.0054$ & $4.9618$ & $4.9258$ & $-$ \\ \hline 40 & $5.4482$ & $5.2873$ & $5.1882$ & $5.1174$ & $5.0630$ & $5.0194$ & $4.9834$ & $-$ \\ \hline 45 & $5.7236$ & $5.5647$ & $5.4662$ & $5.3956$ & $5.3412$ & $5.2975$ & $5.2612$ & $5.2305$ \\ \hline 50 & $5.9839$ & $5.8268$ & $5.7291$ & $5.6587$ & $5.6043$ & $5.5604$ & $5.5240$ & $5.4931$ \\ \hline \end{tabular} \end{center} \caption{The numerical results of the extremal horizon radius $x_0$ for different dimensions $n$ and different powers $k$ are listed, where $b=0.0447$.
A hyphen means that the corresponding black hole is forbidden by the hoop conjecture, so no extremal horizon radius exists.} \label{biao3} \end{table} To analyze the relation between the black hole mass and the horizon radius, we resort to numerical methods, because eq.~(\ref{mas}) or eq.~(\ref{mas11}) cannot be solved analytically. For instance, taking $n=5$ and setting different powers, say, $k=0,1,3,5$, we plot the function $M=M(x_h)$ in Figure \ref{tu1}. One sees that there is one minimum mass $M_0$ when the horizon radius takes the extremal value $x_0$. That is to say, the extreme black hole exists. It is worth distinguishing three cases: (i) when $M > M_0$, there exists a black hole with two horizons; (ii) when $M = M_0$, there exists an extreme black hole; (iii) when $M < M_0$, no black hole exists. In addition, under the commutative limit $\theta\rightarrow 0$ or the large horizon radius $r_h$ limit, the extreme black hole disappears and the noncommutative black hole turns back into the commutative one, see Figure \ref{tu11}. \begin{figure} \centering \subfloat[$n=5, b=0.0447$, and $k=0,1,3,5$, respectively, from left to right.]{\includegraphics[width=125mm]{3}}\\ \subfloat[$n=7, k=25$, and $b=0.0632, 0.0994, 0.134406, 0.1499, 0.1721$, respectively, from bottom to top, where the orange dashed curve corresponds to the critical value of $b$ at which the maximum and minimum Hawking temperatures just disappear. See also Figure \ref{tu2} for this critical case.]{\includegraphics[width=125mm]{4}} \caption{Plots of the relations of $M$ with respect to $x_h$.} \label{tu1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=125mm]{5} \end{center} \caption{Plot of the relation of $M$ with respect to $r_h$ for the commutative limit $\theta\rightarrow 0$, see eq.~(\ref{mas10}), where $l=10$ and $n=5$.} \label{tu11} \end{figure} \section{Thermodynamic analysis}\label{sec3} In this section we analyze the phase transition of the noncommutative black hole introduced in the above section, and also investigate the relevant thermodynamic features, such as the entropy, the Gibbs free energy, and the equation of state. \subsection{Phase transition} If a phase transition exists, the noncommutative black hole can attain a stable or equilibrium configuration. When a phase transition happens, certain critical phenomena appear; for example, the heat capacity at constant pressure diverges or the temperature approaches an extremum. In the following we analyze the phase transition by studying the divergence of the heat capacity.
The heat capacity at constant pressure is defined by \begin{equation} C_p :=\left(\frac{\partial M}{\partial T_h}\right)_p=\frac{\partial M}{\partial r_h}\left(\frac{\partial T_h}{\partial r_h}\right)^{-1}.\label{cap} \end{equation} Considering the Hawking temperature $T_h=\frac{f^{\prime}(r_h)}{4\pi}$ and using eqs.~(\ref{xianyuan}), (\ref{masfenbu}), and (\ref{daihuan1}), we obtain \begin{equation} 2\sqrt{\theta}T_h =\frac{1}{4\pi}\left\{\frac{n-3-G(n,k;x_h)}{x_h}+b^2 x_h\left[n-1-G(n,k;x_h)\right]\right\}.\label{tem11} \end{equation} Again using eqs.~(\ref{mas}) and (\ref{tem11}), we derive the numerator and denominator of eq.~(\ref{cap}), multiplied by suitable normalization factors, respectively, \begin{equation} \begin{aligned} \left(\frac{1}{2\sqrt{\theta}}\right)^{n-4}\frac{\partial M}{\partial r_h}=&\frac{(n-2)\omega \Gamma\left(\frac{n+k-1}{2}\right)x_h^{n-2}}{16\pi \gamma\left(\frac{n+k-1}{2},x_h^2\right)}\left\{b^2 \left[n-1-G(n,k;x_h)\right]+\frac{n-3-G(n,k;x_h)}{x_h^2}\right\},\\ \vspace{0.4cm} \left(\frac{1}{2\sqrt{\theta}}\right)^{-2}\frac{\partial T_h}{\partial r_h}=& \frac{1}{4\pi}\left\{b^2 \left[n-1-G(n,k;x_h)\right]-\frac{n-3-G(n,k;x_h)}{x_h^2}\right.\\ &\,\,\,\,\,\,\,\,\,\,\,\, \left. -\left(b^2 x_h +\frac{1}{x_h}\right)G^{\prime}(n,k;x_h)\right\},\label{fenzifenmu} \end{aligned} \end{equation} where $G^{\prime}(n,k;x_h)$ stands for the first-order derivative of $G(n,k;x_h)$ with respect to $x_h$. For a black hole with a large horizon radius $r_h$, or under the limit $\theta\rightarrow 0$, the Hawking temperature $T_h$ (eq.~(\ref{tem11})) reduces to that of the commutative black hole~\cite{BC}, \begin{eqnarray} T_h \rightarrow\frac{1}{4\pi}\left[\frac{n-3}{r_h}+\frac{(n-1)r_h}{l^2}\right].\label{temp} \end{eqnarray} In addition, using eqs.~(\ref{cap})-(\ref{fenzifenmu}), we observe that the heat capacity tends to the commutative expression~\cite{RD} under $\theta\rightarrow 0$ or the large horizon radius $r_h$ limit, \begin{eqnarray} C_p \rightarrow \frac{(n-2)\omega}{4} \frac{(n-3)l^2 r_h^{n-2}+(n-1)r_h^{n}}{(n-1)r_h^2-(n-3)l^2}.\label{cap1} \end{eqnarray} The above limits show that the noncommutative generalizations of the Hawking temperature and the heat capacity are reasonable. Eq.~(\ref{tem11}) is plotted in Figure \ref{tu2}. For the extreme black hole, the temperature vanishes at the extremal horizon radius. For non-extreme black holes, there are a maximum temperature $2\sqrt{\theta}T_{max}$ and a minimum temperature $2\sqrt{\theta}T_{min}$ at critical points labeled by $x_c$ and $x_c|_{{x_h}\uparrow}$, respectively, where $x_c|_{{x_h}\uparrow}$ denotes the critical radius in the large-$x_h$ regime. In the second diagram of Figure \ref{tu2}, we observe that the maximum and minimum temperatures gradually disappear as the parameter $b$ approaches the critical value $b_c=0.134406$. This feature occurs for the noncommutative black hole with a strong noncommutativity with respect to the AdS radius, as can be seen clearly from the definition of the parameter in eq.~(\ref{daihuan1}). In the commutative situation, that is, the case $\theta\rightarrow 0$, there is only one minimum temperature, as shown in Figure \ref{tu22}. \begin{figure} \centering \subfloat[$n=5$, $b=0.0447$, and $k=0,1,3,5$, respectively, from left to right. For $k=3$ ({\color{blue}blue} curve), we give the critical radii and their corresponding extremal temperatures, i.e.
$x_c=3.0185$, $2\sqrt{\theta}T_{max}=0.05053$ and $x_c|_{{x_h}\uparrow}=15.8189$, $2\sqrt{\theta}T_{min}=0.02012$.]{\includegraphics[width=125mm]{6}}\\ \subfloat[$n=7, k=25$, and $b=0.0932, 0.1194, 0.134406, 0.1471, 0.1551$, respectively, from bottom to top. The orange dashed curve is the critical curve at $b_c=0.134406$, below which the maximum and minimum temperatures exist but above which no such temperatures exist.]{\includegraphics[width=125mm]{7}} \caption{Plots of the relations of $T_h$ with respect to $x_h$.} \label{tu2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=125mm]{8} \end{center} \caption{Plot of the relation of $T_h$ with respect to $r_h$ for the commutative limit $\theta\rightarrow 0$, see eq.~(\ref{temp}), where $l=10$ and $n=5$.} \label{tu22} \end{figure} We know that the thermodynamic stability of black holes is determined by the heat capacity: the black hole is locally stable for $C_p >0$, but unstable for $C_p <0$. It is shown in Figure \ref{tu3} that the heat capacity diverges at the critical radius $x_c$ or $x_c|_{{x_h}\uparrow}$, which implies that a phase transition occurs at $x_c$ or $x_c|_{{x_h}\uparrow}$. When the parameter $b$ increases, $x_c$ and $x_c|_{{x_h}\uparrow}$ approach each other, and finally the divergence disappears. This gives rise to $C_p >0$, which means that the black hole is locally stable; see the green and yellow curves in the second diagram of Figure \ref{tu3}. In other words, no phase transition happens if $b \geq b_c$. Incidentally, the Gaussian matter distribution in the 5-dimensional AdS spacetime, i.e., the case of $n=5$ and $k=0$, was investigated in ref.~\cite{sss}, which can be regarded as a special case of our results (the black curves in the first diagram of Figure \ref{tu3}). \begin{figure} \centering \subfloat[$n=5, b=0.0447$, and $k=0,1,3,5$, respectively, from left to right.]{\includegraphics[width=125mm]{9}}\\ \vspace{4mm} \subfloat[$n=7$, $k=25$, and $b=0.1222$ (\text{{\color{black}black}}), 0.1295 (\text{{\color{blue}blue}}), 0.134406 (\text{\textcolor[rgb]{1.00,0.50,0.00}{orange}}), 0.1360 (\text{{\color{green}green}}), 0.1395 (\text{{\color{yellow}yellow}}), respectively. The orange dashed curves are the critical curves at $b_c=0.134406$.]{\includegraphics[width=125mm]{10}} \caption{Plots of the relations of $C_p$ with respect to $x_h$.} \label{tu3} \end{figure} The critical radius $x_c=r_c/(2\sqrt{\theta})$ can be obtained by setting the denominator of eq.~(\ref{cap}) equal to zero, \begin{equation} \left.\frac{\partial T_h}{\partial r_h}\right|_{r_h=r_c}=0. \label{cap0} \end{equation} Although the above equation cannot be solved analytically, we can obtain its asymptotic behavior under the commutative limit $\theta\rightarrow 0$ or the large horizon radius limit, \begin{equation} r_c \rightarrow \left({\frac{n-3}{n-1}}\right)^{1/2}l. \label{capra} \end{equation} In place of analytical results for eq.~(\ref{cap0}), we list some numerical results in Table \ref{biao4} for a certain value of the parameter $b$. Note that eq.~(\ref{cap0}) has two roots for a fixed dimension $n$: one root is small, given in Table \ref{biao4} for different $k$, and corresponds to the noncommutative case, while the other root is large, listed in the last line, and corresponds to the large horizon radius limit.
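The numerical values in Table~\ref{biao4} can be reproduced with a short root-finding sketch (assuming SciPy; the brackets below are read off from Figure~\ref{tu2}), locating the zeros of $\partial T_h/\partial x_h$ for the dimensionless temperature of eq.~(\ref{tem11}).
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gamma
from scipy.optimize import brentq

def G(x, n, k):
    a = (n + k - 1) / 2
    return 2.0 * x**(n + k - 1) * np.exp(-x**2) / (gammainc(a, x**2) * gamma(a))

def T(x, n, k, b):
    # dimensionless Hawking temperature 2 sqrt(theta) T_h
    g = G(x, n, k)
    return ((n - 3 - g) / x + b**2 * x * (n - 1 - g)) / (4.0 * np.pi)

def dT(x, n, k, b, h=1e-6):
    # central difference, sufficient to locate the zeros of dT/dx
    return (T(x + h, n, k, b) - T(x - h, n, k, b)) / (2.0 * h)

n, k, b = 5, 3, 0.0447
x_c  = brentq(dT, 2.2, 8.0,  args=(n, k, b))   # small root: maximum of T_h
x_cl = brentq(dT, 8.0, 40.0, args=(n, k, b))   # large root: minimum of T_h
print(x_c, x_cl)   # ~3.0185 and ~15.8189, cf. Table 4
\end{verbatim}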
From the data of the table we know that the first phase transition occurs at a small critical horizon radius $x_c$, where the black hole changes from locally stable to locally unstable, and the second phase transition occurs at a large critical horizon radius, i.e., at $x_c|_{{x_h}{\uparrow}}$, where the black hole changes from locally unstable back to locally stable. Incidentally, for the commutative black hole, only one phase transition happens, at the position given by eq.~(\ref{capra}), see Figure \ref{tu33}. \begin{figure} \begin{center} \includegraphics[width=125mm]{11} \end{center} \caption{Plot of the relation of $C_p$ with respect to $r_h$ for the commutative limit $\theta\rightarrow 0$, see eq.~(\ref{cap1}), where $l=10$ and $n=5$.} \label{tu33} \end{figure} We can see from Table \ref{biao4} that the critical radius $x_c$ at which the first phase transition occurs increases when the power $k$ increases for a fixed $n$, which is quite natural because the extremal horizon radius $x_0$ increases (cf. Table \ref{biao3}). However, the situation is more complicated when the dimension $n$ increases for a fixed $k$. For small $k$, from zero to three, the critical radius $x_c$ decreases when the dimension $n$ increases. For large $k$, equal to or larger than four, an anomaly appears: the critical radius $x_c$ decreases at the beginning and increases later. For instance, when $k=8$, the critical radius $x_c$ decreases as $n$ runs from four to six, but it suddenly increases when $n$ reaches seven. This feature shows that an anomalous trend of critical radii exists in the first phase transition for the 6- and higher-dimensional black holes, see Table \ref{biao4} for the details. Incidentally, no such anomaly exists in the second phase transition, where the corresponding critical radius $x_c|_{{x_h}\uparrow}$ increases when the dimension $n$ increases for a fixed $k$, see the last line of Table \ref{biao4} for the details.
\begin{table}[!hbp] \begin{center} \begin{tabular}{|c|*{8}{|c}|} \hline \multicolumn{9}{|c|} {The critical horizon radius $x_c=r_c/(2\sqrt{\theta})$ at which a phase transition happens} \\ \hline \backslashbox{k}{n} & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$\\ \hline \hline 0 & $2.3991$ & $2.3826$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 1 & $2.6641$ & $2.6276$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 3 & $3.0751$ & $3.0185$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 4 & $3.2465$ & $3.1839$ & $3.1917$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 5 & $3.4029$ & $3.3358$ & $3.3382$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline 8 & $3.8102$ & $3.7339$ & $3.7254$ & $3.7440$ & $-$ & $-$ & $-$ & $-$ \\ \hline 10 & $4.0458$ & $3.9655$ & $3.9521$ & $3.9654$ & $-$ & $-$ & $-$ & $-$ \\ \hline 15 & $4.5559$ & $4.4688$ & $4.4469$ & $4.4513$ & $4.4698$ & $-$ & $-$ & $-$ \\ \hline 18 & $4.8241$ & $4.7341$ & $4.7087$ & $4.7094$ & $4.7240$ & $-$ & $-$ & $-$ \\ \hline 19 & $4.9087$ & $4.8178$ & $4.7915$ & $4.7911$ & $4.8046$ & $-$ & $-$ & $-$ \\ \hline 20 & $4.9913$ & $4.8996$ & $4.8723$ & $4.8709$ & $4.8834$ & $-$ & $-$ & $-$ \\ \hline 25 & $5.3773$ & $5.2821$ & $5.2509$ & $5.2453$ & $5.2535$ & $5.2703$ & $-$ & $-$ \\ \hline 35 & $6.0504$ & $5.9498$ & $5.9132$ & $5.9018$ & $5.9040$ & $5.9146$ & $5.9311$ & $-$ \\ \hline 39 & $6.2928$ & $6.1903$ & $6.1521$ & $6.1389$ & $6.1393$ & $6.1482$ & $6.1628$ & $-$ \\ \hline 40 & $6.3515$ & $6.2485$ & $6.2099$ & $6.1964$ & $6.1963$ & $6.2048$ & $6.2190$ & $-$ \\ \hline 45 & $6.6347$ & $6.5295$ & $6.4892$ & $6.4738$ & $6.4719$ & $6.4785$ & $6.4907$ & $6.5072$ \\ \hline 50 & $6.9030$ & $6.7955$ & $6.7537$ & $6.7368$ & $6.7333$ & $6.7382$ & $6.7488$ & $6.7636$ \\ \hline \hline $x_c|_{{x_h}\uparrow}$ & $12.9161$ &$15.8189$ &$17.3288$ &$18.2661$ &$18.9073$ &$19.3742$ &$19.7297$ &$20.0096$ \\ \hline \end{tabular} \end{center} \caption{The numerical results of the critical radius $x_c$ for different dimensions $n$ and different powers $k$, where $b=0.0447$. A hyphen means that the corresponding black hole is forbidden by the hoop conjecture, so no critical horizon radius exists.} \label{biao4} \end{table} \begin{figure} \centering \subfloat[Plot of the relations of $T_h$ with respect to $x_h$ at $b=0.0932$.]{\includegraphics[width=125mm]{hct}}\\ \vspace{4mm} \subfloat[Plot of the relations of $C_p$ with respect to $x_h$ at $b=0.1222$.]{\includegraphics[width=125mm]{hcc}} \caption{We take $n=7$ as an example and show how the hoop conjecture works thermodynamically for $k=0$ (\text{{\color{black}black}}), $1$ (\text{{\color{red}red}}), $3$ (\text{{\color{blue}blue}}), $5$ (\text{{\color{green}green}}), $8$ (\text{\textcolor[rgb]{1.00,0.50,0.00}{orange}}), $10$ (\text{{\color{yellow}yellow}}), $15$ (\text{\textcolor[rgb]{0.50,0.00,0.51}{purple}}), and $20$ (\text{\textcolor[rgb]{1.00,0.50,0.50}{pink}}).} \label{hctc} \end{figure} It is interesting to mention how the hoop conjecture works thermodynamically. First, we make a qualitative analysis. If the hoop conjecture is satisfied, the extremal horizon radius of black holes is greater than the mean radius of the mass distribution. As a result, extreme black holes exist and the corresponding temperature and heat capacity equal zero. On the contrary, if the hoop conjecture is violated, the extremal horizon radius of black holes is smaller than the mean radius of the mass distribution.
Thus, no extreme configurations of black holes exist, and the mean radius of the mass distribution is just the horizon radius of the smallest black hole that can form, which leads of course to non-zero temperature and heat capacity for such a smallest black hole. Next, we turn to a quantitative analysis whose numerical results are plotted in Figure \ref{hctc}. In the first diagram we calculate the Hawking temperature for the cases $n=7$, $b=0.0932$, and $k=0, 1, 3, 5, 8, 10, 15, 20$, respectively. The eight cases can be classified into two groups: the hoop conjecture is violated in the first group with $k=0, 1, 3, 5$, while it is satisfied in the second group with $k=8, 10, 15, 20$. One can see clearly that the Hawking temperature is non-zero for the smallest black holes in the first group, while it is zero for the extreme black holes in the second group. The four cases in the first group are inconsistent with the self-regularity of the noncommutative black hole~\cite{NSS}, while the other four in the second group are indeed consistent with the self-regularity. Moreover, in the second diagram of Figure \ref{hctc} we calculate the heat capacity for the cases $n=7$, $b=0.1222$, and $k=0, 1, 3, 5, 8, 10, 15, 20$, respectively. Following exactly the treatment of the Hawking temperature, one obtains similar results, that is, the heat capacity is non-zero for the four cases in the first group, in which the hoop conjecture is violated, while it is zero for the other four cases in the second group, in which the hoop conjecture is satisfied. Thus, our qualitative and quantitative analyses agree with each other. As a whole, the hoop conjecture leads, from the point of view of thermodynamics, to zero temperature and zero heat capacity for extreme black holes, which is required by the self-regularity of the noncommutative black hole. \subsection{Entropy and the Gibbs free energy} The entropy of this noncommutative black hole can be expressed as a function of the horizon radius $r_h$, \begin{equation} S=\int_{r_0}^{r_h} \frac{d S}{dr_h}dr_h. \label{entr} \end{equation} Because the extreme black hole corresponds to zero temperature, the integration must start from the extremal radius $r_0$: this radius is minimal, and such a choice gives rise to a vanishing entropy for the extreme black hole. This treatment is also consistent with the third law of thermodynamics. Now we begin with the first law of thermodynamics to calculate the entropy. The first law of thermodynamics at constant pressure reads $dM=T_h dS$, and alternatively $dM=\frac{\partial M}{\partial r_h} dr_h$ from the point of view of eq.~(\ref{mas}). As a result, the integrand of eq.~(\ref{entr}) takes the form \begin{equation} \frac{d S}{dr_h}=\frac{1}{T_h}\frac{\partial M}{\partial r_h}.\label{sd} \end{equation} Again considering eqs.~(\ref{mas}) and (\ref{tem11}), we derive the left hand side of eq.~(\ref{sd}) multiplied by a suitable normalization factor, \begin{equation} \left(\frac{1}{2\sqrt{\theta}}\right)^{n-3}\frac{d S}{dr_h}=\frac{(n-2)\omega \Gamma\left(\frac{n+k-1}{2}\right)x_h^{n-3}}{4\gamma\left(\frac{n+k-1}{2},x_h^2\right)}.
\label{Sderi} \end{equation} Substituting eq.~(\ref{Sderi}) into eq.~(\ref{entr}), and then integrating by parts, we obtain \begin{equation} \frac{S}{\left(2\sqrt{\theta}\right)^{n-2}}=\frac{\omega \Gamma\left(\frac{n+k-1}{2}\right) x_h^{n-2}}{4\gamma\left(\frac{n+k-1}{2},x_h^2\right)}+ \Delta S, \label{entr0} \end{equation} where $\Delta S$ reads \begin{equation} \Delta S=\frac{\omega \Gamma\left(\frac{n+k-1}{2}\right)}{4}\left\{\int_{x_0}^{x_h}\frac{z^{n-3}G(n,k;z)}{\gamma\left(\frac{n+k-1}{2},z^2\right)}dz -\frac{x_0^{n-2}}{\gamma\left(\frac{n+k-1}{2},x_0^2\right)}\right\}. \label{entr1} \end{equation} We can verify that the entropy reduces to the {\em standard} Bekenstein-Hawking entropy $S=\omega r_h^{n-2}/4$ under the commutative limit $\theta\rightarrow 0$ or the large horizon radius $r_h$ limit. Now we turn to the Gibbs free energy, which is defined by \begin{equation} G:=M(r_h)-T_h (r_h)S(r_h),\label{free} \end{equation} where $M(r_h)$, $T_h (r_h)$, and $S(r_h)$ are given by eq.~(\ref{mas}), eq.~(\ref{tem11}), and eqs.~(\ref{entr0}) and (\ref{entr1}), respectively. Incidentally, under the commutative limit $\theta\rightarrow 0$ or the large horizon radius $r_h$ limit, the Gibbs free energy tends to the commutative formula~\cite{RD}, \begin{equation} G \rightarrow \frac{\omega}{16 \pi}\left(r_h^{n-3}-\frac{r_h^{n-1}}{l^2}\right).\label{free1} \end{equation} Note that the above limits of the entropy and the Gibbs free energy show the consistency of our noncommutative generalization. Because the entropy cannot be integrated analytically, eq.~(\ref{free}) cannot be written in an explicit form, so we adopt a numerical method to analyze the Gibbs free energy. Eq.~(\ref{free}) is plotted in Figure \ref{tu8}, with the relevant parameters given in its caption. Here we point out the important horizon radius $x_g$ at which the Hawking-Page phase transition occurs. When $x_h=x_g$, the Gibbs free energy vanishes at the Hawking-Page temperature $T_{HP}$. When $x_h > x_g$, the Gibbs free energy becomes negative for the temperature range $T_h > T_{HP}$, indicating a stable black hole, as shown in diagram (a) of Figure \ref{tu8}. Moreover, the {\em swallowtail} structure of the Gibbs free energy as a function of temperature at constant pressure is depicted in diagram (b) of Figure \ref{tu8}. Comparing Figure \ref{tu2}, Figure \ref{tu3}, and Figure \ref{tu8}, we can separate the configurations of the noncommutative black hole into the following two classes in terms of the critical $b$-parameter. \begin{enumerate} \item When $b < b_c$, there exist {\em multiple} configurations of this noncommutative black hole. \begin{itemize} \item If $x_h =x_0$, which means that the horizon radius shrinks to the extremal horizon radius, the black hole is in the extreme configuration. The temperature vanishes, indicating that this black hole is in the {\em frozen} state. \item If $x_0 < x_h < x_c$, which means that the black hole is in the near-extreme configuration, the temperature satisfies $0 < 2\sqrt{\theta}T_{h} < 2\sqrt{\theta}T_{max}$ and the heat capacity $C_p >0$, indicating that the black hole is locally stable. \item If $x_h=x_c$, the temperature approaches a local maximum $2\sqrt{\theta}T_{max}$ and the Gibbs free energy drops to a local minimum. The heat capacity is divergent, which implies that a phase transition occurs.
\item If $x_c < x_h < x_c|_{x_{h}\uparrow}$, the temperature decreases and the Gibbs free energy increases. The heat capacity is negative, $C_p <0$, indicating that the black hole is locally unstable. \item If $x_h = x_c|_{x_{h}\uparrow}$, the temperature drops to a local minimum $2\sqrt{\theta}T_{min}$ and the Gibbs free energy approaches a local maximum. The heat capacity is divergent, which implies that a phase transition occurs again. \item If $x_c|_{x_{h}\uparrow} < x_h < x_g$, the temperature increases and the Gibbs free energy decreases. The heat capacity is positive, $C_p >0$, indicating that the black hole is locally stable. \item If $x_h = x_g$, the Gibbs free energy is equal to zero and the temperature is equal to the Hawking-Page temperature $T_{HP}$. The first order Hawking-Page phase transition occurs between the thermal radiation and the large black hole. \item If $x_h > x_g$, the temperature continuously increases and the Gibbs free energy is negative. This black hole is in a stable state with a large radius and a high temperature. \end{itemize} \item When $b \geq b_c$, the extreme black hole still exists and the heat capacity is positive, implying that this black hole is locally stable. Once the horizon radius approaches $x_g$, which corresponds to the vanishing Gibbs free energy, the Hawking-Page phase transition occurs. Hence, this black hole is in a stable state with a high temperature. \end{enumerate} \begin{figure} \centering \subfloat[Plot of the relation of $G$ with respect to $x_h$, where the extremal horizon radius is $x_0=2.0299$ and the critical radii are $x_c=3.0185$ and $x_c|_{{x_h}{\uparrow}}=15.8189$. When $x_h > x_g =22.40872$, the Gibbs free energy is negative. ]{\includegraphics[width=125mm]{14}}\\ \subfloat[Plot of the relation of $G$ with respect to $T_h$. It is called the {\em swallowtail} picture, where the four dots $A$, $B$, $C$ and $D$ correspond to the horizon radii (temperatures) $x_0=2.0299$ ($2\sqrt{\theta}T_h=0$), $x_c=3.0185$ ($2\sqrt{\theta}T_{max}=0.05053$), $x_c|_{{x_h}{\uparrow}}=15.8189$ ($2\sqrt{\theta}T_{min}=0.02012$), and $x_g=22.40872$ ($2\sqrt{\theta}T_{HP}=0.02135$), respectively.]{\includegraphics[width=125mm]{15}} \caption{Plots of the relations of $G$ with respect to $x_h$ and $T_h$, respectively. We set $b=0.0447$, $n=5$, and $k=3$ in both diagrams.} \label{tu8} \end{figure} \subsection{Equation of state} The results in the above two subsections, including those for extreme black holes, are based on the case in which the parameter $b$ is fixed. In light of eqs.~(\ref{pres}) and (\ref{daihuan1}), we can see that the pressure parameter $4\theta P$ is proportional to the square of the parameter $b$, so the above analysis applies to the isobaric process. Now we adopt an alternative way, i.e. the isothermal process, to discuss the thermodynamic features of this noncommutative black hole. As an {\em a priori} choice, we proceed to study the equation of state $P=P(V,T_h)$ for the noncommutative black hole, in which the thermodynamic volume $V$ can be expressed as a function of the horizon radius $x_h$ from eqs.~(\ref{pres}), (\ref{daihuan1}) and (\ref{mas11}), \begin{eqnarray} V=\left(\frac{\partial M}{\partial P}\right)_{S} =\frac{\left(2\sqrt{\theta}\right)^{n-1}\Gamma\left(\frac{n+k-1}{2}\right)}{\gamma\left(\frac{n+k-1}{2},x_h^2\right)}\frac{\omega}{n-1}x_h^{n-1}.
\end{eqnarray} With the help of eq.~(\ref{tem11}), we obtain the equation of state, \begin{equation} 4\theta P=\frac{(n-1)(n-2)}{n-1-G(n,k;x_h)}\left[\frac{\sqrt{\theta}T_h}{2x_h}-\frac{n-3-G(n,k;x_h)}{16\pi x_h^2}\right]. \label{state} \end{equation} Under the commutative limit $\theta\rightarrow 0$, eq.~(\ref{state}) reduces to the known formula \cite{RD,GKM}, \begin{eqnarray} P \rightarrow \frac{(n-2)T_h}{4r_h}-\frac{(n-2)(n-3)}{16\pi r_h^2}. \label{state1} \end{eqnarray} It seems that the equation of state is divergent at $x_*$, the solution of $n-1-G(n,k;x_*)=0$. However, according to eq.~(\ref{range}) we know $ x_* < x_0 < {\tilde x}$. This implies that such a divergence of the equation of state is avoided, since this inequality ensures that the horizon radius of black holes is always larger than $x_*$. The equation of state described by eq.~(\ref{state}) is plotted in Figure \ref{tu4}, where we can see that the Maxwell equal area law holds for the noncommutative black hole when the Hawking temperature satisfies the inequality $T_{c2} \leq T_h < T_{c1}$, but fails when the Hawking temperature goes above the critical point $T_{c1}$ or below the other critical point $T_{c2}$. In addition, for the commutative black hole, precisely speaking, for the pure AdS black hole, whose equation of state is given by eq.~(\ref{state1}), the Maxwell equal area law does not hold, as is known. \begin{figure} \centering \subfloat[$2\sqrt{\theta}T_h=0.1000$, $0.08164$, $0.0625$, $0.04869$, $0.0150$, respectively, from top to bottom, where $n=5$ and $k=3$. The blue dashed curve that corresponds to $2\sqrt{\theta}T_{c1}=0.08164$ is the critical curve and the red dashed curve that corresponds to $2\sqrt{\theta}T_{c2}=0.04869$ is the other critical curve. When $2\sqrt{\theta}T_{c2} < 2\sqrt{\theta}T_h < 2\sqrt{\theta}T_{c1} $, the Maxwell equal area law holds, but it fails when $2\sqrt{\theta}T_h$ is above $2\sqrt{\theta}T_{c1}=0.08164$ or below $2\sqrt{\theta}T_{c2}=0.04869$. Note that the black curve is associated with the low temperature $2\sqrt{\theta}T_h =0.0150 < 2\sqrt{\theta}T_{c2}$, where the part of negative pressure is forbidden by the AdS background.]{\includegraphics[width=125mm]{12}}\\ \subfloat[$2\sqrt{\theta}T_h=0.0625$, where $n=5$ and $k=3$. Here the green curve of diagram (a) is magnified in a small region.]{\includegraphics[width=125mm]{13}} \caption{Plots of the relations of $P$ with respect to $x_h$.} \label{tu4} \end{figure} Finally, we remark that the appearance of the lower bound of temperature is only associated with the validity of the Maxwell equal area law; extreme black holes with vanishing Hawking temperature still exist below this temperature bound. When $T_h <T_{c2}$, the pressure is negative in some range of horizon radius, see, for instance, the black curve of Figure \ref{tu4}. Since a negative pressure corresponds to a positive cosmological constant, contradicting our prerequisite that the background spacetime is AdS with a negative cosmological constant, we only consider the range of horizon radius that is related to a positive pressure. When the temperature vanishes, i.e. $T_{h} = 0$, we can see that eq.~(\ref{state}) goes back to eq.~(\ref{extreradi}), which gives the extremal horizon radius satisfying the inequality eq.~(\ref{range}).
From this point of view, when $0 < T_{h} < T_{c2}$, the black hole exists with its horizon radius ranging from the extremal one to a larger one associated with $T_{c2}$, and when $0 < T_{h} < T_{c1}$, the corresponding range of black hole horizon radius is from the extremal horizon radius to infinity, as expected. \section{Summary}\label{sec4} In Section \ref{sec2}, the noncommutativity is imposed~\cite{NSS} on the high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole in terms of the non-Gaussian smeared matter distribution~\cite{Park}, the condition for the existence of extreme black holes is derived, and the radii of extreme black holes are obtained for different dimensions $n$ and powers $k$. In order for this noncommutative black hole to form, we consider the {\em hoop conjecture}, that is, the extremal horizon radius must be larger than the matter mean radius. Under this requirement, we derive the allowed values of $k$ at a fixed dimension $n$ in the two ranges of the parameter $b=2\sqrt{\theta}/l$, see Table \ref{biao1} and Table \ref{biao2}. In particular, we indicate that the Gaussian smeared matter distribution is not applicable to the 6- and higher-dimensional Schwarzschild-Tangherlini anti-de Sitter black holes. Moreover, from the point of view of thermodynamics, the {\em hoop conjecture} ensures that the smallest black hole formed (the extreme black hole) has zero temperature and zero heat capacity, which coincides with the self-regularity of the noncommutative black holes~\cite{NSS}, as expected. In Section \ref{sec3}, the thermodynamic quantities of this noncommutative black hole are calculated, such as the Hawking temperature, heat capacity, entropy, Gibbs free energy and equation of state; in particular, the phase transition is analyzed in detail in the isobaric process. It is found that there exist two phase transitions: the first happens from a locally stable phase to a locally unstable one at a small horizon radius $x_c$, and the second occurs from a locally unstable phase to a locally stable one at a large horizon radius $x_c|_{{x_h}\uparrow}$. When the parameter $b$ gradually increases, the two locations $x_c$ and $x_c|_{{x_h}\uparrow}$ where the phase transitions occur approach each other, and no phase transition occurs once $b$ is equal to or larger than the critical value $b_c$, resulting in this black hole being in a locally stable configuration. Table \ref{biao4} and Table \ref{biao3} show clearly that the critical radius $x_c$ is close to the extremal radius $x_0$, indicating that the first phase transition occurs in the near-extremal region, while the second happens at a large horizon radius. The two tables\footnote{In fact, we have calculated all numerical data for $n$ from 4 to 11 and $k$ from $0$ to 50. Only one third of the data is presented in the two tables, which is enough to show the tendencies of the critical radius and the extremal radius.} also show that an anomalous trend of critical radii exists in the first phase transition for the 6- and higher-dimensional black holes, but no such anomaly exists in the second phase transition. On the other hand, our analysis of the entropy and Gibbs free energy indicates that the Gibbs free energy is negative when the horizon radius becomes large or the noncommutative parameter goes to zero, i.e. the commutative black hole is locally stable.
Moreover, for the isothermal process, the equation of state described by eq.~(\ref{state}) reveals that the Maxwell equal area law holds for the noncommutative black hole when the Hawking temperature satisfies the inequality $T_{c2} \leq T_h < T_{c1}$, but fails when the Hawking temperature goes above the critical point $T_{c1}$ or below the other critical point $T_{c2}$. Finally, we point out that the new characteristics of the noncommutative black hole mentioned above originate solely from the specific mass distribution (eq.~(\ref{masfenbu})) together with its related formulation of the extremal radius $x_0$ (eqs.~(\ref{extreradi})-(\ref{tezheng})). It can be seen clearly from eq.~(\ref{masfenbu}) that $m(x)$ goes to the limit $M$ as $x$ becomes large, where $x:={r}/({2\sqrt{\theta}})$. That is, the behavior of the noncommutative black hole is uniquely determined by the property of $m(x)$, together with its induced $G(n,k;x)$, at small $x$ close to the extremal horizon radius. \section*{Acknowledgments} Y-GM would like to thank W. Lerche of PH-TH Division of CERN for kind hospitality. This work was supported in part by the National Natural Science Foundation of China under grant No.11175090 and by the Ministry of Education of China under grant No.20120031110027. Finally, the authors would like to thank the anonymous referee for helpful comments that have greatly improved this work. \section*{Appendix} The lower incomplete gamma function $\gamma(a,x)$ is defined by \begin{equation} \gamma(a,x):=\int_0^{x}t^{a-1}e^{-t}dt, \tag{A1} \end{equation} where $x>0$ and $a >0$. Its asymptotic behaviors take the forms, \begin{equation} \gamma(a,x) \rightarrow \frac{x^a}{a} \qquad \text{if} \qquad x\rightarrow 0, \tag{A2} \end{equation} \begin{equation} \gamma(a,x) \rightarrow \Gamma(a) \qquad \text{if} \qquad x\rightarrow \infty, \tag{A3} \end{equation} where $\Gamma(a):=\int_0^{\infty} t^{a-1} e^{-t} dt$. We have introduced a new function $G(n,k;x)$, defined by eq.~(\ref{tezheng}) for $x>0$, $n \geq 4$, and non-negative integer $k$. It can be checked that the first-order derivative of $G(n,k;x)$ with respect to $x$ is negative, i.e. $G^{\prime}(n,k;x)<0$ for $x>0$, which implies that $G(n,k;x)$ is monotonically decreasing for fixed $n$ and $k$ in the region $x>0$. Based on the asymptotic behaviors of $\gamma(a,x)$, we now analyze $G(n,k;x)$. When $x$ is small, we expand $G(n,k;x)$ in a Taylor series, \begin{equation} G(n,k;x)=(n+k-1)+\left(\frac{4}{n+k+1}-2\right)x^2+\frac{8(n+k-1)}{(n+k+1)^2 (n+k+3)}x^4+\mathcal{O}(x^6), \tag{A4} \end{equation} from which its small-$x$ asymptotic behavior is \begin{equation} G(n,k;x)\simeq (n+k-1)+\left(\frac{4}{n+k+1}-2\right)x^2. \tag{A5} \end{equation} Moreover, when $x\rightarrow \infty$, the asymptotic behavior of $G(n,k;x)$ reads \begin{equation} G(n,k;x) \rightarrow 0. \tag{A6} \end{equation}
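As a numerical illustration (ours, not part of the original paper), the expansion (A4) and the limit (A6) can be checked directly, again under the assumption $G(n,k;x)=2x^{n+k-1}e^{-x^2}/\gamma\big(\frac{n+k-1}{2},x^2\big)$, whose leading terms reproduce (A4):
\begin{verbatim}
# Consistency check (a sketch, not from the paper) of the small-x
# expansion (A4) and the large-x limit (A6), assuming
#   G(n,k;x) = 2 x^{n+k-1} e^{-x^2} / gamma_lower((n+k-1)/2, x^2).
import numpy as np
from scipy.special import gamma, gammainc

def G(n, k, x):
    a = 0.5 * (n + k - 1)
    return 2.0 * x**(n + k - 1) * np.exp(-x**2) / (gamma(a) * gammainc(a, x**2))

n, k, x = 5, 3, 0.05
series = (n + k - 1) + (4.0 / (n + k + 1) - 2.0) * x**2 \
         + 8.0 * (n + k - 1) / ((n + k + 1)**2 * (n + k + 3)) * x**4
print(G(n, k, x), series)   # agree to O(x^6), cf. (A4)
print(G(n, k, 10.0))        # ~ 0, cf. (A6)
\end{verbatim}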
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The $b$-family of Camassa-Holm equations (namely the $b$-CH equation) \begin{equation}\label{CH} u_t-u_{txx}+(b+1)uu_x=bu_xu_{xx}+uu_{xxx}, \end{equation} was introduced by Degasperis and Dullin et al. in \cite{Degasperis2,Dullin} using transformations of the integrable hierarchy of KdV equations, where $u=u(t,x)$ is the scalar velocity variable and $b$ is an arbitrary parameter. The Camassa-Holm equation is used to describe the unidirectional propagation of water waves on a free surface in shallow water. Degasperis and Procesi \cite{Degasperis1} proved that the $b$-CH equation \eqref{CH} is not integrable in general, but it includes both integrable cases as special cases: $b=2$, called the Camassa-Holm equation \cite{Camassa,Liu2}, and $b=3$, called the Degasperis-Procesi equation \cite{Degasperis1,Lundmark}. Furthermore, the Camassa-Holm and Degasperis-Procesi equations, as important models of shallow water waves with breaking phenomena, were studied in \cite{Constantin1,Constantin2,Liu,Whitham}. Travelling waves are usually divided into the following four categories: (i) stationary wave solutions, (ii) travelling wavefronts, (iii) solitons, (iv) periodic wave solutions. Both peaked and smooth solitary waves exist in the $b$-CH equation \eqref{CH}, depending on the parameter $b$ and the parameter $k$ ($k$ is related to the critical shallow water speed), and both belong to category (iii), solitons. Many methods are available for finding travelling waves. The use of the qualitative theory of differential equations and the bifurcation method of dynamical systems to study travelling waves was first put forward by Liu and Li in \cite{Liu1}, who showed that solitary waves correspond to homoclinic orbits in the bifurcation phase diagrams of planar Hamiltonian systems. In \cite{Guo}, the bifurcation method of dynamical systems and the numerical simulation approach of differential equations were used to investigate travelling waves of the $b$-CH equation \eqref{CH}. Lately, the travelling waves of the $b$-CH equation \eqref{CH} were studied by Barnes and Hone in \cite{Barnes} using a hodograph transformation. Regarding the stability theory of solitary waves, Grillakis, Shatah and Strauss developed an abstract and complete theory of the orbital stability of solitary waves and found sharp conditions for the stability and instability of solitary waves in \cite{Grillakis}. Many results on the orbital stability of solitary waves of the $b$-CH equation \eqref{CH} have been obtained with this method. For zero asymptotic value ($k=0$), the initial data were decomposed by numerical simulations into a sequence of peaked solitary waves called \textit{peakons} for $b>1$ and a sequence of smooth solitary waves called \textit{leftons} for $b<-1$ in \cite{Holm1,Holm2}. For $b\in(-1,1)$, a rarefactive wave with exponentially decaying tails is generated from the initial data. The orbital stability of \textit{leftons} for $b<-1$ in some exponentially weighted space, and of \textit{peakons} for $b=2$ in the energy space $H^1(\mathbb{R})$ and for $b=3$ in the energy space $L^2(\mathbb{R})\cap L^3(\mathbb{R})$, was studied in \cite{Hone}, \cite{Constantin3,Constantin4} and \cite{Lin}, respectively. For nonzero asymptotic value ($k\neq0$), Constantin, Strauss \cite{Constantin5} and Li, Liu, Wu \cite{Li} proved that the smooth solitary waves for $b=2$ and $b=3$ are orbitally stable by using the conserved energy integrals in the energy space, respectively. In addition, Liu et
al. in \cite{Liu2} showed that for $b=2$ the Camassa-Holm equation has a peakon solution, and the orbital stability of the peakon solution was discussed in \cite{Ouyang}. Recently, Lafortune and Pelinovsky \cite{Lafortune} deduced a precise condition for the orbital stability of the smooth solitary waves of the $b$-CH equation \eqref{CH} and verified the stability criterion analytically for $b=2$ and $b=3$ and numerically for every $b>1$. They noted that it remains open to verify the stability criterion analytically for every $b>1$, $c>0$, and $k\in(0, \frac{c}{b+1})$, where $c$ is a constant wave speed. Motivated by Lafortune and Pelinovsky \cite{Lafortune}, in this article we consider the orbital stability of the smooth solitary waves of the $b$-CH equation \eqref{CH} for every $b>1$ by means of the monotonicity of the period function for planar Hamiltonian systems. The main result of this paper is as follows: \begin{theorem} For every $b>1$, $c>0$ and $k\in(0, \frac{c}{b+1})$, the smooth solitary waves of the $b$-CH equation \eqref{CH} are orbitally stable. \end{theorem} The paper is organized as follows: Section 2 contains preparations and the transformation of the $b$-CH equation \eqref{CH} used to verify the stability criterion analytically for any $b>1$. In Section 3, we present a result for handling the stability criterion, and Section 4 is devoted to the proof of the main result. \section{Stability criterion and transformation of the $b$-CH equation}\label{sect-2} In this section, we collect the preparations and lemmas that will be used throughout this paper, and we show the transformation of the smooth solitary waves (the homoclinic orbits) into the first integral of a planar Hamiltonian system. If we plug $u(t,x)=\phi(x-ct)$ into \eqref{CH}, then the $b$-CH equation \eqref{CH} becomes the following third-order differential equation \begin{equation}\label{ode3} -(c-\phi)(\phi'''-\phi')+b\phi'(\phi''-\phi)=0, \end{equation} where $\phi:=\phi(x)$. Integrating in $x$ yields the second-order equation: \begin{equation}\label{ode4} (c-\phi)(\phi-\phi'')+\frac{1}{2}(b-1)(\phi'^{2}-\phi^2)=ck-\frac{1}{2}(b+1)k^2. \end{equation} The following lemma summarizes the existence of smooth solitary waves for the $b$-CH equation \eqref{CH}. \begin{lemma}\label{le1}{\rm(see \cite{Lafortune})} For fixed $b>1$ and $c>0$, there exists a one-parameter family of smooth solitary waves with profile $\phi\in C^{\infty}(\mathbb{R})$ satisfying $\phi'(0)=0$ and $\phi(x)\to k$ as $|x|\to\infty $ if and only if the parameter $k$ belongs to the interval $(0,\frac{c}{b+1})$. Moreover, \begin{equation}\label{1} 0<\phi(x)<c, \ \ x\in\mathbb{R}, \end{equation} and the family is smooth with respect to the parameter $k$ in $(0,\frac{c}{b+1})$. \end{lemma} Let us now give the definition of orbital stability of the smooth solitary waves for the $b$-CH equation \eqref{CH}. According to the analysis of \cite{Lafortune}, the $b$-CH equation \eqref{CH} takes the following form in terms of the momentum density $m:=u-u_{xx}$, \begin{equation}\label{m} m_t+um_x+bmu_x=0, \end{equation} where $m\in X_k$, $X_k=\{m-k\in H^1(\mathbb{R}): \ m(x)>0, x\in\mathbb{R}\}$, and $H^1(\mathbb{R})$ is the Sobolev space based on $L^2(\mathbb{R})$. \begin{definition}\label{de1} Let $m(t,x)=\mu(x-ct)$ be a travelling wave solution of the $b$-CH equation \eqref{m} with $\mu\in X_{k}$.
We say that the travelling wave is orbitally stable in $X_{k}$ if for every $\varepsilon>0$ there exists $\delta>0$ such that for every $m_0\in X_{k}$ satisfying $\|m_0-\mu\|_{H^1}<\delta$, there exists a unique solution $m\in C^0(\mathbb{R},X_k)$ of the $b$-CH equation \eqref{m} with the initial datum $m(0,\cdot)=m_0$ satisfying $$\inf\limits_{x_0\in\mathbb{R}}\|m(t,\cdot)-\mu(\cdot-x_0)\|_{H^1}<\varepsilon, \ t\in\mathbb{R}.$$ \end{definition} The following lemma gives the stability criterion of smooth solitary waves for the $b$-CH equation \eqref{CH}. \begin{lemma}\label{le2}{\rm(see \cite{Lafortune})} For fixed $b>1$, $c>0$, and $k\in(0,\frac{c}{b+1})$, there exists a unique solitary wave $m(t,x)=\mu(x-ct)$ of the $b$-CH equation \eqref{m} with profile $\mu\in C^{\infty}(\mathbb{R})$ satisfying $\mu(x)>0$ for $x\in\mathbb{R}$, $\mu'(0)=0$, and $\mu(x)\to k$ as $|x|\to \infty$ exponentially fast. The solitary wave is orbitally stable in $X_k$ if the mapping \begin{equation}\label{Q} k\mapsto Q(\phi):=\int_{\mathbb{R}}\Big(b(\frac{c-k}{c-\phi})-(\frac{c-k}{c-\phi})^{b}-b+1\Big)dx \end{equation} is strictly increasing, where $\phi:=k+(1-\partial^2_x)^{-1}(\mu-k)$ is uniquely defined. \end{lemma} Thus the orbital stability of smooth solitary waves for the $b$-CH equation \eqref{m} is determined by the condition $\frac{dQ(\phi)}{dk}>0$. Furthermore, Lemma \ref{le2} implies that the travelling wave solution $u(t,x)=\phi(x-ct)$ of the $b$-CH equation \eqref{CH} is orbitally stable in $Y_k$, where $Y_k=\{u-k\in H^3(\mathbb{R}): \ u(x)-u''(x)>0, \ x\in\mathbb{R}\}$. From \cite{Lafortune}, we obtain the normalized form of the second-order equation \eqref{ode4} after some transformations, \begin{equation}\label{ode2} -\varphi''+\varphi(1-\varphi)^{b-2}\big(1-\frac{b+1}{2\gamma}\varphi\big)=0, \ \ \varphi\neq1, \end{equation} where $$ \zeta=\sqrt{c-k(b+1)}(c-k)^{\frac{b-2}{2}}z, \ \ \psi(z)=k+(c-k)\varphi(\zeta), $$ $$ z=\int_0^x\frac{1}{(c-\phi(x))^{\frac{b-1}{2}}}dx, \ \ \phi(x)=\psi(z), $$ and \begin{equation}\label{r} \gamma:=\frac{c-k(b+1)}{c-k}. \end{equation} From \eqref{r}, it is easy to verify that $\gamma\in(0,1)$ owing to $k\in(0,\frac{c}{b+1})$. On account of $\frac{d\gamma}{dk}=\frac{-bc}{(c-k)^2}<0$ for $b>1, c>0$, the mapping \eqref{Q} is strictly increasing if and only if $\frac{dQ}{d\gamma}<0$, where \begin{equation}\label{2} \begin{split} Q(\phi)&=\int_{\mathbb{R}}\Big(b(\frac{\phi-k}{c-\phi})+1-(\frac{c-k}{c-\phi})^{b}\Big)dx\\ &=\gamma^{-\frac{1}{2}}\int_{\mathbb{R}}\Big(b\varphi(1-\varphi)^{\frac{b-3}{2}}+(1-\varphi)^{\frac{b-1}{2}} -(1-\varphi)^{-\frac{b+1}{2}}\Big)d\zeta. \end{split} \end{equation} For convenience, denote $x:=\varphi$, $t:=\zeta$; then the equation \eqref{ode2} can be written as \begin{equation}\label{ex} -x''+x(1-x)^{b-2}\big(1-\frac{b+1}{2\gamma}x\big)=0, \end{equation} and, letting $x'=y$, we obtain the planar system \begin{equation}\label{ex1} \left\{\begin{aligned} &\frac{dx}{dt}=y,\\ &\frac{dy}{dt}=x(1-x)^{b-2}\big(1-\frac{b+1}{2\gamma}x\big), \end{aligned} \right. \end{equation} with the first integral \begin{equation}\label{H1} \bar{H}(x,y)=\frac{(1-x)^{b-1}}{\gamma b(b-1)}\big(2(1-\gamma)+2(1-\gamma)(b-1)x+b(b-1)x^2\big)-y^2=\bar{h}. \end{equation} It is clear that there are two singular points, $(0,0)$ (a saddle point) and $(\frac{2\gamma}{b+1},0)$ (a center), and a singular line $x=1$, where $\frac{2\gamma}{b+1}<1$.
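One can check symbolically that \eqref{H1} is indeed a first integral of system \eqref{ex1}; the following minimal SymPy sketch (ours, for illustration only) computes $\frac{d\bar{H}}{dt}$ along the flow and simplifies it to zero:
\begin{verbatim}
# SymPy sketch (ours, not from the original text): verify that
# \bar{H} of eq. (H1) is conserved along the flow of system (ex1).
import sympy as sp

x, y, b, g = sp.symbols('x y b gamma', positive=True)

Hbar = (1 - x)**(b - 1) * (2*(1 - g) + 2*(1 - g)*(b - 1)*x
        + b*(b - 1)*x**2) / (g*b*(b - 1)) - y**2

xdot = y
ydot = x * (1 - x)**(b - 2) * (1 - (b + 1)*x/(2*g))

dHdt = sp.diff(Hbar, x)*xdot + sp.diff(Hbar, y)*ydot
print(sp.simplify(dHdt))   # -> 0, so \bar{H} is a first integral
\end{verbatim}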
\tikzset{ flow/.style = {decoration = {markings, mark=at position #1 with {\arrow{>}}}, postaction = {decorate} }} \captionsetup{font={scriptsize}} \tikzset{global scale/.style={ scale=#1, every node/.append style={scale=#1} } } \begin{figure}[htb] \centering \begin{tikzpicture}[global scale=0.8] \fill[black](0,0)circle(0.05); \draw[->] (-5,0)--(5,0); \draw(5,0)node[below]{$x$}; \draw[->](0,-3)--(0,3); \draw(0,3)node[above right]{$y$}; \draw (2,0)ellipse [x radius=0.4, y radius=0.2]; \draw (2,0)ellipse [x radius=0.8, y radius=0.6]; \draw[->](2,0.6)--(2.1,0.6); \draw (2,0)ellipse [x radius=1.2, y radius=1]; \draw[blue] (-0.3,-0.6) to [out=66,in=180] (2,1.2) to [out=0,in=90] (3.8,0); \draw[blue] (-0.3,0.6) to [out=-66,in=-180] (2,-1.2) to [out=-0,in=-90] (3.8,0); \draw[->](2,1.2)--(2.1,1.2); \draw (0,0)node[blue,left]{$(0,0)$}; \fill[black](2,0)circle(0.05); \draw (2,0)node[red,below]{$(\frac{2\gamma}{b+1},0)$}; \draw (4,-3)--(4,3); \draw (4,-1) node [right] {$x=1$}; \draw(2.5,0.2)node[above right]{$\Gamma_{\bar{h}}$}; \end{tikzpicture} \caption{Diagram of the homoclinic orbit and the period annulus, where $\Gamma_{\bar{h}}=\{(x,y)|\bar{H}(x,y)=\bar{h},\bar{h}\in(\bar{h}_c,\bar{h}_s)\}$.} \label{fig1} \end{figure} For $b > 1$ and $c > 0$, by translational invariance, the profiles of the smooth solitary waves (the homoclinic orbits) satisfy $\varphi'(0)=0$ and correspond to the level curve \begin{equation}\label{orbit} \frac{(1-x)^{b-1}}{\gamma b(b-1)}\big(2(1-\gamma)+2(1-\gamma)(b-1)x+b(b-1)x^2\big)-y^2=\frac{2(1-\gamma)}{\gamma b(b-1)}. \end{equation} Additionally, there exists a punctured neighbourhood of the center $(\frac{2\gamma}{b+1}, 0)$ enclosed by the homoclinic orbit connecting the saddle $(0, 0)$, and the largest such punctured neighbourhood is called \textit{the period annulus} of $(\frac{2\gamma}{b+1}, 0)$ (see Figure \ref{fig1}). Next, to better verify the stability criterion \eqref{2}, we consider a new system whose first integral contains the homoclinic orbits \eqref{orbit} as level curves. The homoclinic orbits \eqref{orbit} can be written as \begin{equation}\label{orbit1} \frac{A(x)}{B(x)}-\frac{b(b-1)y^2}{-B(x)}=\frac{1}{\gamma}, \end{equation} where \begin{equation}\label{AB} \begin{split} &A(x)=2(1-x)^{b-1}+2(b-1)x(1-x)^{b-1}-2,\\ &B(x)=A(x)+b(b-1)x^2(1-x)^{b-1}. \end{split} \end{equation} In order to separate the variables in \eqref{orbit1}, we choose new variables $z,\bar{u}$ as follows: $$ z=x, \ \ \bar{u}=\sqrt{\frac{b(b-1)}{-B(x)}}y. $$ Then the smooth solitary waves correspond to the level curve \begin{equation}\label{H0} \frac{A(z)}{B(z)}-\bar{u}^2=\frac{1}{\gamma}, \ \ \gamma\in(0,1). \end{equation} Setting $h=\frac{1}{\gamma}$ in the level curve \eqref{H0}, it is straightforward to show that \begin{equation}\label{H} H(z,\bar{u})=\frac{A(z)}{B(z)}-\bar{u}^2=h, \ h\in(1,\infty), \end{equation} is the first integral of the following new planar Hamiltonian system \begin{equation}\label{ex3} \left\{\begin{aligned} &\frac{dz}{d\tau}=2\bar{u},\\ &\frac{d\bar{u}}{d\tau}=\frac{2b(b-1)z(1-z)^{b-2}}{B^2(z)}\Big(2-(b+1)z-2(1-z)^b-(b-1)z(1-z)^b\Big), \end{aligned} \right. \end{equation} where $z=1$ is the singular line and $d\tau=\frac{1}{2}\sqrt{\frac{-B(z)}{b(b-1)}}dt$. The trajectories of the smooth solitary waves correspond to the first integral of the system \eqref{ex3}. We denote by $\Gamma_h=\{(z,\bar{u})|H(z,\bar{u})=h, 1<h<\infty\}$ the trajectories of the smooth solitary waves (see Figure \ref{fig2}).
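A symbolic check analogous to the one for \eqref{H1} confirms that $H(z,\bar{u})$ in \eqref{H} is conserved along the flow of system \eqref{ex3}; again this is our illustrative sketch, not part of the original argument:
\begin{verbatim}
# SymPy sketch (ours): verify that H = A/B - u^2 is a first
# integral of system (ex3).
import sympy as sp

z, u, b = sp.symbols('z u b', positive=True)

A = 2*(1 - z)**(b - 1) + 2*(b - 1)*z*(1 - z)**(b - 1) - 2
B = A + b*(b - 1)*z**2*(1 - z)**(b - 1)
f = 2 - (b + 1)*z - 2*(1 - z)**b - (b - 1)*z*(1 - z)**b

H = A/B - u**2
zdot = 2*u
udot = 2*b*(b - 1)*z*(1 - z)**(b - 2)*f/B**2

dHdtau = sp.diff(H, z)*zdot + sp.diff(H, u)*udot
print(sp.simplify(dHdtau))   # -> 0
\end{verbatim}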
\tikzset{ flow/.style = {decoration = {markings, mark=at position #1 with {\arrow{>}}}, postaction = {decorate} }} \captionsetup{font={scriptsize}} \tikzset{global scale/.style={ scale=#1, every node/.append style={scale=#1} } } \begin{figure}[htb] \centering \begin{tikzpicture}[global scale=0.8] \fill[black](0,0)circle(0.05); \draw[->] (-5,0)--(5,0); \draw(5,0)node[below]{$z$}; \draw[->](0,-3)--(0,3); \draw(0,3)node[above right]{$\bar{u}$}; \draw[blue] (0.1,3) to [out=-90,in=-180] (1,0.8) to [out=0,in=90] (3.6,0); \draw[blue] (0.1,-3) to [out=90,in=-180] (1,-0.8) to [out=-0,in=-90] (3.6,0); \draw[->](1,0.8)--(1.1,0.8); \draw[blue] (0.3,3) to [out=-90,in=-180] (1.2,1) to [out=0,in=90] (3.8,0); \draw[blue] (0.3,-3) to [out=90,in=-180] (1.2,-1) to [out=-0,in=-90] (3.8,0); \draw[->](1.2,1)--(1.3,1); \draw[blue] (0.4,3) to [out=-90,in=-180] (1.4,1.2) to [out=0,in=90] (4,0); \draw[blue] (0.4,-3) to [out=90,in=-180] (1.4,-1.2) to [out=-0,in=-90] (4,0); \draw[->](1.4,1.2)--(1.5,1.2); \draw (0,0)node[blue,left]{$(0,0)$}; \draw (4.3,-3)--(4.3,3); \draw (4.3,-1) node [right] {$z=1$}; \draw(2.5,0.3)node[above right]{$\Gamma_h$}; \end{tikzpicture} \caption{Diagram of the smooth solitary waves corresponding to the level curves $\Gamma_{h}$.} \label{fig2} \end{figure} Using \eqref{AB}, in view of $z\in(0,1)$ and $b>1$, it is easy to check that \begin{equation}\label{8} \begin{split} & A'(z)=-2b(b-1)z(1-z)^{b-2}<0,\\ & B'(z)=-b(b-1)(b+1)z^2(1-z)^{b-2}<0,\\ &A''(z)=2b(b-1)(1-z)^{b-3}\big((b-1)z-1\big),\\ &B''(z)=b(b-1)(b+1)z(1-z)^{b-3}(bz-2), \end{split} \end{equation} and $A(z)<0$, $B(z)<0$ thanks to $A(0)=0, B(0)=0$. For the system \eqref{ex3}, denote $f(z)=2-2(1-z)^b-(b+1)z-(b-1)z(1-z)^b$; it can easily be proved that \begin{equation}\label{f} f'(z)=(b+1)\big((1-z)^{b-1}+(b-1)z(1-z)^{b-1}-1\big)=\frac{b+1}{2}A(z). \end{equation} Since $z\in(0,1)$, we obtain $f'(z)<0$ and $f(z)<0$ due to $f(0)=0$. That is, the system \eqref{ex3} has no singular point for $z\in(0,1)$. Moreover, by \eqref{2}, it follows that \begin{equation} \begin{split} Q(\phi)=&h^{\frac{1}{2}}\int_{\mathbb{R}}\Big(bz(1-z)^{\frac{b-3}{2}}+(1-z)^{\frac{b-1}{2}} -(1-z)^{-\frac{b+1}{2}}\Big)dt\\ =&h^{\frac{1}{2}}\int_{\mathbb{R}}\frac{(1-z)^{-\frac{b+1}{2}}}{2}A(z)dt\\ =&h^{\frac{1}{2}}\int_{\Gamma_h}\frac{A(z)(-B)^{\frac{3}{2}}(z)} {2b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}z(1-z)^{\frac{3b-3}{2}}f(z)}d\bar{u}, \end{split} \end{equation} owing to \eqref{ex3}. For simplicity, we denote $A(z)$, $B(z)$ and $f(z)$ by $A$, $B$ and $f$, respectively. That is, \begin{equation}\label{3} Q(\phi)=h^{\frac{1}{2}}\int_{\Gamma_h}\frac{A(-B)^{\frac{3}{2}}} {2b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}z(1-z)^{\frac{3b-3}{2}}f}d\bar{u}, \end{equation} namely the $Q$ function. In view of $\frac{dh}{d\gamma}=-\frac{1}{\gamma^2}<0$, the mapping $Q(\phi)$ \eqref{Q} is strictly increasing if and only if $\frac{dQ}{dh}>0$. \section{Preparations for the analysis of the stability criterion}\label{sect-3} To address the monotonicity of the $Q$ function \eqref{3}, we need the following preparations. As stated in the previous section, we only need to verify that $\frac{dQ}{dh}>0$ for the planar Hamiltonian system \eqref{ex3}. We find that the $Q$ function is very similar to the \textit{period function} (the period function assigns to each orbit in the period annulus its period) of the center of those planar differential systems for which the first integral $H(x,y)$ has separable variables, i.e., $H(x,y)=F_1(x)+F_2(y)$ (see Figure \ref{fig1}).
In the literature, much attention has been paid to the centers and the monotonicity of period functions of planar quadratic polynomial systems, see for example \cite{Chicone2,Coppel,Garijo,Gasull,Li1,Zhao} and references therein. In \cite{Jordi}, Villadelprat and Zhang considered the monotonicity of the period function of planar Hamiltonian differential systems with first integral $H(x,y)=F_1(x)+F_2(y)$, where the period function can be written as $$T(\bar{h})=\int_{\Gamma_{\bar{h}}}dt=\int_{\Gamma_{\bar{h}}}\frac{1}{F'_1(y)}dx.$$ Later, the monotonicity of period functions of the form $$T(\bar{h})=\int_{\Gamma_{\bar{h}}}\frac{g(x)}{l(y)}dx,$$ with first integral $H(x,y)=F_1(x)+F_2(y)$, was studied in our previous work \cite{Long}. To solve the convergence problem, we multiply the period function $T(\bar{h})$ by $\bar{h}$ and take the derivative of $\bar{h}T(\bar{h})$ with respect to $\bar{h}$. In the following proof, we shall adopt the same procedure as in the proof of the monotonicity of the period function of planar Hamiltonian differential systems. Similarly, for the $Q$ function, denote $$Q(\phi)=h^{\frac{1}{2}}\int_{\Gamma_h}\frac{A(-B)^{\frac{3}{2}}} {2b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}z(1-z)^{\frac{3b-3}{2}}f}d\bar{u}\triangleq h^{\frac{1}{2}}\int_{\Gamma_h}g(z)d\bar{u},$$ and we present the following result for the $Q$ function, which will be applied to prove the main result in the next section. \begin{lemma}\label{le3} Assume that for $b>1$ the following two hypotheses hold: \begin{itemize} \item [{\rm(H1)}] $2(1-z)A'f+(b-1)Af-(1-z)Af'>0$, \ $z\in(0,1)$, \item [{\rm(H2)}] $\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B<0$, \ $z\in(0,1)$. \end{itemize} Then the mapping $Q(\phi)$ \eqref{Q} is strictly increasing with respect to $k$. \end{lemma} \begin{proof} It is sufficient to verify that $\frac{dQ}{dh}>0$ for the planar Hamiltonian system \eqref{ex3}. For $Q(\phi)=h^{\frac{1}{2}}\int_{\Gamma_h}g(z)d\bar{u}$, multiplying both sides by $h^{\frac{1}{2}}$, we obtain \begin{equation}\label{Q1} h^{\frac{1}{2}}Q(\phi)=h\int_{\Gamma_h}g(z)d\bar{u}, \end{equation} and \begin{equation}\label{5} \begin{split} h^{\frac{1}{2}}Q(\phi)=&\int_{\Gamma_h}\big(\frac{A}{B}(z)-\bar{u}^2\big)g(z)d\bar{u}\\ =&\int_{\Gamma_h}\big(\frac{A}{B}g\big)(z)d\bar{u} -\int_{\Gamma_h}g(z)\bar{u}^2d\bar{u}\\ \triangleq&I_1(h)-I_2(h), \end{split} \end{equation} by virtue of the first integral \eqref{H}.
Taking the derivative with respect to $h$ on both sides of the above equality, we have \begin{equation}\label{6} \frac{1}{2}h^{-\frac{1}{2}}Q(\phi)+h^{\frac{1}{2}}\frac{dQ(\phi)}{dh}=I'_1(h)-I'_2(h), \end{equation} where \begin{equation} \begin{split} I'_1(h)=&\int_{\Gamma_h}\frac{1}{2b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}}\cdot\Big(\frac{A^2}{(1-z)^{b-1}f}\Big)' \frac{\partial z}{\partial h}\cdot\frac{-(-B)^{\frac{1}{2}}}{z(1-z)^{\frac{1}{2}(b-1)}}d\bar{u}\\ &+\int_{\Gamma_h}\frac{1}{2b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}}\cdot\frac{A^2}{(1-z)^{b-1}f} \cdot\Big(\frac{-(-B)^{\frac{1}{2}}}{z(1-z)^{\frac{1}{2}(b-1)}}\Big)'\frac{\partial z}{\partial h}d\bar{u}\\ =&\int_{\Gamma_h}\frac{-A(-B)^{\frac{5}{2}}\big(2(1-z)A'f+(b-1)Af-(1-z)Af'\big)} {4b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^2(1-z)^{\frac{5b-5}{2}}f^3}d\bar{u}\\ &+\int_{\Gamma_h}\frac{A^2(-B)^{\frac{3}{2}}\big(\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B\big)} {4b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^3(1-z)^{\frac{5b-5}{2}}f^2}d\bar{u}, \end{split} \end{equation} and \begin{equation} \begin{split} I'_2(h)=&\Big(\int_{\Gamma_h}\frac{b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}A}{2(1-z)^{\frac{b+1}{2}}\sqrt{-B}}\bar{u}dz\Big)' =\int_{\Gamma_h}\frac{b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}A}{2(1-z)^{\frac{b+1}{2}}\sqrt{-B}}\cdot\frac{\partial \bar{u}}{\partial h}dz\\ =&\int_{\Gamma_h}-\frac{b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}A}{2(1-z)^{\frac{b+1}{2}}\sqrt{-B}}\cdot\frac{1}{2\bar{u}}dz\\ =&-\int_{\Gamma_h}\frac{A(-B)^{\frac{3}{2}}}{4b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}z(1-z)^{\frac{3b-3}{2}}f}d\bar{u}, \end{split} \end{equation} due to \eqref{H} and \eqref{ex3}. In addition, we must verify that the integrals $I'_1(h)$ and $I'_2(h)$ are well defined along the orbit $\Gamma_h$. It is easy to verify that for $\bar{u}\to0$ the denominators of $I'_1(h)$ and $I'_2(h)$ do not vanish; that is, near $\bar{u}=0$ they are well-defined Riemann integrals. It is also easy to see that $\bar{u}\to+\infty$ is a singularity of $I'_1(h)$ and $I'_2(h)$. Using the first integral \eqref{H}, we have $$\bar{u}^2=\frac{A(z)}{B(z)}-h,$$ and, by the Lagrange inversion theorem, $$ z=\frac{3}{(b+1)\bar{u}^2}+O\Big(\frac{1}{\bar{u}^4}\Big), \ \ \bar{u}\to+\infty. $$ For \begin{equation}\label{I} -\int_{0}^{+\infty}\frac{-A(-B)^{\frac{5}{2}}\big(2(1-z)A'f+(b-1)Af-(1-z)Af'\big)} {2b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^2(1-z)^{\frac{5b-5}{2}}f^3}d\bar{u}, \end{equation} the Taylor expansions of the numerator and the denominator of the integrand of \eqref{I} at the origin give the form $$ -\int_{0}^{+\infty}\frac{\alpha_1z^{15}+o(z^{16})}{\alpha_2z^{\frac{25}{2}}+o(z^{\frac{27}{2}})}d\bar{u}, $$ where $\alpha_1, \alpha_2$ are parameters related to $b$. It follows that the integral \eqref{I} is absolutely convergent for $\bar{u}\to+\infty$ ($z\to0$), i.e., the integral \eqref{I} is well defined. Similarly, we can check that the power of $z$ in the numerator is higher than that in the denominator for $I'_1(h)$ and $I'_2(h)$, thanks to \eqref{AB} and \eqref{8}; that is, the integrals $I'_1(h)$ and $I'_2(h)$ are well defined.
By \eqref{3}, it is simple to show that $$\frac{1}{2}h^{-\frac{1}{2}}Q(\phi)= \int_{\Gamma_h}\frac{A(-B)^{\frac{3}{2}}}{4b^{\frac{1}{2}}(b-1)^{\frac{1}{2}}z(1-z)^{\frac{3b-3}{2}}f}d\bar{u}=-I'_2(h).$$ It follows that \begin{equation}\label{7} \begin{split} h^{\frac{1}{2}}\frac{dQ(\phi)}{dh}=I'_1(h)=&\int_{\Gamma_h}\frac{-A(-B)^{\frac{5}{2}}\big(2(1-z)A'f+(b-1)Af-(1-z)Af'\big)} {4b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^2(1-z)^{\frac{5b-5}{2}}f^3}d\bar{u}\\ &+\int_{\Gamma_h}\frac{A^2(-B)^{\frac{3}{2}}\big(\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B\big)} {4b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^3(1-z)^{\frac{5b-5}{2}}f^2}d\bar{u}\\ =&-\int_{0}^{+\infty}\frac{-A(-B)^{\frac{5}{2}}\big(2(1-z)A'f+(b-1)Af-(1-z)Af'\big)} {2b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^2(1-z)^{\frac{5b-5}{2}}f^3}d\bar{u}\\ &-\int_{0}^{+\infty}\frac{A^2(-B)^{\frac{3}{2}}\big(\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B\big)} {2b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^3(1-z)^{\frac{5b-5}{2}}f^2}d\bar{u}. \end{split} \end{equation} On account of $$\frac{-A(-B)^{\frac{5}{2}}}{2b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^2(1-z)^{\frac{5b-5}{2}}f^3}<0,$$ and $$\frac{A^2(-B)^{\frac{3}{2}}}{2b^{\frac{3}{2}}(b-1)^{\frac{3}{2}}z^3(1-z)^{\frac{5b-5}{2}}f^2}>0,$$ and according to $(H1)$ and $(H2)$, we obtain $\frac{dQ(\phi)}{dh}>0$. That is, the mapping $Q(\phi)$ \eqref{Q} is strictly increasing with respect to $k$. The proof is completed. \end{proof} \section{The proof of the main result}\label{sect-4} In this section, we show analytically for any $b>1$ that the stability criterion is satisfied and that the smooth solitary waves of the $b$-CH equation \eqref{CH} are orbitally stable. \begin{proof} From Lemma \ref{le3}, it is enough to check $(H1)$ and $(H2)$. $\bullet$ Firstly, we verify $(H2)$. From \eqref{8}, it follows that \begin{equation} \begin{split} &\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B\\ =&(1-z)^{b-1}\big((b-1)z^2+(3-b)z-2\big)-(b+1)z+2\\ \triangleq&R(z), \end{split} \end{equation} and it can easily be checked that $R(1)=-b+1<0$. The Taylor expansion of the function $R$ at the origin has the form $$ R(z)=\frac{b}{6}(1-b)(1+b)z^3+o(z^3). $$ Thus, we have $R'(0)=R''(0)=0$, $R^{(3)}(0)=b(1-b)(1+b)<0$ and, as $z\to0^+$, $\frac{R(z)}{z^3}<0$. To determine the sign of $R(z)$, first take $b=3$; we obtain, for $z\in(0,1)$, $$R(z)=2z^3(z-2)<0.$$ Taking the derivative of $R(z)$ with respect to $z$, we have \begin{equation} R'(z)=(b+1)(1-z)^{b-1}\big((b-1)z+1\big)-(b+1). \end{equation} In order to prove the theorem, we assert that if the two equations $R(z)=0$ and $R'(z)=0$ have no common roots for $z\in(0,1)$, $b>1$, then $R(z)<0$ for $z\in(0,1)$, $b>1$. If the assertion did not hold, there would exist a parameter $b_1\in(1,+\infty)$ such that $\frac{R(z)}{z^3}>0$ somewhere in $(0,1)$. By the continuous dependence of the solution on the parameters, and since $\frac{R(z)}{z^3}<0$ for $b=3$, $z\in(0,1)$, we could then find another parameter $b_0\in(1,+\infty)$ such that $R(z)=0$ and $R'(z)=0$ have a common root in $(0,1)$. This leads to a contradiction, so the assertion holds (see Figure \ref{fig3}).
\tikzset{ flow/.style = {decoration = {markings, mark=at position #1 with {\arrow{>}}}, postaction = {decorate} }} \captionsetup{font={scriptsize}} \tikzset{global scale/.style={ scale=#1, every node/.append style={scale=#1} } } \begin{figure}[htb] \centering \begin{tikzpicture}[global scale=0.8] \fill[black](0,0)circle(0.05); \draw[->] (-2,0)--(5,0); \draw(5,0)node[below]{$z$}; \draw[->](0,-4)--(0,2); \draw(4,-4)--(4,2); \draw(0,2)node[above right]{$\frac{R(z)}{z^3}$}; \fill[black] (4,0)circle(0.05); \draw[red](0,0)node[below left]{$0$}; \draw[red](4,0)node[below right]{$1$}; \draw[black] (-1,-4) to [out=45,in=180] (2,-1) to [out=0,in=120] (4.5,-3.5); \draw(4.5,-3.5)node[right]{$b=3$}; \draw[dashed] (-1,-3) to [out=45,in=180] (2,0) to [out=0,in=120] (4.5,-2.5); \draw(4.5,-2.5)node[right]{$b=b_0$}; \draw[black] (-1,-2) to [out=45,in=180] (2,1) to [out=0,in=120] (4.5,-1.5); \draw(4.5,-1.5)node[right]{$b=b_1$}; \end{tikzpicture} \caption{Diagram of the function $\frac{R(z)}{z^3}$, $z\in(0,1)$; the dashed line corresponds to the case where $R(z)=0$ and $R'(z)=0$ have one common root.} \label{fig3} \end{figure} In other words, we only need to prove that $R(z)=0$ and $R'(z)=0$ have no common roots on $(0,1)$. Substituting $\nu$ for $(1-z)^{b-1}$ in $R(z)$ and $R'(z)$, we get $$R(z)=\nu\big((b-1)z^2+(3-b)z-2\big)-(b+1)z+2,$$ and $$R'(z)=(b+1)\nu\big((b-1)z+1\big)-(b+1),$$ where $R(z)$ and $R'(z)$ are now regarded as functions of $z, \nu$, with $z\in(0,1)$, $\nu\in(0,1)$. With the help of elimination via the resultant, the common roots of $R(z)=0$ and $R'(z)=0$ satisfy the following equation $$b(b+1)(b-1)z^2=0.$$ It is simple to check that for $b>1$ and $z\in(0,1)$, $b(b+1)(b-1)z^2>0$. It follows that $R(z)=0$ and $R'(z)=0$ have no common roots for $z\in(0,1)$. Thus, as hoped, for $b>1$ we have $\frac{1}{2}z(1-z)B'+\frac{1}{2}(b-1)zB-(1-z)B<0$, $z\in(0,1)$, i.e., $(H2)$ holds. $\bullet$ The next thing to do in the proof is to verify $(H1)$. We proceed as in the proof of $(H2)$. If we plug $\nu=(1-z)^{b-1}$ into $2(1-z)A'f+(b-1)Af-(1-z)Af'$, it can easily be shown that \begin{equation} \begin{split} &2(1-z)A'f+(b-1)Af-(1-z)Af'\\ =&\big(2(b-1)^2z^2-2(b^2-5b+2)z-6b+2\big)\nu^2\\ &+\big(2b(b-1)^2z^2-4(3b-1)z+12b-4\big)\nu+2b(b+1)z-6b+2\\ \triangleq&P(z), \end{split} \end{equation} thanks to \eqref{8}, and it can easily be verified that $P(1)=2(b-1)^2>0$. The Taylor expansion of the function $P$ with respect to $z$ at the origin has the form $$ P(z)=\frac{1}{6}b^2(b+1)(b-1)^2z^4+o(z^4). $$ Thus, we have $P'(0)=P''(0)=P^{(3)}(0)=0$, $P^{(4)}(0)=4b^2(b+1)(b-1)^2>0$ and, as $z\to0^+$, $\frac{P(z)}{z^4}>0$. Supposing that $b=2$, it is evident that $P(z)=2z^4>0$ for $z\in(0,1)$. Taking the derivative of $P(z)$ with respect to $z$, we get \begin{equation} \begin{split} (1-z)P'(z)=&(1-z)\big((b-3)A'f+2(1-z)A''f+bAf'\big)\\ =&\big(-4b(b-1)^2z^2+2b(2b^2-9b+5)z+10b^2-6b\big)\nu^2\\ &+\big(-2b(b+1)(b-1)^2z^2+4b^2(b+1)z-12b^2+4b\big)\nu\\ &-2b(b+1)z+2b(b+1)\\ \end{split} \end{equation} due to \eqref{8}. It follows that if, for $b>1$, the two equations $P(z)=0$ and $P'(z)=0$ have no common roots, then $P(z)>0$ for $z\in(0,1)$. The common roots of $P(z)=0$ and $P'(z)=0$ satisfy the following equation $$ z^4\big((b-1)^3z^2+(-12b+4)z+(12b-4)\big)\triangleq z^4l(z)=0. $$ Since $b>1$, it can easily be verified that $$l(z)=(b-1)^3z^2+(12b-4)(1-z)>0, \ z\in(0,1),$$ i.e., $P(z)>0$ for $z\in(0,1)$, $b>1$. Therefore, we obtain $2(1-z)A'f+(b-1)Af-(1-z)Af'>0$ for $z\in(0,1)$, $b>1$, i.e., $(H1)$ holds. Based on the above analyses, the two hypotheses $(H1)$ and $(H2)$ hold.
Hence, by Lemma \ref{le2}, for any $b>1$, $c>0$ and $k\in(0, \frac{c}{b+1})$, the stability criterion is verified analytically and the smooth solitary waves are orbitally stable. This completes the proof. \end{proof} \subsection*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (No. 12171491).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:introduction} The Milky Way nuclear star cluster (NSC) is the closest galactic nucleus and has therefore been the target of detailed observations over the last few decades. It offers the unique possibility to resolve the stellar population and to study its composition and the dynamics close to a central black hole at an unrivaled level of detail. Precision astrometry of the innermost stars over almost two decades has proven the existence of a $4.3 \times 10^6 \msun$ supermassive black hole (SMBH) \citep{eisenhauer05,ghez08,gill09}. \\ Past studies of the Milky Way's NSC found that the stellar population can be divided into two classes: the cool and evolved giant stars, and the hot and young main-sequence$/$post-main-sequence stars. While the bulk of the stellar population is $\rm>5\,Gyrs$ old \citep{pfuhl11}, the existence of the massive young stars is evidence for very recent star formation \citep{forr87,allen90}. The most massive stars (WR$/$O stars) reside in a combination of a prominent warped disk, a second disk-like structure highly inclined relative to the main disk, and a more isotropic component \citep{paum06,lu09,bartko09} at a projected distance of 0.8\arcsec-12\arcsec~ from Sgr\,A* ($\rm 1\arcsec \equiv 0.04\,pc$, assuming a distance of $R_0=8.3$\,kpc). The GC disks must have formed in a rapid star burst $\sim 6\rm \,Myrs$ ago \citep{paum06,bartko10}, with a highly unusual initial mass function (IMF) that favored the formation of massive stars \citep{bartko10,2013ApJ...764..155L}. This extreme IMF deviates significantly from the standard Chabrier$/$Kroupa IMF with a power-law slope of $\alpha=-2.3$ \citep{kroupa01,chabrier03} and seems to exist only in the vicinity of the SMBH. The extreme IMF is currently pictured as the result of an infalling gas cloud that settled in a disk around the SMBH. Compressional heating of the fragmenting disk due to the tidal field of the SMBH raised the gas temperature, leading to the formation of massive stars \citep{2003ApJ...590L..33L,bonnell08}. The fragmentation of an accretion disk, however, is not only expected to produce massive stars, but also to favor the formation of binary systems \citep{alexander08}. Fast cooling (on timescales shorter than the orbital timescale of $\approx$ 1000 yrs) in fragmenting self-gravitating disks leads to a dramatic increase in the formation of binaries. The simulations predict a fraction of multiple systems close to unity in that case. Apart from the massive O-star disks, a second population of ordinary B-stars can be found in the innermost 1\arcsec~ around the SMBH, the so-called S-stars \citep{eisenhauer05,ghez08,gill09}. The origin of the S-stars is a mystery. In-situ formation seems impossible due to the strong tidal forces of the SMBH. On the other hand, inward migration from the greater GC area is limited by the short main-sequence lifetime of only a few tens of Myrs. This requires the stars to have formed not far from today's location. One of the currently favored mechanisms that explains the formation of the S-stars is a 3-body interaction of the SMBH and a binary system \citep{gould03,perets07}. The binary gets disrupted by the SMBH \citep[Hills' mechanism; ][]{1988Natur.331..687H}. One companion is ejected and eventually ends as a hypervelocity star, while the other becomes tightly bound to the SMBH.\\ Thus, the formation of both stellar populations is closely tied to the binarity of the massive O-stars in the Galactic Center.
Although dynamical effects, like stellar capture or disruption due to stellar encounters, can change the initial binary fraction on timescales of only a few Myrs, the detection of binaries in the Galactic Center can constrain some of the formation models. \subsection{Observed binary fractions}\label{sec:observed_binary_fractions} \textit{The binary fraction} of massive stars is the subject of intense study. The difficulty of detecting long-period and extreme mass-ratio binaries makes it hard to estimate the true binary fraction and the underlying distribution of the binaries. However, it has been established that the binary fraction varies strongly with the environment. Dense stellar clusters seem to have lower binary fractions than less dense clusters. \cite{garcia01} tabulated binary fractions of massive O- and B-type stars for various clusters and concluded that the binary fraction decreases from 80\% to 14\% with increasing cluster density. The lowest fraction was found in Trumpler 14, one of the densest ($\rm \sim10^5\,\msun \,pc^{-3}$) young clusters in our galaxy.\\ \cite{sana12} found a binary fraction of 70\% in six low-density open clusters. \cite{mason09} found very similar values for O-stars in comparable clusters, yet lower fractions of 59\% and 43\% for field and runaway O-stars. They claim that field O-stars are ejected cluster stars and that the lower binary fraction is related to the ejection mechanism (supernovae and close encounters).\\ Similar to the correlation for massive stars, \cite{milone08} found a strong anti-correlation between globular cluster masses and the corresponding binary fractions. \\ Two theoretical approaches try to explain the environment dependence of the binary fraction. The first focuses on the conditions in the star-forming cloud that regulate the formation of multiple systems \citep[e.g. ][]{sterzik03}. The second approach argues for a universal initial binary population that is altered by environment-dependent dynamical evolution. While in the first scenario the multiplicity only depends on the density of the cluster, the second scenario predicts that the multiplicity also depends on age. For example, \cite{marks12} were able to reproduce numerically the observed densities and binary fractions of eight young clusters through dynamical evolution, starting from a universal initial binary population. \\ While the anti-correlation of density and binarity is well established, the age dependence is still an open debate. \cite{sollima07} found a slight binary-age correlation in globular clusters. However, a subsequent study of open clusters \citep{sollima10} remained inconclusive in this respect.\\ Binaries among low-mass stars ($\approx 1\,\msun$) seem to be less frequent than among massive stars. Recent studies suggest binary fractions between 4\% and 15\% \citep{sollima07,sommariva09} in old globular clusters. In less dense open clusters, \cite{geller12} found 29\% low-mass binaries. For comparison, the field binary fraction is 57\% \citep{duq91}, i.e. significantly enhanced compared to dense clusters. \subsection{Binary statistics} \label{sec:binary_statistics} Studies of large OB associations with hundreds of stars, such as Cygnus OB2, allowed the derivation of binary distribution statistics, namely the binary mass-ratio, orbital-separation and eccentricity distributions. The observed distribution functions are found to be well described by power laws.
\cite{kobul07} and \cite{kiminki12}, for example, found a mass-ratio ($q=M_{\rm sec}/M_{\rm prim}$) distribution $f(q)\propto q^\alpha$, with $\alpha=0.1 \pm 0.5$, a log-period distribution $f(\log P)\propto (\log P)^\beta$, with $\beta=0.2\pm0.4$, and an eccentricity distribution $f(e)\propto e^\gamma$, with $\gamma=-0.6\pm0.3$. A similar study of the Tarantula Nebula by \cite{sana12b} found a somewhat steeper mass-ratio exponent, $\alpha=-1 \pm 0.4$, but also a preference for shorter periods, with $\beta=-0.45\pm0.3$. \subsection{What is known about binaries in the Galactic Center}\label{sec:GCbinaries} Due to the large distance of the GC and the extreme extinction in the optical \citep[$A_V > 30$; e.g.][]{fritz11}, the study of GC binaries is limited to the most massive early-type stars. \\ There is only one confirmed binary so far: \begin{itemize} \item IRS\,16SW consists of two equal 50\,\msun ~constituents, with a period of 19.5 days \citep{ott99,martins07}. The star is an eclipsing contact binary, which shows a large photometric and spectroscopic variability during its revolution. \end{itemize} However, a few more stars have been speculated to be binaries: \begin{itemize} \item The bow-shock star IRS\,8, about 30\arcsec~ north of SgrA*, was speculated to be a binary due to its apparently young age of only 3\,Myrs \citep{geballe06}. No binary signature has been detected so far; however, the seemingly young age might be explained by the influence of a close companion on the primary evolution. We did not consider this star in our study due to its relatively large distance from the cluster center. Considering the steep radial profile of the early-type stars, it is not clear if IRS\,8 can be associated with the WR disk formation. \item For IRS\,7E2, a massive Ofpe/WN9, \cite{paumard01} reported a significant radial velocity change with respect to a previous measurement by \cite{genzel00}. Unfortunately, we obtained only four additional epochs for that star. Among the few observations, we did not detect a significant radial velocity change. Yet, the star features very broad emission lines (FWHM=1140\,km/s), which show some intrinsic variability. Thus, to conclude on the binarity of the star, more observations are required. So far it can only be considered a binary candidate. \item Photometric variability of IRS\,29N was interpreted by \cite{rafelski07} as the potential signature of a wind-colliding binary. However, older data from \cite{ott99} showed less variability and no periodicity. The star is classified as a dust-producing WC star \citep{paum06}. Some irregular variability could thus be attributed to circumstellar dust. The stellar spectrum is very red and shows some extremely broad emission features. The width and the intrinsic variability of the features prevent a precise radial velocity measurement. Therefore we were not able to confirm or rule out the binarity of IRS\,29N. \item \cite{peebles07} classified a photometrically variable star with a period of $\sim42$ days as an eclipsing binary candidate. As for IRS\,8, the star has a relatively large distance ($\approx27\arcsec$) from the cluster center and is therefore likely not related to the WR disk. Due to the large distance it was not included in our photometric and spectroscopic survey. However, its spectroscopic confirmation is a viable target for future observations.
\end{itemize} \section{Observations and data processing}\label{sec:obs_and_proc} This work relies on spectroscopic and imaging data obtained at the VLT in Cerro Paranal, Chile, between 2003 and 2013. The observations were carried out under the program IDs 075.B-0547, 076.B-0259, 077.B-0503, 087.B-0117, 087.B-0280, 088.B-0308, 288.B-5040, 179.B-0261 and 183.B-0100. \subsection{Imaging and photometry} The photometric data were obtained with the adaptive optics camera NACO \citep{rousset03,2003SPIE.4841..944L}. The photometric reference images were taken on the 29th of April 2006 and on the 31st of March 2010. We used the $H$- and $Ks$-band filters together with a pixel scale of 13\,$\rm{mas/pixel}$. To each image we applied sky-subtraction, bad-pixel and flat-field corrections \citep{trippe08}. All images of good quality obtained during the same night were then combined into a mosaic with a field of view of $\approx20\arcsec \times20\arcsec$. In total we used 102 $Ks$-band images and 34 $H$-band images with temporal spacings between a few hours and up to 9 years to construct lightcurves for a few thousand stars within the FoV. \subsection{Spectroscopy} Our spectroscopic data were obtained with the adaptive optics assisted integral field spectrograph SINFONI \citep{eis03, bon04}. In total we used 45 observations obtained between spring 2003 and summer 2013 with pixel scales between $50 \times 100$ and $125 \times 250$\,mas. The data output of SINFONI consists of cubes with two spatial axes and one spectral axis. Depending on the plate scale, an individual cube covers $3.2\arcsec \times3.2\arcsec$ or $8\arcsec \times8\arcsec$; the spectral resolution varies between 2000 and 4000, depending on the chosen bandpass and the field of view. We used the data reduction software SPRED \citep{schreiber04,abu06}, including bad-pixel correction, flat-fielding and sky subtraction. The wavelength scale was calibrated with emission-line gas lamps and fine-tuned on the atmospheric OH lines. Finally, we removed the atmospheric absorption features by dividing the spectra by a telluric spectrum obtained in each respective night. \subsection{Spectroscopic sample selection} Of order 200 early-type stars are known within 1\,pc from Sgr A*. Their spectral types range from the most luminous O/WR stars with main-sequence lifetimes of a few Myrs to early B-stars with main-sequence lifetimes of several tens of Myrs. Stars fainter than B-dwarfs ($m_K>16$) are too faint to be identified spectroscopically. Among the known early-type stars we chose the brightest ones ($m_K <12$) with prominent emission or absorption lines that allowed a precise radial velocity measurement. We excluded stars with fit errors larger than 20\,km/s, for instance several bright WR stars with very broad wind emission lines. Our final sample consisted of 13 stars in close proximity to Sgr A*, which we repeatedly observed with SINFONI. Additionally, we used archival data from the same region, which gave us up to 45 independent observations per star (see Table\,\ref{tab:data}). The observations cover time spacings from one day up to 9 years. \subsection{Velocity measurement and uncertainty} \label{sec:vel_uncertainty} For the velocity measurement, we chose the most prominent spectral feature of each star. Depending on the spectral type, this was either the He\,I line at 2.059\,$\rm \mu m$, the He\,I line at 2.113\,$\rm \mu m$ or the Br$\gamma$ line at 2.166\,$\rm \mu m$.
The absolute velocity of each star was measured by fitting a Gaussian. In order to detect relative velocity changes, we cross-correlated each spectrum with a reference spectrum of the same star. This provided significantly better velocity residuals and scatter than individual Gaussian fits. All velocities were corrected for the Earth's barycentric and heliocentric motion. The velocity uncertainties of the individual measurements are a combination of the fit errors, systematic errors (e.g. wavelength calibration) and intrinsic line variations of the stars. The formal fit errors in the best cases were as small as $\approx2$\,km/s. However, this does not include systematic errors such as the wavelength calibration and drifts of the spectrograph. To determine the overall velocity uncertainty (including systematics), we used three late-type stars contained in the integral field unit (IFU) fields and measured their velocities. Late-type giants are good spectroscopic references because they cannot have close companions (due to their physical size of a few AU) and they show only slow pulsations. Thus, their radial velocities are thought to be intrinsically very stable. Observationally, they are well suited due to their prominent absorption features in the K-band, the CO bandheads. The absorption shape allows a very precise radial velocity measurement (fit errors $<2$\,km/s). It turns out that the late-type giants show a radial velocity scatter of $\sim6\,$km/s RMS in our measurements. We therefore estimate that our systematic errors are of that order. \section{The long-period binary GCIRS\,16NE}\label{sec:16NE} The star IRS\,16NE is the most luminous early-type star in the Galactic Center ($m_H\approx11.0$, $m_K\approx9.2$). With a bolometric luminosity of $L\approx 2.2\times 10^6\,L_{\odot}$ \citep{najarro97,martins07} it is even one of the most luminous stars in the Milky Way. It is part of a young and massive population in the GC, thought to be luminous blue variables \citep[LBVs, e.g. ][]{paumard01}. Of the same type are the stars IRS\,16C, IRS\,16NW, IRS\,16SW, IRS\,33E, and IRS\,34W. IRS\,16NE is at least 0.5 magnitudes brighter than the other LBV stars \citep{paum06}. LBVs are evolved massive stars that populate a region in the H-R diagram where the luminosity approaches the Eddington luminosity, which leads to instabilities in their atmospheres. Therefore, those stars show strong variability in photometry and spectroscopy (Humphreys \& Davidson 1994). Characteristic for this stellar phase are strong wind-driven mass loss and drastic changes in the stellar temperature and radius. Given that strong outbursts have not yet been observed for the six stars, they are thought to be LBVs in a stable phase \citep{martins07}. \\ \cite{martins06,tanner06,zhu08} recognized a significant radial velocity change of IRS\,16NE and speculated about a binary origin. However, they were not able to deduce an orbital solution and deemed the star only a candidate.\\ After collecting another 6 years' worth of data and effectively doubling the number of observations, we can finally confirm the binarity of IRS\,16NE. \subsection{Orbital solution and physical parameters} We obtained 43 spectra of IRS\,16NE, spread over roughly 10 years with spacings of a few days up to years. The orbit of a single-lined spectroscopic binary is defined by the period $P$, the eccentricity $e$, the systemic velocity $\gamma$, the longitude of periastron $\omega$, the time of periastron $T$ and the mass function $f(m)$.
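Before turning to the fit, it is useful to recall how these elements map onto a velocity curve. The following Python sketch is an illustration of the standard SB1 model, not the fitting code actually used in this work: it solves Kepler's equation by Newton iteration and evaluates the radial velocity for a given set of elements, with the semi-amplitude $K$ standing in for the mass function.
\begin{verbatim}
import numpy as np

def radial_velocity(t, P, e, K, gamma, omega, T0):
    """SB1 radial velocity [km/s] at times t [days], for period P [days],
    eccentricity e, semi-amplitude K [km/s], systemic velocity gamma
    [km/s], longitude of periastron omega [rad], time of periastron T0."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    M = 2.0 * np.pi * (((t - T0) / P) % 1.0)       # mean anomaly
    E = M.copy()
    for _ in range(50):                            # Newton iterations for
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))  # Kepler's eq.
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))  # true anomaly
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))
\end{verbatim}
Evaluating this model with the best-fit elements derived below (Table\,\ref{tab:parameter}, with $\omega$ converted to radians) reproduces the solid curve of Figure\,\ref{fig:fold_period}.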
However, fitting periodic functions such as a binary signature can be problematic, especially if one tries to fit the period. Often the algorithm fails to converge or gets trapped in a local minimum. Therefore, we set the period as a prior and tried to fit the velocity curve for the given prior. We repeated the fitting for periods from 0.5 days up to 1000\,days with a spacing of 1\,day. The solution with the lowest $\chi^2$ of all individual fits was taken as the true orbital solution. We used the IDL fitting tool \textit{MPFIT} \citep{2009ASPC..411..251M}, which provided a fast fitting algorithm and allowed us to define parameter constraints. For stability reasons, we constrained the eccentricity to $e<0.98$. Although the period was set as a prior, we allowed the fitting routine to adjust the period by $\pm0.5$\,days with respect to the prior period. Thus we could refine the crude 1\,day period sampling.\\ It turned out that the fitting clearly favored a solution with a period of 224.09\,days. The second-best period had a factor of four higher $\chi^2$ than the best solution. A folded sequence of spectra covering one orbital period is shown in Figure\,\ref{fig:sequence}. The orbit is clearly eccentric with $e=0.32$. Figure\,\ref{fig:fold_period} shows the folded radial velocity data together with the best-fit orbital solution. Since we were only able to measure the velocity of one companion, only the mass function, \begin{equation} f(m)=\frac{(M_2~{\rm sin}\,i)^3}{(M_1 +M_2)^2}=4.58\pm0.17 \,\msun, \nonumber \end{equation} of the system could be determined ($M_1$ is the mass of the observed star and $M_2$ is the mass of the unobserved star). However, the spectroscopic similarity of the primary star with the known eclipsing binary IRS\,16SW argues for a primary mass close to 50\,\msun. This means that the companion mass is $\ge30\,\msun$. In the case of an edge-on orbit ($i=90^\circ$) the secondary mass is 30\,\msun. A more massive companion requires a lower inclination. The lowest possible inclination is $\approx46^\circ$, for an equal-mass companion. In fact, a roughly equal-mass companion would explain the $\approx0.5$\,mag excess of IRS\,16NE compared to the other IRS\,16 stars. The binary IRS\,16NE, with a semi-major axis of $a\, {\rm sin}\, i =144\,{\rm \mu as}$, will be a valuable test case for the upcoming 2nd-generation VLTI instrument GRAVITY. With its unprecedented $\approx10\,{\rm \mu as}$ astrometric accuracy, it will be possible to determine the full orbital parameters of the system in less than one year of observations. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\columnwidth]{fold_period.eps} \caption{Radial velocity curve of IRS\,16NE together with the best orbital solution (for $P=224.09$ days and $e =0.32$, solid line). The typical uncertainty on the radial velocity is $\pm 6.6\,{\rm km\,s^{-1}}$.
Parameters for the best-fit solution are given in Table\,\ref{tab:parameter}.} \label{fig:fold_period} \end{center} \end{figure} \begin{deluxetable}{lc} \tablecaption{Orbital Parameters as Derived from the Analysis of the Radial Velocity Curve\label{tab:parameter}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Value} } \startdata Semi-major axis, $a\,{\rm sin}\,i~ ({\rm 10^6\,km})$ & $179.1\pm7.3$ \\ Eccentricity, $e$ & $0.32\pm0.01$ \\ Systemic velocity, $v_0$ ($\rm km/s$)& $52.45\pm0.46$ \\ Semi-amplitude, $K1$ ($\rm km/s$) & $61.57\pm1.7$ \\ Longitude of periastron, $\omega$ (deg) & $144.54\pm1.65$ \\ Orbital period, $P$ (days) & $224.09\pm0.09$\\ Time of periastron, $T$ (MJD)& $52523.63\pm1.47$\\ $f(m)$ ($M_{\odot}$) & $4.58\pm0.17$\\ \enddata \end{deluxetable} \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\columnwidth]{timesequence_new.eps} \caption{Sequence of spectra following one orbital period of IRS\,16NE. The numbers indicate the orbital phase. The spectra have been corrected for the Earth's barycentric motion. The solid line indicates the rest wavelength of He\,I (2.059\,$\rm \mu m$).} \label{fig:sequence} \end{center} \end{figure} \section{Eclipsing binaries}\label{sec:eclipsing_binaries} In order to find eclipsing binaries, we focused on the spectroscopically confirmed sample of early-type stars in the inner 10\arcsec~ around SgrA*. Among those stars, we selected those with average photometric errors $<0.1$ magnitudes. This left us with 113 early-type stars within the FoV of the NACO 13\,mas/pix camera. We checked each of the lightcurves for variability. About half of the stars showed non-periodic variability at the few 0.1 mag level, as expected for OB stars \citep{lefevre09}. In order to detect periodic variability, we used phase dispersion minimization \citep{stellingwerf78}, a widely used technique in Cepheid and eclipsing binary searches. The inspection of the individual periodograms allowed us to identify two periodic variables with short periods ($<100$\,days): the previously reported eclipsing binary IRS\,16SW \citep{ott99,martins07} with a period of 19.447 days, and a new periodic variable with a period of 2.276 days. \subsection{The eclipsing binary E60} The new periodic star is the second reported eclipsing binary in the GC. \cite{paum06} identified the star as a WN7 Wolf-Rayet type with $m_K=12.4$, located at $\Delta \alpha=-4.36\arcsec$ and $\Delta \delta =-1.65\arcsec$ from Sgr A*. Following the nomenclature of \cite{paum06}, the star is referred to as E60. The phase-folded $H$- and $K$-band lightcurve can be seen in Fig.\,\ref{fig:lightcurve}. The color-independent variability argues for an occultation event such as an eclipse by a companion. Variability due to pulsation or extinction from circumstellar dust typically leads to strong color changes. The WN7 star features broad emission lines, which results in relatively large radial velocity errors. Nonetheless, E60 shows a significant radial velocity change within days (only one companion is detectable). The radial velocity change is co-phased with the photometric periodicity, as the phase-folded radial velocity curve indicates (Fig.\,\ref{fig:radialvel_E60}). Using the available photometric and spectroscopic data, we tried to model the binary with the program NIGHTFALL\footnote{The software was developed by R. Wichmann and is freely available at http://www.hs.uni-hamburg.de/DE/Ins/Per/Wichmann/Nightfall.html}. The near-sinusoidal lightcurve argues for a very close binary.
In fact, to model the light curve, we had to use a Roche lobe filling factor of 1.1. This means that the components are in contact, which is not surprising given the short orbital period of only 2.276 days. Furthermore, the large photometric amplitude requires the inclination to be $>60^\circ$. The mass and the mass ratio of the system are essentially determined by the radial velocity amplitude. Unfortunately, the few velocity measurements and the relatively large errors limit the ability to constrain those parameters. For a well-determined fit, we would require more spectroscopic epochs, especially with short time spacings of only a few hours. However, we found a reasonable solution (see Table\,\ref{tab:NIGHTFALL_para}) that can reproduce the observations. \begin{table}[!h] \begin{center} \caption{Orbital parameters of the eclipsing binary E60 \label{tab:NIGHTFALL_para}} \begin{tabular}{lc} \hline\hline Parameter & Value\\ \hline Separation, $a\,~ (R_{\odot})$ & $22.6\pm3$ \\ Eccentricity, $e$ & $\approx 0$ \\ Systemic velocity, $v_0$ ($\rm km/s$)& $467\pm10$ \\ Semi-amplitude, $K1$ ($\rm km/s$) & $150\pm7$ \\ Inclination, $i$ (deg) & $70\pm10$ \\ Orbital period, $P$ (days) & $2.276$\\ Mass ratio, $m$ & $2\pm0.5$\\ $M_{\rm system}$ ($M_{\odot}$) & $30\pm10$\\ \hline \end{tabular} \end{center} \end{table} We modeled the system with a total system mass of 30\,\msun~ and a mass ratio of two. An unequal mass ratio is required due to the relatively low velocity change for the given orbital period and inclination. Thus the primary mass is 20\,\msun~ and the secondary mass is 10\,\msun. In fact, those masses are typical for evolved WR stars of similar brightness and spectral type WN7 \citep[compare Table\,2 in][]{martins07}. The stellar radii of WN7 stars inferred by \cite{martins07} are between 10\,$R_{\odot}$ and 18\,$R_{\odot}$, which matches the inferred binary contact separation of $22.6\,R_{\odot}$. \\ The binary E60 has a remarkably high systemic radial velocity of $422\pm10$\,km/s. The proper motion of the system is $4.73\pm0.14$\,mas/yr (Fritz, priv. comm.), which corresponds to $184\pm6$\,km/s at the distance of the GC. Thus the total systemic velocity is $v_{\rm 3D}=460\pm12\, \rm km/s$. The velocity exceeds (at the 2\,$\sigma$ level) the escape velocity ($v_{\rm esc}\approx440\,\rm km/s$) at 4.7\arcsec~ projected distance from Sgr A*. The actual 3D distance of E60 could be larger, i.e. the escape velocity could be even lower. On the other hand, the absolute velocity of the star is probably less certain than the formal fit error might indicate. In particular, the strong wind lines of E60 could be biased by the actual wind morphology. In any case, the star seems to be at best marginally bound to Sgr A*. \begin{figure}[!h] \begin{center} \includegraphics[width=0.9\columnwidth]{phot.eps} \caption{Phase-folded $H$- and $K$-band lightcurve of the eclipsing binary E60. The orbital period is 2.276 days. Overplotted is a model lightcurve calculated with the free program NIGHTFALL (for the parameters, see Table\,\ref{tab:NIGHTFALL_para}).} \label{fig:lightcurve} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.9\columnwidth]{radialvel.eps} \caption{Measured radial velocities (the systemic velocity is subtracted) of the eclipsing binary E60. The broad wind lines of the star lead to relatively large velocity errors.
Overplotted is the model radial velocity, which was calculated using the same model as in Fig.\,\ref{fig:lightcurve}.} \label{fig:radialvel_E60} \end{center} \end{figure} \section{Determining the spectroscopic binary fraction}\label{sec:detection_probability} The observed spectroscopic binary fraction (two out of 13 stars; IRS\,16SW and IRS\,16NE) represents only a lower limit to the true binary fraction. Note that the eclipsing binary E60 was not included in the initial spectroscopic sample because its radial velocity uncertainty did not meet the selection criterion. It was detected later due to its photometric variability; only after the photometric detection was the radial velocity change found. In order to keep the spectroscopic sample unbiased, E60 is therefore not considered in the spectroscopic binary fraction.\\ Naturally, the probability to detect a stellar companion depends on the primary mass, the secondary mass (i.e. the mass ratio $q$), the eccentricity $e$ and the orbital period $P$. It also depends on the number of observations, the radial velocity uncertainty and how well the orbital period is sampled. To derive the true binary fraction, it is therefore necessary to take the detection incompleteness into account. \begin{landscape} \begin{center} \begin{table*} \begin{center} \caption{Bright early-type stars targeted in the spectroscopic survey \label{tab:data}} \begin{tabular}{lcccccccc} \hline\hline id & sp type\tablenotemark{a} & N data\tablenotemark{b}& $v_{\rm z}$ [km/s] & RMS($v_{\rm z}$)\tablenotemark{c} [km/s]& Fit error [km/s] & $M [\msun]$\tablenotemark{d} & $p_{\rm det} (\rm Kiminki)$\tablenotemark{e} & $p_{\rm det}(\rm Sana)$\tablenotemark{f}\\ \hline IRS\,16SW\tablenotemark{g} &Ofpe/WN9 & 25 & 459.5 & - & 20 & 50 & Binary & Binary \\ \hline IRS\,16NE & Ofpe/WN9 & 43 & 52.5 & 46.4 & 2.2 & $>40$ & Binary & Binary \\ IRS\,16C & Ofpe/WN9& 43 & 186 & 10.3 & 2.7 & 40 & 0.70 & 0.71 \\ IRS\,16NW &Ofpe/WN9 & 37 & 17 & 11.4 & 2.7 & 40 & 0.70 & 0.71 \\ IRS\,33E &Ofpe/WN9 & 42 & 214 & 10.1 & 3.6 & 40 & 0.73 & 0.74 \\ IRS\,34W & Ofpe/WN9& 19 & -184 & 6.5 & 2.3 & 25 & 0.76 & 0.78 \\ IRS\,13E2 & WN8& 23 & -2 & 20.5 & 5.5 & 82.5 & 0.59 & 0.61 \\ IRS\,16SE2 & WN5/6& 38 & 191 & 15.4 & 6.5 & 17.2 & 0.52 & 0.55 \\ IRS\,29NE1 & WC8/9& 27 & -99 & 27.5 & 19.8 & 25 & 0.36 & 0.41 \\ IRS\,33N (64) &B0.5–1 I & 31 & 93 & 22.5 & 7.3 & 25 & 0.43 & 0.47 \\ IRS\,16CC & O9.5–B0.5 I & 35 & 145 & 27.1 & 8.5 & 40 & 0.42 & 0.47 \\ IRS\,16S (30)& B0.5–1 I & 30 & 123 & 15.0 & 7.3 & 40& 0.63 & 0.65 \\ IRS\,1E & B1-3 I& 14 & 18 & 19.6 & 8.7 & 25 & 0.46 & 0.51 \\ \hline \end{tabular} \footnotetext{Spectral type taken from \cite{paum06}.} \footnotetext{Number of individual spectroscopic observations.} \footnotetext{Standard deviation of the N individual velocity measurements.} \footnotetext{Main-sequence masses are derived from the H-R diagrams of \cite{paum06} and \cite{martins07}.} \footnotetext{Companion detection probability determined with the distribution functions from \cite{kiminki12} (see Sec.\,\ref{sec:binary_statistics}).} \footnotetext{Companion detection probability determined with the distribution functions from \cite{sana12b} (see Sec.\,\ref{sec:binary_statistics}).} \footnotetext{Binary published in \cite{martins06}. } \end{center} \end{table*} \end{center} \end{landscape} \subsection{Companion detection probability}\label{sec:companion_detection} Thanks to the long-term monitoring of the stars in our sample, we were able to set tight constraints on radial velocity changes of the respective stars.
The masses of the stars can be quite well constrained from their luminosities and temperatures, i.e. their positions in the H-R diagram \citep[e.g.][]{paum06,martins07}. In order to determine the binary detection completeness, we used the assumption that the binaries in the Galactic Center follow similar distribution functions as galactic and extragalactic OB clusters (described in Section\,\ref{sec:binary_statistics}). This assumption might seem somewhat arbitrary, since it is not obvious that the distribution functions in disk star-forming regions are applicable to the special environment around a massive black hole. For lack of better alternatives, and keeping this limitation in mind, we used the observed distribution functions for a Monte-Carlo analysis. \\ For each star in the observed sample we created $10^6$ artificial companions, where the mass ratio $0.1<q<1$, the eccentricity $0<e<0.9$ and the period $1<P<1000$\,days were drawn from the observed distribution functions (Sec.\,\ref{sec:binary_statistics}). The longitude of periastron $\omega$, the time of periastron $T$ and the system inclination (${\rm cos}\,i$) were drawn with uniform probability. Each companion realization resulted in an artificial radial velocity curve, which was sampled with the same time spacing as the actual observations. From the artificial discrete radial velocity points, a velocity RMS was calculated. To account for the measurement uncertainties, we added in quadrature a systematic velocity uncertainty of 6\,km/s (see Sec.\,\ref{sec:vel_uncertainty}).\\ The ratio of companion realizations with a velocity RMS equal to or greater than the observed RMS to the total number of realizations was then taken as the companion detection probability. The Monte-Carlo simulation did not produce false positive detections, i.e. we assumed the false positive detection probability to be zero. This approach seems justified because we only deemed stars binaries where a unique orbital solution could be found. Table\,\ref{tab:data} lists the detection probabilities for the stars in our sample. To check the robustness of the results, we ran the simulations with the two observed binary distribution functions of \cite{sana12b} and \cite{kiminki12}. Although the distribution functions seem quite different, the detection probabilities for both cases turned out to be very similar. The Sana distribution features on average lower companion masses compared to the Kiminki study, but also on average shorter periods; the two effects nearly compensate, which causes the results to be almost identical. The stars with the lowest radial velocity RMS have detection probabilities $>$ 0.7. In other words, the chance to have missed a companion is lower than 0.3. Naturally, the detection probability can never reach unity, simply due to the random inclination: systems observed close to face-on are not detectable. The sample stars with the largest radial velocity errors, low primary masses or only a few observations have detection probabilities as low as 0.36. \subsection{Spectroscopic binary fraction} Assuming that all our sample stars intrinsically have the same probability to have a companion (the stars are all massive OB/WR stars), it is reasonable to use the average detection probability $\left\langle p_{\mathrm{det}}\right\rangle =0.57$ (Table\,\ref{tab:data}). The observed binary fraction (2 binaries out of 13 sources) is $F_{\mathrm{obs}}=2/13\approx0.15$, which after the correction for $\left\langle p_{\mathrm{det}}\right\rangle<1$ rises to $F_{SB}=F_{\mathrm{obs}}/\left\langle p_{\mathrm{det}}\right\rangle\approx0.27$.
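The completeness estimate can be summarized in a short Monte-Carlo sketch in Python. This is a simplified illustration of the procedure described above, not the actual analysis code: it reuses the \texttt{radial\_velocity} helper sketched in Section\,\ref{sec:16NE}, adopts the Kiminki exponents of Sec.\,\ref{sec:binary_statistics}, and assumes that the quadrature addition of the 6\,km/s systematic uncertainty acts on the model RMS.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(expo, lo, hi, n):
    """Inverse-transform sampling of f(x) ~ x**expo on [lo, hi], expo != -1."""
    g = expo + 1.0
    u = rng.uniform(size=n)
    return (lo**g + u * (hi**g - lo**g))**(1.0 / g)

def detection_probability(t_obs, rms_obs, m1, n_trials=100000, sigma_sys=6.0):
    """Fraction of simulated companions whose model RV scatter exceeds the
    observed scatter rms_obs [km/s], for a primary of mass m1 [Msun]
    observed at epochs t_obs [days]."""
    q     = sample_powerlaw( 0.1, 0.1, 1.0, n_trials)   # mass ratio
    logP  = sample_powerlaw( 0.2, 0.0, 3.0, n_trials)   # 1 d < P < 1000 d
    e     = sample_powerlaw(-0.6, 0.0, 0.9, n_trials)   # eccentricity
    P     = 10.0**logP
    cosi  = rng.uniform(0.0, 1.0, n_trials)
    omega = rng.uniform(0.0, 2.0 * np.pi, n_trials)
    T0    = rng.uniform(0.0, 1.0, n_trials) * P
    m2    = q * m1
    sini  = np.sqrt(1.0 - cosi**2)
    # SB1 semi-amplitude [km/s]; ~212.9 is the usual constant for P in
    # days and masses in solar units
    K = 212.9 * m2 * sini / ((m1 + m2)**(2.0 / 3.0)
                             * P**(1.0 / 3.0) * np.sqrt(1.0 - e**2))
    n_det = 0
    for j in range(n_trials):
        v = radial_velocity(t_obs, P[j], e[j], K[j], 0.0, omega[j], T0[j])
        rms = np.sqrt(np.var(v) + sigma_sys**2)   # systematics in quadrature
        if rms > rms_obs:
            n_det += 1
    return n_det / n_trials
\end{verbatim}
Running this with each star's observation epochs, observed RMS and primary mass yields per-star detection probabilities analogous to those listed in Table\,\ref{tab:data}.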
The distribution of binaries among the observed sources can be viewed as the outcome of a binomial process with probability $P_{SB}$, which is determined by the physics of binary formation. The detection-corrected binary fraction $F_{SB}$ is then the sample estimator of $P_{SB}$. The confidence interval around it can be obtained in the small-sample limit from the Wilson score interval with continuity correction \citep{newcombe98,brown01}. We thus find the 95\% confidence interval to be $P_{SB}\in[0.08,0.56]$, or $F_{SB}=0.27_{-0.19}^{+0.29}$. The lower bound of the binary fraction is lower than the observed fraction; this takes into account that we could have been `lucky' in our choice of targets. While the uncertainties are quite large, we can exclude a binary fraction close to unity at high confidence. For example, $P_{SB}>0.85$ is ruled out at the $99.999999\%$ level. \section{Eclipsing binary fraction} Estimating the true eclipsing binary fraction in the Galactic Center is non-trivial because the detection probability depends strongly on the data sampling, the duration of the eclipse (i.e. the orbital separation and stellar radii) and the photometric amplitude (i.e. the inclination and relative sizes) of the system. However, we can compare the number of detected eclipsing binaries in the GC with the number of eclipsing binaries in local OB associations. We make the assumption that the binary E60, with an amplitude of $\Delta m_K\approx0.45$ and an average photometric error of $\sigma_K\approx 0.04$ mag, represents the detection limit. Among the initial photometric sample of 113 early-type stars, 70 stars had photometric errors smaller than or equal to that of E60. Only one star showed greater photometric variability on short timescales, namely IRS\,16SW. Some of the other stars showed variability at a $\sim0.1$ mag level with no obvious periodicity. Thus, out of the 70 stars, only two, IRS\,16SW and E60, are confirmed eclipsing binaries, i.e. $F_{\rm EB}=3\pm2\%$ (at 1\,$\sigma$ confidence) for photometric amplitudes $>0.4$ mag. The fraction of eclipsing binaries in local OB associations with similar amplitude variations $\Delta m \ge 0.4$ is $1.1\pm0.3$\% \citep[out of $\sim2400$\, OB stars, see][]{lefevre09}. \\ It is likely that the deduced fraction of massive eclipsing binaries in the GC reflects their initial formation fraction, given the very recent formation ($\sim$ 6\,Myr) of the massive stars, and given the strong observational bias of eclipsing binaries to be tight, dynamically hard binaries (e.g. the soft$/$hard boundary for E60 would be $a\approx300\, \rm R_{\odot}$; Section\,\ref{sec:binary_evol}), whereas eclipsing binaries typically have $a\approx {\rm a~few}\times 10\, \rm R_{\odot}$. Therefore, while the large Poisson errors do not allow us to place tight constraints on the eclipsing binary fraction in the GC, we conclude that their initial fraction is close to the local value. \section{Binary evolution in the Galactic Center}\label{sec:binary_evol} The evolution of binaries in dense galactic nuclei can be strongly modified by interactions with other stars and with the SMBH.
We argue here that such effects will not significantly influence the observed properties and statistics of massive binaries in the GC.\\ Disruption by the Galactic SMBH affects only the very small fraction of binaries that approach it within the tidal disruption radius, $r_{t}\simeq a_{12}(M_{\bullet}/M_{12})^{1/3}$, where $M_{12}=M_{1}+M_{2}$ is the binary's total mass, $a_{12}$ is its semi-major axis, and $M_{\bullet}$ is the SMBH mass. The timescale for the center of mass of a binary at radius $r$ (assumed here to be of the order of the semi-major axis of its orbit around the SMBH) to be deflected by stochastic 2-body encounters with field stars onto an eccentric orbit that will lead to its tidal separation by the SMBH is $T_{t}\sim\log(2\sqrt{r/r_{t}})\,T_{\mathrm{rlx}}(r)$, where $T_{\mathrm{rlx}}$ is the 2-body relaxation timescale, and the $\log$ term reflects the typical time to diffuse in phase space into near-radial orbits with eccentricity $e_{t}=1-r_{t}/r$ \citep{1976MNRAS.176..633F}. The value of $T_{\mathrm{rlx}}$ in the GC, and especially whether it is longer or shorter than the age of the Milky Way (usually estimated by the Hubble time, $t_{H}=10^{10}\,\mathrm{yr}$), depends on the yet unknown dynamical state of the inner parsec, and in particular on whether it harbors a ``dark cusp'' of stellar remnants and faint stars \citep{alexander11}. Estimates bracket it between $T_{\mathrm{rlx}}\sim\mathrm{a~few}\times10^{9}\,\mathrm{yr}$ \citep{preto10} and $T_{\mathrm{rlx}}\sim\mathrm{a~few}\times10^{10}\,\mathrm{yr}$ \citep{merritt10}. Given that $T_{\mathrm{rlx}}\sim{\cal O}(t_{H})$, and that typically $\log(2\sqrt{r/r_{t}})>1$, tidal separation of binaries, especially short-lived massive ones, is negligible (Figure \ref{f:binevol}). Rapid angular momentum relaxation by resonant relaxation \citep{rauch96} is expected to become marginally relevant for this tidal separation only in the inner $0.01$ pc \citep[Figure 7 of][]{hopman06}.\\ The binary's internal orbit also evolves stochastically due to the exchange of energy and angular momentum with field stars. The direction of the energy exchange, that is, whether on average the binary gains energy and becomes wider until it is disrupted (``evaporation''), or whether it loses energy and shrinks until it coalesces, depends on its softness parameter $s$, defined as the ratio between its binding energy, $|E_{12}|=GM_{1}M_{2}/2a_{12}$, and the typical kinetic energy of the field stars, $E_{K}\sim\left\langle M_{\star}\right\rangle \sigma^{2}$, where $\sigma(r)$ is the 1D velocity dispersion. Soft binaries with $s<1$ will ultimately evaporate, while hard binaries with $s>1$ will ultimately coalesce \citep{heggie75}. In terms of the binary's semi-major axis, the soft/hard boundary is at the critical semi-major axis $a_{0}=GM_{1}M_{2}/2\left\langle M_{\star}\right\rangle \sigma^{2}\sim\left(M_{12}^{2}/8M_{\bullet}\left\langle M_{\star}\right\rangle \right)r$, where the approximations $\sigma^{2}\sim GM_{\bullet}/r$ (consistent with the results of \citealt{trippe08}) and $M_{1}\sim M_{2}\sim M_{12}/2$ are assumed. Figure~\ref{f:binevol} shows $a_{0}(r)$ for the very massive binaries with $M_{12}\sim{\cal {O}}(100\, M_{\odot})$ and the moderately massive binaries with $M_{12}\sim{\cal {O}}(10\, M_{\odot})$ that are relevant for this study. IRS16NE and E60 are close to their critical semi-major axis ($s\sim1$). It is therefore unclear which direction their evolution would take, evaporation or coalescence.
However, the evolutionary timescales will in any case be much longer than the binary's lifespan. A rough estimate of the time to coalescence is $T_{c}\sim{\cal O}([s_{c}-s]T_{\mathrm{rlx}})$, where $s_{c}$ is the softness parameter at which the binary's orbital decay is taken over by non-dynamical effects (contact binary evolution or gravitational wave losses), not considered here, while the time to evaporation is $T_{e}\sim{\cal O}\left(\left[\left\langle M_{\star}\right\rangle \left/M_{12}\right.\right]sT_{\mathrm{rlx}}\right)$ (\citealt{1987gady.book.....B}, Alexander, Pfuhl \& Genzel, 2013, in prep.). It follows that as long as $0\ll s\ll s_{c}$, dynamical binary evolution can be neglected for massive binaries in the GC.\\ These considerations do not apply to low-mass binaries, which are expected to undergo substantial evolution in the inner $\sim0.1$ pc of the GC \citep[e.g.][]{hopman09}. A detailed study of the dynamical constraints that can be deduced from future detections of low-mass binaries in the GC is presented in Alexander, Pfuhl \& Genzel (2013, in prep.). \begin{figure} \noindent \begin{centering} \includegraphics[width=0.9\columnwidth]{sec5} \par\end{centering} \caption{\label{f:binevol}The tidal separation timescale $T_{t}$ and the soft/hard critical semi-major axis $a_{0}$ as a function of distance from the MBH in the GC, for very massive binaries ($M_{12}=100\, M_{\odot}$) and moderately massive binaries ($M_{12}=10\, M_{\odot}$). The tidal separation timescales (solid lines) are evaluated for $a_{12}=0.1$ AU (of the order of that found for the eclipsing binary E60) and $a_{12}=1$ AU (of the order found for the spectroscopic binary IRS16NE). The critical semi-major axes (dotted lines) are evaluated for an assumed mean stellar mass $\left\langle M_{\star}\right\rangle =10\, M_{\odot}$, expected very close to the MBH due to mass segregation \citep[e.g.][]{alexander_hopman09} or for a top-heavy initial mass function (Alexander, Pfuhl \& Genzel, 2013, in prep.), and for $\left\langle M_{\star}\right\rangle =1\, M_{\odot}$, as is expected further out for a universal initial mass function. Approximate values for $a_{0}$ assuming $\left\langle M_{\star}\right\rangle =10\, M_{\odot}$ are plotted for the binaries IRS16NE ($r\sim0.15$ pc, $M_{12}\sim80\, M_{\odot}$, $a_{12}\sin i\simeq1.2$ AU) and E60 ($r\sim0.2$ pc, $M_{12}\sim30\, M_{\odot}$, $a_{12}\simeq0.1$ AU). These two binaries are close to their critical semi-major axis. } \end{figure} \section{Discussion} Our survey of more than a dozen massive OB/WR stars in the Galactic Center revealed two previously unknown binaries, the long-period binary IRS\,16NE and the eclipsing binary E60. Within the uncertainties, the spectroscopic binary fraction $F_{\rm SB}=0.27^{+0.29}_{-0.19}$ in the GC seems to be close to the fraction observed in other dense clusters such as Trumpler 14 ($F_{SB}=0.14$). The same is true for the fraction of eclipsing binaries ($\Delta m \ge 0.4$) of $3\pm2\%$, compared to $1\%$ in other OB clusters. The fraction of multiple systems is significantly lower than unity. This is especially interesting since the multiplicity of stars formed in an SMBH accretion disk is regulated by the cooling timescale of the parental disk \citep{alexander08}. Fast cooling timescales ($t_{\rm cool}<t_{\rm dyn}$), as are believed to be present in black hole disks \citep[e.g.][]{goodman03}, lead to binary fractions close to unity and mainly equal-mass companions \citep{alexander08}. This is clearly not supported by our observations.
The GC binary fraction can also not have been altered significantly by dynamical effects. Massive binary systems in the GC are either hard binaries, or their evolution timescale exceeds the age of the OB/WR disk (6\,Myrs). With an extraordinarily long period of 224 days, IRS\,16NE is an example of such a system that survived the dense cluster environment. The observed low binary fraction seems to be inconsistent with the current understanding of massive star formation in SMBH accretion disks. In that sense, the inferred binary fraction provides an additional constraint for future theoretical models that try to explain the formation of stars in the vicinity of SMBHs. \section{Conclusions} \begin{itemize} \item The massive 224-day long-period binary IRS\,16NE is a rare system, even in less extreme environments than the Galactic Center. Less than 10\% of all known OB binaries have longer periods. The high mass of the binary constituents allows the large separation even in the dense cluster environment. The binary is dynamically hard, and it is therefore expected to survive dynamical evaporation. \item We identified a new WR eclipsing binary (E60) at a distance of 4.7\arcsec~ from Sgr A*. The system is a contact binary with a short period of only 2.3 days and component masses of $M_{\rm prim}\approx 20\,\msun$ and $M_{\rm sec}\approx 10\,\msun$. After IRS\,16SW, this star is the second known eclipsing binary in the Galactic Center. The system has a remarkably high velocity of $v_{\rm 3D}\approx 460\, \rm km/s$, which is close to or even higher than the escape velocity at its projected distance from Sgr A*. \item The spectroscopic binary fraction of the massive OB/WR stars in the Galactic Center is $F_{\rm SB}=0.27^{+0.29}_{-0.19}$, where the lower and upper limits represent the 95\% confidence interval. This result is broadly consistent with the massive binary fraction observed in dense young clusters (see the discussion in Sec.\,\ref{sec:observed_binary_fractions}). It seems to be inconsistent with the current understanding of star formation in SMBH accretion disks, which predicts a binary fraction close to unity. \item The eclipsing binary fraction ($\Delta m \ge 0.4$) in the GC is $3\pm2\%$. Within the errors, this is consistent with the fraction in other dense OB clusters ($\approx1\%$). \end{itemize} \acknowledgements{T.A. acknowledges support by ERC Starting Grant No. 202996, DIP-BMBF Grant No. 71-0460-0101 and Israel Science Foundation I-CORE grant No. 1829/12. }
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Alkali halide materials are of technological importance due to their excellent electron-emitting properties in the ultraviolet (UV), vacuum ultraviolet (VUV), extreme ultraviolet (EUV) and X-ray energy ranges. These materials are currently employed in vacuum- and gas-based photon detectors~\cite{C.Lu,D. Morrman}, detection of scintillation light~\cite{Daisuke}, medical imaging~\cite{Wei} and positron emission tomography~\cite{F. Garibaldi}, as well as protective layers in visible-sensitive photon detectors~\cite{A. Breskin1996}. Among alkali-halide photocathodes, CsI is the best choice, owing to its high quantum efficiency (QE) in the VUV wavelength range~\cite{A. Breskin1997, A. Breskin1995}. CsI films are also used to enhance field emission (FE) sources, which have potential applications including display devices~\cite{V. Vlashos}, X-ray tubes~\cite{Toru}, charged particle accelerators~\cite{A. Jhingan} and high-power microwave devices~\cite{R.J. Umstattd}. Shiffler et al.~\cite{D.A. Shiffler2002} have reported a reduction in outgassing and improved emission uniformity after CsI coating of carbon fibers. Even a reduction of two orders of magnitude in turn-on voltage was successfully achieved by means of CsI coating on carbon fiber-based FE devices by the same group~\cite{D.A. Shiffler2008}. Due to the importance of CsI photocathodes, several thin film preparation methods, such as thermal evaporation~\cite{V. Dangendorf, J. Seguinot}, ion beam sputtering~\cite{M.A. Nitti2}, e-gun evaporation~\cite{P. Maier-Komor}, spray pyrolysis~\cite{S.O. Klimonsky} and pulsed laser deposition~\cite{S.B. Fairchild}, have been used to study the various physical and chemical properties of CsI. However, it has been observed that thermal evaporation is the best choice for forming a stoichiometric Cs:I ratio~\cite{B. K. Singh} as well as for achieving the highest absolute QE compared to other preparation techniques~\cite{M.A. Nitti2, P. Maier-Komor, S.O. Klimonsky, S.B. Fairchild}. Despite these numerous applications in a variety of fields, very few of the earlier studies deal with the characterization of CsI film structure~\cite{A.S. Tremsin,MA Nitti,Coluzza, J.Almeida, H. Hoedlmoser,triloki}. X-ray diffraction (XRD) and transmission electron microscopy (TEM) are two important techniques commonly used for structural characterization. XRD peak profile analysis endeavors to characterize microstructural features of the sample from the shape and breadth of the Bragg diffraction peaks, which arise due to finite crystallite size and microstrain. As broadening due to finite crystallite size and microstrain occurs together, various analytical methods, such as the Variance method~\cite{florentino}, the Warren-Averbach method~\cite{Warren} and the Williamson-Hall analysis~\cite{G. K.Williamson}, have been adopted to separate the two effects. Among all available methods, Williamson-Hall is a simplified approach to deconvolute strain- and finite-size-induced broadening by plotting the total breadths of the reciprocal lattice points against their distance from the origin~\cite{langford}. In contrast, the Variance and Warren-Averbach methods are more complex to apply, and their application is restricted to materials having high symmetry or exhibiting a high degree of preferred orientation.
Therefore, in the present manuscript, we emphasize the W-H method to study the variation of crystallite size with film thickness and to separate the strain- and finite-size-induced broadening. In the Williamson-Hall method, the broadening of a Bragg peak is assumed to be the sum of the peak broadening due to finite crystallite size and that due to induced strain. If the strain is assumed to be uniform in all crystallographic directions, the W-H model reduces to the uniform deformation model (UDM). In the UDM, all material properties are independent of the direction along which they are measured. Further, in the uniform deformation stress model (UDSM), the strain is assumed to be linearly proportional to the stress according to Hooke's law. The UDSM is an approximation which is valid only for the small strains present in the crystal. Another model, the uniform deformation energy density model (UDEDM), is used to determine the energy density of a crystal. In this approach the crystals are assumed to have a homogeneous and isotropic distribution. However, this assumption does not always hold, and the constants of proportionality associated with the stress-strain relation are no longer independent when the deformation energy density is considered. The present paper accounts for the surface characterization of as-deposited CsI thin films of different thicknesses prepared by the thermal evaporation technique. The characterization of crystalline materials mainly comprises the description of the grain size and of the internal stress or strain due to various lattice defects~\cite{Tamas ungar}. Usually, the size obtained by XRD corresponds to the average size of the smallest undistorted regions in the material, whereas TEM counting is related to regions separated by continuous boundaries in the TEM micrograph. To distinguish the two sizes, we will use the term crystallite size for XRD results and grain size for TEM results. A comparative evaluation of the mean grain size of as-deposited CsI thin films obtained from direct TEM measurement, as well as of the crystallite size obtained from the Williamson-Hall method using XRD measurement, is presented. In addition, the strain associated with the as-deposited CsI films due to lattice deformation is estimated by a modified form of the Williamson-Hall approach, namely the uniform deformation model (UDM). The other modified models, UDSM and UDEDM, are also used to provide an idea of the stress as well as of the uniform deformation energy density. \section{Experimental Details} The experimental setup for CsI deposition consists of a high vacuum evaporation chamber with an oil-free Pfeiffer-made pumping unit equipped with a turbo-molecular pump having a pumping speed of 510 liter/second for $N_{2}$ and a diaphragm pump. The base pressure of this vacuum chamber is of the order of $3\times 10^{-7}$ Torr. Small pieces of CsI crystal were placed in a tantalum boat inside the chamber and carefully heated under a shutter to allow out-gassing from the surface of the crystal, if any. After proper out-gassing and melting of the CsI crystals, thin films of different thicknesses were deposited on polished aluminum (Al) substrates and formvar-coated copper (Cu) grids. Before deposition, the typical composition of the different residual gases, including water vapor, inside the chamber was monitored through a residual gas analyzer (SRS RGA 300 unit), as shown in Figure 1. It was confirmed that the amount of water vapor inside the vacuum chamber was kept under control.
\begin{figure}[!ht] \begin{center} \includegraphics[scale=0.75]{rga.eps} \caption{Residual gas composition inside the vacuum chamber.} \label{fig1} \end{center} \end{figure} During the film deposition, the rate of evaporation was about 1-2 nm per second and the boat and substrate were kept at a distance of about 20 cm. The thickness of the film was controlled by a quartz crystal thickness monitor (Sycon STM100). After film deposition, the vacuum chamber was purged with nitrogen $(N_{2})$ gas in order to avoid the effect of humidity on the prepared CsI samples. Immediately after the chamber opening under constant flow of $N_{2}$, the as-deposited CsI thin films were extracted and placed in a vacuum desiccator. The CsI films deposited on formvar-coated copper grids were used for the TEM measurements, while those deposited on Al substrates were used for the XRD measurements. The structural measurements were performed by the X-ray diffraction (XRD) technique in the Bragg-Brentano parafocussing geometry using a PANalytical X'Pert PRO XRD system. The incident beam optics consists of a $CuK_{\alpha}$ radiation source ($\lambda=1.5406\,\AA$) and a nickel (Ni) filter. XRD measurements have been performed in continuous scan mode in the range $2\theta = 20^{\circ}-80^{\circ}$. The diffracted beam optics consists of a 0.04 rad Soller slit and a scintillator detector. Similarly, transmission electron microscopy (TEM) measurements were done by means of an FEI Tecnai $20G^{2}$ operating at 200 kV for the examination of the structure and grain size of the CsI films. \section{Results and discussion} \subsection{Crystallite size and strain by XRD analysis} XRD patterns of cesium iodide thin films of different thicknesses prepared by the thermal evaporation technique are shown in Figure 2. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.75]{XRD.eps} \caption{X-ray diffraction pattern of CsI thin films of different thickness, deposited on aluminum substrate, and of CsI crystal.} \label{comparision_thickness.eps} \end{center} \end{figure} No extra diffraction peaks corresponding to Cs, $Cs_{2}O$, $CsIO_{3}$ or other CsI phases are detected, indicating that the pure CsI films are of polycrystalline, stoichiometric nature. Further, the XRD result of the raw CsI crystal used for thermal evaporation is shown for comparison. The XRD scan exhibits a number of intense and sharp peaks which are assigned to the indicated Bragg reflections from the CsI crystal. We may observe that the lattice planes corresponding to the preferred peaks for the CsI crystal are: (110), (200), (211), (220), (310), (222) and (321). In the case of the 4 nm ``as-deposited'' CsI thin film, we observe the peak of the (110) lattice plane only. In the case of the 20 nm ``as-deposited'' film, we observe the lattice planes (110) and (220) only. However, for the thicker ``as-deposited'' CsI films (100 nm and 500 nm), the most intense peak of (110) followed by (200), (211), (220) and (321) can be clearly observed. As the (321) peak is contaminated by the (311) peak of the aluminum substrate, it is excluded from the present analysis. These peaks match the peak positions listed for cesium iodide in ASTM card No. 060311, confirming the films to be of CsI. The values of the full width at half maximum (FWHM) and of $2\theta$ corresponding to the most intense (110) peak for the various thicknesses of the CsI films are shown in Table 1. Using the XRD profiles (shown in Figure 2), the lattice parameters of the CsI crystal as well as of the CsI thin films are calculated.
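As a consistency check, the interplanar spacing and the cubic lattice constant follow directly from Bragg's law, $2d\sin\theta=\lambda$, and $a=d\sqrt{h^2+k^2+l^2}$. A minimal Python sketch, using the (110) peak position from Table 1 (the numbers are reproduced from the table; the script itself is only illustrative):
\begin{verbatim}
import numpy as np

lam = 1.5406           # Cu K-alpha wavelength [Angstrom]
two_theta = 27.0594    # (110) peak of the 500 nm film [deg, Table 1]
h, k, l = 1, 1, 0

theta = np.radians(two_theta / 2.0)
d = lam / (2.0 * np.sin(theta))        # Bragg's law
a = d * np.sqrt(h**2 + k**2 + l**2)    # cubic lattice constant
print(f"d(110) = {d:.3f} A, a = {a:.3f} A")   # ~3.29 A and ~4.66 A
\end{verbatim}
The result, $a\approx4.66\,\AA$, agrees with the film lattice constant quoted below.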
The lattice constant $a$ for all thicknesses of the CsI films is obtained as $4.666\,\AA$, whereas the lattice constant for the CsI crystal is about $4.566\,\AA$. \begin{table}[ht] \begin{center} \caption[]{The FWHM and $2\theta$ of the most intense (110) peak for different thicknesses of the CsI films.} \begin{tabular}{|c|c|c|} \hline Thickness & FWHM $(^{\circ})$ & $2\theta$ $(^{\circ})$ \\\hline 500 nm & 0.1476 & 27.0594 \\ 100 nm & 0.1476 & 27.0617 \\ 20 nm & 0.1476 & 27.0596 \\ 4 nm & 0.1968 & 27.0797 \\ \hline \end{tabular} \label{tab:fntable} \end{center} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.75]{shift.eps} \caption{Shifts in the (110) peaks of the X-ray diffraction pattern as compared to the single crystal, shown with a sharp solid line.} \label{xrd_peak_shift} \end{center} \end{figure} The average crystallite size is calculated by using the Debye-Scherrer equation~\cite{P. Scherrer} as follows: \begin{equation} D= \frac{k \lambda }{ \beta_{hkl} \cos\theta} \end{equation} where D is the volume-weighted crystallite size, k is the shape factor (0.89), $\lambda$ is the wavelength of the $CuK_{\alpha}$ radiation, $\beta_{hkl}$ is the full width at half maximum (FWHM) of the particular peak and $\theta$ is the Bragg angle. From the calculations, the average crystallite size of the CsI thin films is obtained as 41 nm and 55 nm for the 4 nm and 20 nm films, respectively, while for the 100 nm and 500 nm thick CsI films it is obtained to be 54.74 nm. The crystallite size obtained by us is in good agreement with the crystallite size of 45 nm reported for 100 nm CsI thin films by Nitti et al.~\cite{M.A. Nitti} using the same thermal evaporation technique. Klimonsky et al.~\cite{S.O. Klimonsky} have also reported a crystallite size of about 45-50 nm for different CsI samples prepared by the spray pyrolysis technique. However, for the same film thickness of 100 nm deposited by means of ion beam sputtering and ion-beam-assisted sputtering techniques, Nitti et al.~\cite{M.A. Nitti2} have reported increased crystallite sizes of about 334 nm and 288 nm, respectively. Further, the crystallite size depends on the broadening of the diffracted peak, and the Williamson-Hall approach~\cite{G. K.Williamson} allows us to distinguish two different reasons for it: one is the finite crystallite size, which varies as $1/\cos\theta$ (see equation 1); the other is the induced strain ($\epsilon$), which is given by the Wilson formula ($\beta_{hkl} = 4 \epsilon\tan\theta$)~\cite{A.R. Stokes}. Therefore, the XRD profile can be used to determine the residual stress and strain in the sample, and an apparent shift of the diffraction patterns from the corresponding crystal data indicates a uniform stress originating in the film due to the thermal evaporation~\cite{G.C. Budakoti, P. Arun}. A shift in the peak position is also observed in our CsI films, as shown in Figure 3 for the (110) plane in comparison with the peaks observed in the XRD scan of the CsI crystal. It indicates that microstrain has developed in the prepared thin films. In our case, the CsI (110) peaks are shifted towards lower angles $\theta$ as compared to the crystal data ($2\theta=27.592^{\circ}$) from ASTM card No. 060311, as shown in Figure 3. The stresses acting in the film arise due to the various methods of film preparation and can affect the properties of the materials; in particular, the photoemissive properties are affected by the method of film preparation, as shown and discussed in reference~\cite{M.A. Nitti2}.
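The Scherrer estimate of equation (1) can be verified directly from the Table 1 values; a minimal sketch (the only subtlety is converting the FWHM from degrees to radians):
\begin{verbatim}
import numpy as np

lam = 0.15406          # Cu K-alpha wavelength [nm]
k = 0.89               # shape factor
fwhm_deg = 0.1476      # FWHM of the (110) peak, 500 nm film [deg, Table 1]
two_theta = 27.0594    # peak position [deg]

beta = np.radians(fwhm_deg)            # breadth in radians
theta = np.radians(two_theta / 2.0)
D = k * lam / (beta * np.cos(theta))   # Scherrer equation (1)
print(f"D = {D:.2f} nm")               # ~54.7 nm, as quoted above
\end{verbatim}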
In the Williamson--Hall approach, the line broadening due to both the finite size of the coherently scattering region and the internal stress in the prepared films is considered. The finite size is accounted for by Scherrer's equation and the stress by the Wilson formula, giving the Williamson--Hall equation~\cite{G. K.Williamson}: \begin{equation} \beta_{hkl}\cos\theta = \frac{k\lambda}{D} + 4 \epsilon\sin\theta \end{equation} where $\epsilon$ is the strain, which is usually assumed to be proportional to the square root of the density of dislocations, $\beta_{hkl}\cos\theta/\lambda$ is the total integral breadth in reciprocal space and $2\sin\theta/\lambda$ is the distance of the reciprocal point from the origin. Figures 4(a) and 4(d) show the measured values of $\beta_{hkl} \cos\theta$ as a function of $4\sin\theta$ for the 500 nm and 100 nm CsI films. One can estimate the strain from the slope of the fitted line and the crystallite size ($D$) from its intercept with the ordinate. Equation (2) corresponds to the uniform deformation model (UDM), which assumes an isotropic crystal. Table 2 shows that both the strain and the estimated crystallite size obtained for the 100 nm film are larger than those for the 500 nm film, indicating that strain and crystallite size decrease with increasing CsI film thickness. \begin{figure*}[!ht] \begin{center} \includegraphics[scale=0.6]{williamson_500.eps} \includegraphics[scale=0.6]{williamson_100.eps} \caption{{Williamson--Hall plots of the 500 nm and 100 nm CsI films assuming (a, d) the uniform deformation model, (b, e) the uniform deformation stress model and (c, f) the uniform deformation energy density model.}} \label{final_100} \end{center} \end{figure*} Further, to incorporate a more realistic situation, an anisotropic approach is adopted in the uniform deformation stress model. The Williamson--Hall equation is therefore modified with an anisotropic strain $\epsilon = \sigma/E_{hkl}$, where $E_{hkl}$ is the Young's modulus in the direction $hkl$ and $\sigma$ is the stress. The modified equation is written as: \begin{equation} \beta_{hkl}\cos\theta = \frac{k\lambda}{D} + \frac{4\sigma\sin\theta}{E_{hkl}} \end{equation} Here $E_{hkl}$ for a cubic system, in the direction of the unit vector with components $l_{i}$, can be calculated using the following equation: \begin{eqnarray} \frac{1}{E_{hkl}}&=& s_{11} -2\left (s_{11} - s_{12} - \frac{1}{2}\,s_{44} \right )\Big({l_1}^2{l_2}^2\nonumber\\ & +& {l_2}^2{l_3}^2 + {l_3}^2{l_1}^2\Big) \end{eqnarray} where $s_{11}$, $s_{12}$ and $s_{44}$ are the elastic compliances of CsI. The relations which provide the connection between the elastic compliances and the stiffnesses $c_{ij}$ are as follows: \begin{equation} s_{11} = \frac{(c_{11}+c_{12})}{(c_{11}-c_{12})(c_{11}+2c_{12})} \end{equation} \begin{equation} s_{12} = \frac{-c_{12}}{(c_{11}-c_{12})(c_{11} + 2c_{12})} \end{equation} \begin{equation} s_{44} =\frac{1}{c_{44}} \end{equation} where the stiffness values are $c_{11} = 2.434\times10^{11}$, $c_{12} = 0.636\times10^{11}$ and $c_{44} = 0.6316\times10^{11}~\mathrm{dyne/cm^{2}}$~\cite{K.Reinitz}. Figures 4(b) and 4(e) show the measured values of $\beta_{hkl}\cos\theta$ as a function of $4\sin\theta/E_{hkl}$, and the uniform deformation stress $\sigma$ is calculated from the slope of the line. The anisotropic lattice strain can be calculated if the $E_{hkl}$ values for the CsI films are known.
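Equations (4)--(7) can be evaluated directly from the quoted stiffness constants; the following Python sketch reproduces the $E_{hkl}$ values reported in the next subsection:
\begin{verbatim}
import math

# Elastic stiffnesses of CsI in dyne/cm^2 (values quoted above)
c11, c12, c44 = 2.434e11, 0.636e11, 0.6316e11

# Compliances from equations (5)-(7)
s11 = (c11 + c12) / ((c11 - c12) * (c11 + 2 * c12))
s12 = -c12 / ((c11 - c12) * (c11 + 2 * c12))
s44 = 1.0 / c44

def young_modulus_gpa(h, k, l):
    """E_hkl for a cubic crystal, equation (4); returns GPa."""
    norm = math.sqrt(h * h + k * k + l * l)
    l1, l2, l3 = h / norm, k / norm, l / norm
    cross = l1**2 * l2**2 + l2**2 * l3**2 + l3**2 * l1**2
    inv_e = s11 - 2.0 * (s11 - s12 - 0.5 * s44) * cross
    return 1e-10 / inv_e   # dyne/cm^2 -> GPa

for plane in [(1, 1, 0), (2, 0, 0), (2, 1, 1), (2, 2, 0)]:
    print(plane, f"{young_modulus_gpa(*plane):.2f} GPa")
# -> 17.29 GPa for (110), (211), (220); 21.70 GPa for (200)
\end{verbatim}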
The crystallite size can also be estimated from the intercept on the ordinate, as shown in Table 2 for the 100 nm and 500 nm CsI films using the uniform deformation stress model (UDSM). In the uniform deformation energy density model (UDEDM, equation 8), however, the deformation energy density ($u$) is considered as the source of strain and is assumed to be uniform in all crystallographic directions. For an elastic system that follows Hooke's law, the uniform energy per unit volume is $u = \epsilon^{2}E_{hkl}/2$. Equation (3) can then be rewritten according to this energy--strain relation: \begin{equation} \beta_{hkl}\cos\theta = \frac{k\lambda}{D} + 4\sin\theta\,\left(\frac{2u}{E_{hkl}}\right)^{1/2} \end{equation} \begin{table*}[!ht] \renewcommand{\arraystretch}{1.0} \caption{Geometric parameters of CsI thin films of different thicknesses: (b) crystallite size from Scherrer's method, (c, d, e) W--H analysis and (f) grain size from TEM counting.} \vskip .3cm \begin{tabular}{|p{15.8cm}|} \hline \centering Williamson--Hall method \end{tabular}\\ \begin{tabular}{|p{1.2 cm}|p{1.5cm}|p{2.5 cm}|p{3.5 cm}|p{4cm}|p{1cm}|} \hline \centering (a)~CsI Sample&\centering (b)~Scherrer's method ~~D (nm)&\centering (c)~Uniform Deformation Model (UDM)&\centering (d)~Uniform Deformation Stress Model (UDSM)&\centering (e)~Uniform Deformation Energy Density Model (UDEDM) &(f)~TEM grain size (nm) \\\hline \end{tabular} \begin{tabular}{|p{1.2 cm}|p{1.5cm}|p{1cm}|p{1.09 cm}|p{0.89cm}|p{0.89cm}|p{0.82cm}|p{0.65cm}|p{0.82cm}|p{0.65cm}|p{0.65cm}|p{1.0cm}|} && \centering D (nm)& \centering Strain $(\epsilon)\times10^{-4}$& \centering D (nm)& \centering Stress $(\sigma)$ (MPa)& \centering Strain $(\epsilon)\times10^{-4}$&\centering D (nm)&Energy density ($u$) (kJ\,m$^{-3}$)& \centering Stress $(\sigma)$ (MPa)&\centering Strain $(\epsilon)\times10^{-4}$& \\\hline \raggedleft500 nm&\centering 54.74& \centering 95.02&\centering 10.54&\centering 66.62&\centering 10.68&\centering 5.8&\centering 71.37&\centering 4.56&\centering 12.95&\centering 7.03&306\\\hline \raggedleft100 nm&\centering 54.74&\centering 115.6&\centering 11.36&\centering 84.53&\centering 18.43&\centering 10.02&\centering 93.40&\centering 12.08&\centering 25.68&\centering 13.96&303 \\\hline \raggedleft20 nm&\centering 55.0&&&&&&&&&&116 \\\hline \raggedleft 4 nm&\centering 41.0&&&&&&&&&&42\\\hline \end{tabular} \end{table*} The uniform deformation energy density ($u$) can be calculated from the slope of the line plotted between $\beta_{hkl}\cos\theta$ and $4\sin\theta(2/E_{hkl})^{1/2}$, as shown in Figures 4(c) and 4(f). The strain can also be calculated once the $E_{hkl}$ values are known and is reported in Table 2. The Young's modulus $E_{hkl}$ is calculated to be 17.2873 GPa for the (110) lattice plane, 21.7048 GPa for (200), 17.2873 GPa for (211) and 17.2873 GPa for (220). Table 2 summarizes the geometrical parameters of CsI films of different thicknesses obtained from the Debye--Scherrer formula, the various methods of W--H analysis and TEM measurements. The average values of crystallite size, internal strain and stress obtained from the various models of the modified W--H analysis differ, indicating that the way strain enters the various forms of W--H analysis has an impact on the average crystallite size of CsI films. There is also a discrepancy between the crystallite sizes obtained from the Debye--Scherrer equation and from the modified W--H analysis.
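The W--H fits of Figure 4 amount to a straight-line regression of $\beta_{hkl}\cos\theta$ on $4\sin\theta$ (equation 2). A minimal NumPy sketch is given below; the $(2\theta, \mathrm{FWHM})$ pairs are illustrative placeholders rather than the measured data of Figure 4:
\begin{verbatim}
import numpy as np

wavelength, k = 0.15406, 0.89   # nm

# Placeholder (2*theta in deg, FWHM in deg) pairs for several peaks
peaks = [(27.06, 0.148), (38.7, 0.165), (47.9, 0.180), (56.0, 0.195)]
two_theta = np.radians([p[0] for p in peaks])
beta = np.radians([p[1] for p in peaks])
theta = two_theta / 2.0

x = 4.0 * np.sin(theta)     # abscissa of the W-H plot
y = beta * np.cos(theta)    # ordinate of the W-H plot

# UDM fit: slope = strain, intercept = k*lambda/D (equation 2)
slope, intercept = np.polyfit(x, y, 1)
print(f"strain = {slope:.2e}, D = {k * wavelength / intercept:.1f} nm")
\end{verbatim}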
This discrepancy between the Scherrer and W--H estimates might be due to the strain contribution to the peak broadening in thin films. A well aligned X-ray diffractometer is used for the present study; errors due to the finite step size of the measurement in determining $2\theta$ are nevertheless considered and propagated properly. The error bars are within the experimental data points in Figure 4, and the correlation coefficients for 4(a), 4(b) and 4(c) are 0.7, 0.5 and 0.6, while for 4(d), 4(e) and 4(f) they are 0.9, 0.8 and 0.8, respectively, showing a good correlation between the data points. The results of the strain--stress analysis are summarized in Table 2. The crystallite size obtained from Scherrer's method using equation (1) is shown in column (b). In column (c), the crystallite size and strain from the uniform deformation model, obtained using the slope and intercept from Figures 4(a) and 4(d), are given. In column (d), the values of crystallite size, stress and strain from the uniform deformation stress model, calculated from Figures 4(b) and 4(e), are shown. In column (e), the crystallite size, energy density, stress and strain are reported using the fitting parameters from Figures 4(c) and 4(f). Column (f) shows the TEM results for the grain size, which are discussed in the next section. \subsection{Particle size and diffraction pattern from TEM} TEM is considered a better tool for grain size determination because it produces a direct image of the sample. The results obtained from the TEM analysis, presented in Figure 5, show that in the case of the 4 nm CsI film the layer does not appear to be continuous, exhibiting a surface coverage of only 29\%. The average grain size estimated from the TEM image is about 42 nm. This is in close agreement with the results obtained from Scherrer's method (see Table 2). In the case of the 20 nm film, the layer exhibits a morphology of interconnected crystallites with a discontinuous structure; the average grain size is about 116 nm. Thicker CsI layers exhibit a quite uniform surface morphology and a larger grain size than the thinner films, with columnar grains of hexagonal structure. The 100 nm and 500 nm thick CsI films have an average grain size of about 300 nm, as shown in Figure 6. The average grain size of a particular TEM image is estimated from the grain size distributions. The size of a particular grain is calculated by using the length scale given by the TEM system. One may observe from Figure 5 that the grain size and the density of grains depend on the thickness of the film. In the case of the thinner CsI films, the grain size as well as the grain density is small, and the surface morphology is discontinuous with small coverage of the surface area. However, with increasing thickness, both the grain size and the density of grains increase, and the film surface becomes fully covered. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.37]{TEM_image.eps} \caption{Transmission electron microscope (TEM) surface images of a) 4 nm, b) 20 nm, c) 100 nm and d) 500 nm ``as-deposited'' CsI thin films.} \label{Sem_image} \end{center} \end{figure} Figure 7 shows selected area electron diffraction (SAED) patterns of CsI thin films of various thicknesses, i.e., (a) 4 nm, (b) 20 nm, (c) 100 nm and (d) 500 nm. Close examination of the rings in the SAED patterns reveals that they consist of a large number of spots, each arising from Bragg reflection from an individual crystallite.
In polycrystalline specimens, the diffraction spots occur at all azimuthal angles and give the appearance of continuous rings if many grains lie within the path of the electron beam (grain size $\ll$ beam diameter at the specimen). \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.45]{hist.eps} \caption{Grain size distributions obtained from transmission electron microscope (TEM) surface images of a) 4 nm, b) 20 nm, c) 100 nm and d) 500 nm ``as-deposited'' CsI thin films.} \label{hist_image} \end{center} \end{figure} The SAED patterns obtained from CsI thin films of various thicknesses show that the films are crystalline in nature. The SAED pattern of the 4 nm CsI thin film demonstrates that the film has randomly oriented grains, like a polycrystalline specimen. However, the SAED patterns obtained for the 20 nm, 100 nm and 500 nm CsI thin films show a discrete lattice of sharp spots, which demonstrates that the films have single crystal domains. The crystallographic planes obtained from the CsI thin films correspond to a body-centered cubic (bcc) structure with lattice constant $a = 4.666$~\AA. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.45]{TEM_diffraction.eps} \caption{The electron diffraction patterns obtained from as-deposited CsI thin films of a) 4 nm, b) 20 nm, c) 100 nm and d) 500 nm thickness.} \label{Tem_image} \end{center} \end{figure} Comparing the results for crystallite size obtained from the XRD and TEM analyses, there is good agreement for the 4 nm CsI film. However, with increasing sample thickness, there is an apparent difference between the grain and crystallite sizes obtained by the two methods: the grain size measured by TEM counting is larger than the crystallite size from the XRD analysis. When the thickness of the film is increased from 4 nm to 500 nm, the crystallite size obtained from Scherrer's equation remains essentially constant, whereas the grain size from the TEM measurements increases sharply. According to the results from equation (1), crystallite growth saturates at a thickness of around 20 nm, and adding further thickness does not increase the crystallite size. However, the W--H analysis suggests that on increasing the film thickness from 100 nm to 500 nm, the crystallite size and strain decrease. The results from the TEM analysis suggest that the grain size of the thicker films, such as 100 nm and 500 nm, is much larger than the crystallite size obtained from the XRD analysis (see Table 2). The reason behind the size variation obtained from these two different techniques (XRD and TEM) can be understood in the following way: the crystallite size obtained from XRD measures a coherently scattering domain normal to the diffracting planes, having a single orientation, while the grain size obtained from TEM measures a cluster of such coherently scattering domains separated by sharp contours (grain boundaries). Further, this variation can be understood in terms of dislocations. When dislocations are arranged in a configuration causing small orientation differences between two adjacent regions, the crystallite size obtained from XRD resolves the two different regions. In TEM, on the other hand, these two regions appear merged into one (a single bigger grain), because the quite small orientation difference produces no visible contrast between them. The boundary is therefore not counted as a grain boundary in the TEM technique~\cite{Tamas ungar, bolmaro}.
\section{Conclusion} CsI films of various thicknesses were deposited by the thermal evaporation technique and characterized by XRD and TEM measurements. The displacement of the (110) diffraction peaks towards lower $\theta$ with respect to the corresponding crystal data indicates that tensile stress exists in all CsI samples. The line broadening of the as-deposited CsI films due to small crystallite size is analyzed by the Debye--Scherrer formula. A modified W--H method is used to estimate the crystallite size and the strain-induced broadening due to lattice deformation. The internal stress in a thin film originates from lattice defects such as dislocations, from lattice misfit with the substrate, and from differential thermal expansion between the film and its substrate. In the present work, the small values of stress suggest a low density of lattice defects in our prepared CsI thin films. Further, both XRD and TEM measurements show that for the 4 nm thin CsI film the grain size and crystallite size are comparable, while for the films of 20 nm, 100 nm and 500 nm thickness, TEM provides a grain size larger than the crystallite size calculated from the XRD analysis. This indicates that in the very small grain size regime there is a good correlation between TEM and XRD, but in the larger grain size regime TEM counting provides a larger average grain size than the crystallite size from XRD. It suggests that as the thickness increases, the coherent domains start merging into bigger grains. Also, on increasing the thickness from 100 nm to 500 nm, the grain size increases while the coherently scattering domains decrease. Further, the differences in crystallite size from the W--H analysis may be due to the variation of the strain treatment within the three models. To the best of our knowledge, a detailed study using UDM, UDSM and UDEDM on CsI films has not been reported yet. We suggest that these models can be used for a precise estimation of the crystallite size and strain of CsI films. \section{Acknowledgment} This work was partially supported by the Department of Science and Technology (DST), the Council of Scientific and Industrial Research (CSIR) and the Indian Space Research Organization (ISRO), Govt. of India. Triloki acknowledges the financial support obtained from UGC under the research fellowship scheme for meritorious students (RFSMS) program, and P. Garg acknowledges the financial support from CSIR, New Delhi, India.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec1} The Code of Federal Regulations (CFR) Title 21, Section~314.126, states that ``The purpose of conducting clinical investigations of a drug is to distinguish the effect of a drug from other influences$\ldots$'' and the purpose is achieved through ``adequate and well-controlled clinical investigation.'' \mbox{According} to the CFR, an adequate and well-controlled trial has a number of characteristics, including: (1) ``The method of assigning patients to treatment and control groups minimizes bias and is intended to assure comparability of the groups with respect to pertinent variables such as age, sex, severity of disease$\ldots$'' and (2) ``Adequate measures are taken to minimize bias on the part of the subjects, observers$,\ldots\,$.'' Characteristic (1) and part of (2) aim to minimize bias by balancing the population between the two treatment arms. By conducting well-controlled clinical trials, we generally anticipate that systematic bias is minimized in superiority trials. However, this belief may be more tenuous in noninferiority trials. Note that noninferiority trials are the major vehicle for evaluating new treatments in many disease areas, after the pioneering consideration of ethical issues in Placebo-controlled trials by \citet{RotMic94}. Consider a Palivizumab-controlled noninferiority trial of Motavizumab for prophylaxis of serious respiratory syncytial virus (RSV) disease in high risk children [\citet{Caretal10}]. This trial will be called MOTA throughout the paper. The goal of the trial was to evaluate whether Motavizumab was noninferior to Palivizumab in the rate of hospitalization attributed to RSV. Let $\hat{\mu}_{\TC}$ be the estimated log-odds ratio of Palivizumab vs. Motavizumab, and let $\hat{\mu}_{\CP}$ be the estimated log-odds ratio of Placebo vs. Palivizumab. Because the log-odds ratio of Placebo vs. Motavizumab cannot be estimated directly (the noninferiority trial does not have a placebo arm), $\hat{\mu}_{\TC}+\hat{\mu}_{\CP}$ is often used as an indirect estimate, with a standard error of $\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}}$. We may consider the noninferiority of Motavizumab to Palivizumab to be met at level $\alpha$ if \mbox{$\hat{\mu} = (\hat{\mu}_{\TC} + \hat{\mu}_{\CP})/\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}} > Z_{\alpha}$}, where $Z_{\alpha}$ is the $100(1-\alpha)$th percentile of the standard normal distribution. In this example, $\hat{\mu}_{\CP}$ and its standard error $\sigma_{\CP}$ were obtained from an earlier Placebo-controlled trial of Palivizumab, \citet{autokey12}, in which $\hat{\mu}_{\CP}=0.86$ (corresponding to an odds ratio of~2.4) with a standard error of 0.21. This trial will be called IMPACT throughout the paper. Now, keeping in mind the fact that the statistic $\hat{\mu}$ synthesizes $\hat{\mu}_{\CP}$ and $\hat{\mu}_{\TC}$, with the former estimated from the IMPACT population and the latter estimated from the MOTA population, we illustrate the following issues. First, both MOTA and IMPACT enrolled subjects exclusively from two disjoint subgroups: (1) children $\leq$24 months with a clinical diagnosis of Bronchopulmonary dysplasia (BPD); and (2) children with $\leq$35 weeks gestation and $\leq$6 months, who did not have BPD. The proportion of subjects with BPD was 51\% in IMPACT and only 22\% in MOTA. Second, treatment heterogeneity of Palivizumab was observed in these two subgroups in IMPACT. For subjects enrolled with BPD, the odds ratio was 4.88 with a 95\% C.I.
of $(2.17, 10.96)$, and for subjects enrolled without BPD, the odds ratio was 1.72 with a 95\% C.I. of $(1.06, 2.79)$ (see Section \ref{sec4}). The Wald Chi-square test of the treatment by BPD interaction through a logistic regression was significant with a $p$-value of 0.03. Because of the population difference and treatment heterogeneity, an appropriate value of $\hat{\mu}_{\CP}$ used in $\hat{\mu}$ should reflect the population of MOTA, while the value of 0.86 instead reflects the population of IMPACT. Using data provided in Section \ref{sec4}, we obtain the adjusted incidence rate of Placebo in the MOTA population as $34/266 \times 22\% + 19/234 \times 78\% = 9.2\%$, and the adjusted incidence rate of Palivizumab as $39/496 \times 22\% + 9/506 \times 78\% = 3.1\%$. Therefore, the adjusted log-odds ratio is 1.14 and the adjusted odds ratio is 3.1. Consequently, the log-odds ratio of Placebo vs. Palivizumab in the MOTA population is better quantified as 1.14 than as 0.86, the unadjusted log-odds ratio $\hat{\mu}_{\CP}$. The difference between 1.14 and 0.86 is a bias associated with this inference. In the previous example, it was easy to adjust for the population difference, which only involved heterogeneity in BPD status. In other examples, the situation can be more complicated. For example, in the development of Elvitegravir [\citet{Moletal12}], the trial population differed from the historical trial population in several characteristics for which treatment heterogeneity has been reported [\citet{Cooetal08}]. These examples show that the analysis of a noninferiority trial may rely on a combination of information from the trial itself and one or more historical trials. The main issue is that the populations of the noninferiority and historical trials may be different. If treatment heterogeneity is present, an inference that does not adjust for the population difference can be biased. Covariate adjustment approaches [\citet{Zha09} and \citet{NieSoo10}] have been proposed to address the problem. Both approaches involve a regression model relating the clinical outcome to treatment and relevant covariates. They cannot be directly applied to obtain the marginal (crude) odds ratio, which is the prespecified primary endpoint in the aforementioned examples. This paper proposes a calibration method through likelihood reweighting so that a study designed to estimate a marginal treatment effect size for one trial population (e.g., IMPACT) may be used to calibrate the effect size in a different but closely related study population (e.g., MOTA). We prove that the maximum likelihood estimator for this reweighted likelihood is a consistent estimator of the treatment effect size in the targeted population. In addition, we also propose a nonparametric approach based on the calibration method. The proposed calibration approach using the likelihood reweighting method is (asymptotically) equivalent to the covariate adjustment approach in some cases, such as linear regression; however, the two approaches differ in other cases. The choice between the two approaches can be subtle and subjective. An important consideration is to make sure that $\hat{\mu}_{\TC}$ and $\hat{\mu}_{\CP}$ in the statistic $\hat{\mu}_{ks} = (\hat{\mu}_{\TC} + \hat{\mu}_{\CP})/\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}}$ have similar interpretations.
Specifically, if $\hat{\mu}_{\TC}$ is a marginal (i.e., overall) treatment effect, as in the previous two examples and in most randomized clinical trials, then $\hat{\mu}_{\CP}$ should probably be calibrated using the method presented in this paper so as to maintain the marginal interpretation. In Section~\ref{sec3.3} we also make some observations on the likelihood reweighting method as an alternative to the covariate adjustment approach used in randomized clinical trials, along with the differences noted in the literature. Although this paper mainly targets noninferiority trials, the results are also applicable to historically controlled trials, which have similar issues [\citet{FriFurDem98}]. A comparison of the likelihood reweighting method to related methods in historically controlled trials, for example, \citet{Zha07}, \citet{Sigetal10} and \citet{Sigetal11}, is provided in the supplement [\citet{NieZhaRub}]. This paper focuses on calibrating the treatment effect size from one population to another population which is different but overlapping. It is related to, but different from, studies generalizing results from a subpopulation to a strictly larger population (whole population); see \citet{ColStu10}, \citet{Greetal08}, \citet{WeiHayPon09} and \citet{Fra09}, among others. These references are restricted to the clinical trial literature, although other areas, such as observational studies, involve similar problems. \section{Motivation, assumptions, and notation} \label{sec2} \subsection{Motivation} The idea behind our method is simple. Recall the \hyperref[sec1]{Introduction}, where we obtained the expected incidence rate of Placebo in the MOTA population as \[ 12.8\% \times 22\% + 8.1\% \times 78\% = \frac{\sum_{i = 1}^{500} y_{i}p_{x_{i},\mathrm{MOTA}}/p_{x_{i},\mathrm{IMPACT}}} {500}, \] where $p_{x_{i},\mathrm{MOTA}}$ and $p_{x_{i},\mathrm{IMPACT}}$ are the percentages of Placebo subjects with characteristics $x_{i}$ in the MOTA and IMPACT trials, respectively. When $x_{i}=1$ (a~diagnosis of BPD), $p_{x_{i},\mathrm{MOTA}}$ and $p_{x_{i},\mathrm{IMPACT}}$ are 22\% and 53\%, respectively; when $x_{i}=0$, they are 78\% and 47\%. By defining $p_{x_{i},\mathrm{MOTA}}/p_{x_{i},\mathrm{IMPACT}}$ as $r_{i}$, the expected incidence rate of Placebo in the MOTA population is simply the mean of the reweighted responses from all subjects, and the weight reflects the change in population composition with respect to BPD status. In many other situations, the parameters cannot be directly calibrated as shown in this example. They can, however, be estimated using a likelihood approach to be described shortly. Robins and colleagues gave an intuitive explanation of how the inverse probability weighting approach reduces bias in the context of estimating marginal structural models (MSMs) in epidemiology [\citet{RobHerBru00}]. Heuristically, weighting each subject by the inverse of the propensity score for the treatment actually received creates a confounding-free pseudo-population, where treatment assignment is independent of the potential outcomes. Typically, the inverse probability weighting approach is used to estimate marginal means of potential outcomes in an estimating equation framework. However, the insights of the work by Robins and colleagues certainly extend to likelihood-based inference and allow us to calibrate the treatment effect.
Specifically, upon appropriately reweighting the likelihood function contributed by each subject, a calibrated treatment effect can be obtained. Before illustrating the reweighted likelihood approach, we introduce some notation and assumptions. \subsection{Assumptions and notations} Consider a trial conducted in a population~$P$ (e.g., the IMPACT population) to compare treatment 1 (e.g., Palivizumab) to treatment 2 (e.g., Placebo). We assume that a random sample from $P$ is randomly assigned to these two treatment groups. The objective of the trial is to quantify the treatment effect size of treatment 1 relative to treatment 2 in population $P$. The objective of this paper is to calibrate the effect size of treatment 1 relative to treatment 2 from the original population to a different but closely related population~$P^*$ (e.g., the MOTA population). We assume that the populations $P$ and $P^*$ are different. In our first example, $P$ refers to a population composed of subjects with BPD (51\%) and without BPD (49\%), and $P^*$ refers to a population with a different composition (22\% with BPD, 78\% without BPD). We also assume that the populations $P$ and $P^*$ are closely related and that the differences between $P$ and $P^*$ are entirely captured by the value of a predictive covariate (vector) $X$ representing subjects' baseline disease characteristics. In addition, we assume that all subjects with covariate value $X=x$ are expected to have the same treatment effect, regardless of their origin (population $P$ or population~$P^*$). That is, subjects with the same covariate value $X$ are exchangeable in $P$ and $P^*$. In our first example, this means that subjects with the same BPD diagnostic status, whether in the IMPACT population or the MOTA population, are exchangeable in terms of response to treatments. The difference and close relationship between populations $P$ and $P^*$ are further illustrated in mathematical form below, after we clearly state the objective of the paper. Let $Y$ be the response variable. We write $\mu_{t}(X) = E ( Y|X,T = t )$ for the conditional mean response of subjects with covariate $X$ who were assigned to treatment $T=t$, and $\mu_{tP} = \nu [ E_{X \in P} \{ \mu_{t}(X) \} ]$ for the transformed marginal mean response with respect to population $P$. When $\nu(\cdot)$ is the identity function, $\mu_{tP}$ is the marginal mean; when $Y$ is a binary variable and $\nu(\cdot)$ is the logit function, $\mu_{tP}$ is the log odds in the population $P$. $\mu_{tP}$ may be used to quantify the response to treatment $T=t$ from a historical trial, although this is not the focus of this paper but a by-product. Instead, this paper focuses on noninferiority trials, in which we are interested in the effect of treatment 1 vs. treatment 2. We thus consider $\mu_{P} = \pi [ E_{X \in P} \{ \mu_{1}(X) \},E_{X \in P} \{ \mu_{2}(X) \} ]$ as a metric to measure the treatment effect of treatment 1 vs. treatment 2. In the historical trial (e.g., IMPACT), $\mu_{tP}$ or $\mu_{P}$ is estimated. However, the objective in this paper is to estimate $\mu_{tP^*} = \nu [ E_{X \in P^*} \{ \mu_{t}(X) \} ]$ or $\mu_{P^*} = \pi [ E_{X \in P^*} \{ \mu_{1}(X) \},E_{X \in P^*} \{ \mu_{2}(X) \} ]$ through calibration, without conducting a different trial in population $P^*$ (the MOTA population). Let $F(x)$ and $F^*(x)$ denote the cumulative distribution functions of $X$ in $P$ and~$P^*$, respectively, and let $f(x)$ and $f^*(x)$ be the corresponding probability density functions.
We first assume that $f^*(x)/f(x) \ne 1$ for some $X=x$, which reflects the difference between populations $P$ and $P^*$. We also assume that $r(x) = f^*(x)/f(x) < \infty$ is well defined. Because the populations are fully described by $X$, this assumption means that any subject included in $P^*$ should have representatives with the same measurements in population $P$. This highlights the close relationship between $P$ and $P^*$. When a value of $x$ is not present in $P^*$, then $r(x)=0$. In this case, we shall not use the subjects in the historical trials with value $x$. \section{Calibration of treatment effect size through likelihood reweighting} \label{sec3} In our first example only the BPD status is considered and the weight is easy to define. However, in our second example many predictive covariates may need to be considered. In the latter case, the definition of the weight is straightforward using the concept of the propensity score [\citet{RosRub83}]. \subsection{Parametric approach}\label{sec3.1} Assume that two random samples of sizes $n_{1}$ and $n_{2}$ from population $P$ are assigned to treatment 1 and treatment 2, respectively. We assume that $y_{it}$, the $i$th subject's response in treatment group $t$, follows a generalized linear model (GLM) with canonical link, \begin{equation}\label{eq1} l_{t}(y,\theta_{tx}) = \exp\biggl\{ \frac{y\theta_{tx} - b ( \theta_{tx} )}{a_{tx} ( \varphi_{tx} )} + c_{tx} ( y,\varphi_{tx} ) \biggr\}. \end{equation} Let $g(\cdot)$ be the canonical link function; then $\mu_{t}(X) = g^{ - 1} ( \theta_{tx} )$. We assume that $g(\cdot)$ is a monotone function with a continuous second derivative. One possible metric to measure the treatment effect is $E_{X} ( \theta_{tx} )$ and another possible metric is $\mu_{tP} = g [ E_{X \in P} \{ \mu_{t}(X) \} ]$. In the binomial-logistic regression case, we implicitly assume that the log odds is additive for the metric $E_{X} ( \theta_{tx} )$ and that the proportion is additive for the metric $\mu_{tP} = g [ E_{X \in P} \{ \mu_{t}(X) \} ]$. The former metric was used in the covariate adjustment approach of \citet{NieSoo10}. The latter metric shall be used in the likelihood reweighting method, as introduced in this paper. These two metrics are related but usually are not identical in nonlinear models. To estimate $\mu_{tP}$, we construct the likelihood function \begin{equation} \label{eq2} \prod_{i = 1}^{n_{t}} l_{t}(y_{it},\alpha_{t}). \end{equation} The maximum likelihood estimate (MLE) $\hat{\alpha}_{t}$ of $\alpha_{t}$ is a consistent estimate of the treatment effect size $\mu_{tP} = g [ E_{X \in P} \{ \mu_{t}(X) \} ]$. The proof for this is standard and similar to that of Theorem \ref{theo1} below, and is therefore omitted. However, in this paper, our goal is to provide a consistent estimate of $\mu_{tP^*} = g [ E_{X \in P^*} \{ \mu_{t}(X) \} ]$. Our strategy is to ``tilt'' the population $P$ so that it matches the population $P^*$, and our matching tool is the propensity score. In the likelihood (\ref{eq2}) we reweight the contribution of the likelihood function from the $i$th subject of the historical trial with the weight $r(x_{i})$, and form a new likelihood function (\ref{equZV}) \renewcommand{\theequation}{2*} \begin{equation}\label{equZV} \prod_{i = 1}^{n_{t}} \bigl\{ l_{t}(y_{it},\alpha_{t}) \bigr\}^{r(x_{i})}. \end{equation} \begin{theorem}\label{theo1} $\hat{\alpha}_{t}^{*}$, the MLE which maximizes (\ref{equZV}), is a consistent estimate of $\mu_{tP^*} = g [ E_{X \in P^*} \{ \mu_{t}(X) \} ]$.
In addition, $\hat{\alpha}_{t}^{*}\sim N ( \mu_{tP^*},A^{ - 1} ( \mu_{tP^*} )B ( \mu_{tP^*} )A^{ - 1} ( \mu_{tP^*} ) )$, where \begin{eqnarray*} A ( \alpha_{t} ) &=& E \biggl\{ r(x)\frac{d^{2}\log l_{t}(y_{it},\alpha_{t})}{d\alpha_{t}^{2}} \biggr\}, \\ B ( \alpha_{t} ) &=& E \biggl\{ r^{2}(x)\frac{d\log l_{t}(y_{it},\alpha_{t})}{d\alpha_{t}} \frac{d\log l_{t}(y_{it},\alpha_{t})}{d\alpha_{t}} \biggr\}. \end{eqnarray*} \end{theorem} The proof of Theorem \ref{theo1} is given in the \hyperref[app]{Appendix}. Theorem \ref{theo1} indicates that the calibrated treatment effect converges to the treatment effect that would be present in population $P^*$. In other words, the likelihood function (\ref{equZV}) is reweighted in such a way that the units can be treated as randomly sampled from the target population, not the population of the study. Note that, if the two trials have the same population, then $r(x) = 1$, so that the likelihood function (\ref{equZV}) reduces to (\ref{eq2}). Briefly, we note that the parametric approach easily extends to include some key covariates, including a treatment indicator as typically used in noninferiority trials, in the likelihood (\ref{equZV}) as follows: \[ \prod_{i = 1}^{n} \bigl\{ l(y_{i}, \alpha+ \beta z) \bigr\}^{r(x_{i})}, \] where $l(y_{i},\alpha+ \beta z)$ is the likelihood function contributed by the $i$th subject and $z$ is a vector of treatment and/or covariates of interest. The MLE converges to the parameters in the target population $P^*$. \subsection{Nonparametric approach}\label{sec3.2} Section \ref{sec3.1} is based on the model assumption~(\ref{eq1}). In this subsection we take a nonparametric approach similar to the reweighting method of \citet{Zha07} [see also \citet{Sigetal10} and \citet{Sigetal11}] for a historical control problem, and estimate $E_{X \in P^*} \{ \mu_{t}(X) \}$ by $\hat{\delta}_{t} = \sum_{i = 1}^{n_{t}} y_{it}r ( x_{i} ) /\sum_{i = 1}^{n_{t}} r ( x_{i} )$. When $n_{t} \to\infty$, \[ \frac{\sum_{i = 1}^{n_{t}} y_{it}r ( x_{i} )} {n_{t}} \to E_{X \in P} \biggl\{ \mu_{t}(X) \frac{f^* ( X )}{f ( X )} \biggr\} = E_{X \in P^*} \bigl\{ \mu_{t}(X) \bigr \}. \] Here we used the fact that $r(x) = f^*(x)/f(x)$, shown in the proof of Theorem~\ref{theo1} in the \hyperref[app]{Appendix}. Similarly, $\sum_{i = 1}^{n_{t}} r ( x_{i} ) /n_{t} \to E_{X} \{ f^* ( x )/f ( x ) \} = 1$. Therefore, $\hat{\delta}_{t} \to E_{X \in P^*} \{ \mu_{t}(X) \}$ and, thus, \[ \mu_{tP^*} = \nu \bigl[ E_{X \in P^*} \bigl\{ \mu_{t}(X) \bigr\} \bigr] \] can be estimated by $\nu ( \hat{\delta}_{t} ) = \nu \{ \sum_{i = 1}^{n_{t}} y_{it}r ( x_{i} ) /\sum_{i = 1}^{n_{t}} r ( x_{i} ) \}$. The variance of the estimator, and therefore the confidence interval for the desired parameter, can be obtained, for example, through the bootstrap method proposed in \citet{Efr81}. \subsection{Likelihood reweighting method vs. the previous covariate adjustment approach}\label{sec3.3} Aside from the differences between the two approaches previously mentioned in the \hyperref[sec1]{Introduction}, we have the following observations on the likelihood reweighting method as an alternative to the covariate adjustment approach used in randomized clinical trials, along with the differences noted in the literature. In the covariate adjustment approach, only the covariates interacting with treatment are considered influential and relevant to the adjustment. However, there are other types of ``influential'' covariates.
One type of ``influential'' covariate relates to noncollapsibility, as illustrated in Table 1 of \citet{GrePeaRob99}. Assuming that this table represents the population $P$ and that $X$ and $Z$ represent the treatment and the status of a disease, 50\% of enrolled subjects have a certain disease and the other 50\% do not have it. The event rates are 40\% and 20\% for treatment 1 $(X=1)$ and treatment 2 $(X=0)$, respectively, in subjects with the disease, and are 80\% and 60\% in subjects without the disease. The treatment 1 vs. treatment 2 odds ratio is 2.67 whether subjects have the disease or not; hence there is no treatment heterogeneity (i.e., no treatment by covariate interaction when measured in odds ratios). Consider a population $P^*$, in which 86\% of enrolled subjects have the disease and the other 14\% do not have it. The overall odds ratio in population $P^*$ is thus 2.44. While the covariate adjustment approach would find that 2.67 is the odds ratio of treatment 1 vs. treatment 2 in both $P$ and $P^*$, the likelihood reweighting method would find that 2.25 and 2.44 are the odds ratios in $P$ and $P^*$, respectively. In other words, the covariate adjustment approach estimates the conditional odds ratio and the likelihood reweighting method estimates the marginal odds ratio. In this example, because the conditional odds ratio is not the same as the marginal odds ratio, the issue of noncollapsibility arises, leading to some questions on using the odds ratio as the measure of treatment effect. The likelihood reweighting method provides a flexible way to circumvent this difficulty. With the percentage difference as an alternative metric (measure of treatment effect), we could work with $\tau_{P^*} = \mu_{1P^*} - \mu_{2P^*}$ and $\mu_{tP^*} = \nu [ E_{X \in P^*} \{ \mu_{t}(X) \} ]$, where $\nu(\cdot)$ is the identity function. The covariate adjustment approach relies on the ability of the data to detect treatment effect heterogeneity, that is, the treatment by covariate interaction. However, in some situations, trials may not be large enough to detect moderate interactions because they are not designed for that purpose. Even if some trials are large, the rarity of events could hamper the ability to detect all heterogeneity. Therefore, it is likely that some important interactions are not going to be detected, leading to partial adjustment with a residual bias. In these scenarios, the likelihood reweighting method can be a good alternative, as it estimates the marginal effect by maximizing the ``unadjusted'' likelihood (\ref{equZV}). Although modeling is also used to estimate the propensity scores, the dependent variable there is the trial indicator, not the outcome of interest. The covariate adjustment approach may face collinearity problems when there are many covariates. When that happens, the model-based adjustment requires a difficult decision on how to omit some covariates and select a good model. When this problem occurs, the likelihood reweighting method can be a good alternative, as it is less susceptible to this issue: while collinearity may also cause problems with parameter estimation in the propensity score models, it does not adversely affect prediction of the propensity score itself. The covariate adjustment approach utilizes the same outcome data for both model selection and formal inference based on the chosen model, and the results could be too optimistic.
In case this becomes a concern, the likelihood reweighting method can be considered as an alternative, as the propensity score modeling and model selection do not involve the outcome data. The techniques presented here are expected to be useful in uncontrolled observational studies as well. Although uncontrolled observational studies inherit many more issues than these controlled trials, the difference or change in the patient populations associated with the different comparison groups remains one of the key issues. As pointed out by the reviewers, a possible weakness of both approaches is that they require subject-level data from the historical control trial. Readers are referred to \citet{NieSoo10} for some discussion. One possible solution could be defining the weight based on summary statistics, an idea illustrated in \citet{Sigetal10} and \citet{Sigetal11}. \section{Applications}\label{sec4} In noninferiority trials, we aim to calibrate the effect size of the active control (e.g., Palivizumab) relative to Placebo from the historical trial population $P$ (e.g., IMPACT) to the noninferiority trial population $P^*$ (e.g., MOTA). Using Bayes' rule, we obtain $r(x) \propto\Pr(P^*|X = x)/\Pr (P|X = x)$. As the population $P^*$ is associated with the experimental treatment (e.g., Motavizumab) and the population $P$ with Placebo, $r(x) \propto\Pr(T = \mathrm{MOTA}|X = x)/\Pr(T = \mathrm{Placebo}|X = x)$; that is, $r(x)$ is proportional to the odds given by the propensity score. \subsection{Development of Motavizumab, a second generation of Palivizumab}\label{sec4.1} Pa\-livizumab is a humanized monoclonal antibody, approved and marketed for passive immunoprophylaxis of respiratory syncytial virus (RSV) in infants at risk for serious RSV disease. It was studied in the IMPACT trial, a phase III randomized, double-blind, Placebo-controlled clinical trial that was conducted to evaluate the ability of prophylaxis with Palivizumab to reduce respiratory syncytial virus infection in high-risk infants. A total of 1502 children with prematurity or bronchopulmonary dysplasia (BPD), also called chronic lung disease (CLD) in infancy, were randomized to receive either Palivizumab or Placebo intramuscularly. The primary endpoint was RSV-related hospitalization within 150 days after administration of the first dose of treatment. For more information on this trial, please refer to \citet{autokey12}. This trial enrolled subjects exclusively from two disjoint subgroups: (1) children 24 months old or younger with a clinical diagnosis of BPD requiring ongoing medical treatment; and (2) children with 35 weeks gestation or less and 6 months old or younger, who did not have a clinical diagnosis of BPD. Among subjects enrolled with a diagnosis of BPD, the incidence rate of RSV-related hospitalization was 12.8\% ($34/266$) in the Placebo arm and 7.9\% ($39/496$) in the Palivizumab arm. Among subjects enrolled without a diagnosis of BPD, the incidence rate of RSV-related hospitalization was 8.1\% ($19/234$) in the Placebo arm and 1.8\% ($9/506$) in the Palivizumab arm. Among the 500 subjects who received Placebo, 53 (10.6\%) had an RSV-related hospitalization; among the 1002 subjects who received Palivizumab, 48 (4.8\%) had an RSV-related hospitalization (see Table \ref{tab1} for details). It is clear that the treatment effect of Palivizumab vs. Placebo was better in subjects enrolled without a diagnosis of BPD than in subjects enrolled with a diagnosis of BPD.
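The overall (marginal) odds ratio quoted in the next paragraph can be reproduced directly from the pooled counts in Table \ref{tab1}; a small Python sketch of the standard log-odds-ratio calculation with a Wald confidence interval:
\begin{verbatim}
import math

# IMPACT counts (Table 1): events/totals pooled over BPD strata
placebo_events, placebo_n = 34 + 19, 266 + 234
paliv_events, paliv_n = 39 + 9, 496 + 506

a, b = placebo_events, placebo_n - placebo_events
c, d = paliv_events, paliv_n - paliv_events

# Marginal log-odds ratio of Placebo vs. Palivizumab and Wald SE
log_or = math.log((a / b) / (c / d))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
print(f"OR = {math.exp(log_or):.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
# -> OR = 2.4, 95% CI = (1.6, 3.5)
\end{verbatim}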
The overall treatment effect size of Palivizumab, as measured by the odds ratio of Placebo vs. Palivizumab, was 2.4 with a 95\% C.I. of $(1.6, 3.5)$. \begin{table} \tablewidth=172pt \caption{Subject distribution: numbers of subjects (numbers of RSV-hospitalizations)} \label{tab1} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lcc@{}} \hline \textbf{IMPACT trial} & \textbf{BPD} & \textbf{Non-BPD}\\ \hline Placebo & 266 (34) & \hphantom{0}234 (19)\\ Palivizumab & 496 (39) & \hphantom{0}506 (9)\hphantom{0}\\ MOTA trial & & \\ Palivizumab & 723 (28) & 2607 (34)\\ Motavizumab & 722 (22) & 2583 (24)\\ \hline \end{tabular*} \end{table} To evaluate Motavizumab, a second generation version of Palivizumab, a~phase~3, randomized, double-blind, Palivizumab-controlled, multi-center, multinational noninferiority trial (MOTA) was conducted to assess whether Motavizumab was noninferior to Palivizumab. More precisely, the question was whether Motavizumab is at least not too much worse than Palivizumab, in the sense that the difference of Motavizumab vs. Palivizumab is smaller than the difference of Placebo vs. Palivizumab. With the risk difference metric, this means that the rate difference of RSV hospitalization between Motavizumab and Palivizumab is smaller than the rate difference of RSV hospitalization between Placebo and Palivizumab. In the metric of the odds ratio, this means that the odds ratio between Motavizumab and Palivizumab is smaller than the odds ratio between Placebo and Palivizumab. In order to evaluate noninferiority, one possible test statistic is \[ \hat{\mu}_{ks} = \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}}}, \] where $\hat{\mu}_{\TC}$ is the overall log-odds ratio of Palivizumab vs. Motavizumab and $\hat{\mu}_{\CP}$ is the overall log-odds ratio of Placebo vs. Palivizumab. \subsection{Calibrated effect size of Palivizumab vs. Placebo in the new MOTA study population}\label{sec4.2} Assume that $Y_{ixt}$, the incidence of RSV hospitalization of the $i$th subject, follows a logistic regression model, $y_{ixt}\sim \operatorname{Binomial}(1,p_{xt})$; $\operatorname{logit}(p_{xt}) = \theta_{xt}$, with $x=0, 1$ representing subjects enrolled without and with a diagnosis of BPD, and $t=0,1$ representing Placebo and Palivizumab. Whether to make inference on $p_{xt}$ (the incidence rate) or $\theta_{xt}$ (the log odds of an event) is generally subjective. Both are used extensively in noninferiority trials. In IMPACT and MOTA, the log-odds ratio was the primary metric, but the risk difference is the primary metric in current HIV trials. Therefore, we shall illustrate both metrics in the Motavizumab example. Let us first consider quantifying the treatment effect using the risk difference. Let $p_{n}$ denote the proportion of the subgroup with $x=1$ in the target population (e.g., the MOTA population) and $p_{h}$ denote the proportion of the subgroup with $x=1$ in the historical population (e.g., the IMPACT population). It is easy to show that the MLEs of the likelihoods (\ref{eq2}) and (\ref{equZV}) are \[ \hat{\alpha}_{t} = \frac{n_{1t}\bar{y}_{\cdot1t} + n_{0t}\bar{y}_{\cdot 0t}}{n_{1t} + n_{0t}};\qquad \hat{\alpha}_{t}^{*} = \frac{n_{1t}({p_{n}}/{p_{h}})\bar{y}_{\cdot1t} + n_{0t}(({1 - p_{n}})/({1 - p_{h}}))\bar{y}_{\cdot0t}}{n_{1t}({p_{n}}/{p_{h}}) + n_{0t}({1 - p_{n}})/({1 - p_{h}})}.
\] In our example, these are \begin{eqnarray*} \hat{\alpha}_{0} &=& \frac{266 \times12.8\% + 234 \times8.1\%} {500} = 10.6\%;\\ \hat{\alpha}_{0}^{*} &=& \biggl({266\frac{0.22}{266/500}12.8\% + 234\frac{0.78}{234/500}8.1\%}\biggr)\\ &&{}\Big/\biggl({266\frac{0.22}{266/500} + 234\frac{0.78}{234/500}}\biggr) = 9.1\%. \end{eqnarray*} Similarly, $\hat{\alpha}_{1} = 4.8\%$ and $\hat{\alpha}_{1}^{*} = 3.1\%$. For the nonparametric approach presented in Section \ref{sec3.2}, the same results are obtained. Indeed, whether using the risk difference or the log-odds ratio as the metric, the parametric approach presented in Section \ref{sec3.1} and the nonparametric approach presented in Section \ref{sec3.2} lead to the same results for this example. The standard error of the calibrated effect size $\hat{\alpha}_{t}^{*}$ can be calculated directly here as \begin{eqnarray*} \operatorname{std}\bigl(\hat{\alpha}_{0}^{*}\bigr) &=& \sqrt{0.22^{2} \times\frac{12.8\% \times ( 1 - 12.8\% )}{266} + 0.78^{2} \times\frac{8.1\% \times ( 1 - 8.1\% )}{234}} \\ &=& 0.015. \end{eqnarray*} Similarly, $\operatorname{std}(\hat{\alpha}_{1}^{*}) = 0.005$. We could also use statistical software to obtain the standard error. In this paper we used the SAS procedure PROC GENMOD with the generalized estimating equation (GEE) option to compute the standard error described in Theorem \ref{theo1}. The resulting standard errors for $\hat{\alpha}_{0}^{*}$ and $\hat{\alpha}_{1}^{*}$ are 0.015 and 0.005, the same as obtained in our direct computation. Now, let us quantify the treatment effect using the log-odds ratio metric. The estimate can be obtained through PROC NLMIXED with the REPLICATE statement. However, the standard error obtained from this procedure is not the standard error stated in Theorem \ref{theo1}. In order to obtain the correct standard error, we could use a bootstrap standard error [\citet{Efr81}], which is 0.25. Alternatively, we can use PROC GENMOD, which results in the same point estimate and standard error. Note that the unadjusted log-odds ratio is $\hat{\mu}_{\CP}=0.86$ with standard error 0.21. The estimated log-odds ratio of Palivizumab vs. Motavizumab is 0.31 with a standard error of 0.20. Using the unadjusted or adjusted effect size, we calculate the unadjusted and adjusted statistics, \begin{eqnarray*} \hat{\mu} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sqrt{\sigma_{\TC }^{2} + \sigma_{\CP}^{2}}} = \frac{0.31 + 0.86}{\sqrt{0.20^{2} +0.21^{2}}} = 4.0;\\ \hat{\mu}_{\mathrm{adj}} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}}} = \frac {0.31 + 1.14}{\sqrt{0.20^{2} +0.25^{2}}} = 4.5. \end{eqnarray*} The significance levels associated with the unadjusted and adjusted inferences are 0.00003 and 0.000003, respectively. Other than this approach, one can also use a more conservative approach, the fixed margin approach [see \citet{autokey7} for details], to make the following inference: \begin{eqnarray*} \hat{\mu}_{f} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sigma_{\TC} + \sigma_{\CP}} = \frac{0.31 + 0.86}{0.20 +0.21} = 2.9;\\ \hat{\mu}_{\mathrm{adj},f} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sigma_{\TC} + \sigma_{\CP}} = \frac{0.31 + 1.14}{0.20 +0.25} = 3.2. \end{eqnarray*} The significance level associated with the unadjusted inference is 0.002, and it is 0.0006 for the adjusted inference.
Although both are less than 0.05, the latter is approximately $0.025^{2}$, which means the significance level is as low as that of two independent clinical trials, each significant at a level of 0.025, fulfilling the regulatory requirement on the quantity of the evidence [see \citet{Sooetal13}]. The quantity requirement has been interpreted in the FDA guidance on drug effectiveness [\citet{autokey6}] as follows: ``With regard to quantity, it has been FDA's position that Congress generally intended to require at least two adequate and well-controlled studies, each convincing on its own, to establish effectiveness.'' Therefore, the adjusted approach could make a difference because Motavizumab was evaluated in a single noninferiority trial. However, we emphasize that this analysis only takes the published data into consideration and assumes that there are no other potential issues associated with the trial design and conduct. \subsection{Calibrating the effect size using subject-level data}\label{sec4.3} Section \ref{sec4.2} presented a simple example to illustrate calibration through likelihood reweighting. In general, subject-level data from a clinical trial are much more complex than the data presented in Table \ref{tab1}. Although the FDA typically has access to all subject-level data for regulatory purposes, we have no authority to use them for purposes other than regulatory decision-making. Therefore, we cannot share our experiences analyzing real data with readers. For the purpose of illustrating the methodology, we will use a simulated data set based on the IMPACT data set and the MOTA data set. We randomly generate variables $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, and $\mathrm{x}_{3}$ so that the generated data sets, say IMPACT$_{0}$ and MOTA$_{0}$, have five variables: BPD status, treatment, $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, and~$\mathrm{x}_{3}$. In IMPACT$_{0}$, we randomly generate a data set for 1502 subjects, with $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, $\mathrm{x}_{3}$ following three independent Bernoulli distributions with success rates of 0.4, 0.6, and 0.5. Similarly, in MOTA$_{0}$, we randomly generate another data set for 6635 subjects, with $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, and $\mathrm{x}_{3}$ following three independent Bernoulli distributions with success rates of 0.6, 0.5, and 0.4. We then pool the two data sets together and define a trial indicator to distinguish IMPACT$_{0}$ and MOTA$_{0}$. To obtain the weight for each subject, we use logistic regression to model the logit of the trial probability as a linear function of BPD status, $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, and $\mathrm{x}_{3}$. Using the fitted model, we predict the probability of each subject in IMPACT$_{0}$ being located in MOTA$_{0}$. We then define the weight $r(x)$ as the odds of the predicted probability times $1502/6635$. Note that here the vector $x$ comprises all the variables: BPD status, treatment, $\mathrm{x}_{1}$, $\mathrm{x}_{2}$, and $\mathrm{x}_{3}$. Now, the estimated propensity score ratio $r(x)$ is defined for all 1502 subjects. The MLE of the reweighted likelihood (\ref{equZV}) can be obtained through the SAS procedure PROC GENMOD (see the attached programming code). With the REPEATED and WEIGHT statements [in the REPEATED statement, the subject is the ID number, and the weight is $r(x)$], the GENMOD procedure provides the GEE-type [\citet{ZegLia86}] sandwich estimates of the standard error, corresponding to the variance formula given in Theorem \ref{theo1}.
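For readers without access to SAS, the reweighted fit can be sketched in Python with the statsmodels package. The following is a schematic under the simulation setup described above; the data generation is a toy stand-in (the outcome is pure noise), and a robust (HC0) covariance is used in place of the GENMOD sandwich estimator:
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_h, n_n = 1502, 6635  # IMPACT0 and MOTA0 sample sizes

# Toy pooled data: trial (0 = IMPACT0, 1 = MOTA0), covariates, outcome
pooled = pd.DataFrame({
    "trial": np.r_[np.zeros(n_h), np.ones(n_n)].astype(int),
    "bpd":   rng.binomial(1, 0.5, n_h + n_n),
    "x1":    np.r_[rng.binomial(1, 0.4, n_h), rng.binomial(1, 0.6, n_n)],
    "x2":    np.r_[rng.binomial(1, 0.6, n_h), rng.binomial(1, 0.5, n_n)],
    "x3":    np.r_[rng.binomial(1, 0.5, n_h), rng.binomial(1, 0.4, n_n)],
    "treat": rng.binomial(1, 0.5, n_h + n_n),
    "y":     rng.binomial(1, 0.08, n_h + n_n),
})

# Step 1: propensity model for the trial indicator
X_ps = sm.add_constant(pooled[["bpd", "x1", "x2", "x3"]])
ps = sm.GLM(pooled["trial"], X_ps, family=sm.families.Binomial()).fit()
p = ps.fittedvalues

# Step 2: weight r(x) = odds of membership in MOTA0, times n_h/n_n
hist = pooled["trial"] == 0
w = (p[hist] / (1.0 - p[hist])) * (n_h / n_n)

# Step 3: weighted outcome GLM on the historical trial with a robust
# (sandwich) covariance, mimicking the GEE-type standard errors
X_out = sm.add_constant(pooled.loc[hist, ["treat"]])
fit = sm.GLM(pooled.loc[hist, "y"], X_out,
             family=sm.families.Binomial(),
             freq_weights=np.asarray(w)).fit(cov_type="HC0")
print(fit.summary())  # 'treat' coefficient: calibrated log-odds ratio
\end{verbatim}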
All the programs, including the simulated data, are available upon request. The point estimate of the adjusted log-odds ratio is 1.23 with a standard error of 0.28. The final inference of the noninferiority trial may use the following adjusted statistics: \begin{eqnarray*} \hat{\mu}_{\mathrm{adj}} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sqrt{\sigma_{\TC}^{2} + \sigma_{\CP}^{2}}} = \frac {0.31 + 1.23}{\sqrt{0.20^{2} +0.28^{2}}} = 4.5;\\ \hat{\mu}_{\mathrm{adj},f} &=& \frac{\hat{\mu}_{\TC} + \hat{\mu}_{\CP}}{\sigma_{\TC} + \sigma_{\CP}} = \frac{0.31 + 1.23}{0.20 +0.28} = 3.2. \end{eqnarray*} Although the problem did not occur in our example, we note that the weighted likelihood approach may result in an estimate with a large variance if there are very small or very large values of $r(x)$. The problem has been described in the propensity score literature and is not unique to our setting. Some promising methods for dealing with extreme propensity score weights include generalized boosted regression (GBR) [\citet{McCRidMor04}, \citet{RidMcC} and \citet{LeeLesStu09}]. Based on our experience, we recommend that the propensity score model be limited to effect modifiers, that is, baseline variables that are associated with the treatment difference, echoing what was recommended by \citet{ColHer08}. Including many variables that are not effect modifiers in the propensity score model generally increases the chance of extreme weights due to the larger population heterogeneity, with no clear benefits in reducing bias. The supplement [\citet{NieZhaRub}] provides more details and a simulation study to illustrate this recommendation. At the end of this section, we also describe some additional alternative approaches. Before doing that, we would like to discuss some special issues when this problem occurs in historically controlled trials and noninferiority trials.\looseness=-1 The weights can be extremely small, such as when some subjects with certain characteristics in the historical trial have no or few counterparts (subjects with the same characteristics) in the noninferiority trial, so that $r(x)$ is 0 or near 0 for these subjects. For example, this may happen when more stringent inclusion criteria are implemented, so that subjects with less severe disease conditions at baseline were included in the historical trials but are excluded from the noninferiority trial. It is understandable that these subjects (e.g., with less severe disease conditions) may not always be used to make inferences about the control vs. placebo effect (i.e., $\hat{\mu}_{\CP}$) for the noninferiority trial subjects (e.g., with more severe disease conditions), unless we assume that the treatment effect $\hat{\mu}_{\CP}$ in these subjects does not depend on the baseline disease status. Only under this assumption may we inflate their weight $r(x)$ by a large factor so that these subjects still represent subjects with different characteristics. Without this assumption, the method is expected to yield a relatively larger variance, because we discard a portion of the information from the historical trial. The weights can be extremely large, such as when a subpopulation present in a noninferiority trial is not well represented in the historical trial. For example, some subjects in noninferiority trials of HIV may use a newly approved potent background drug that was rarely or never used in the historical trials.
In this case, using historical data from another group (subjects who did not have the new background drug) to make inferences about the relative effect of the control vs. placebo (i.e., $\hat{\mu}_{\CP}$) may not be prudent without additional assumptions, and we might have to consider alternative approaches. One possibility is to restrict the proposed analysis to the subpopulation of the current study that is also represented in the historical study. This is essentially equivalent to what propensity score matching would achieve, where unmatched subjects are automatically excluded. Another possibility is to consider the hybrid design idea presented in \citet{Sooetal11}. Now we briefly describe an approach based on stratified propensity scores. Suppose a control treatment is evaluated in historical trials but we would like to calibrate its effect size in a new population for which $r(x)$ is computed. We group $r(x)$ into a number of strata $g_{1},\ldots,g_{L}$. The percentages of subjects falling into $g_{l}$ are $w_{lh}$ and $w_{ln}$ in populations $P$ and $P^*$, respectively. Let $\hat{\beta}_{l}$, with variance $s_{lh}^{2}$, be the treatment effect size in stratum $l$. Then a combined calibrated treatment effect in the population $P^*$ is $\sum_{l = 1}^{L} \hat{\beta}_{l}w_{ln}$ with variance $\sum_{l = 1}^{L} s_{lh}^{2}w_{ln}^{2}$. When a new treatment is evaluated by its comparison to the control in population $P^*$, a stratified analysis based on the subclasses of the propensity score can be implemented as follows. Let $\hat{\gamma}_{l}$ be an estimate of the treatment effect of the new treatment, with variance $s_{ln}^{2}$, in the $l$th propensity score stratum. We evaluate the new treatment through a stratified analysis such as \[ \frac{\sum_{l = 1}^{L} \{ \hat{\gamma}_{l} - \hat{\beta}_{l} \} w_{l^*}}{\sum_{l = 1}^{L} \{ s_{ln}^{2} + s_{lh}^{2} \} w_{l^*}^{2}}. \] Depending on the objective, $w_{l^*}$ may be chosen differently. Alternatively, we may use the stratification method as described in Section \ref{sec4.3}, or define thresholds $\Delta_{2} > \Delta_{1} > 0$ so that subjects with $r(x)$ outside $[\Delta_{1},\Delta_{2}]$ have their weights reset to $\Delta_{1}$ or $\Delta_{2}$, whichever is closer, similar to the method used in \citet{ColHer08}. The determination of $[\Delta_{1},\Delta_{2}]$ depends on the data and on practical assumptions. \section{Concluding remarks}\label{sec5} Motivated by a real example, we show that bias can arise in active controlled noninferiority trials when the estimated treatment effect size for the control treatment is obtained from a historical trial that was conducted in a different population. Covariate adjustment approaches [\citet{Zha09} and \citet{NieSoo10}] have been proposed to address the problem. However, they may not be directly applicable for obtaining the marginal treatment effects, which are often the pre-specified primary endpoints. This paper proposes a likelihood reweighting method through propensity scoring to estimate the marginal treatment effect size in the target population of a noninferiority trial based on data obtained from a historical trial that was conducted in a different population. \begin{appendix}\label{app} \section*{Appendix} \begin{pf*}{Proof of Theorem \ref{theo1}} It is easy to verify that the generalized linear model satisfies Assumptions A1--A3 in \citet{Whi82}; therefore, Theorem 2.2 there is applicable.
The log likelihood of (\ref{equZV}) is \[ \sum_{i = 1}^{n} \frac{f^*(x_{i})}{f(x_{i})}\log l_{t}(y_{it},\alpha_{t}). \] According to Theorem 2.2 [\citet{Whi82}], the maximum likelihood estimate $\hat{\alpha}_{t}$ converges to the parameter, say, $\alpha_{t0}$, which minimizes the Kullback--Leibler-type criterion \[ \int E_{Y|X = x,T = t} \bigl\{ \log f(x) + \log l(y_{it}|x,t) - r(x) \log l(y_{it},\alpha_{t}) \bigr\} \,dF(x), \] where $\log l(y_{it}|x,t)$ is the log-likelihood function obtained from model (\ref{eq1}). Taking derivatives with respect to $\alpha_{t}$, we see that $\alpha_{t0}$ is the solution of the following equation: \[ \int E_{Y|X = x,T = t} \bigl[ r(x) \bigl\{ y - b' ( \alpha_{t} ) \bigr\} \bigr] \,dF(x) = 0. \] Consequently, the estimating equation can be written as \[ \int E_{Y|X = x,T = t} \bigl\{ y - b' ( \alpha_{t} ) \bigr\} \,dF^*(x) = 0. \] Noting that $b' (\cdot) = g^{ - 1}(\cdot)$, we have $\alpha_{t0}=\mu_{tP^{*}} = g [ E_{X \in P^{*}} \{ \mu_{t}(X) \} ]$. As Assumptions A4--A6 are easily verified, the asymptotic properties of the MLE of (\ref{equZV}) are immediately obtained from Theorem 3.2 in \citet{Whi82}. \end{pf*} \end{appendix} \section*{Acknowledgments} We would like to express our great appreciation to the three reviewers, an Associate Editor, and Dr. Paddock for all of their comments and suggestions, which substantially improved the quality of the paper. \begin{supplement} \stitle{Supplement to ``Likelihood reweighting methods to reduce potential bias in noninferiority trials which rely on historical data to make inference''} \slink[doi]{10.1214/13-AOAS655SUPP} \sdatatype{.pdf} \sfilename{aoas655\_supp.pdf} \sdescription{The supplement provides an assessment of the efficiency loss for the weighted likelihood method and a comparison between the likelihood reweighting method and related methods in historically controlled trials.} \end{supplement}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The study of networks is driven by the many real-world systems which can be represented in this way~\cite{newman2003}, such as networks of neuronal~\cite{pernice2011}, cardiac~\cite{casaleggio2014} or smooth muscle~\cite{xu2015} cells. In networks of oscillators it is often the conditions under which the oscillators synchronise which are of interest~\cite{pikros01,str00}. For networks of nonidentical oscillators there is often a continuous transition from asynchrony to higher and higher levels of synchrony as the strength of coupling between oscillators is increased~\cite{kuramoto1984,str00}. But recently a different form of transition known as ``explosive synchrony'' (ES) has been observed. Here, there is a discontinuous increase in network coherence as the coupling strength between oscillators is increased (see~\cite{dsouza2019} for a review). This behaviour was first reported by~\cite{gomez2011} in an undirected network of Kuramoto oscillators~\cite{kuramoto1984} for which the intrinsic frequency of an oscillator was equal to its degree, the degrees having an inverse power law distribution. The authors emphasised that the explosive transition was due to there being a relationship between a local network property (an oscillator's degree) and a property of an oscillator (its intrinsic frequency) --- in this case a positive correlation between the two. Since the publication of~\cite{gomez2011} there have been many studies of ES~\cite{leynav13,bocalm16,zhazou14,zhaboc15}. As recently noted by~\cite{kuebic21}, ES is not actually an unexpected phenomenon, since if there is a continuous transition to synchrony through either a transcritical or pitchfork bifurcation as, say, the strength of coupling between oscillators is increased, then by varying a second parameter sufficiently the criticality of the bifurcation will change. This change leads to bistability and a region of hysteresis as the original parameter is varied (i.e.~an ``explosive'' transition). Many studies of ES use Kuramoto phase oscillators which are coupled through phase differences. However, more realistic oscillators are not coupled in this way, instead having interactions which depend on the states of the two oscillators which are coupled~\cite{tsuter07,moocoo15,borkop03,lukbar13}. The oscillator we study -- the Winfree model~\cite{winfree67} -- is a phase oscillator similar in form to the Kuramoto oscillator but is not described with phase differences, instead involving a ``phase response curve'' and a pulsatile function of phase. We are not aware of any reports of ES in networks of Winfree oscillators, and as one of our contributions we report here our observations of this phenomenon in our numerical explorations of networks of Winfree oscillators in a variety of network configurations. Most previous studies of networks of Winfree oscillators have focused on the all-to-all coupled case~\cite{ariaratnam2001,pazmon14,ha2015,gallego2017,pazmon19}, with an exception being \cite{laing2021}. Many previous studies of ES have considered undirected networks, where oscillators are either coupled or not. However, we consider directed networks -- as seen in neuronal systems -- where one oscillator influences other oscillators coupled downstream but without necessarily a reciprocal influence back upstream. The bistability that often accompanies ES is of interest as it allows a network to have more than one stable state for a given set of parameters, with the current state often determined by the network's history. 
A prominent example of such bistability is in networks which exhibit Up and Down states~\cite{wilkaw96,plekit98}, with transitions between these states driven by either fluctuations or noise of some form, or by transient inputs. The abrupt transitions between states may also provide a model for the conscious-unconscious transition when awaking from anesthesia~\cite{steste99}. The structure of the paper is as follows. In Sec.~\ref{sec:model} we present the network we consider and the degree-based mean field description of its dynamics, derived using the Ott/Antonsen ansatz. Specific cases of independent in- and out-degrees, and equal in- and out-degrees, are considered. In Sec.~\ref{sec:gauss} we consider a Gaussian distribution of intrinsic frequencies which are positively correlated with oscillators' in-degrees. In Sec.~\ref{sec:pow} we consider an inverse power law distribution of intrinsic frequencies and again correlate them with oscillators' in-degrees. We consider the cases of independent in- and out-degrees, and equal in- and out-degrees. A variety of ``explosive'' transitions are observed and explained using simple bifurcation theory. We conclude in Sec.~\ref{sec:disc}. \section{Model} \label{sec:model} The model describing a directed network of Winfree oscillators is as presented in~\cite{laing2021,pazmon14}: \begin{equation} \frac{d\theta_j}{dt}=\omega_j+U(\theta_j)\frac{\epsilon}{\langle k \rangle}\sum_{n=1}^NA_{jn}T(\theta_n); \qquad j=1,2\dots N \label{eq:dthdt} \end{equation} where $\epsilon$ is the strength of coupling, $\langle k \rangle$ is the mean degree of the network and the connectivity is given by the adjacency matrix $A$ with $A_{jn}=1$ if oscillator $n$ connects to oscillator $j$ and zero otherwise. The $\omega_j$ are chosen from a Lorentzian distribution with centre and half-width at half-maximum to be discussed below. The phase response curve~\cite{schpri11,netban05} is chosen to be $U(\theta)=-\sin{\theta}$ and the pulsatile function $T$ which has a maximum at $\theta=0$ is \begin{equation} T(\theta)=\frac{8}{35}(1+\cos{\theta})^4 \end{equation} The in-degree of oscillator $j$ is \begin{equation} k_{in,j}=\sum_{n=1}^N A_{jn} \end{equation} and the out-degree of oscillator $n$ is \begin{equation} k_{out,n}=\sum_{j=1}^N A_{jn} \end{equation} We consider large networks where all in- and out-degrees are large and assume that the network can be characterised by its degree distribution $P({\bf k})$ where ${\bf k}=(k_{in},k_{out})$, and an assortativity function $a({\bf k}'\to {\bf k})$ giving the probability that an oscillator with degree ${\bf k}'$ connects to one with degree ${\bf k}$, given that such oscillators exist. $P({\bf k})$ is normalised such that $\sum_{{\bf k}}P({\bf k})=N$ and we choose the marginal distributions of the in-degrees and the out-degrees to be equal. 
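To make the setup concrete, a minimal direct simulation of~\eqref{eq:dthdt} could be organised as follows. This is only a sketch: the adjacency matrix here is an Erd\H{o}s--R\'enyi placeholder rather than the configuration-model networks used below, the parameter values are illustrative, and simple Euler stepping stands in for a production integrator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

N, eps = 500, 1.0                              # illustrative size and coupling
A = (rng.random((N, N)) < 0.1).astype(float)   # placeholder adjacency A_{jn}
np.fill_diagonal(A, 0.0)                       # no self-connections
k_mean = A.sum() / N                           # mean degree <k>

# Lorentzian intrinsic frequencies, centre 1, half-width 0.01 (illustrative)
omega = 1.0 + 0.01 * np.tan(np.pi * (rng.random(N) - 0.5))

U = lambda th: -np.sin(th)                           # phase response curve
T = lambda th: (8.0 / 35.0) * (1 + np.cos(th)) ** 4  # pulsatile function

def rhs(theta):
    # d(theta_j)/dt = omega_j + U(theta_j)(eps/<k>) sum_n A_{jn} T(theta_n)
    return omega + U(theta) * (eps / k_mean) * (A @ T(theta))

theta = 2 * np.pi * rng.random(N)              # random initial phases
dt = 0.01
for _ in range(50000):                         # integrate to t = 500
    theta += dt * rhs(theta)

print(abs(np.mean(np.exp(1j * theta))))        # Kuramoto-type order parameter
\end{verbatim}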
Using the theory in~\cite{laing2021} (or see~\cite{laibla20,chahat17} for similar derivations) one can show that the long-time dynamics of the network is described by \begin{equation} \frac{\partial b({\bf k},t)}{\partial t}=\frac{\epsilon R({\bf k},t)}{2}+[i\omega_0({\bf k})-\Delta({\bf k})]b({\bf k},t)-\frac{\epsilon R({\bf k},t)}{2}[b({\bf k},t)]^2 \end{equation} where $\omega_0({\bf k})$ and $\Delta({\bf k})$ are the centre and half-width at half-maximum, respectively, of the Lorentzian distribution from which the values of $\omega_j$ for oscillators with degree ${\bf k}$ are chosen: \begin{equation} g(\omega({\bf k}))=\frac{\Delta({\bf k})/\pi}{[\omega({\bf k})-\omega_0({\bf k})]^2+\Delta^2({\bf k})} \label{eq:lor} \end{equation} The variable \begin{equation} b({\bf k},t)=\int_{-\infty}^\infty\int_0^{2\pi} f(\theta,\omega|{\bf k},t)e^{-i\theta}d\theta\ d\omega \end{equation} is the complex-valued order parameter for oscillators with degree ${\bf k}$, where $f(\theta,\omega|{\bf k},t)d\theta d\omega$ is the probability that an oscillator with degree ${\bf k}$ has phase in $[\theta,\theta+d\theta]$ and frequency in $[\omega,\omega+d\omega]$ at time $t$. $R({\bf k},t)$ is given by \begin{equation} R({\bf k},t)=\frac{1}{\langle k \rangle}\sum_{{\bf k}'}P({\bf k}')a({\bf k}'\to{\bf k})G({\bf k}',t) \end{equation} where \begin{eqnarray} G({\bf k},t) & = & 1+\frac{4\left[b({\bf k},t)+\overline{b}({\bf k},t)\right]}{5}+\frac{2\left[b^2({\bf k},t)+\overline{b}^2({\bf k},t)\right]}{5} \nonumber \\ & & +\frac{4\left[b^3({\bf k},t)+\overline{b}^3({\bf k},t)\right]}{35}+\frac{b^4({\bf k},t)+\overline{b}^4({\bf k},t)}{70} \label{eq:G} \end{eqnarray} where an overline indicates the complex conjugate. The form of $G$ is determined by the function $T(\theta)$. This derivation of a degree-dependent mean field description uses the Ott/Antonsen ansatz~\cite{ottant08,ottant09}. A crucial ingredient for the use of this ansatz is the sinusoidal form of the function $U(\theta)$. We are interested in the case of neutral assortativity, for which~\cite{resott14} \begin{equation} a({\bf k}'\to{\bf k})=\frac{k_{out}'k_{in}}{N\langle k \rangle} \end{equation} and cases where either the in- and out-degree of an oscillator are independent, or they are equal (and thus perfectly correlated). Writing $P(k_{in}',k_{out}')$ instead of $P({\bf k}')/N$ we have \begin{equation} R(k_{in},k_{out},t)=\frac{k_{in}}{\langle k \rangle^2}\sum_{k_{in}'}\sum_{k_{out}'}P(k_{in}',k_{out}')k_{out}'G(k_{in}',k_{out}',t) \end{equation} which is clearly independent of $k_{out}$. Thus both $b$ and $G$ must also be independent of $k_{out}$ and we have \begin{equation} R(k_{in},t)=\frac{k_{in}}{\langle k \rangle^2}\sum_{k_{in}'}Q(k_{in}')G(k_{in}',t) \end{equation} where \begin{equation} Q(k_{in}')=\sum_{k_{out}'}P(k_{in}',k_{out}')k_{out}'. \end{equation} For the case of the in- and out-degrees of an oscillator being independent, $P(k_{in}',k_{out}')$ factorises: $P(k_{in}',k_{out}')=p(k_{in}')p(k_{out}')$ where $p$ is the marginal distribution of either the in- or out-degrees. In this case, \begin{equation} R(k_{in},t)=\frac{k_{in}}{\langle k \rangle}\sum_{k_{in}'}p(k_{in}')G(k_{in}',t). \label{eq:Rind} \end{equation} Alternatively, if $k_{in}=k_{out}$ for each oscillator (i.e.~each oscillator has the same in- and out-degree) then $Q(k_{in}')=k_{in}'p(k_{in}')$ and \begin{equation} R(k_{in},t)=\frac{k_{in}}{\langle k \rangle^2}\sum_{k_{in}'}p(k_{in}')k_{in}' G(k_{in}',t).
\label{eq:Rsame} \end{equation} In either case the ``input'' to an oscillator is proportional to its in-degree $k_{in}$. One could use this theory to investigate the effects of choosing different degree distributions $p$ as in~\cite{lai21}, but instead we fix $p$ as a power law distribution and look at the effects of correlating an oscillator's frequency with its degree. Specifically, we make $\Delta$ independent of degree but choose $\omega_0$ to be a function of $k_{in}$ only. (One could make $\omega_0$ a function of $k_{out}$ as well, or instead, but doing so increases the computational cost, and the phenomena we are interested in occur when $\omega_0$ is a function of $k_{in}$ only.) Thus the equations of interest are \begin{equation} \frac{\partial b(k_{in},t)}{\partial t}=\frac{\epsilon R(k_{in},t)}{2}+[i\omega_0(k_{in})-\Delta]b(k_{in},t)-\frac{\epsilon R(k_{in},t)}{2}[b(k_{in},t)]^2 \label{eq:dbdt} \end{equation} where $m\leq k_{in}\leq M$, with $m$ and $M$ the minimum and maximum in- (and out-) degrees, respectively; $R$ is given by either~\eqref{eq:Rind} or~\eqref{eq:Rsame}, and $G$ is given by~\eqref{eq:G} but with dependence on the in-degree only. Note that in previous work~\cite{laing2021} we used equations similar to those above to investigate the effects of varying the correlation between in- and out-degrees of Winfree oscillators, and of parameter- and degree-based assortativity. We also examined the effects of correlating oscillators' intrinsic frequencies with their in-degrees, but did not consider varying the coupling strength. Also, there we considered uniform degree distributions rather than the power law distributions considered here and by many other researchers. Below we will use the mean firing frequency of a network to describe its behaviour, so we now derive an expression for it. The dynamics of an oscillator with in-degree $k_{in}$ is \begin{equation} \frac{d\theta}{dt}=\omega(k_{in})-\epsilon R(k_{in})\sin{\theta}. \end{equation} If $|\omega(k_{in})|>\epsilon R(k_{in})$ then the oscillator will fire periodically with frequency \begin{equation} \frac{\sqrt{\omega^2(k_{in})-\epsilon^2 R^2(k_{in})}}{2\pi}. \end{equation} Thus the expected frequency for oscillators with in-degree $k_{in}$ is \begin{align} f(k_{in}) & = \frac{1}{2\pi}\mbox{Re}\left(\int_{-\infty}^\infty g(\omega(k_{in}))\sqrt{\omega^2(k_{in})-\epsilon^2 R^2(k_{in})}\ d\omega(k_{in})\right) \nonumber \\ & = \frac{1}{2\pi}\sqrt{\frac{\omega_0^2(k_{in})-\Delta^2-\epsilon^2 R^2(k_{in})+\sqrt{[\omega_0^2(k_{in})-\Delta^2-\epsilon^2 R^2(k_{in})]^2+4\Delta^2\omega_0^2(k_{in})}}{2}} \label{eq:fkin} \end{align} where $g$ is the Lorentzian~\eqref{eq:lor}, and thus the mean firing rate over the network is \begin{equation} F=\sum_{k_{in}}p(k_{in})f(k_{in}) \end{equation} For comparison with our results below we briefly describe the dynamics of a fully connected network of Winfree oscillators. Two types of behaviour are typically seen in such a network: synchronous and asynchronous~\cite{pazmon14,gallego2017}, although the fraction of oscillators actually oscillating varies in different asynchronous states. Increasing $\epsilon$ tends to destroy synchronous behaviour through a saddle-node-on-invariant-circle (SNIC) bifurcation, as many of the oscillators ``lock'' to an approximate fixed point.
For moderate $\epsilon$, increasing the spread of intrinsic frequencies tends to destroy synchronous behaviour through a supercritical Hopf bifurcation, as the oscillators become too dissimilar in frequency to synchronise~\cite{pazmon14}. Below we will see a wider variety of bifurcations resulting from the networks' structure. \section{Gaussian frequency distribution} \label{sec:gauss} We choose the degree distribution $p(k)$ to be a truncated power law distribution with exponent $-3$, as many others have done when studying ES~\cite{liu2013,gomez2011}: \begin{equation} p(k)=\begin{cases} a/k^3 & m\leq k\leq M \\ 0 & \mbox{ otherwise} \end{cases} \label{eq:pk} \end{equation} where $a$ is a normalisation constant such that \begin{equation} \sum_{k=m}^M \frac{a}{k^3} =1. \label{eq:norm} \end{equation} Since the degrees are all large (i.e.~$1\ll m$) we treat $k$ as a continuous variable and approximate the sum in~\eqref{eq:norm} by an integral, giving $a=2m^2M^2/(M^2-m^2)$. The cumulative distribution function for $k$ is \begin{equation} \widehat{p}(k)=\int_m^k p(s)\ ds=\frac{a}{2}\left(\frac{1}{m^2}-\frac{1}{k^2}\right). \end{equation} We need to specify the dependence of $\omega_0$ on $k_{in}$. In this section we consider the case where, for a particular realisation of the network, we randomly choose the {\em target} frequencies from a Gaussian distribution with mean 1 and standard deviation $\sigma$, then assign the smallest target frequency to the oscillator with the smallest in-degree, the second smallest target frequency to the oscillator with the second smallest in-degree, etc. (Oscillators with equal in-degree are ranked in random order.) For oscillator $j$ the {\em actual} $\omega_j$ is then chosen from a Lorentzian with centre equal to the target frequency for oscillator $j$ and half-width at half-maximum (HWHM) equal to $\Delta$, similar to the scheme in~\cite{skares15}. In this case we have \begin{equation} \omega_0(k)=\widehat{q}^{-1}(\widehat{p}(k)) \label{eq:om0k} \end{equation} where $\widehat{q}$ is the cumulative distribution function of the appropriate Gaussian distribution, i.e. \begin{equation} \widehat{q}(\omega)=\frac{1}{2}\left[1+\mbox{erf}\left(\frac{\omega-1}{\sqrt{2}\sigma}\right)\right] \end{equation} A demonstration of this is shown in Fig.~\ref{fig:dist}, where we created a directed network with $N=2000$ nodes using the configuration model~\cite{newman2003}, randomly chose $N$ target frequencies from a Gaussian distribution with mean 1 and standard deviation $0.01$, and associated the smallest frequency with the node with the smallest in-degree, etc. These values are shown as dots in panel (a), along with the theoretical relationship~\eqref{eq:om0k}. Panel (b) shows the actual values of the $\omega_j$ (dots), taken from a Lorentzian with centre equal to the values in panel (a) and HWHM 0.00005. \begin{figure} \begin{center} \includegraphics[width=14cm]{dist} \caption{(a): solid line:~\eqref{eq:om0k}; dots: target frequencies and degrees for one realisation of a network. (b): solid line:~\eqref{eq:om0k}; dots: actual frequencies and degrees for one realisation of a network.
Parameters: $m=100,M=400, N=2000, \sigma=0.01,\Delta=0.00005$.} \label{fig:dist} \end{center} \end{figure} Clearly there is a positive correlation between $k_{in,j}$ and $\omega_j$, but the Pearson correlation coefficient between them, defined by \begin{equation} \rho_{k\omega}=\frac{\sum_{j=1}^N (k_{in,j}-\langle k \rangle)(\omega_j-\bar{\omega})}{\sqrt{\sum_{j=1}^N(k_{in,j}-\langle k \rangle)^2}\sqrt{\sum_{j=1}^N(\omega_j-\bar{\omega})^2}} \end{equation} where $\bar{\omega}$ is the mean of the $\omega_j$, is less than 1 and (using this construction) cannot be systematically varied, as was possible in~\cite{laing2021,laibla20}. Note that the idea of not having a perfect match between an oscillator's degree and a prescribed frequency (as we have here for $\Delta\neq 0$) was discussed in~\cite{skaare14}, where it was shown that such a mismatch can actually {\em create} ES. To investigate the influence of varying the correlation between $k_{in,j}$ and $\omega_j$, we created a sequence of the appropriate degrees, chose values of $\omega$ from a Gaussian distribution, and randomly paired them. We then repeatedly performed Monte Carlo swaps of $\omega$ values; potential swaps were accepted if they moved the Pearson correlation coefficient toward a target value -- typically a positive number. This approach required substantial effort to reach high correlations $(\geq 0.9)$, since a sequence of $\omega$ values randomly paired with a degree sequence typically exhibits no correlation. Alternatively, we maximised the correlation between the $k_{in,j}$ and $\omega_j$ by initially sorting both sequences as above; aligning them by rank then produced the highest possible correlation coefficient for the given sequences, which we subsequently reduced using Monte Carlo swaps, with swaps accepted if they pushed the correlation toward a target value. Sequences of degrees and frequencies generated with this approach were assembled into adjacency matrices using our network assembly scheme, the ``permutation method'', presented previously~\cite{means2020}. Note that for networks constructed in this way the Ott/Antonsen approach cannot be used, and we must simulate the resulting networks to determine their behaviour. \subsection{Results} We numerically investigate~\eqref{eq:dbdt} with~\eqref{eq:Rind}. We evaluate functions at all integer in-degrees satisfying $m+1\leq k_{in}\leq M-1$, to avoid the singularities in $\widehat{q}^{-1}$ when its argument is either 0 or 1, resulting in a moderately large set of ordinary differential equations. We could use a more efficient method which approximates the sum in~\eqref{eq:Rind} using fewer ``virtual'' degrees, as explained in~\cite{laibla20}, but that is unnecessary here. Typically we integrate~\eqref{eq:dbdt} to a stable fixed point and then use pseudo-arclength continuation to follow the fixed point as parameters are varied, determining the stability of the fixed point from the eigenvalues of the linearisation of the dynamics about it~\cite{lai14B,gov00}. Periodic solutions are studied in a similar way, by putting a Poincar{\'e} section in the flow (at Re[$b(m+1,t)]=0$) and integrating from this section until the solution next hits it. Stability is given by the Floquet multipliers of the periodic solution.
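As an indication of how such computations can be set up, the following sketch integrates~\eqref{eq:dbdt} with~\eqref{eq:Rind} to a fixed point for the Gaussian target-frequency scheme of this section; continuation and Floquet analysis are not included, the continuum approximations for $p(k)$ and~\eqref{eq:om0k} are used, and the value of $\epsilon$ is illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erfinv

m, M, sigma, Delta, eps = 100, 400, 0.01, 0.0005, 1.0
k = np.arange(m + 1, M)                    # in-degrees m+1 .. M-1
a = 2 * m**2 * M**2 / (M**2 - m**2)        # continuum normalisation
p = a / k**3
p = p / p.sum()                            # renormalise over the grid
k_mean = (k * p).sum()

phat = (a / 2) * (1 / m**2 - 1 / k**2)     # degree CDF, phat(k)
omega0 = 1 + np.sqrt(2) * sigma * erfinv(2 * phat - 1)  # eq. (om0k)

def G(b):                                  # eq. (G), in-degree only
    bb = b.conj()
    return (1 + 4 * (b + bb) / 5 + 2 * (b**2 + bb**2) / 5
            + 4 * (b**3 + bb**3) / 35 + (b**4 + bb**4) / 70).real

def rhs(t, y):
    n = len(k)
    b = y[:n] + 1j * y[n:]
    R = k / k_mean * (p * G(b)).sum()      # eq. (Rind), independent degrees
    db = eps * R / 2 + (1j * omega0 - Delta) * b - eps * R / 2 * b**2
    return np.concatenate([db.real, db.imag])

sol = solve_ivp(rhs, (0, 2000), np.zeros(2 * len(k)), rtol=1e-8)
b = sol.y[:len(k), -1] + 1j * sol.y[len(k):, -1]
print(abs((p * b.conj()).sum()))           # |Z| at the final time
\end{verbatim}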
The usual complex-valued order parameter defined for the network~\eqref{eq:dthdt} is \begin{equation} Y(t)=\frac{1}{N}\sum_{j=1}^N e^{i\theta_j} \end{equation} and for~\eqref{eq:dbdt} the appropriate measure is \begin{equation} Z(t)=\sum_{k_{in}}p(k_{in})\bar{b}(k_{in},t) \end{equation} We first vary $\epsilon$ with $\sigma=0.01$. Results are shown in Fig.~\ref{fig:gau}, where panel (a) shows results from~\eqref{eq:dbdt}. For small $\epsilon$, \eqref{eq:dbdt} has a stable fixed point at which the network is incoherent, with $|Z|$ being small. As $\epsilon$ is increased the fixed point undergoes a subcritical Hopf bifurcation, becoming unstable. (In the all-to-all coupled network, this Hopf bifurcation is supercritical.) The unstable periodic orbit created in this bifurcation is shown with red crosses, and it becomes stable in a saddle-node bifurcation. Thus there is a small range of $\epsilon$ values for which the network is bistable. (For periodic orbits, the quantity plotted on the vertical axis is the average over one period of $|Z(t)|$.) Panel (b) of Fig.~\ref{fig:gau} shows $|Y|$ for the network~\eqref{eq:dthdt} with $\epsilon$ quasistatically increased or decreased, using the final state of the network at one value of $\epsilon$ as the initial condition for the next value. The value plotted is the mean over 5000 time units of $|Y(t)|$. The bistability and hysteresis are clear. Networks of the form used in~\eqref{eq:dthdt} were created using the configuration model~\cite{newman03}. Both self-connections and multiple connections between oscillators were removed in a random way~\cite{laing2021}. Values of $\omega_j$ were then assigned as above, using Lorentzian distributions centred at the target frequencies. \begin{figure} \begin{center} \includegraphics[width=14cm]{gau} \caption{(a): $|Z|$ for fixed point (lines) and periodic solutions (symbols) of~\eqref{eq:dbdt}. Blue solutions are stable while red are unstable. (b): $|Y|$ for the network~\eqref{eq:dthdt}. Black corresponds to increasing $\epsilon$ and magenta to decreasing. Parameters: $m=100,M=400, N=2000, \sigma=0.01,\Delta=0.0005$.} \label{fig:gau} \end{center} \end{figure} Following the Hopf bifurcation and the saddle-node bifurcation of periodic orbits shown in Fig.~\ref{fig:gau}(a) as both $\sigma$ and $\epsilon$ are varied, we obtain Fig.~\ref{fig:gau2p}. Although the range of values of $\epsilon$ for which the system is bistable is small, it increases as $\sigma$ is increased. We have shown that even with very different distributions of in-degrees and intrinsic frequencies, positively correlating them can induce ES in a directed network of Winfree oscillators. \begin{figure} \begin{center} \includegraphics[width=14cm]{gau2p} \caption{Curves of Hopf and saddle-node bifurcations as seen in Fig.~\ref{fig:gau}(a) as both $\sigma$ and $\epsilon$ are varied. The network is bistable between the curves. Parameters: $m=100,M=400,\Delta=0.0005$.} \label{fig:gau2p} \end{center} \end{figure} The results of our numerical investigation into the effects of varying the correlation between the $k_{in,j}$ and $\omega_j$ ($\rho_{k\omega}$) are shown in Fig.~\ref{fig:result_rho_progress1} (panels C and D). We varied $\rho_{k\omega}$ between 0.6 and 0.95 using the Monte Carlo degree-swapping method explained above, and for each network we quasistatically increased or decreased the coupling strength $\epsilon$ and measured the time-averaged value of $|Y|$.
This is shown in Fig.~\ref{fig:result_rho_progress1}C, where we see results similar to those in Fig.~\ref{fig:gau} --- a small region of bistability. Panel D of Fig.~\ref{fig:result_rho_progress1} shows the distribution of effective frequencies as the coupling strength is progressively increased, at $\rho_{k\omega}=0.95$. The distribution of effective frequencies and its evolution reflect the mean and standard deviation of the Gaussian distribution, and the frequencies concentrated around the mean emerge as dominant. However, the effects of having a large value of $\rho_{k\omega}$ for Gaussian distributed intrinsic frequencies are minimal compared with those for power law distributed frequencies, considered next. \begin{figure} \includegraphics[width=.95\linewidth]{new4} \caption{Panels A and C: $|Y|$ as $\epsilon$ is quasistatically varied for networks with different values of $\rho_{k\omega}$. Panels B and D: progression of the mean effective frequency fraction distribution over increasing coupling strength $\epsilon$, for $\rho_{k\omega} = 0.95$; note the distinct colour scales for the effective frequency evolution, due to the underlying $\omega$ distributions (histograms, insets of panels B \& D), and the distinct $\epsilon$ scales, reflecting hysteresis over different coupling strengths. Panels A and B show results when the intrinsic frequencies are power law distributed with an exponent of $-3.07$ and the in- and out-degrees are drawn from a power law distribution with the same exponent, with neutral correlation between the in- and out-degrees. Panels C and D show results for Gaussian distributed frequencies with mean $1.0$ and $\sigma=0.01$, while the in- and out-degrees are power law distributed with exponent $-2.66$ and the in-degrees are highly correlated with the out-degrees ($\rho=0.89$). } \label{fig:result_rho_progress1} \end{figure} \section{Power law frequency distribution} \label{sec:pow} We keep the power law distribution of degrees~\eqref{eq:pk} and now consider the case where the target frequency distribution is also a power law, restricted to the interval between $c$ and $C$, but with the exponent as a parameter, i.e. \begin{equation} p_{\omega_0}(\omega_0)=\begin{cases} a_\omega/\omega_0^{\gamma+1} & c\leq \omega_0\leq C \\ 0 & \mbox{ otherwise} \end{cases} \label{eq:pw} \end{equation} where $a_\omega=\gamma c^\gamma C^\gamma/(C^\gamma-c^\gamma)$. As above, having created a network we randomly choose target frequencies from the distribution~\eqref{eq:pw}, then assign the smallest target frequency to the oscillator with the smallest in-degree, all the way up to the largest target frequency being associated with the oscillator with the largest in-degree. The actual $\omega_j$ are then chosen from a Lorentzian with HWHM equal to $\Delta$ centred at the target frequency, as above. The dependence of $\omega_0$ on $k$ is then \begin{equation} \omega_0(k)=\widehat{p}_{\omega_0}^{-1}(\widehat{p}(k)) \label{eq:pom} \end{equation} where $\widehat{p}_{\omega_0}$ is the cumulative distribution function of $p_{\omega_0}$, i.e.\ \begin{equation} \widehat{p}_{\omega_0}(\omega_0)=\frac{a_\omega}{\gamma}\left(\frac{1}{c^\gamma}-\frac{1}{\omega_0^\gamma}\right) \end{equation} \subsection{Highly correlated degree and frequency} \subsubsection{Independent degrees} We first consider the case of independent degrees, as in Sec.~\ref{sec:gauss}. Thus we numerically investigate~\eqref{eq:dbdt} with~\eqref{eq:Rind}, but using~\eqref{eq:pom}. We choose $\Delta=0.01$, set $c=1,C=6$, and initially choose $\gamma=2$.
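Under the continuum approximations, the map~\eqref{eq:pom} has a closed form which can be evaluated directly; the following short sketch (the function name and default parameter values are ours) is one way to do so.
\begin{verbatim}
import numpy as np

def omega0_powerlaw(k, m=100, M=400, c=1.0, C=6.0, gamma=2.0):
    """Target frequency omega_0(k) from eq. (pom):
    omega_0(k) = phat_omega^{-1}(phat(k)), continuum approximations."""
    a = 2 * m**2 * M**2 / (M**2 - m**2)
    u = (a / 2) * (1 / m**2 - 1 / k**2)     # degree CDF, phat(k)
    a_w = gamma * c**gamma * C**gamma / (C**gamma - c**gamma)
    # invert phat_omega(w) = (a_w/gamma)(c^-gamma - w^-gamma) = u
    return (c**-gamma - gamma * u / a_w) ** (-1 / gamma)

k = np.arange(101, 400)
print(omega0_powerlaw(k)[0], omega0_powerlaw(k)[-1])  # close to c and C
\end{verbatim}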
As in Sec.~\ref{sec:gauss}, for small $\epsilon$ the system has a stable fixed point and this becomes unstable through a subcritical Hopf bifurcation as $\epsilon$ is increased. The results are shown in Fig.~\ref{fig:powA}, where we see the periodic orbit created in the Hopf bifurcation, giving the same scenario as in Fig.~\ref{fig:gau}~(a). Quasistatically increasing $\epsilon$, the solution of~\eqref{eq:dbdt} would jump from a fixed point to a periodic state with amplitude significantly larger than zero. Decreasing $\epsilon$, the solution would jump from a finite-amplitude periodic orbit to a fixed point. (As above, for periodic orbits, the quantity plotted on the vertical axis is the average over one period of $|Z(t)|$.) \begin{figure} \begin{center} \includegraphics[width=14cm]{powA} \caption{$|Z|$ for fixed point (lines) and periodic (symbols) solutions of~\eqref{eq:dbdt}. Blue solutions are stable while red are unstable. Parameters: $c=1,C=6,m=100,M=400,\gamma=2,\Delta=0.01$.} \label{fig:powA} \end{center} \end{figure} The behaviour of a particular realisation of the discrete network~\eqref{eq:dthdt} is slightly different, since a fixed point of~\eqref{eq:dbdt} corresponds to an incoherent solution of~\eqref{eq:dthdt} for which $|Y|$ is not constant, having small fluctuations about an average value. An example of such dynamics is shown in Fig.~\ref{fig:dyn}(a), with $\epsilon=0.92$. (Other parameters have the same values as in Fig.~\ref{fig:powA}.) We see that most oscillators are oscillating, but with independent phases. Similarly, a periodic solution of~\eqref{eq:dbdt} corresponds to a solution of~\eqref{eq:dthdt} for which $|Y|$ is nearly periodic, with the vast majority of the oscillators having the same average frequency, as shown in Fig.~\ref{fig:dyn}(b) ($\epsilon=1$). Some oscillators with high in-degree fire at multiples of this frequency and some are unlocked, giving a state referred to by~\cite{ariaratnam2001} as ``partially locked hybrid states.'' See also~\cite{pazmon14} for examples of these dynamics. \begin{figure} \begin{center} \includegraphics[width=14cm]{dyn} \caption{$\sin{\theta}$ shown in colour for a simulation of~\eqref{eq:dthdt}. The oscillators are sorted by their in-degree. (a): $\epsilon=0.92$. (b): $\epsilon=1$. Other parameters: $N=5000,c=1,C=6,m=100,M=400,\gamma=2,\Delta=0.01$.} \label{fig:dyn} \end{center} \end{figure} However, for $\gamma=1$ the results are very different. As seen in Fig.~\ref{fig:powB}(a), the branch with the lower value of $|Z|$ still undergoes a Hopf bifurcation. Numerical integration of~\eqref{eq:dbdt} near the bifurcation does not reveal a stable periodic orbit, so this bifurcation seems subcritical, creating an unstable periodic orbit as $\epsilon$ is decreased. This orbit never becomes stable, and we hypothesise that it is destroyed in a homoclinic bifurcation with the ``middle'' unstable branch. Thus the system has no stable periodic orbits, but rather a region of bistability between two fixed points with different values of $|Z|$. Either increasing or decreasing $\epsilon$, the network jumps from one fixed point to another. Fig.~\ref{fig:powB}(b) shows the mean firing rate $F$ across the network, and we see that the branch with large $|Z|$ has small $F$ and vice versa. So although a large $|Z|$ normally indicates synchronous oscillations, here it corresponds to a state in which most of the oscillators are locked at an approximate fixed point, not firing.
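Evaluating the expected frequencies themselves is straightforward; the following small helper (the names are ours) implements~\eqref{eq:fkin} and the network-averaged rate, assuming the fixed-point values $R(k_{in})$ have already been computed.
\begin{verbatim}
import numpy as np

def expected_freq(omega0, R, eps, Delta):
    # Expected firing frequency f(k_in) of eq. (fkin); omega0 and R are
    # arrays of omega_0(k_in) and R(k_in) evaluated at a fixed point.
    s = omega0**2 - Delta**2 - (eps * R)**2
    inner = (s + np.sqrt(s**2 + 4 * Delta**2 * omega0**2)) / 2
    return np.sqrt(inner) / (2 * np.pi)

def mean_rate(p, omega0, R, eps, Delta):
    # Network-averaged firing rate F = sum_k p(k) f(k).
    return np.sum(p * expected_freq(omega0, R, eps, Delta))
\end{verbatim}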
\begin{figure} \begin{center} \includegraphics[width=14cm]{powBc} \caption{(a) $|Z|$ and (b) mean firing rate $F$ for fixed points of~\eqref{eq:dbdt}. Solid: stable; dashed: unstable. The bifurcation on the lower branch in (a) is a subcritical Hopf. Parameters: $c=1,C=6,m=100,M=400,\gamma=1,\Delta=0.01$.} \label{fig:powB} \end{center} \end{figure} Fig.~\ref{fig:prof6} shows (on a logarithmic scale) the expected frequency for oscillators with in-degree $k_{in}$, given by~\eqref{eq:fkin}, for the three coexisting steady states in Fig.~\ref{fig:powB} at $\epsilon=1.7$. The stable solution with the highest frequencies has the lowest value of $|Z|$ and vice versa. Interestingly, for the upper curve the frequency increases with in-degree $k_{in}$, but for the lower two curves the maximum frequency does not occur at either extreme of the $k_{in}$ values. \begin{figure} \begin{center} \includegraphics[width=12cm]{prof6} \caption{Expected frequency for oscillators with in-degree $k_{in}$, given by~\eqref{eq:fkin}, for three coexisting steady states at $\epsilon=1.7$. See Fig.~\ref{fig:powB}. Blue: stable solutions; red: unstable solution. Other parameters: $c=1,C=6,m=100,M=400,\gamma=1,\Delta=0.01$.} \label{fig:prof6} \end{center} \end{figure} A third scenario occurs for $\gamma=1.5$, as seen in Fig.~\ref{fig:powC}. The fixed point that is stable for small $\epsilon$ becomes unstable through a subcritical Hopf bifurcation as $\epsilon$ is increased, as in Fig.~\ref{fig:powA}, but the stable periodic orbit is destroyed in a SNIC bifurcation which occurs at a slightly lower value of $\epsilon$ than the Hopf bifurcation. So if $\epsilon$ is slowly increased the network will jump from one fixed point to another. But if $\epsilon$ is then decreased the network will switch from a fixed point to a stable periodic orbit. This stable orbit is then destroyed in a saddle-node bifurcation as $\epsilon$ is further decreased, and the network will jump to the original fixed point. \begin{figure} \begin{center} \includegraphics[width=14cm]{powC} \caption{$|Z|$ for fixed point (lines) and periodic (symbols) solutions of~\eqref{eq:dbdt}. Blue solutions are stable while red are unstable. The stable branch of periodic orbits terminates at the SNIC bifurcation. Parameters: $c=1,C=6,m=100,M=400,\gamma=1.5,\Delta=0.01$.} \label{fig:powC} \end{center} \end{figure} \subsubsection{Identical degrees} We now consider the case of identical in- and out-degrees. We numerically investigate~\eqref{eq:dbdt} with~\eqref{eq:Rsame}, using the power law distribution of degrees~\eqref{eq:pk} and the power law distribution of frequencies~\eqref{eq:pw}. The results for $\gamma=2$ are shown in Fig.~\ref{fig:idA}. The scenario is qualitatively the same as in Fig.~\ref{fig:powC}, with bistability either between two fixed points or between a fixed point and a periodic solution. \begin{figure} \begin{center} \includegraphics[width=14cm]{idA} \caption{Identical degrees. $|Z|$ for fixed point (lines) and periodic (symbols) solutions of~\eqref{eq:dbdt}. Blue solutions are stable while red are unstable. The stable branch of periodic orbits terminates at the SNIC bifurcation. Parameters: $c=1,C=6,m=100,M=400,\gamma=2,\Delta=0.01$.} \label{fig:idA} \end{center} \end{figure} For $\gamma=1$ we obtain Fig.~\ref{fig:idB}, showing yet another scenario. Here there are no Hopf bifurcations, only two saddle-node bifurcations of fixed points, with a region of bistability between them.
\begin{figure} \begin{center} \includegraphics[width=14cm]{idBc} \caption{Identical degrees. (a) $|Z|$ and (b) mean firing rate $F$ for fixed points of~\eqref{eq:dbdt}. Solid: stable; dashed: unstable. Parameters: $c=1,C=6,m=100,M=400,\gamma=1,\Delta=0.01$.} \label{fig:idB} \end{center} \end{figure} All of the results reported in this section have been verified using simulations of the discrete network (results not shown). \subsubsection{Varying degree-frequency correlation} We repeated our numerical investigation into the effects of varying the degree-frequency correlation $\rho_{k\omega}$, with sequences generated via the Monte Carlo swapping and permutation assembly method, now using power law distributed frequencies. The results are shown in Fig.~\ref{fig:result_rho_progress1}, panels A and B. For the power law distributed $\omega$, increasing $\rho_{k\omega}$ corresponds to the appearance and subsequent widening of the region of hysteresis. The Gaussian case (panels C and D) also exhibits hysteresis, but the region narrows at the highest level of correlation, and the behaviour is more sensitive to the network realisation, that is, to the final assembly of the connectivity matrix $A$ for the same degree and $\omega$ sequences. Some realisations exhibited hysteresis while others, with all other elements identical, did not; compared with the power law distributed frequencies, we observed less consistency in emergent hysteresis across the multiple network realisations. Further, the Gaussian $\omega$ results are simply noisier, partly because hysteresis occurs at far lower values of $\epsilon$, so the plots are on a different scale. The difference is also due to the modes of synchronisation for the two $\omega$ distributions: the Gaussian case displays dynamic synchrony, while the power law case displays a quiescent synchrony in which the system locks at fixed points of zero frequency. Note the progression of the mean effective frequency distributions with increasing coupling strength (at the highest $\rho_{k\omega} = 0.95$), where the power law $\omega$ system collapses into quiescence (Fig.~\ref{fig:result_rho_progress1}, panel B), as opposed to the scenario for Gaussian distributed $\omega$ (Fig.~\ref{fig:result_rho_progress1}, panel D). Coherent frequencies emerge at levels dictated by the concentration of $\omega$ values for each distribution: close to zero for the power law and close to the mean ($\mu=1.0$) for the Gaussian. \section{Discussion} \label{sec:disc} We have considered directed networks of Winfree oscillators with truncated inverse power law distributions of both in- and out-degrees. We considered the case of oscillators having independent in- and out-degrees, and also the case where they are equal. For independent degrees we examined both Gaussian and power law distributed intrinsic frequencies, with these frequencies highly correlated with oscillators' in-degrees as a result of sorting both sets and associating quantities of equal rank. We investigated the effects of varying this degree of correlation. For identical in- and out-degrees we also considered power law distributed frequencies with a positive correlation between an oscillator's in-degree and its intrinsic frequency. We varied both the width of the Gaussian frequency distribution (when used) and the exponent in the power law frequency distribution, and examined the transitions that occurred as the strength of coupling within the network was varied.
In all cases shown there was an ``explosive'' transition as the coupling strength was varied, either from one fixed point to another or between a fixed point and a periodic solution. An exception occurs at lower degree-frequency correlations in the power law distributed case. A variety of scenarios were seen. This is in contrast to the similar and much more widely studied Kuramoto model, for which transitions are only between an incoherent fixed point and a partially synchronous periodic solution~\cite{dsouza2019,gomez2011}. The range of scenarios is a result of the form of a Winfree oscillator: for strong enough coupling an oscillator will approximately lock to a fixed point, so that even though the measure of synchrony $|Z|$ may increase as the coupling strength is increased, this does not necessarily correspond to synchronous firing, as can be seen in Figs.~\ref{fig:powB} and~\ref{fig:idB}. Acknowledgements: This work was partially supported by the Marsden Fund Council from Government funding, managed by Royal Society Te Aparangi, grant number 17-MAU-054.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{introduction} The classical Stekloff problem, in its original form the eigenvalue problem \begin{align*} & \begin{cases} \Delta u=0\,\,\,\,&\text{ in } \Omega ,\\ \frac{\partial u}{\partial\nu}=\sigma u\,\,\,\,&\text{ on } \partial\Omega, \end{cases} \end{align*} was introduced by Stekloff in \cite{stekloff} for bounded domains $\Omega$ of the plane, and was afterwards studied by Payne in \cite{payne} for bounded plane domains with non-negative curvature. This problem is physically interesting because the eigenfunctions represent the steady-state temperature on a domain whose boundary flux is proportional to the temperature; see \cite{stekloff} for more details. Since then many authors have studied this subject, with many advances in the understanding of this topic; see for instance \cite{stekloff, weinstock, payne, escobar, escobar2, escobar3, kuttler, lima, raulot, xia, wang, binoy, GP} and references therein. More specifically, many authors have studied ways to estimate, or determine exactly, the eigenvalues associated with the Stekloff problem and its modifications; see \cite{escobar, xia, wang}. Other interesting objects in differential geometry are Riemannian manifolds endowed with a smooth positive density function; these are directly related to the Ricci flow, the mean curvature flow and the theory of optimal transportation, see \cite{morgan, espinar} for a good overview of this subject. Aiming to study the various Stekloff problems in the weighted context, we introduce the necessary concepts. We recall that a {weighted Riemannian manifold} is a Riemannian manifold $({M},g)$ endowed with a real-valued smooth function $f: M \to \mathbb{R}$ which is used as a density to measure geometric objects on $M$. Associated to this structure we have an important second order differential operator defined by $$\Delta_f u= \Delta u - \langle\nabla u, \nabla f\rangle,$$ where $u \in C^\infty(M)$. This operator is known as the Drift Laplacian. Also, following Lichnerowicz \cite{Lichnerowich} and Bakry and \'Emery \cite{bakry}, the natural generalizations of the Ricci curvature are defined as \begin{equation}\label{BE} {\rm{Ric}}_f={\rm{Ric}} + {\rm{Hess\,}} f \end{equation} and \begin{equation}\label{mBE} {\rm{Ric}}_f^k={\rm{Ric}}_f-\frac{df\otimes df}{k-n-1}, \end{equation} where $k>n+1$, or $k=n+1$ and $f$ is a constant function.\par \medskip In this paper we will consider $M^{n+1}$, a compact oriented Riemannian manifold with boundary $\partial M$. Let $i: \partial M \hookrightarrow M$ be the standard inclusion and $\nu$ the outward unit normal on $\partial M$. We will denote by $II$ the {second fundamental form} associated to $\nu$, $\langle \nabla_X \nu, Y\rangle = II(X,Y),$ and by $H$ the {mean curvature} of $\partial M$, that is, the trace of $II$ divided by $n$.
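For orientation, we note a simple model example (ours, not taken from the references above). On the closed unit ball $M=\overline{B}(0,1)\subset\mathbb{R}^{n+1}$ with density $f(x)=\frac{|x|^{2}}{2}$, the Drift Laplacian is the Ornstein--Uhlenbeck-type operator $$\Delta_f u=\Delta u-\langle x,\nabla u\rangle,$$ and, since ${\rm{Ric}}=0$ and ${\rm{Hess\,}} f=g$, one finds $${\rm{Ric}}_f=g \qquad \mbox{and} \qquad {\rm{Ric}}_f^k(X,X)=|X|^2-\frac{\langle x,X\rangle^2}{k-n-1},$$ so that ${\rm{Ric}}_f^k\geq0$ on $M$ whenever $k\geq n+2$.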
We recall that the {weighted mean curvature} of the inclusion $i$, introduced by Gromov in \cite{g}, is given by $$H_{f}=H-\dfrac{1}{n}\langle \nu,\nabla f\rangle.$$ Finally, consider the following three kinds of weighted Stekloff problems: \begin{align}\label{steklov} & \begin{cases} \Delta_fu=0\,\,\,\,&\text{ in } M ,\\ \frac{\partial u}{\partial\nu}=p u\,\,\,\,&\text{ on } \partial M; \end{cases} \intertext{} & \begin{cases}\label{2} \Delta_f^2u=0\,\,\,\,&\text{ in } M,\\ u=\Delta_fu-q\frac{\partial u}{\partial\nu}=0\,\,\,\,&\text{ on } \partial M; \end{cases} \intertext{} & \begin{cases}\label{15} \Delta_f^2u=0\,\,\,\,&\text{ in } M,\\ u=\frac{\partial^2u}{\partial \nu^2}-q\frac{\partial u}{\partial \nu}=0\,\,\,\,&\text{ on } \partial M, \end{cases} \end{align} where $\nu$ denotes the outward unit normal on $\partial M$. The first non-zero eigenvalues of the above problems will be denoted by $p_1$ and $q_1$, respectively. We use the same letter for the first non-zero eigenvalues of the last two problems because, whenever the weighted mean curvature of $\partial M$ is constant, the two problems are equivalent. Lastly, for the sake of simplicity, we will omit the weighted volume element in the integrals throughout the text. We are now able to state our results. \medskip Our first result reads as follows: \begin{theorem}\label{3} Let $M^{n+1}$ be a compact weighted Riemannian manifold with ${\rm{Ric}}_f^k\geq0$ and boundary $\partial M$. Assume that the weighted mean curvature of $\partial M$ satisfies $H_f\geq\frac{(k-1)c}{n}$ for some positive constant $c$, and that the second fundamental form satisfies $II\geq cI$ in the quadratic form sense. Denote by $\lambda_1$ the first non-zero eigenvalue of the Drift Laplacian acting on functions on $\partial M$. Let $p_1$ be the first eigenvalue of the weighted Stekloff eigenvalue problem $(\ref{steklov})$. Then, \begin{equation} p_1\leq \frac{\sqrt{\lambda_1}}{(k-1)c}(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2}), \end{equation} with equality if and only if $M$ is isometric to an $(n+1)$-dimensional Euclidean ball of radius $\frac{1}{c}$, $f$ is constant and $k=n+1$. \end{theorem} The second result is the following: \begin{theorem}\label{12} Let $M^{n+1}$ be a compact connected weighted Riemannian manifold with ${\rm{Ric}}_f^k\geq0$ and boundary $\partial M$. Assume that the weighted mean curvature of $\partial M$ satisfies $H_f\geq \frac{k-1}{k}c$ for some positive constant $c$. Let $q_1$ be the first eigenvalue of the weighted Stekloff eigenvalue problem $(\ref{2})$. Then $$q_1\geq nc.$$ Moreover, equality occurs if and only if $M$ is isometric to a Euclidean ball of radius $\frac{1}{c}$ in $\mathbb R^{n+1}$, $f$ is constant and $k=n+1$. \end{theorem} The next results are \begin{theorem}\label{17} Let $M^{n+1}$ be a compact connected weighted Riemannian manifold with boundary $\partial M$. Denote by $A$ and $V$ the weighted area of $\partial M$ and the weighted volume of $M$, respectively. Let $q_1$ be the first eigenvalue of the weighted Stekloff eigenvalue problem $(\ref{2})$. Then, $$q_1\leq \frac{A}{V}.$$ Moreover, if in addition ${\rm{Ric}}_f^k\geq0$ on $M$ and there is a point $x_0\in \partial M$ such that $H_f(x_0)\geq\frac{A}{(n+1)V}$, then $q_1=\frac{A}{V}$ implies that $M$ is isometric to an $(n+1)$-dimensional Euclidean ball, $f$ is constant and $k=n+1$.
\end{theorem} and \begin{theorem}\label{191} Let $M^{n+1}$ be a compact connected weighted Riemannian manifold with ${\rm{Ric}}_f^k\geq0$ and nonempty boundary $\partial M$. Assume that $H_f\geq \frac{(k-1)c}{n}$ for some positive constant $c$. Let $q_1$ be the first eigenvalue of the problem $(\ref{15})$. Then $$q_1\geq c.$$ Moreover, equality occurs if and only if $M$ is isometric to a ball of radius $\frac{1}{c}$ in $\mathbb R^{n+1}$, $f$ is constant and $k=n+1$. \end{theorem} Lastly, we state a sharp estimate of the first non-zero Stekloff eigenvalue of surfaces under suitable hypotheses. \begin{theorem}\label{Escobar} Let $M^2$ be a compact weighted Riemannian manifold with boundary. Assume that $M$ has non-negative ${\rm{Ric}}_f$ and that the geodesic curvature $k_g$ of $\partial M$ satisfies $k_g-f_{\nu}\geq c>0$. Let $p_1$ be the first non-zero eigenvalue of the Stekloff problem $(\ref{steklov})$. If $f$ is constant on the boundary $\partial M$, then $p_1\geq c$. Moreover, equality occurs if and only if $M$ is the Euclidean ball of radius $c^{-1}$ and $f$ is constant. \end{theorem} \section{Preliminaries} In this section we recall some results needed to prove the theorems stated in the introduction. We present some of the proofs for the sake of completeness. \medskip In \cite{batista} the authors proved the following useful inequality. \begin{proposition}\label{9} Let $u$ be a smooth function on $M^{n+1}$. Then we have \begin{equation*} |{\rm{Hess\,}} u|^2+{\rm{Ric}}_f(\nabla u,\nabla u)\geq \frac{(\Delta_f u)^2}{k}+{\rm{Ric}}_f^k(\nabla u,\nabla u), \end{equation*} for every $k>n+1$, or for $k=n+1$ and $f$ constant. Moreover, equality holds if and only if ${\rm{Hess\,}} u=\frac{\Delta u}{n+1} \langle\,,\rangle$ and $\langle\nabla u,\nabla f\rangle=-\frac{k-n-1}{k}\Delta_f u \footnote{This term only appears in the case of a non-constant function.}$. \end{proposition} \begin{proof} Let $\{e_1,\ldots,e_{n+1}\}$ be an orthonormal basis of $T_pM$; then by the Cauchy-Schwarz inequality we have \begin{align}\label{10} (\Delta u)^2 \leq(n+1)|{\rm{Hess\,}} u|^2. \end{align} Using that $\frac{1}{n+1}a^2+\frac{1}{k-n-1}b^2\geq\frac{1}{k}(a-b)^2$, with equality if and only if \begin{equation}\label{11} a=-\frac{(n+1)b}{k-n-1}, \end{equation} we obtain (taking $a=\Delta u$ and $b=\langle\nabla f,\nabla u\rangle$) \begin{align} |{\rm{Hess\,}} u|^2+{\rm{Ric}}_f(\nabla u,\nabla u)&\geq \frac{1}{n+1}(\Delta u)^2+ {\rm{Ric}}_f^k(\nabla u,\nabla u)+ \frac{\langle\nabla f,\nabla u\rangle^2 }{k-n-1}\nonumber\\ &\geq\frac{1}{k}(\Delta u-\langle\nabla f,\nabla u\rangle)^2+{\rm{Ric}}_f^k(\nabla u,\nabla u)\\ &=\frac{1}{k}(\Delta_f u)^2+{\rm{Ric}}_f^k(\nabla u,\nabla u).\nonumber \end{align} If equality holds then, since we used the Cauchy-Schwarz inequality in $(\ref{10})$, we obtain that ${\rm{Hess\,}} u=\lambda\langle\,,\rangle$ for some function $\lambda$, and by $(\ref{11})$ $$\Delta u=-\frac{(n+1)\langle\nabla f,\nabla u\rangle}{k-n-1}.$$ Consequently $$\Delta_fu=-\frac{(n+1)\langle\nabla f,\nabla u\rangle}{k-n-1}-\langle\nabla f,\nabla u\rangle=-\frac{k}{k-n-1}\langle\nabla f,\nabla u\rangle.$$ The converse is immediate.
\end{proof} In \cite{lima} the authors showed that, for a smooth function $u$ defined on an $(n+1)$-dimensional compact weighted manifold $M$ with boundary $\partial M$, the following identity holds, where $h=\frac{\partial u}{\partial \nu}$, $z=u|_{\partial M}$ and ${\rm{Ric}}_f$ denotes the generalized Ricci curvature of $M$: \begin{align}\label{1} \int_M[(\Delta_f u)^2-&|{\rm{Hess\,}} u|^2- {\rm{Ric}}_f(\nabla u,\nabla u)]\nonumber\\ &= \int_{\partial M}\left[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)\right]. \end{align} Here, $\overline{\Delta}$ and $\overline{\nabla}$ represent the Laplacian and the gradient on $\partial M$ with respect to the induced metric, respectively, and $\overline{\Delta}_f$ is the corresponding Drift Laplacian. Using Proposition \ref{9} we have that \begin{align}\label{6} \int_M\left[\frac{k-1}{k}(\Delta_fu)^2-{\rm{Ric}}_f^k(\nabla u,\nabla u)\right]\geq\int_{\partial M}[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)]. \end{align} \medskip The next result is an estimate for the first non-zero eigenvalue of the Drift Laplacian on closed submanifolds. This result is a slight modification of Theorem 1.6 in \cite{huang} and reads as follows. \begin{proposition}\label{4} Let $M^{n+1}$ be a compact weighted Riemannian manifold with nonempty boundary $\partial M$ and ${\rm{Ric}}_f^k\geq0$. If the second fundamental form of $\partial M$ satisfies $II\geq cI$ in the quadratic form sense, and $H_f\geq\frac{k-1}{n}c$, then $$\lambda_1(\partial M)\geq (k-1)c^2,$$ where $\lambda_1$ is the first non-zero eigenvalue of the Drift Laplacian acting on functions on $\partial M$. Equality holds if and only if $M$ is isometric to a Euclidean ball of radius $\frac{1}{c}$, $f$ is constant and $k=n+1$. \end{proposition} \begin{proof} Let $z$ be an eigenfunction corresponding to the first non-zero eigenvalue $\lambda_1$ of the Drift Laplacian of $\partial M$, that is, \begin{equation} \overline{\Delta}_fz+\lambda_1z=0. \end{equation} Let $u\in C^{\infty}(M)$ be the solution of the Dirichlet problem $$\begin{cases} \Delta_fu=0\,\,\,\,\,&\text{ in } M,\\ u=z\,\,\,\,\,&\text{ on }\partial M. \end{cases} $$ It then follows from $(\ref{6})$ and the non-negativity of ${\rm{Ric}}_f^k$ that \begin{align} 0&\geq \int_{\partial M}[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)]. \end{align} Since $II\geq cI$, we have $$II(\overline{\nabla}z,\overline{\nabla}z)\geq c |\overline{\nabla} z|^2,$$ and noticing that $$\int_{\partial M}|\overline{\nabla} z|^2=-\int_{\partial M}z\overline{\Delta}_fz=\lambda_1\int_{\partial M}z^2,$$ we obtain \begin{align*} 0&\geq \int_{\partial M}[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)]\\ &\geq\int_{\partial M}[(k-1)ch^2-2\lambda_1zh+c\lambda_1z^2]\\ &=\int_{\partial M}\bigg[(k-1)c\left(h-\frac{\lambda_1z}{(k-1)c}\right)^2+\lambda_1\left(c-\frac{\lambda_1}{(k-1)c}\right)z^2\bigg]\\ &\geq\lambda_1\left(c-\frac{\lambda_1}{(k-1)c}\right)\int_{\partial M}z^2. \end{align*} Consequently, $$\lambda_1\geq(k-1)c^2,$$ which proves the first part of the proposition. The equality case follows from Proposition \ref{9} and a careful analysis of the equalities that occur; the converse is immediate. \end{proof} Recall the following version of the Hopf boundary point lemma; for its proof see \cite{gilbarg}, Lemma 3.4. \begin{proposition}[Hopf boundary point lemma]\label{25} Let $(M^n,g)$ be a complete Riemannian manifold and let $\Omega\subset M$ be a closed domain.
If $u:\Omega\rightarrow\mathbb R$ is a function with $u\in C^2(\text{int}(\Omega))$ satisfying $$\Delta u+\langle X,\nabla u\rangle\geq0,$$ where $X$ is a bounded vector field, $x_0\in\partial\Omega$ is a point where $$u(x)<u(x_0)\,\,\,\forall x\in \text{int}(\Omega),$$ $u$ is continuous at $x_0$, and $\Omega$ satisfies the interior sphere condition at $x_0$, then $$\frac{\partial u}{\partial \nu}(x_0)>0$$ if this outward normal derivative exists. \end{proposition} \section{proof of the sharp bounds for the Stekloff eigenvalues} In this section we give the proofs of the first four results announced in the introduction; for this we use the tools presented in the preliminaries. \medskip \noindent{ \bf Proof of Theorem \ref{3}.} Let $u$ be the solution of the following problem $$\begin{cases} \Delta_fu=0\,\,\,\,&\text{in } M,\\ u|_{\partial M}=z,& \end{cases} $$ where $z$ is a first eigenfunction on $\partial M$ corresponding to $\lambda_1$, that is, $z$ satisfies $\overline{\Delta}_fz+\lambda_1z=0$ on $\partial M$. Set $h=\frac{\partial u}{\partial \nu}\big|_{\partial M}$; then from the Rayleigh inequality we have (cf.\ \cite{kuttler}) \begin{align} p_1&\leq\frac{\int_{\partial M} h^2}{\int_M|\nabla u|^2}\label{7} \intertext{and} p_1&\leq\frac{\int_M|\nabla u|^2}{\int_{\partial M} z^2}\label{8} \end{align} Notice that $(\ref{8})$ is the variational principle, while $(\ref{7})$ is obtained as follows, using $\int_M|\nabla u|^2=\int_{\partial M}u\langle\nabla u,\nu\rangle$ (which holds since $\Delta_fu=0$) and the Cauchy-Schwarz inequality: \begin{align*} p_1&\leq \frac{\int_M|\nabla u|^2}{\int_{\partial M}z^2}=\frac{-\int_M u\Delta_fu+ \int_{\partial M} u\langle\nabla u,\nu\rangle}{\int_{\partial M}z^2}\\ &=\frac{1}{\int_{\partial M}z^2}\cdot\frac{\left(\int_{\partial M} u\langle\nabla u,\nu\rangle\right)^2}{\int_M|\nabla u|^2}\\ &\leq \frac{\int_{\partial M}z^2}{\int_{\partial M}z^2}\cdot\frac{\int_{\partial M} \langle\nabla u,\nu\rangle^2}{\int_M|\nabla u|^2}\\ &=\frac{\int_{\partial M} h^2}{\int_M|\nabla u|^2}, \end{align*} which gives \begin{equation}\label{5} p_1^2\leq\frac{ \int_{\partial M} h^2}{ \int_{\partial M} z^2}.
\end{equation} It then follows, by substituting $u$ into the identity $(\ref{1})$ and using Proposition \ref{9}, that \begin{align} \label{351} 0\geq\int_M\Big[\frac{k-1}{k}(\Delta_fu)^2-&\,{\rm{Ric}}_f^k(\nabla u,\nabla u)\Big]\geq\\ &\geq\int_{\partial M}[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)]\nonumber\\ &\geq\int_{\partial M}[(k-1)ch^2-2\lambda_1zh+c|\overline{\nabla}z|^2].\nonumber \end{align} Note that, by Green's formula, $$\int_{\partial M}|\overline{\nabla}z|^2=\int_{\partial M}\langle \overline{\nabla}z,\overline{\nabla}z\rangle= -\int_{\partial M}z\overline{\Delta}_fz=\lambda_1\int_{\partial M}z^2.$$ Substituting this expression into (\ref{351}), we obtain \begin{align*} 0 &\geq(k-1)c\int_{\partial M}h^2-2\lambda_1\int_{\partial M}hz+c\lambda_1\int_{\partial M}z^2\\ &\geq (k-1)c\int_{\partial M}h^2-2\lambda_1\left(\int_{\partial M}h^2\right)^{\frac{1}{2}} \left(\int_{\partial M}z^2\right)^{\frac{1}{2}}+c\lambda_1\int_{\partial M}z^2\\ &=\frac{(k-1)c^2-\lambda_1}{c}\int_{\partial M}h^2+\left[\sqrt{\frac{\lambda_1}{c}}\left(\int_{\partial M}h^2\right)^{\frac{1}{2}}- \sqrt{c\lambda_1}\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}\right]^2, \end{align*} whence $$\frac{\sqrt{\lambda_1-(k-1)c^2}}{\sqrt{c}}\left(\int_{\partial M}h^2\right)^{\frac{1}{2}}\geq\sqrt{\frac{\lambda_1}{c}} \left(\int_{\partial M}h^2\right)^{\frac{1}{2}}-\sqrt{c\lambda_1}\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}$$ and therefore $$\frac{\sqrt{\lambda_1}-\sqrt{\lambda_1-(k-1)c^2}}{\sqrt{c}}\left(\int_{\partial M}h^2\right)^{\frac{1}{2}}\leq \sqrt{c\lambda_1}\left(\int_{\partial M}z^2\right)^{\frac{1}{2}},$$ that is, \begin{align*} \left(\int_{\partial M}h^2\right)^{\frac{1}{2}}&\leq \frac{c\sqrt{\lambda_1}}{\sqrt{\lambda_1}- \sqrt{\lambda_1-(k-1)c^2}}\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}\\ &=\frac{\sqrt{\lambda_1}}{(k-1)c}(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2})\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}. \end{align*} Using $(\ref{5})$, we obtain \begin{align*} p_1&\leq \frac{\sqrt{\lambda_1}}{(k-1)c}\left(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2}\right). \end{align*} Now, assume that $$p_1= \frac{\sqrt{\lambda_1}}{(k-1)c}\left(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2}\right).$$ Then we also have that $$\left(\int_{\partial M}h^2\right)^{\frac{1}{2}}= \frac{\sqrt{\lambda_1}}{(k-1)c}\left(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2}\right)\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}\vspace{3mm}$$ and all the inequalities above become equalities. Thus, by the equality case of the Cauchy--Schwarz inequality, $h=\alpha z$ for some constant $\alpha$, and $$\alpha=\frac{\left(\alpha^2\int_{\partial M}z^2\right)^{\frac{1}{2}}}{\left(\int_{\partial M}z^2\right)^{\frac{1}{2}}}= \frac{\sqrt{\lambda_1}}{(k-1)c}\left(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2}\right),$$ that is, $$h= \frac{\sqrt{\lambda_1}}{(k-1)c}(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2})z.$$ Furthermore we infer, by Proposition \ref{9}, that ${\rm{Hess\,}} u=0$. Now, on the boundary $\partial M$, we can write \begin{align*}\nabla u&=(\nabla u)^\top+(\nabla u)^{\perp}\\ &=(\nabla u)^\top+\langle\nabla u,\nu\rangle\nu, \end{align*} where $(\nabla u)^\top$ is tangent to $\partial M$ and $(\nabla u)^{\perp}$ is normal to $\partial M$. Then take a local orthonormal frame $\{e_i\}_{i=1}^{n}$ tangent to $\partial M$.
We obtain \begin{align*} 0&=\sum_{i=1}^{n}{\rm{Hess\,}} u(e_i,e_i)=\sum_{i=1}^{n}\langle\nabla_{e_i}\nabla u,e_i\rangle\\ &=\sum_{i=1}^{n}\langle\nabla_{e_i}[(\nabla u)^\top+\langle\nabla u,\nu\rangle\nu],e_i\rangle\\ &=\sum_{i=1}^{n}\langle\nabla_{e_i}(\nabla u)^\top+\langle\nabla u,\nu\rangle\nabla_{e_i}\nu+e_i(\langle\nabla u,\nu\rangle)\nu,e_i\rangle\\ &=\overline{\Delta}z+\sum_{i=1}^{n}\langle\nabla u,\nu\rangle \, II(e_i,e_i)\\ &=\overline{\Delta}z+nHh\\ &=\overline{\Delta}_fz - f_\nu h+nHh\\ &=\overline{\Delta}_fz +nH_fh\\ &=-\lambda_1z+c(k-1)h\\ &=-\lambda_1z+c(k-1) \frac{\sqrt{\lambda_1}}{(k-1)c}(\sqrt{\lambda_1}+\sqrt{\lambda_1-(k-1)c^2})z, \end{align*} where we used that ${\rm{Hess\,}} u=0$ forces $\Delta u=0$, hence $\langle\nabla f,\nabla u\rangle=\Delta u-\Delta_fu=0$, which on $\partial M$ gives $\langle\overline{\nabla}f,\overline{\nabla}z\rangle=-f_\nu h$ and therefore $\overline{\Delta}z=\overline{\Delta}_fz-f_\nu h$. From the last equality we get $$\lambda_1=(k-1)c^2.$$ Therefore, it follows from Proposition \ref{4} that $M$ is isometric to an $(n+1)$-dimensional Euclidean ball of radius $\frac{1}{c}$, $f$ is constant and so $k=n+1$. The converse follows the ideas of the Riemannian case. \qed \medskip {\bf Proof of Theorem \ref{12}.} Let $w$ be an eigenfunction corresponding to the first eigenvalue $q_1$ of problem $(\ref{2})$, that is, \begin{equation} \begin{cases} \Delta_f^2w=0\,\,\,\,\,\,&\text{ in } M,\\ w=\Delta_fw-q_1 \frac{\partial w}{\partial \nu}=0\,\,\,\,\,\, &\text{ on } \partial M. \end{cases} \end{equation} Set $\eta=\frac{\partial w}{\partial\nu}|_{\partial M}$; then by the divergence theorem we obtain \begin{align*} \int_M(\Delta_fw)^2&=-\int_{M}\langle\nabla(\Delta_fw),\nabla w\rangle+\int_{\partial M}\Delta_fw\, \langle\nabla w,\nu\rangle\\ &=\int_{M}w\, \Delta_f(\Delta_fw)-\int_{\partial M}w\, \langle\nabla(\Delta_fw),\nu\rangle+\int_{\partial M}\Delta_fw \,\langle\nabla w, \nu\rangle\\ &=q_1\int_{\partial M}\eta^2, \end{align*} that is, $$q_1=\frac{ \int_M(\Delta_fw)^2}{\int_{\partial M}\eta^2}.$$ Substituting $w$ in $(\ref{6})$, and noting that $z=w|_{\partial M}=0$, we have \begin{align*} \frac{k-1}{k}\int_{M}(\Delta_fw)^2&\geq\int_M{\rm{Ric}}_f^k(\nabla w,\nabla w)+\int_{\partial M}nH_f\eta^2\\ &\geq \frac{(k-1)nc}{k}\int_{\partial M}\eta^2, \end{align*} whence $q_1\geq nc$, as desired.\vspace{2mm} Assume now that $q_1=nc$. Then the inequalities above become equalities and consequently $H_f=\frac{k-1}{k}c$. Furthermore, we have equality in Proposition \ref{9}, thus ${\rm{Hess\,}} w=\frac{\Delta w}{n+1}\langle\,,\rangle$ and $\Delta_fw=\frac{k}{n+1}\Delta w$. Take an orthonormal frame $\{e_1,\ldots,e_n,e_{n+1}\}$ on $M$ such that, when restricted to $\partial M$, $e_{n+1}=\nu$. Since $w|_{\partial M}=0$ we have \begin{align*} e_i(\eta)&=e_i\langle\nabla w,\nu\rangle\\ &=\langle\nabla_{e_i}\nabla w,\nu\rangle+\langle\nabla w,\nabla_{e_i}\nu\rangle\\ &={\rm{Hess\,}} w(e_i,\nu)+II((\nabla w)^\top,e_i)=0, \end{align*} that is, $\eta=\rho$ is constant (here we used that ${\rm{Hess\,}} w$ is proportional to the metric and that $(\nabla w)^\top=0$, because $w|_{\partial M}=0$), and so $(\Delta_fw)|_{\partial M}=q_1\eta=nc\rho$ is also a constant. Using the fact that $\Delta_fw$ is an $f$-harmonic function on $M$, we conclude by the maximum principle that $\Delta_fw$ is constant on $M$. Since $\Delta_fw=\frac{k}{n+1}\Delta w$, $w$ satisfies $$\begin{cases} {\rm{Hess\,}} w=\frac{\Delta_f w}{k}\langle\,,\rangle\hspace{3mm} \text{in} \hspace{3mm} M,\\ w|_{\partial M}=0. \end{cases}$$ Thus, by Lemma 3 in \cite{robert}, we conclude that $M$ is isometric to a ball in $\mathbb R^{n+1}$ of radius $c^{-1}.$ Now, using the Hessian of $w$, it is possible to see that $w=\frac{\lambda}{2}r^2 + C,$ where $\lambda=\frac{\Delta_f w}{k}$ and $r$ is the distance function from its minimum point; see \cite{robert} for more details on this technique. Lastly, we will show that $f$ is constant.
In fact, if $k> n+1$, then $\langle\nabla f,\nabla w\rangle$ is constant and so $f=-(k-n-1)\ln r +C$, which is a contradiction, since $f$ is a smooth function. Hence $k=n+1$ and, consequently, $f$ is constant. \qed \bigskip \noindent{\bf Proof of Theorem \ref{17}.} Now, let $w$ be the solution of the following Drift Laplace equation \begin{equation} \begin{cases} \Delta_f w=1\,\,\,\,\text{ in } M,\\ w|_{\partial M}=0. \end{cases} \end{equation} It follows from the Rayleigh characterization of $q_1$ that \begin{equation} q_1\leq \frac{\int_M(\Delta_fw)^2}{\int_{\partial M}\eta^2}= \frac{V}{\int_{\partial M}\eta^2}, \end{equation} where $\eta=\frac{\partial w}{\partial\nu}\big|_{\partial M}$. Integrating $\Delta_f w=1$ over $M$ and using the divergence theorem gives $$V=\int_{\partial M}\eta.$$ Hence we infer from the Cauchy--Schwarz inequality that \begin{equation}\label{18} V^2\leq A\int_{\partial M}\eta^2. \end{equation} Consequently, $$q_1\leq\frac{V}{\int_{\partial M}\eta^2}\leq\frac{V}{V^2/A}=\frac{A}{V}.$$ Assume now that ${\rm{Ric}}_f^k\geq0$, $H_f(x_0)\geq\frac{(k-1)A}{k\, n\, V}$ for some $x_0\in \partial M$ and $q_1=\frac{A}{V}$. In this case $(\ref{18})$ becomes an equality and so $\eta=\frac{V}{A}$ is a constant. Consider the function $\phi$ on $M$ given by $$\phi=\frac{1}{2}|\nabla w|^2-\frac{w}{k}.$$ Using the Bochner formula $(\ref{16})$, the equation $\Delta_fw=1$, Proposition \ref{9} and the assumption ${\rm{Ric}}_f^k\geq0$, we have that \begin{align}\label{28} \Delta_f\phi&=|{\rm{Hess\,}} w|^2+\langle \nabla w,\nabla(\Delta_f w)\rangle+{\rm{Ric}}_f(\nabla w,\nabla w)-\frac{1}{k}\\ &\geq\frac{1}{k}(\Delta_fw)^2-\frac{1}{k}=0.\nonumber \end{align} Thus $\phi$ is $f$-subharmonic. Observe that $\phi=\frac{1}{2}\left(\frac{V}{A}\right)^2$ on the boundary. In fact, if we write $\nabla w=(\nabla w)^\top+(\nabla w)^{\perp}$, where $(\nabla w)^\top$ is tangent to $\partial M$ and $(\nabla w)^{\perp}$ is normal to $\partial M$, and since $w|_{\partial M}=0$, it follows that $\nabla w=(\nabla w)^{\perp}=C\nu$ on $\partial M$. On the other hand, $$1=\Delta_fw=q_1\langle\nabla w,\nu\rangle=\frac{A}{V}C\,\,\,\mbox{implies} \,\,\,C=\frac{V}{A}\,\,\,\text{ and }\,\,\,|\nabla w|=\frac{V}{A}.$$ Therefore $\phi=\frac{1}{2}\left(\frac{V}{A}\right)^2$ on the boundary, and so we conclude by Proposition \ref{25} that either \begin{align} \phi=\frac{1}{2}\left(\frac{V}{A}\right)^2\,\,\,\,\,\text{ in } M\label{26} \intertext{or} \frac{\partial\phi}{\partial\nu}(y)>0,\,\,\,\,\, \forall \, y\in\partial M.\label{27} \end{align} From $w|_{\partial M}=0$, we have \begin{align*} 1=(\Delta_fw)|_{\partial M}&=nH\eta+{\rm{Hess\,}} w(\nu,\nu)-\frac{V}{A}\langle \nabla f,\nu\rangle\\ &=\frac{nV}{A}\left(H_f+\frac{\langle\nabla f,\nu\rangle}{n}\right)+{\rm{Hess\,}} w(\nu,\nu)-\frac{V}{A}\langle \nabla f,\nu\rangle\\ &=\frac{nV}{A}H_f+{\rm{Hess\,}} w(\nu,\nu). \end{align*} Hence it holds on $\partial M$ that \begin{align*} \frac{\partial\phi}{\partial\nu}&=\frac{V}{A}{\rm{Hess\,}} w(\nu,\nu)-\frac{V}{k\, A}\\ &=\frac{V}{A}\left(1-\frac{nV}{A}H_f\right)-\frac{V}{k\, A}\\ &=n\frac{V}{A}\left(\frac{k-1}{k\, n}-H_f\frac{V}{A}\right), \end{align*} which shows that $(\ref{27})$ cannot hold, since $H_f(x_0)\geq\frac{(k-1)A}{k\, n\, V}$. Therefore $\phi$ is constant on $M$. Since the Drift Laplacian of $\phi$ vanishes, we infer that equality must hold in $(\ref{28})$; this gives equality in Proposition \ref{9}, and consequently $1=\Delta_fw=\frac{k}{n+1}\Delta w$ and ${\rm{Hess\,}} w=\frac{\Delta w}{n+1}\langle\,,\rangle$.
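As a consistency check (not needed for the argument), note that on the model ball $B_{1/c}\subset\mathbb R^{n+1}$ with $f$ constant, so that $\Delta_f=\Delta$ and $k=n+1$, the function $$w(x)=\frac{|x|^2-c^{-2}}{2(n+1)}$$ satisfies $\Delta w=1$ and $w|_{\partial M}=0$, with $\eta=\frac{\partial w}{\partial \nu}=\frac{1}{(n+1)c}=\frac{V}{A}$ and ${\rm{Hess\,}} w=\frac{\Delta w}{n+1}\langle\,,\rangle$, in agreement with the identities just obtained.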
The remainder of the proof follows by arguments similar to those in the proof of Theorem \ref{12}. \medskip\qed \noindent{\bf Proof of Theorem \ref{191}.} Let $w$ be an eigenfunction corresponding to the first eigenvalue $q_1$ of the problem $(\ref{15})$: \begin{equation} \begin{cases} \Delta_f^2w=0\,\,\,\,&\text{ in } M,\\ w=\frac{\partial^2w}{\partial \nu^2}-q_1\frac{\partial w}{\partial \nu}=0\,\,\,\,&\text{ on } \partial M. \end{cases} \end{equation} Observe that $w$ is not constant. Otherwise, we would conclude from $w|_{\partial M}=0$ that $w\equiv0$. Set $\eta=\frac{\partial w}{\partial \nu}|_{\partial M}$; then $\eta\neq0$. In fact, if $\eta=0$ then $$w|_{\partial M}=(\nabla w)|_{\partial M}=\frac{\partial^2w}{\partial\nu^2}=0,$$ which implies that $(\Delta_fw)|_{\partial M}=0$ and so $\Delta_fw=0$ on $M$ by the maximum principle, which in turn implies that $w=0$. This is a contradiction. \medskip Since $w|_{\partial M}=0$, we have by the divergence theorem that \begin{equation} \int_M\langle\nabla w,\nabla(\Delta_fw)\rangle=-\int_Mw\Delta_f^2w=0, \end{equation} hence \begin{equation}\label{36} \int_{\partial M}\Delta_fw\, \frac{\partial w}{\partial \nu}=\int_M\langle\nabla(\Delta_f w),\nabla w\rangle+\int_M(\Delta_fw)^2=\int_M(\Delta_fw)^2. \end{equation} Since $w|_{\partial M}=0$, we have $\nabla w=\frac{\partial w}{\partial \nu}\nu$ and \begin{align}\label{39} (\Delta_fw)|_{\partial M}&=\frac{\partial^2w}{\partial\nu^2}+nH\frac{\partial w}{\partial\nu}-\langle\nabla f,\nabla w\rangle\\ &=q_1\frac{\partial w}{\partial\nu}+nH_f\frac{\partial w}{\partial\nu}+\langle\nabla f,\nu\rangle\frac{\partial w}{\partial\nu}-\langle\nabla f,\nu\rangle\frac{\partial w}{\partial\nu}\nonumber\\ &=q_1\frac{\partial w}{\partial\nu}+nH_f\frac{\partial w}{\partial\nu}.\nonumber \end{align} Using $(\ref{36})$ and $(\ref{39})$, we obtain \begin{align*} q_1&=\frac{\int_M(\Delta_f w)^2-n\int_{\partial M}H_f\eta^2}{\int_{\partial M}\eta^2}. \end{align*} On the other hand, substituting $w$ into $(\ref{6})$, we obtain \begin{align}\label{38} \frac{k-1}{k}\int_M(\Delta_fw)^2&\geq\int_M{\rm{Ric}}_f^k(\nabla w,\nabla w)+\int_{\partial M}nH_f\eta^2\\ &\geq\int_{\partial M}nH_f\eta^2,\nonumber \end{align} that is, $$\int_M(\Delta_fw)^2 - \int_{\partial M}nH_f\eta^2\geq \dfrac{n}{k-1}\int_{\partial M}H_f\eta^2\geq c\int_{\partial M}\eta^2.$$ By the expression for $q_1$ and the estimate above, we obtain the desired estimate \begin{equation}\label{37} q_1\geq c. \end{equation} Assume now that $q_1=c$. Then all inequalities in $(\ref{38})$ become equalities. Thus, by Proposition \ref{9}, we have that \begin{equation}\label{40} {\rm{Hess\,}} w=\frac{\Delta w}{n+1}\langle\,,\rangle\hspace{5mm}\text{ and }\hspace{5mm}\Delta_f w=-\frac{k}{k-n-1}\langle\nabla f,\nabla w\rangle. \end{equation} Choose an orthonormal frame $\{e_1,\ldots, e_{n+1}\}$ on $M$ such that, restricted to $\partial M$, $e_{n+1}=\nu$. For $i=1,\ldots,n$, using that $w|_{\partial M}=0$, we obtain \begin{align*} 0={\rm{Hess\,}} w(e_i,e_{n+1})&=e_ie_{n+1}(w)-\nabla_{e_i}e_{n+1}(w)\\ &=e_i(\eta)-\langle\nabla_{e_i}e_{n+1},e_{n+1}\rangle\eta=e_i(\eta). \end{align*} It follows that $\eta=b_0$ is constant. Since equality holds in $(\ref{37})$ and $\eta$ is constant, we conclude that $H_f = \frac{k-1}{n}c$, which implies from $(\ref{39})$ that $(\Delta_f w)|_{\partial M}=kcb_0$; therefore, by the maximum principle, $\Delta_fw$ is constant on $M$, which implies from $(\ref{40})$ that $\Delta w$ is constant on $M$. The remainder of the proof follows by arguments similar to those in the proof of Theorem \ref{12}.
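Before closing this proof, we record a consistency check of the sharpness of the bound: on the ball $B_{1/c}\subset\mathbb R^{n+1}$ with $f$ constant, the function $w(x)=|x|^2-c^{-2}$ satisfies $\Delta^2w=0$ and $w|_{\partial M}=0$, while on the boundary $$\frac{\partial w}{\partial\nu}=\frac{2}{c},\qquad \frac{\partial^2w}{\partial\nu^2}=2,\qquad\text{so that}\qquad \frac{\partial^2w}{\partial\nu^2}=c\,\frac{\partial w}{\partial\nu}.$$ Hence $c$ is an eigenvalue of problem $(\ref{15})$ on this ball and, combined with $(\ref{37})$, we get $q_1=c$ there, so the estimate cannot be improved.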
\qed \section{Escobar type theorem for the Stekloff problem} Recall the Bochner type formula for weighted Riemannian manifolds, which says that any smooth function $u$ on $M$ satisfies \begin{equation}\label{16} \frac{1}{2}\Delta_f|\nabla u|^2=|{\rm{Hess\,}} u|^2+\langle \nabla u,\nabla(\Delta_f u)\rangle+{\rm{Ric}}_f(\nabla u,\nabla u). \end{equation} An immediate consequence of the Bochner type formula is the result below; however, we believe that this estimate is not sharp. \begin{theorem} Let $M^{n+1},\,n\geq2$, be a compact weighted Riemannian manifold with boundary $\partial M$. Assume that ${\rm{Ric}}_f\geq 0$, $H_f\geq 0$ and that the second fundamental form satisfies $II\geq cI$ on $\partial M$, with $c>0$. Then $$p_1>\frac{c}{2}.$$ \end{theorem} \begin{proof} Set $h=\frac{\partial u}{\partial \nu}$ and $z=u|_{\partial M}$, where $u$ is an eigenfunction of problem $(\ref{steklov})$ corresponding to $p_1$. We have $h=p_1u=p_1z$ on $\partial M$, thus $p_1\overline{\nabla}z=\overline{\nabla}h$. By $(\ref{1})$, we have \begin{align*} 0>-\int_M|{\rm{Hess\,}} u|^2 &\geq\int_M[(\Delta_f u)^2-|{\rm{Hess\,}} u|^2- {\rm{Ric}}_f(\nabla u,\nabla u)]\\ &= \int_{\partial M}\left[nH_fh^2+2h\overline{\Delta}_fz+II(\overline{\nabla}z,\overline{\nabla}z)\right]\\ &\geq -2\int_{\partial M}\langle\overline{\nabla}h,\overline{\nabla}z\rangle+c\int_{\partial M}|\overline{\nabla}z|^2\\ &\geq -2p_1\int_{\partial M}|\overline{\nabla}z|^2+c\int_{\partial M}|\overline{\nabla}z|^2. \end{align*} Note that $$\int_{\partial M}|\overline{\nabla}z|^2>0.$$ Otherwise $z$ would be constant on the boundary and hence $u$ would be constant on $M$, which is a contradiction. Thus $p_1>\frac{c}{2}$. \end{proof} \bigskip Below we present the proof of a sharp estimate of the first non-zero Stekloff eigenvalue on surfaces. The technique was introduced by Escobar in \cite{escobar}, and it only enables us to attack this problem in the context of surfaces. \medskip {\bf Proof of Theorem \ref{Escobar}.} Let $\phi$ be a non-constant eigenfunction for the Stekloff problem $(\ref{steklov})$, and consider the function $v=\frac{1}{2}|\nabla \phi|^2$. Then, by $(\ref{16})$, \begin{align*} \Delta_fv=|{\rm{Hess\,}} \phi|^2+\langle \nabla \phi,\nabla(\Delta_f \phi)\rangle+{\rm{Ric}}_f(\nabla \phi,\nabla \phi). \end{align*} Since $\phi$ is an $f$-harmonic function and ${\rm{Ric}}_f\geq0$, we find that \begin{equation}\label{33} \Delta_fv=|{\rm{Hess\,}} \phi|^2+{\rm{Ric}}_f(\nabla \phi,\nabla \phi)\geq0. \end{equation} Therefore the maximum of $v$ is achieved at some point $P\in\partial M$. Proposition \ref{25} implies that either $(\partial v/\partial\eta)(P)>0$ or $v$ is constant.\vspace{5mm}\\ Let us first assume that $(\partial v/\partial\eta)(P)>0$, and let $(t,x)$ be Fermi coordinates around the point $P$, that is, $x$ represents a point on the curve $\partial M$ and $t$ represents the distance to the boundary point $x$. The metric has the form \begin{equation}\label{19} ds^2=dt^2+h^2(t,x)dx^2, \end{equation} where $h(P)=1,\,(\partial h/\partial x)(P)=0$.
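(For instance, on the Euclidean unit disk, with $t$ the distance to the boundary and $x$ the arc length parameter on $\partial M$, one may take $h(t,x)=1-t$; the formula $k_g=-hh'$ obtained in $(\ref{20})$ below then gives $k_g=1$ on the boundary, as expected for the unit circle.)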
Thus $$|\nabla \phi|^2=\left(\frac{\partial \phi}{\partial t}\right)^2+h^{-2}\left(\frac{\partial \phi}{\partial x}\right)^2,$$ and $$\frac{\partial v}{\partial x}=\frac{\partial \phi}{\partial t}\frac{\partial^2\phi }{\partial x\partial t}+h^{-2} \frac{\partial \phi}{\partial x}\frac{\partial^2\phi }{\partial x^2}-h^{-3}\frac{\partial h}{\partial x}\left(\frac{\partial \phi} {\partial x}\right)^2.$$ Evaluating at the point $P$ we obtain \begin{equation}\label{24} \frac{\partial v}{\partial x}(P)=\frac{\partial \phi}{\partial t}\frac{\partial^2\phi }{\partial x\partial t}+\frac{\partial \phi} {\partial x}\frac{\partial^2\phi }{\partial x^2}=0. \end{equation} The $f$-Laplacian with respect to the metric given by $(\ref{19})$ in Fermi coordinates $(t,x)$ is $$\Delta_f=\frac{\partial^2}{\partial t^2}+h^{-1}\frac{\partial h}{\partial t}\frac{\partial }{\partial t}+h^{-1}\frac{\partial } {\partial x}\left(h^{-1}\frac{\partial }{\partial x}\right)-\frac{\partial f}{\partial t}\frac{\partial }{\partial t}- h^{-2}\frac{\partial f}{\partial x}\frac{\partial }{\partial x}.$$ The geodesic curvature of $\partial M$ can be calculated in terms of the function $h$ and its first derivative $h'=\partial h/\partial t$ as follows: \begin{align}\label{20} k_g&=-\left\langle \nabla_{\partial/\partial x}\frac{\partial}{\partial t},\frac{\partial}{\partial x}\right\rangle=-\left\langle \nabla_{\partial/\partial t}\frac{\partial}{\partial x},\frac{\partial}{\partial x}\right\rangle\nonumber\\ &=-\frac{1}{2}\frac{\partial}{\partial t}\left\langle \frac{\partial}{\partial x},\frac{\partial}{\partial x}\right\rangle=-\frac{1}{2} \frac{\partial}{\partial t}(h^2)=-hh^{\prime}. \end{align} Hence at $P$ we find that \begin{equation}\label{21}0=\Delta_f\phi=\frac{\partial^2 \phi}{\partial t^2}-k_g\frac{\partial \phi}{\partial t}+\frac{\partial^2 \phi }{\partial x^2}-\frac{\partial f}{\partial t}\frac{\partial \phi}{\partial t}- \frac{\partial f}{\partial x}\frac{\partial \phi}{\partial x}. \end{equation} Using the equality $(\ref{20})$ we get that \begin{equation}\label{22} \frac{\partial v}{\partial t}(P)=\frac{\partial \phi}{\partial t}\frac{\partial^2\phi }{\partial t^2}+\frac{\partial \phi}{\partial x}\frac{\partial^2\phi}{\partial t\partial x}+k_g\left(\frac{\partial \phi}{\partial x}\right)^2. \end{equation} Multiplying the equation $(\ref{21})$ by $-\frac{\partial \phi}{\partial t}$ and adding the result to the equation $(\ref{22})$ we obtain \begin{equation}\label{23} \frac{\partial v}{\partial t}(P)=k_g|\nabla \phi|^2-\frac{\partial \phi}{\partial t}\frac{\partial^2\phi }{\partial x^2}+ \frac{\partial\phi }{\partial x}\frac{\partial^2\phi}{\partial t\partial x}+\frac{\partial f}{\partial t}\left(\frac{\partial \phi}{\partial t}\right)^2+ \frac{\partial f}{\partial x}\frac{\partial \phi}{\partial x}\frac{\partial \phi}{\partial t}. \end{equation} If $\frac{\partial \phi}{\partial x}(P)\neq0$, the equation $(\ref{24})$ and the boundary condition yield \begin{equation}\label{29} \frac{\partial^2 \phi}{\partial x^2}(P)=p_1\frac{\partial \phi}{\partial t}(P). \end{equation} Therefore the equation $(\ref{23})$ can be rewritten using the boundary condition as \begin{equation}\label{30} \frac{\partial v}{\partial t}(P)=(k_g-p_1)|\nabla \phi|^2+p_1\left(\frac{\partial \phi}{\partial x}\right)^2+ \frac{\partial\phi }{\partial x}\frac{\partial^2\phi}{\partial t\partial x}+\frac{\partial f}{\partial t}\left(\frac{\partial \phi}{\partial t}\right)^2+ \frac{\partial f}{\partial x}\frac{\partial \phi}{\partial x}\frac{\partial \phi}{\partial t}.
\end{equation} Notice that from $(\ref{24})$ and $(\ref{29})$ we obtain \begin{equation} 0=\frac{\partial \phi}{\partial t}\frac{\partial^2\phi }{\partial x\partial t}+\frac{\partial \phi} {\partial x}\frac{\partial^2\phi }{\partial x^2}=\frac{\partial \phi}{\partial t}\left(\frac{\partial^2\phi }{\partial x\partial t}+p_1\frac{\partial \phi} {\partial x}\right), \end{equation} that is, $$ p_1\frac{\partial \phi}{\partial x}=-\frac{\partial^2\phi }{\partial x\partial t}. $$ Thus $(\ref{30})$ becomes \begin{align*} \frac{\partial v}{\partial t}(P)&=(k_g-p_1)|\nabla \phi|^2+\frac{\partial f}{\partial t}\left(\frac{\partial \phi}{\partial t}\right)^2+ \frac{\partial f}{\partial x}\frac{\partial \phi}{\partial x}\frac{\partial \phi}{\partial t}, \end{align*} which we write as $$ \frac{\partial v}{\partial t}(P)=(k_g-p_1)|\nabla \phi|^2+\frac{\partial \phi}{\partial t}\langle \nabla\phi, \nabla f\rangle. $$ Since $f|_{\partial M}$ is constant, $\frac{\partial f}{\partial x}(P)=0$; moreover, using that $\nu$ coincides with the normalized gradient of $f$ on $\partial M$ and that $\nu=-\partial/\partial t$, we have $\frac{\partial f}{\partial t}\leq0$. Hence \begin{align*} \frac{\partial v}{\partial t}(P)&=(k_g-p_1)|\nabla \phi|^2+\frac{\partial f}{\partial t}\left(\frac{\partial \phi}{\partial t}\right)^2\\ &\geq\left(k_g+\frac{\partial f}{\partial t}-p_1\right)|\nabla \phi|^2. \end{align*} Since $\frac{\partial v}{\partial t}(P)=-\frac{\partial v}{\partial \eta}(P)<0$, it follows that \begin{equation} \left(k_g+\frac{\partial f}{\partial t}-p_1\right)|\nabla \phi|^2<0, \end{equation} and $p_1>k_g+\frac{\partial f}{\partial t}=k_g-f_{\nu}\geq c$. Now we assume that $\frac{\partial \phi}{\partial x}(P)=0$. A straightforward calculation yields $$\frac{\partial^2 v}{\partial x^2}(P)=\left(\frac{\partial^2\phi}{\partial x\partial t}\right)^2+\frac{\partial \phi}{\partial t}\frac{\partial^3\phi}{\partial x^2\partial t}+\left(\frac{\partial^2\phi}{\partial x^2}\right)^2.$$ Using the boundary condition we get that \begin{equation}\label{31} \frac{\partial^2 v}{\partial x^2}(P)=p_1^2\phi\frac{\partial^2\phi}{\partial x^2}+\left(\frac{\partial^2\phi}{\partial x^2}\right)^2\leq0. \end{equation} Since $\frac{\partial \phi}{\partial x}(P)=0$, the equation $(\ref{23})$ implies that $$\frac{\partial v}{\partial t}(P)=k_g\left(\frac{\partial \phi}{\partial t}\right)^2+p_1\phi\frac{\partial^2\phi}{\partial x^2}+\frac{\partial f}{\partial t}\left(\frac{\partial \phi}{\partial t}\right)^2 =\left(k_g+\frac{\partial f}{\partial t}\right)p_1^2\phi^2+p_1\phi\frac{\partial^2\phi}{\partial x^2}.$$ Since $\frac{\partial v}{\partial t}(P)<0$, multiplying by $p_1>0$ gives \begin{equation}\label{32} \left(k_g+\frac{\partial f}{\partial t}\right)p_1^3\phi^2+p_1^2\phi\frac{\partial^2\phi}{\partial x^2}<0. \end{equation} Adding inequality $(\ref{31})$ to $(\ref{32})$ we obtain $$\left(\frac{\partial^2\phi}{\partial x^2}+p_1^2\phi\right)^2+p_1^3\left(k_g+\frac{\partial f}{\partial t}-p_1\right)\phi^2<0.$$ Hence $$p_1>k_g+\frac{\partial f}{\partial t}=k_g-f_{\nu}\geq c.$$ Finally, let us assume that $v$ is constant. Observe that $v\not\equiv0$, because $\phi$ is non-constant. Since $v$ is constant, $\Delta_fv=0$, and so inequality $(\ref{33})$ implies that $${\rm{Hess\,}}\phi=0\,\,\,\,\,\,\text{ and }\,\,\,\,\,\, {\rm{Ric}}_f(\nabla\phi,\nabla\phi)=0,\,\,\,\,\,\text{ on }\,\,\,\,M.$$ Now, using that $\Delta_f\phi=0$ (and that ${\rm{Hess\,}}\phi=0$ gives $\Delta\phi=0$), we obtain that $\langle \nabla \phi, \nabla f\rangle=\Delta\phi-\Delta_f\phi=0$, and thereby ${\rm{Hess\,}} f(\nabla\phi, \nabla\phi)=0.$ Since on a surface ${\rm{Ric}}_f=K\langle\,,\rangle+{\rm{Hess\,}} f$ and $|\nabla\phi|^2=2v$ is a positive constant, it follows that $K|\nabla\phi|^2={\rm{Ric}}_f(\nabla\phi,\nabla\phi)=0$, and thus the Gaussian curvature $K$ of $M$ vanishes.
Moreover, since $\langle\nabla f,\nabla\phi\rangle=0$ and $\nabla\phi$ is nowhere zero, using the structure of surfaces we may write \begin{equation}\label{854}\nabla f=\lambda\, J(\nabla\phi),\end{equation} where $J$ is the anti-clockwise rotation of $\pi/2$ in the tangent plane. Let $\{e_1,e_2\}$ be a local orthonormal frame field such that $e_1$ is tangent to $\partial M$ and $e_2=\eta$. So, writing $\phi_1=e_1(\phi)$, \begin{align*} 0={\rm{Hess\,}}\phi(e_1,e_2)&=e_1e_2(\phi)-\nabla_{e_1}e_2(\phi)\\ &=e_1(p_1\phi)-\langle\nabla_{e_1}e_2,e_1\rangle\phi_1\\ &=(p_1-k_g)\phi_1. \end{align*} Observe that if $\phi_1=0$ on $\partial M$, then $\phi$ is constant on $\partial M$ and hence $\phi$ is a constant function on $M$, which is a contradiction. Thus $p_1=k_g$ except maybe when $\phi_1=0$. Since ${\rm{Hess\,}}\phi(e_1,e_1)=0$ we have \begin{align*} 0={\rm{Hess\,}}\phi(e_1,e_1)&=e_1e_1(\phi)-\nabla_{e_1}e_1(\phi)\\ &=e_1(e_1\phi)-\langle\nabla_{e_1}e_1,e_2\rangle e_2(\phi)\\ &=e_1(e_1\phi)+k_gp_1\phi. \end{align*} Hence, on the boundary, $\phi$ satisfies the second order differential equation \begin{align}\label{35} \frac{d^2\phi}{dx^2}+k_gp_1\phi&=0,\\ \phi(0)&=\phi(\ell),\nonumber \end{align} where $\ell$ represents the length of $\partial M$. The function $\phi$ does not vanish identically; thus $\phi_1$ vanishes only at a finite number of points. Therefore $p_1=k_g$ except at a finite number of points, and using the continuity of $k_g$ we conclude that $p_1=k_g$ everywhere. Therefore, $$p_1 = k_g-f_\nu + f_\nu\geq c,$$ and equality between $p_1$ and $c$ occurs only if $k_g=c$ and $f_\nu=0$. Using that $K=0$ and that $k_g$ is a positive constant, we conclude that $M$ is a Euclidean ball. Furthermore, from the identity $(\ref{854})$, after a straightforward computation we obtain that $${\rm{Hess\,}} f = \dfrac{\Delta f}{2v}\left( J(\nabla\phi)\otimes J(\nabla\phi)\right).$$ It is easy to see, using that $M$ is a Euclidean ball, that $\phi=x_i$, that is, $\phi$ is a coordinate function. Thus, using the expression of $\phi$, $f$ satisfies ${\rm{Hess\,}} f = 0$, and since $f$ is constant on the boundary, we conclude that $f$ is constant. \qed \bigskip
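As an illustration of the sharpness of this estimate, consider the Euclidean unit disk with $f$ constant: the coordinate functions $\phi=x_i$ are harmonic and satisfy $\frac{\partial \phi}{\partial \nu}=\phi$ on the unit circle, so that $p_1=1$, while $k_g=1$ and $f_\nu=0$; hence $p_1=k_g-f_\nu$, in accordance with the equality case discussed above.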
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and Results} The trajectory of a free point particle in a Riemannian manifold is modeled by a geodesic. Geodesics arise from a variational problem; the existence of closed solutions can be studied by various methods and is well understood at present. A powerful method to ensure their existence is the so-called heat flow method. Here, one deforms a given initial curve \(\gamma_0\) by a heat-type equation and obtains a closed geodesic in the end. For geodesics, this method was successfully applied by Ottarsson \cite{MR834094}, building on the famous existence result for harmonic maps into manifolds with non-positive curvature due to Eells and Sampson \cite{MR0164306}. \par\medskip The aim of this article is to study the equation which determines the trajectory of a point particle in a Riemannian manifold in the presence of an external magnetic field. This equation is known as the equation for \emph{magnetic geodesics} or the \emph{prescribed geodesic curvature equation}. It can also be derived from a variational principle; however, the corresponding energy functional is \(U(1)\)-valued in general. \par\medskip The existence of magnetic geodesics has been ensured by various methods. This includes methods from dynamical systems and symplectic geometry \cite{MR890489}, \cite{MR902290}, \cite{MR1458315}, \cite{MR1888853}, \cite{MR1432462}, \cite{MR1417851}, \cite{MR2250797} and variational methods for multivalued functionals (Morse-Novikov-theory) \cite{MR1133303}, \cite{MR1185286}, \cite{MR730159}. Finally, one can also derive an existence result by Aubry-Mather's theory \cite{MR2036336}. Recently, a new approach was introduced: An existence result for magnetic geodesics on \(S^2\) \cite{MR2788659} and also on closed hyperbolic surfaces \cite{MR2959932} is established by studying the zeros of a certain vector field. \par\medskip In this article we use the heat flow method to approach the existence of magnetic geodesics. Namely, we deform a given initial curve \(\gamma_0\) by a heat-type equation and study in which cases this equation converges to a magnetic geodesic. This approach was initiated in \cite{MR2551140}. In \cite{MR1800592} a similar problem is discussed, namely the heat flow for harmonic maps coupled to a potential. Let us describe the problem in more detail. Suppose that $N$ is a closed Riemannian manifold and \(\gamma\colon S^1\to N\) is a smooth curve. The $U(1)$-valued energy functional for magnetic geodesics is given by \begin{equation} \label{energy-functional-holonomy} C^\infty(S^1,N)\to U(1),\qquad\gamma\mapsto e^{i\frac{1}{2}\int_{S^1}|\gamma'|^2ds}\hol(\gamma), \end{equation} where \(\hol(\gamma)\) represents the holonomy of the magnetic field (more precisely, of the corresponding gerbe \(\cG\)) along \(\gamma\). Moreover, \(\gamma'\) represents the derivative with respect to the curve parameter \(s\). In the case that the magnetic field is given by an exact form (or in the language of gerbes: the curvature 2-form is exact), we may rewrite the energy functional as the sum of a kinetic and a magnetic contribution \begin{equation} \label{energy-functional} E(\gamma)= E_{kin}(\gamma) + E_A(\gamma) := \frac{1}{2}\int_{S^1}|\gamma'|^2ds+\int_{S^1}\gamma^\ast A, \end{equation} where the one-form \(A\) is the potential for the magnetic field.
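As a standard illustration (recorded here only for orientation), consider the flat case $N=\R^2$ with $A=\frac{B_0}{2}(x\,dy-y\,dx)$, so that $dA=B_0\,dx\wedge dy$ is a homogeneous magnetic field of strength $B_0$. If $\gamma$ bounds a disk $D\subset\R^2$, Stokes' theorem identifies the magnetic contribution with a flux: \[ E_A(\gamma)=\int_{S^1}\gamma^\ast A=\int_D dA=B_0\cdot\mathrm{area}(D), \] up to the choice of orientation.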
The critical points of \eqref{energy-functional-holonomy} and \eqref{energy-functional} are given by \begin{equation} \tau(\gamma)=Z(\gamma'), \end{equation} where \(\tau\) denotes the tension field of the curve \(\gamma\) and \(Z\in\Gamma (\Hom(TN,TN))\) represents the magnetic field strength. Note that the critical point equation is globally defined, even in the case that the energy functional is not. In the following, we study the \(L^2\)-gradient flow of the energy functional (\ref{energy-functional}), which is given by \begin{equation} \label{gradient-flow} \left\{ \begin{array}{l l} \dot{\gamma}_t(s,t)=\tau(\gamma_t)(s,t)-Z(\gamma'_t)(s,t),\qquad (s,t)\in S^1\times [0,\infty),\\ \gamma(s,0)=\gamma_0(s),\\ \end{array} \right. \end{equation} where we use \(\dot{\gamma}\) to denote the derivative of $\gamma$ with respect to \(t\). Note that neither the functionals \eqref{energy-functional-holonomy} and \eqref{energy-functional} nor the equations of motion derived from them are invariant under rescaling of the domain of definition. Thus, strictly speaking, we have a different problem for each class of curves $C^\infty(S^1_r, N)$, where $S^1_r$ denotes the circle of length $2\pi r$. From a physical point of view, the length of the circle encodes the period of revolution of the charged particle. Since we are interested in the existence of closed trajectories regardless of this time, we ultimately look for critical points defined on some $S^1_r$. We will nevertheless mostly work with $S^1 := S^1_1$ but make use of the possibility of rescaling the circle in Corollary \ref{CorRescaling}.\\ The existence of a short-time solution of the evolution equation \eqref{gradient-flow} with existence interval \([0,T)\) for initial data \(\gamma_0\in C^{2+\alpha}(S^1,N)\) is guaranteed by Theorem 19 in \cite{MR2551140}. Moreover, Theorem 1 in the same reference shows that \eqref{gradient-flow} has a unique, smooth solution for all \(t\in[0,\infty)\). However, the question in which cases the gradient flow converges was not fully answered in \cite{MR2551140}.\\ In this article, we will prove the following main results: \begin{Satz} Let \(\gamma_t\colon S^1\times [0,\infty)\to N\) denote the unique solution of \eqref{gradient-flow} associated to \(Z\in\Gamma (\Hom(TN,TN))\) and initial condition \(\gamma_0\in C^{2+\alpha}(S^1,N)\). \begin{enumerate} \item If the magnetic field admits a global potential, then (\ref{gradient-flow}) subconverges and we obtain a smooth magnetic geodesic \(\gamma_\infty\). If $\gamma_0$ is not null-homotopic, or if $E(\gamma_0) \leq 0$ and $\gamma_0$ is not constant, then the magnetic geodesic \(\gamma_\infty\) is not trivial (i.e.\ not a point). \item If $|Z|_{L^\infty}$ is sufficiently small and the initial curve has sufficiently small kinetic energy (see Lemma \ref{LemApplyOttarson} for a precise formulation), then (\ref{gradient-flow}) subconverges. Under a slightly stronger assumption on $|Z|_{L^\infty}$, $\gamma_t$ converges to a point. \label{Item2} \end{enumerate} \end{Satz} Case \eqref{Item2}, where the magnetic field is not derived from a global potential, is particularly challenging since \eqref{gradient-flow} is no longer the gradient flow associated to a globally defined energy.\\ This paper is organized as follows: In Section 2 we recall the derivation of the equation for magnetic geodesics and the Bochner formulas for the associated evolution equation. Section 3 then discusses the convergence of the gradient flow.
We give a general criterion for subconvergence of the flow (Theorem \ref{Convergence}) and show that it can be applied to the cases where either the field admits a global potential (Theorem \ref{ThmConvExactCase}) or the initial curve has sufficiently low energy and the magnetic field is weak (Theorem \ref{ThmConvergenceOttarson}). Finally, in Section 4 we calculate some examples and compare our general theorems with these explicit results. \section{Critical points and the Gradient Flow} Let us briefly recall the derivation of the critical points; see for example \cite{KohDiss} (Proposition 2.4). \begin{Lem}[Critical points and Second Variation] \begin{enumerate} \item The critical points of the energy functional (\ref{energy-functional}) satisfy \begin{equation} \label{first-variation} \tau(\gamma)(s)=Z(\gamma')(s), \end{equation} where \(\tau\) is the tension field of the curve \(\gamma\) and \(Z\in\Gamma(\Hom(TN,TN))\). \item The second variation of the energy functional (\ref{energy-functional}) yields \begin{equation} \label{second-variation} \frac{\delta^2}{\delta\gamma^2}E(\gamma)=\int_{S^1}(|\nabla\eta|^2-\langle R^N(\eta,\gamma')\eta,\gamma'\rangle+\langle(\nabla_\eta Z)(\gamma'),\eta\rangle +\langle Z(\nabla\eta),\eta\rangle)ds. \end{equation} \end{enumerate} \end{Lem} Solutions of (\(\ref{first-variation}\)) are called \emph{magnetic geodesics}. \begin{proof} Consider a family of smooth variations of $\gamma$ satisfying $\frac{\partial\gamma_t}{\partial t}\big|_{t=0}=\eta$. The first variation of the energy of a curve is given by \[ \frac{d}{dt}\bigg|_{t=0}\frac{1}{2}\int_{S^1}|\gamma_t'|^2ds=-\int_{S^1}\langle\tau(\gamma),\eta\rangle ds \] with variational vector field \(\eta\); see for example \cite{MR2431658}, p.2. The first variation of the holonomy functional on the other hand gives \[ \frac{d}{dt}\bigg|_{t=0}\int_{S^1}\gamma_t^\ast A=-\int_{S^1}\Omega(\eta,\gamma')ds \] with variational vector field \(\eta\) and $\Omega = dA$; see for example \cite{MR2362847}, p.234. To the 2-form \(\Omega\in\Gamma(\Lambda^2T^*N)\), we associate the smooth section $Z$ of \(\Hom(TN,TN)\) defined by the equation \begin{equation} \label{EqZOmega} \langle\eta,Z(\xi)\rangle=\Omega(\eta,\xi) \end{equation} for all \(\eta,\xi\in TN\) and thus the formula for the first variation follows. \par\medskip Concerning the second variation, recall the second variation of the energy of a curve (see for example \cite{MR2431658}, p.8): \[ \frac{d^2}{dt^2}\bigg|_{t=0}\frac{1}{2}\int_{S^1}|\gamma_t'|^2ds=\int_{S^1}(|\nabla\eta|^2-\langle R^N(\eta,\gamma')\eta,\gamma'\rangle)ds. \] For the second variation of the magnetic term, consider again a variation of \(\gamma\) satisfying $\frac{\partial\gamma_t}{\partial t}\big|_{t=0}=\eta$ and calculate \[ \frac{\nabla}{\partial t}Z(\gamma'_t)=(\nabla_{\dot{\gamma_t}}Z)(\gamma_t')+Z(\nabla\dot{\gamma_t}). \] Evaluating at \(t=0\) yields the result. \end{proof} \begin{Bem} If the magnetic field is not exact, then the first variation of \eqref{energy-functional-holonomy} also gives \eqref{first-variation}. \end{Bem} Now we recall two standard Bochner formulas, which were already proven in \cite{MR2551140}, Proposition 10. \begin{Lem}[Bochner Formulas] Let \(\gamma_t\colon S^1\times [0,T)\to N\) be a solution of (\ref{gradient-flow}) and let \(Z\in\Gamma(\Hom(TN,TN))\).
Then the following Bochner formulas hold: \begin{align} \label{bochner1} \frac{\partial}{\partial t}\frac{1}{2}|\gamma'_t|^2=&\Delta\frac{1}{2}|\gamma'_t|^2-|\tau(\gamma_t)|^2+\langle Z(\gamma'_t),\tau(\gamma_t)\rangle, \\ \label{bochner2} \frac{\partial}{\partial t}\frac{1}{2}|\dot{\gamma}_t|^2=&\Delta\frac{1}{2}|\dot{\gamma}_t|^2-|\nabla\dot{\gamma}_t|^2+\langle R^N(\dot{\gamma}_t,\gamma'_t)\dot{\gamma}_t,\gamma'_t\rangle -\langle(\nabla_{\dot{\gamma_t}}Z)(\gamma'_t),\dot{\gamma_t}\rangle-\langle Z(\frac{\nabla}{\partial t}\gamma'_t),\dot{\gamma}_t\rangle. \end{align} \end{Lem} \begin{proof} Regarding the first equation, a direct computation yields \[ \frac{\partial}{\partial t}\frac{1}{2}|\gamma'_t|^2=\Delta\frac{1}{2}|\gamma'_t|^2-|\tau(\gamma_t)|^2-\langle \nabla Z(\gamma'_t),\gamma'_t\rangle. \] Using the identity \[ \langle \nabla Z(\gamma'_t),\gamma'_t\rangle=\frac{\partial}{\partial s}\underbrace{\langle Z(\gamma'_t),\gamma'_t\rangle}_{=0}-\langle Z(\gamma'_t),\tau(\gamma_t)\rangle, \] the first assertion follows. For the second statement, again by a direct computation we get \[ \frac{\partial}{\partial t}\frac{1}{2}|\dot{\gamma}_t|^2=\Delta\frac{1}{2}|\dot{\gamma}_t|^2-|\nabla\dot{\gamma}_t|^2 +\langle R^N(\dot{\gamma}_t,\gamma'_t)\dot{\gamma}_t,\gamma'_t\rangle -\langle\frac{\nabla}{\partial t} Z(\gamma'_t),\dot{\gamma}_t\rangle. \] Differentiating with respect to \(t\) we find \[ \frac{\nabla}{\partial t}Z(\gamma'_t)=(\nabla_{\dot{\gamma_t}}Z)(\gamma_t')+Z(\frac{\nabla}{\partial t}\gamma_t') \] and thus the statement follows. \end{proof} With the help of the maximum principle we are now able to derive estimates, which are obtained similarly to Corollary 13 in \cite{MR2551140}: \begin{Lem} Let \(\gamma_t\colon S^1\times [0,T)\to N\) be a solution of (\ref{gradient-flow}) and let \(Z\in\Gamma (\Hom(TN,TN))\). Then the following estimates hold: \begin{align} \label{estimate-gamma'} |\gamma'_t|^2\leq& |\gamma'_0|^2e^{C_1t},\\ \label{estimate-gamma-dot} |\dot{\gamma}_t|^2\leq& |\dot{\gamma}_0|^2e^{C_2e^{t}+C_3t}. \end{align} The constant \(C_1\) depends on \(|Z|_{L^\infty}\), the constant \(C_2\) depends on \(N,|Z|_{L^\infty},|\nabla Z|_{L^\infty},|\gamma_0'|\) and the constant \(C_3\) depends on \(|Z|_{L^\infty},|\gamma_0'|\). \end{Lem} \begin{proof} To derive the first inequality we estimate the Bochner formula (\ref{bochner1}) and find \[ \frac{\partial}{\partial t}\frac{1}{2}|\gamma'_t|^2\leq\Delta\frac{1}{2}|\gamma'_t|^2+\frac{1}{4}|Z|^2_{L^\infty}|\gamma'_t|^2. \] Now, apply the maximum principle and set \(c_1=\frac{1}{2}|Z|^2_{L^\infty}\). Regarding the second inequality, we estimate the Bochner formula (\ref{bochner2}) and use the estimate on \(|\gamma_t'|^2\) to obtain \begin{align} \frac{\partial}{\partial t}\frac{1}{2}|\dot{\gamma}_t|^2 \leq& \Delta\frac{1}{2}|\dot{\gamma}_t|^2-|\nabla\dot{\gamma}_t|^2+c_2|\gamma'_t|^2|\dot{\gamma}_t|^2+c_3|\dot{\gamma}_t|^2|\gamma'_t| +\sqrt{2c_1}|\dot{\gamma}_t||\frac{\nabla}{\partial t}\gamma'| \label{EqBochnerEstimate} \\ \leq&\Delta\frac{1}{2}|\dot{\gamma}_t|^2 +c_2|\dot{\gamma}_t|^2|\gamma'_0|^2e^{c_1t}+c_3|\dot{\gamma}_t|^2|\gamma'_0|e^{\frac{c_1}{2}t}+\frac{c_1}{2}|\dot{\gamma}_t|^2 \notag \\ \leq&\Delta\frac{1}{2}|\dot{\gamma}_t|^2 +|\dot{\gamma}_t|^2\left(c_2|\gamma'_0|^2e^{c_1t}+c_3|\gamma'_0|e^{\frac{c_1}{2}t}+\frac{c_1}{2}\right) \notag \end{align} with the constants \(c_2=|R^N|_{L^\infty}\) and \(c_3=|\nabla Z|_{L^\infty}\).
Again, by application of the maximum principle, we have to estimate the solution of \[ \frac{\partial}{\partial t}|\dot{\gamma}_t|^2\leq c_4|\dot{\gamma}_t|^2(e^{c_1t}+c_5) \] with some positive constants \(c_4\) and \(c_5\). Integrating the ODE and rearranging the constants completes the proof. \end{proof} The estimates from the last Lemma are sufficient to establish the following statement, which is proven in \cite{MR2551140}, Theorem 1: \begin{Satz}\label{Dennislangzeit} For any $\gamma_0\in C^{2+\alpha}(S^1,N)$, there exists a unique $\gamma \in C^{\infty}(S^1 \times [0,\infty),N)$ satisfying \eqref{gradient-flow} with initial condition $\gamma(\cdot,0)=\gamma_0$. \end{Satz} \section{Convergence of the Gradient Flow} \label{SecConvergence} If we want to achieve the convergence of the gradient flow, we have to establish energy estimates that are independent of the deformation parameter \(t\). Thus, we have to improve the estimates \eqref{estimate-gamma'} and \eqref{estimate-gamma-dot} obtained by the maximum principle. We will make use of the following Lemma, which combines the pointwise maximum principle with an integral norm. \begin{Lem} \label{maximum-principle-l2} Assume that \((M,h)\) is a compact Riemannian manifold. If a function \(u(x,t)\geq 0\) satisfies \[ \frac{\partial u}{\partial t}\leq \Delta_h u+Cu, \] and if in addition we have the bound \[ U(t)=\int_Mu(x,t)dM\leq U_0, \] then there exists a uniform bound \[ u(x,t)\leq e^CKU_0, \] with the constant \(K\) depending only on the geometry of \(M\). \end{Lem} \begin{proof} A proof can for example be found in \cite{MR2744149}, p.\ 284. \end{proof} Based on this result, we obtain the following general result concerning convergence of the heat flow: \begin{Thm}[Convergence] \label{Convergence} Let \(\gamma:S^1\times [0,\infty)\to N\) be a solution of \eqref{gradient-flow} and \((N,g)\) be a compact Riemannian manifold. If we have uniform bounds \begin{equation} \int_{S^1}|\gamma'_t|^2ds\leq C,\qquad \int_0^\infty\int_{S^1}|\dot{\gamma}_t|^2dsdt\leq C \end{equation} for all \(t\in [0,\infty)\), then the evolution equation \eqref{gradient-flow} subconverges in \(C^2(S^1,N)\) to a magnetic geodesic \(\gamma_\infty\). \end{Thm} \begin{proof} By assumption, we have a uniform bound on the \(L^2\) norm of \(\gamma'_t\). Together with the pointwise equation (\ref{bochner1}) and Lemma \ref{maximum-principle-l2} we get the pointwise bound \(|\gamma'_t|^2\leq C\). Using this bound on \(|\gamma'_t|^2\) together with an estimate analogous to \eqref{EqBochnerEstimate}, we note that \(|\dot{\gamma}_t|^2\) now satisfies \[ \frac{\partial}{\partial t}|\dot{\gamma}_t|^2\leq\Delta|\dot{\gamma}_t|^2+C|\dot{\gamma}_t|^2 \] for some constant \(C\). Integrating over \(S^1\) and \(t\) from \(0\) to \(T\), we find the following bound \begin{equation} \label{EqEstimate2} \int_{S^1}|\dot{\gamma}_t|^2ds\leq \int_{S^1}|\dot{\gamma}_0|^2ds + C\int_0^T\int_{S^1}|\dot{\gamma}_t|^2dsdt\leq C. \end{equation} By Lemma \ref{maximum-principle-l2} and the assumption, we now also obtain a pointwise bound on \(|\dot\gamma_t|^2\leq C\), uniform in $t$.\\ As a next step, we apply Nash's embedding theorem, that is, we assume that $N$ is realized as a Riemannian submanifold of some $\R^q$. Thus, we may work with vector valued maps $S^1 \times [0,\infty) \rightarrow \R^q$ and $S^1 \rightarrow \R^q$ whose image is contained in $N \subset \R^q$ and use the associated H\"older spaces $C^{k,\alpha}(S^1,\R^q)$.
Equation \eqref{gradient-flow} now takes the form \begin{align} \label{EqGradientFlowRn} \dot{\gamma} &= -\Delta_{S^1,\R^q}(\gamma) + \mathbb{I}^N(\gamma',\gamma') + Z(\gamma'), \end{align} where $\Delta_{S^1,\R^q}$ denotes the linear Hodge-Laplacian acting on $C^\infty(S^1,\R^q)$ and $\mathbb{I}^N$ the second fundamental form of the embedding $N \subset \R^q$. We improve our estimates with the help of Schauder theory. Following the presentation in \cite{MR2551140} (proof of Theorem 22, in particular (31)) and viewing \eqref{EqGradientFlowRn} as a linear elliptic equation for $\gamma$, the elliptic Schauder estimates (\cite{MR2371700}, p.463, Thm.27) in $\R^q$ yield \begin{align} \label{Eq_1_alpha_Reg} |\gamma_t|_{C^{1,\alpha}(S^1,\R^q)} &\leq C \bigl( |\gamma_t'|^2_{L^\infty(S^1,\R^q)} + |\gamma_t'|_{L^\infty(S^1,\R^q)} + |\gamma_t|_{L^\infty(S^1,\R^q)} + |\dot{\gamma}_t|_{L^\infty(S^1,\R^q)} \bigr), \end{align} where the constant $C$ may depend on $N,Z, \gamma_0, \alpha$ and the embedding $N \hookrightarrow \R^q$ but not on $t \in [0,\infty)$. By the first part of the proof and the compactness of $N$, we thus get the bound \begin{align*} |\gamma_t|_{C^{1,\alpha}(S^1,\R^q)} &\leq C \end{align*} uniform in $t$. Viewing \eqref{EqGradientFlowRn} as a linear parabolic equation for $\gamma$ and using the corresponding Schauder estimates (see again \cite{MR2551140}, proof of Theorem 22 and also \cite{MR1465184}, Theorem 4.9 for the local version of the estimate), we obtain \begin{align*} |\gamma_t|_{C^{2,\alpha}(S^1,\R^q)} + |\dot{\gamma}_t|_{C^\alpha(S^1,\R^q)} &\leq C \bigl( |\mathbb{I}^N(\gamma_t',\gamma_t') + Z(\gamma_t')|_{C^\alpha(S^1,\R^q)} + |\gamma_t|_{L^\infty(S^1,\R^q)} \bigr) \\ &\leq C \bigl( |\gamma_t'|^2_{C^{\alpha}(S^1,\R^q)} + |\gamma_t'|_{C^\alpha(S^1,\R^q)} + |\gamma_t|_{L^\infty(S^1,\R^q)} \bigr). \end{align*} Using the compactness of $N$ and \eqref{Eq_1_alpha_Reg}, we get bounds \begin{align} \label{EqUniformBound1} |\gamma_t|_{C^{2,\alpha}(S^1,\R^q)}, |\dot{\gamma}_t|_{C^\alpha(S^1,\R^q)} &\leq C , \end{align} which are again uniform in $t \in [0,\infty)$. Now, by assumption, we have the estimate \begin{align*} \int_0^{\infty}\int_{S^1}|\dot{\gamma_t}|^2dsdt\leq C. \end{align*} Hence, there exists a sequence $t_k \rightarrow \infty$ such that \begin{align} \label{EqL2Convergence} |\dot{\gamma}_{t_k}|^2_{L^2(S^1)}\to 0 \end{align} as $k \rightarrow \infty$. Using the bounds from \eqref{EqUniformBound1} and the Arzel\`a--Ascoli theorem, there exists a subsequence (again denoted by $\gamma_{t_k}$), which converges in $C^2$ to a limiting map $\gamma_\infty \in C^{2}(S^1,\R^q)$ such that $\dot{\gamma}_{t_k}$ converges in $C^0(S^1,\R^q)$. In fact, $\gamma_\infty$ defines an element of $C^2(S^1,N)$ because we have $\gamma_t(S^1) \subset N \subset \R^q$ for all $t$. Finally, \eqref{EqL2Convergence} implies that $\lim_{k \to \infty}\dot{\gamma}_{t_k} = 0$ and we conclude that $\gamma_\infty$ is a $C^2$-solution of \eqref{first-variation}. Since $\gamma$ is smooth in $t$, it is moreover clear that $\gamma_\infty$ is homotopic to $\gamma_0$.
\end{proof} Since the limit in the previous proof is a $C^2$-solution of \eqref{first-variation} by construction, and recalling that a magnetic geodesic has constant energy density, which follows from \[ \frac{\partial}{\partial s}\frac{1}{2}|\gamma'_\infty|^2=\langle\tau(\gamma_\infty),\gamma'_\infty\rangle=\langle Z(\gamma'_\infty),\gamma'_\infty\rangle=0, \] the standard bootstrap argument for elliptic equations now implies the following regularity result: \begin{Cor} The limit $\gamma_\infty$ from Theorem \ref{Convergence} is smooth. \end{Cor} \begin{Bem} In the case of the heat flow for \emph{geodesics} and under the assumption that the target manifold \(N\) has non-positive curvature, a theorem due to Hartman \cite{MR0214004} states that the limiting map \(\gamma_\infty\) is independent of the chosen subsequence. This theorem uses the fact that the second variation of the harmonic energy is positive. Due to the extra term in the energy functional \eqref{energy-functional}, there is a corresponding term in the second variation \eqref{second-variation} and we do not get such a statement here. \end{Bem} \begin{Bem} So far, we have established the existence of a convergent subsequence. This does not necessarily imply that the gradient flow itself converges. This phenomenon also occurs in the heat flow for closed geodesics, see \cite{choi-parker}. \end{Bem} \begin{Bem} \label{RemGeneralConvergence} Let \(\gamma_t\) be a solution of \eqref{gradient-flow}. Then, for $T > 0$, we have \begin{equation} \label{energy-equality} \frac{1}{2}\int_{S^1}|\gamma_T'|^2ds+\int_0^T\int_{S^1}|\dot{\gamma}_t|^2dsdt+\int_0^T\int_{S^1}\Omega(\dot{\gamma}_t,\gamma_t')dsdt=\frac{1}{2}\int_{S^1}|\gamma'_0|^2ds. \end{equation} The only term in \eqref{energy-equality} not having a definite sign is the last term on the left hand side. By Theorem \ref{Convergence}, we may in particular expect that if we have a uniform bound \begin{equation*} \int_0^T\int_{S^1}\Omega(\dot{\gamma}_t,\gamma_t')dsdt\geq C > -\infty, \end{equation*} then the evolution equation \eqref{gradient-flow} subconverges to a magnetic geodesic. \end{Bem} In the following we will describe two situations in which we can ensure the necessary estimates such that we can apply Theorem \ref{Convergence}. \subsection{The case of null-homotopic curves} Throughout this section we assume that the curves \(\gamma_t\) are null-homotopic. If \(U\) is a subset of \(N\), we define \begin{align} \label{EqDefKappa_r} \kappa_U &:=\sup\{\langle R^N(X,Y)Y,X\rangle\mid X,Y\in T_pN,|X|=|Y|=1,p\in U\}. \end{align} To give a sufficient condition for the application of Theorem \ref{Convergence}, we want to apply the following Poincaré type inequality (for a derivation see \cite{MR834094}, p.58): \begin{Satz} \label{ThmOttarson} Let \(\gamma\colon S^1\to N\) be a non-constant closed \(C^2\) curve and assume w.l.o.g. that the maximum of the energy density occurs at \(s=0\). Suppose that the image of \(\gamma\) is contained in a ball \(B_r(\gamma(0))\) such that the exponential map \[ \exp_{\gamma(0)}\colon T_{\gamma(0)}N\to N, \] restricted to \(B_r(0)\) in \(T_{\gamma(0)}N\), is a diffeomorphism onto \(B_r(\gamma(0))\). In case \(\kappa_B>0\), assume in addition that \(r<(2\sqrt{\kappa_B})^{-1}\).
Then the following inequality holds: \begin{equation} \label{poincare-tension} \frac{1}{4\pi^2}\int_{S^1}|\gamma'|^2ds\leq\int_{S^1}|\tau(\gamma)|^2ds. \end{equation} \end{Satz} \begin{Bem} \label{RemOttarson} \begin{enumerate} \item The estimate in \eqref{poincare-tension} can be improved for specific geometries. As an example, consider $\R^n$ with the flat Euclidean metric. A simple calculation using Fourier series (see also \cite{MR834094} p.57) then shows that the inequality still holds with $1/4\pi^2$ replaced by $1$. For a general Riemannian manifold $(N,g)$, let us denote the optimal constant for \eqref{poincare-tension} by $O(N,g)$. By Theorem \ref{ThmOttarson}, we thus have $O(N,g) \in [1/(4\pi^2), \infty)$. Moreover, computing the integrals for $\gamma$ a circle with radius 1, it is straightforward to see that actually $O(\R^n,g_{Eucl}) = O(T^n,g_{flat}) = 1$. \item The following examples show that the assumptions in Theorem \ref{ThmOttarson} are necessary: Consider the two-dimensional flat torus and a nontrivial closed geodesic on it. Since \eqref{poincare-tension} is clearly violated, we see that the condition concerning the exponential map cannot be omitted. Similarly, the statement is not valid without the bound on the radius $r$ in case of $\kappa_{B_r} > 0$: Consider a circle $c_h$ of latitude $h > 0$ on $S^2$. Clearly, for each point $p$ on $c_h$, the curve is completely contained in the ball of radius $\pi$ (= the injectivity radius of $S^2$) around $p$. On the other hand, for $h \searrow 0$, this circle converges to the equator and hence $\int_{S^1} |\tau(c_h)|^2ds \rightarrow 0$, whereas $\int_{S^1} |c_h'|^2ds$ converges to some positive number. This shows that we have to restrict to balls of radius smaller than the injectivity radius in order to ensure \eqref{poincare-tension} without assuming $K \leq 0$. \end{enumerate} \end{Bem} \begin{Lem} \label{LemZAssump} Let \(\gamma:S^1\times [0,\infty)\to N\) be a solution of (\ref{gradient-flow}) and \((N,g)\) be a compact Riemannian manifold. Assume that \begin{equation} |Z|^2_{L^\infty}\leq\frac{1}{4\pi^2} \end{equation} and that Theorem \ref{ThmOttarson} can be applied to the curves $\gamma_t$ for all $t \in [0,\infty)$. Then we obtain the uniform bounds \begin{equation} \int_{S^1}|\gamma'_t|^2ds\leq C,\qquad \int_0^\infty\int_{S^1}|\dot{\gamma}_t|^2dsdt\leq C. \end{equation} \end{Lem} \begin{proof} We rewrite (\ref{energy-equality}) in the following way \begin{align*} \frac{1}{2}\int_{S^1}|\gamma'_T|^2ds+\frac{1}{2}\int_0^T\int_{S^1}|\dot{\gamma}_t|^2dsdt= \frac{1}{2}\int_{S^1}|\gamma'_0|^2ds-\frac{1}{2}\int_0^T\int_{S^1}|\dot{\gamma}_t|^2dsdt-\int_0^T\int_{S^1}\Omega(\dot{\gamma}_t,\gamma_t')dsdt. \end{align*} By a direct calculation using the evolution equation (\ref{gradient-flow}) and the identity $\Omega(\dot{\gamma}_t,\gamma'_t)=\langle\dot{\gamma}_t,Z(\gamma'_t)\rangle$ from \eqref{EqZOmega}, we obtain \[ -\frac{1}{2}|\dot{\gamma}_t|^2-\Omega(\dot{\gamma}_t,\gamma'_t)=-\frac{1}{2}|\tau(\gamma_t)|^2+\frac{1}{2}|Z(\gamma_t')|^2. \] Combining both equations, we find for \(T>0\) \[ \int_{S^1}|\gamma'_T|^2ds+\int_0^T\int_{S^1}|\dot{\gamma}_t|^2dsdt=\int_{S^1}|\gamma'_0|^2ds + \int_0^T\int_{S^1}(-|\tau(\gamma_t)|^2+|Z(\gamma'_t)|^2)dsdt. \] Using the assumption on \(|Z|^2_{L^\infty}\) and applying Ottarsson's inequality \eqref{poincare-tension}, we obtain \begin{equation} \int_{S^1}|Z(\gamma'_t)|^2ds-\int_{S^1}|\tau(\gamma_t)|^2ds\leq(4\pi^2|Z|^2_{L^\infty}-1)\int_{S^1}|\tau(\gamma_t)|^2ds\leq 0, \end{equation} which gives the result.
\end{proof} \begin{Bem} If we think of magnetic geodesics as curves of prescribed geodesic curvature, then the assumption of Lemma \ref{LemZAssump} means that we have to restrict to small curvature. \end{Bem} We next discuss the question whether the conditions from Theorem \ref{ThmOttarson} are preserved under the flow, i.e. whether we may use estimate \eqref{poincare-tension} provided it is satisfied for $t =0$. The following Lemma shows that this is indeed possible for curves of sufficiently low kinetic energy. To give a precise statement of this condition, let us define \begin{align} r(N) &:= \begin{cases} \mathrm{injrad}(N) &\text{ if } \kappa_N \leq 0, \\ \mathrm{min}(\mathrm{injrad}(N),(2\sqrt{\kappa_N })^{-1} ) &\text{ if } \kappa_N > 0, \end{cases} \label{EqDefRadius} \end{align} where $\kappa_N$ was defined in \eqref{EqDefKappa_r}. \begin{Lem} \label{LemApplyOttarson} If the assumptions from Lemma \ref{LemZAssump} on $Z$ are satisfied, then for any initial curve of kinetic energy smaller than $r(N)^2 / (16\pi)$, Ottarsson's estimate \eqref{poincare-tension} is valid for all curves $\gamma_t, \ t \in [0,\infty)$ of the associated flow. \end{Lem} \begin{proof} We first observe that \begin{align} \frac{\partial}{\partial t}\frac{1}{2}\int_{S^1}|\gamma'_t|^2ds=&-\int_{S^1}|\tau(\gamma_t)|^2ds+\int_{S^1}\langle\tau(\gamma_t),Z(\gamma'_t)\rangle ds \notag \\ \leq&\frac{1}{2}\big(-\int_{S^1}|\tau(\gamma_t)|^2ds+|Z|^2_{L^\infty}\int_{S^1}|\gamma'_t|^2ds\big) \notag \\ \leq&\frac{1}{2}\big(|Z|_{L^\infty}^2-\frac{1}{4\pi^2}\big)\int_{S^1}|\gamma'_t|^2ds, \label{EqEstimate1} \end{align} where we applied (\ref{poincare-tension}) in the last step. By the definition of $r(N)$ from \eqref{EqDefRadius} and the compactness of $N$, we have $r(N) > 0$. Let $\gamma_0$ be an initial curve satisfying $E_{kin}(\gamma_0) < (r(N))^2 / (16\pi)$. By the Cauchy-Schwarz inequality, the length of $\gamma_0$ can be estimated by $L(\gamma_0) < r(N)/2$. Thus, for any point $p \in \gamma_0(S^1)$, we have $\gamma_0(S^1) \subset B_{r(N)}(p)$ and by the choice of $r(N)$ and continuity, we may apply Theorem \ref{ThmOttarson} on some time interval $[0,\epsilon)$. Now assume that \begin{align*} J &:= \{t \in [0,\infty) \mid \eqref{poincare-tension} \text{ is not valid } \} \subset [0,\infty) \end{align*} is \emph{not empty}. Then, $T := \inf(J) \geq \epsilon > 0$ and \eqref{poincare-tension} holds on $[0,T)$. By \eqref{EqEstimate1} and the assumption on $Z$, we have $\partial E_{kin}(\gamma_t) / \partial t \leq 0$ on $[0,T)$, which implies $E_{kin}(\gamma_T) \leq E_{kin}(\gamma_0)$. Thus, we may argue as before to show that \eqref{poincare-tension} is in fact valid on $[0, T+\epsilon')$ for some $\epsilon' > 0$. This contradiction proves that $J$ must be empty, i.e. \eqref{poincare-tension} holds for all $t \geq 0$. \end{proof} Combining the results of Theorem \ref{Convergence} and Lemmas \ref{LemZAssump} and \ref{LemApplyOttarson}, we have shown \begin{Satz} \label{ThmConvergenceOttarson} Let $Z$ satisfy $|Z|^2_{L^\infty} \leq 1/(4\pi^2)$ and assume that $E_{kin}(\gamma_0) < r(N)^2/(16\pi)$. Then the solution of \eqref{gradient-flow} subconverges to a magnetic geodesic. \end{Satz} Since all curves satisfying the assumptions of Theorem \ref{ThmOttarson} are contractible, the question of nontriviality of the limit curve $\gamma_\infty$ arises naturally.
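Note that every constant curve is a magnetic geodesic: if $\gamma\equiv p$ for some $p\in N$, then $\gamma'=0$ and hence $\tau(\gamma)=0=Z(\gamma')$, so that \eqref{first-variation} holds trivially. Subconvergence alone therefore gives no information on the nontriviality of $\gamma_\infty$.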
To discuss this question, we first observe that integrating the estimate \eqref{EqEstimate1} with respect to $t$ directly yields the following estimate on the kinetic energy under the flow: \begin{Lem} \label{LemEkinEstimate} Let \(\gamma:S^1\times [0,\infty)\to N\) be a solution of (\ref{gradient-flow}) and \((N,g)\) be a compact Riemannian manifold. If Theorem \ref{ThmOttarson} can be applied to the curves $\gamma_t$ for all $t \in [0,\infty)$, then we have \begin{equation} \int_{S^1}|\gamma'_t|^2ds\leq e^{\big(|Z|_{L^\infty}^2-\frac{1}{4\pi^2}\big)t}\int_{S^1}|\gamma'_0|^2ds. \end{equation} \end{Lem} This allows us to conclude that the flow in fact converges to a point in many cases: \begin{Cor} \label{trivial-limit} If the magnetic field $Z$ satisfies \begin{equation} |Z|^2_{L^\infty} < \frac{1}{4\pi^2}, \end{equation} then the flow converges to a trivial magnetic geodesic \(\gamma_\infty\), i.e.\ to a point. In fact, under this assumption on $Z$, any magnetic geodesic that satisfies \eqref{poincare-tension} is trivial. \end{Cor} \begin{proof} Lemma \ref{LemEkinEstimate} directly implies that $E_{kin}(\gamma_\infty)=0$, i.e. $\gamma_\infty$ is constant. An argument analogous to the one used in \cite{choi-parker} (Chapter 9) shows that under this condition, the flow does not only subconverge but actually converges. Moreover, if a curve $\gamma$ satisfies \eqref{first-variation} together with \eqref{poincare-tension}, we have $\int_{S^1} |\tau(\gamma)|^2ds \leq 4\pi^2|Z|^2_{L^\infty} \int_{S^1} |\tau(\gamma)|^2ds$. Hence, $\int|\tau(\gamma)|^2 ds= 0$ and \eqref{poincare-tension} implies that $\gamma$ is constant. \end{proof} \begin{Bem} The situation described in Corollary \ref{trivial-limit} is similar to the one for the heat flow for geodesics, see \cite{MR834094}, Theorem 4A and also \cite{choi-parker}, Section 9. \end{Bem} \begin{Bem}\label{RemZ_Eins_Optimal} As noted in Remark \ref{RemOttarson} (1), \eqref{poincare-tension} may be improved using the optimal constant $O(N,g)$. From the argument given in Corollary \ref{trivial-limit}, we see that we obtain convergence to a trivial limit in case $|Z|^2_{L^\infty} < O(N,g)$. A non-trivial limit $\gamma_\infty$ can only be obtained if $|Z|^2_{L^\infty} = O(N,g)$ (note that for larger values, Theorem \ref{ThmConvergenceOttarson} may no longer be applied!). In general, it is difficult to say what happens in this case of equality. However, we will later see from Example \ref{ExTorusPart1} that our result is in fact optimal in the following sense: In accordance with Corollary \ref{trivial-limit}, the flow converges to a point for $|Z|^2_{L^\infty} < 1 = O(T^2, g_{flat})$. For $|Z|^2_{L^\infty} = 1$, we do in fact obtain a nontrivial limit and for $|Z|_{L^\infty} > 1$, the flow does not converge in general. Finally, all periodic magnetic geodesics in this example are in fact contractible. \end{Bem} Theorem \ref{Dennislangzeit} and Theorem \ref{ThmConvergenceOttarson} in fact also allow us to conclude the existence of periodic magnetic geodesics via the heat flow method without assuming that $Z$ is small. Let $\Lambda > 0$ and let $\gamma : S^1 \times [0,\infty) \rightarrow N$ be a solution of \eqref{gradient-flow}, defined on the circle of length $2\pi$.
Then, a straightforward calculation shows that the rescaled curve $\gamma_\Lambda : S^1_{\Lambda^{-1}} \times [0,\infty) \rightarrow N, \ \gamma_\Lambda(s,t) := \gamma(\Lambda s, \Lambda^2 t)$ satisfies the rescaled equation \begin{align} \label{rescaledEOM} \dot{\gamma}_\Lambda &= \tau(\gamma_\Lambda) - \Lambda Z(\gamma'_\Lambda). \end{align} Using this rescaling technique, we obtain \begin{Cor} \label{CorRescaling} Given an arbitrary $Z \in \Gamma(\Hom(TN,TN))$, there exist solutions to \eqref{gradient-flow} which converge to (possibly trivial) closed magnetic geodesics with respect to $Z$, defined on a possibly rescaled loop. \end{Cor} \begin{proof} Let $O(N,g)$ denote the constant from Remark \ref{RemOttarson} (1), choose $\Lambda$ with $\Lambda^2 \leq O(N,g)/|Z|^2_{L^\infty}$ and set $Z_\Lambda := \Lambda Z$. Then Theorem \ref{Dennislangzeit} guarantees a solution $\gamma_\Lambda : S^1 \times [0,\infty) \rightarrow N$ to \eqref{rescaledEOM}. By Theorem \ref{ThmConvergenceOttarson} and the choice of $\Lambda$, this flow subconverges (as $t \rightarrow \infty$) to $\gamma_{\Lambda,\infty}$, which satisfies $\tau(\gamma_{\Lambda,\infty}) = Z_{\Lambda}(\gamma'_{\Lambda,\infty})$. Scaling by $\Lambda^{-1}$ as described above, the resulting map $\gamma : S^1_{\Lambda} \times [0,\infty) \rightarrow N, \ \gamma(s,t) := \gamma_\Lambda(\Lambda^{-1} s, \Lambda^{-2} t)$ solves the original equation \eqref{gradient-flow}. It is clear that $\gamma$ still subconverges to a solution of the equation for magnetic geodesics with the original $Z$ which, however, is defined on $S^1_{\Lambda}$. \end{proof} Of course, choosing $\Lambda^2 < O(N,g)/|Z|^2_{L^\infty}$, Corollary \ref{trivial-limit} implies that the limit will always be trivial. \begin{Bem} It is well known that in our (approximately flat) physical space $\R^3$, periodic magnetic geodesics (in fact circles) can be observed for \emph{homogeneous} magnetic fields of (almost) arbitrary strength $B_0$. This is included in Corollary \ref{CorRescaling} but not in Theorem \ref{ThmConvergenceOttarson} if $B_0$ is large. In fact, it is easy to see that we have to perform a rescaling to include these solutions: Setting charge and mass to $1$, a straightforward calculation shows that equality of the Lorentz and centrifugal forces translates into $2\pi/T = B_0$, where $T$ denotes the time of revolution. Since $T$ corresponds to the length of $S^1_\Lambda$, it is obvious that its length has to be ``adjusted''. \end{Bem} \subsection{The case of an exact magnetic field} In this section, we discuss the convergence of the gradient flow in the case that the magnetic field is exact. Under this assumption we can exploit the fact that \eqref{gradient-flow} is derived from the well-defined energy \(E(\gamma)\). \begin{Satz} \label{ThmConvExactCase} If the two-form $\Omega$ is exact, then the solution $\gamma$ of \eqref{gradient-flow} subconverges to a magnetic geodesic \(\gamma_\infty\). \end{Satz} \begin{proof} Using the fact that our evolution equation \eqref{gradient-flow} is the \(L^2\)-gradient flow of \(E(\gamma)\) we obtain \begin{equation} \label{energy-inequality-exact} E(\gamma_T)+\int_0^T\int_{S^1}|\dot\gamma_t|^2dsdt=\frac{1}{2}\int_{S^1}|\gamma'_T|^2ds+\int_{S^1}\gamma_T^\ast Ads+\int_0^T\int_{S^1}|\dot\gamma_t|^2dsdt=E(\gamma_0). \end{equation} Thus, we may directly conclude that \begin{equation} \label{exact-energy-inequality} \frac{1}{4}\int_{S^1}|\gamma'_T|^2ds+\int_0^T\int_{S^1}|\dot\gamma_t|^2dsdt\leq E(\gamma_0)+\vol(S^1)|A|^2_{L^\infty}.
\end{equation} Using \eqref{bochner1} and \eqref{bochner2} together with \eqref{maximum-principle-l2}, we obtain a uniform bound on \(|\gamma_t'|^2\) and \(|\dot\gamma_t|^2\) from \eqref{exact-energy-inequality}. Thus, the assumptions of Theorem \ref{Convergence} are satisfied and the evolution equation \eqref{gradient-flow} subconverges in \(C^2(S^1,N)\) to a magnetic geodesic \(\gamma_\infty\). Since the flow $\gamma$ is smooth, $\gamma_\infty$ is clearly homotopic to $\gamma_0$. \end{proof} Note that we did not use Ottarson's Theorem \ref{ThmOttarson} and hence do not have to restrict to contractible curves. Starting with initial conditions in a prescribed homotopy class, we thus obtain \begin{Cor} If $\Omega$ is exact, then each homotopy class of curves in $N$ contains a magnetic geodesic. \end{Cor} Note that in contrast to the heat flow for geodesics on manifolds with negative curvature, the magnetic geodesic need not be unique in its homotopy class. The argument which is used for the heat flow of ordinary geodesics (see \cite{MR0214004}) is no longer valid because of the presence of the magnetic contribution to the energy. We will see an explicit counterexample in Example \ref{ExHyperbolic}: $\mathbb{H}^2$ (or compact quotients thereof), equipped with a suitable magnetic field $Z$, allows for contractible nontrivial closed magnetic geodesics. The following statement shows that under additional assumptions on the initial conditions, the limit $\gamma_\infty$ will be nontrivial, even in the contractible case. \begin{Prop} \label{LemNonTriv1} Let $\Omega= dA$ be exact and $\gamma_\infty$ be the limit of the associated flow (which exists by Theorem \ref{ThmConvExactCase}) with initial condition $\gamma_0$. Moreover, assume $E(\gamma_0) \leq 0$ and, in addition, that $\gamma_0$ is not a constant curve if $E(\gamma_0)=0$. Then, $\gamma_\infty$ is a nontrivial closed magnetic geodesic. \end{Prop} \begin{proof} Let $\gamma$ be the solution to \eqref{gradient-flow}. By \eqref{energy-inequality-exact} applied to the interval $t \in [t_1,t_2]$, we have $E(\gamma_{t_2})\leq E(\gamma_{t_1})$ for any $t_2 \geq t_1$ and hence $E(\gamma_\infty) \leq E(\gamma_t)$ for any $t \geq 0$. If $E(\gamma_t) < 0$ for some $t$ (in particular for $t=0$), this proves the claim since the energy of a constant curve clearly vanishes. If $E(\gamma_t) = 0$ for all $t$, then we conclude from \eqref{energy-inequality-exact} that $\dot{\gamma}=0$ and $\gamma_\infty = \gamma_0$, which is nontrivial by assumption in this case. \end{proof} The existence of contractible initial conditions $\gamma_0$ satisfying $E(\gamma_0) \leq 0$ can in fact be ensured provided the field does not vanish: \begin{Cor} Assume that $\Omega = dA \neq 0$. Then there exists $\gamma_0 : S^1_\Lambda \rightarrow N$ for some $\Lambda > 0$ such that the resulting flow from Theorem \ref{Dennislangzeit} subconverges to a nontrivial magnetic geodesic. \end{Cor} \begin{proof} By assumption, the potential $A \in \Omega^1(N)$ of $\Omega$ cannot be closed. Thus, we can always find a small ball $B \subset N$ and a loop $c : S^1 \rightarrow B$ s.t. $\int_{S^1} c^\ast A \neq 0$. Reversing the orientation of $c$ if necessary, we may assume $\int_{S^1} c^\ast A < 0$. Rescaling $c$ as described above \eqref{rescaledEOM}, we see that $E_{kin}(c_\Lambda)$ becomes arbitrarily small if $\Lambda$ tends to zero, whereas the term $\int_{S^1_\Lambda} (c_\Lambda)^\ast A$ is independent of $\Lambda$.
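Indeed, writing $c_\Lambda(s) := c(\Lambda s)$ on the circle of length $2\pi/\Lambda$, as in the rescaling convention of \eqref{rescaledEOM}, a one-line computation (recorded here for convenience) gives
\begin{align*}
E_{kin}(c_\Lambda) = \frac{1}{2}\int_0^{2\pi/\Lambda} \Lambda^2 |c'(\Lambda s)|^2\, ds = \frac{\Lambda}{2}\int_0^{2\pi}|c'(u)|^2\, du = \Lambda\, E_{kin}(c),
\end{align*}
while the potential term $\int (c_\Lambda)^\ast A$ is unchanged, being invariant under reparametrization of the loop.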
Putting $\gamma_0 := c_\Lambda$ for an appropriate choice of $\Lambda$ yields $E(\gamma_0) \leq 0$ and we can apply Proposition \ref{LemNonTriv1}. \end{proof} \begin{Bem} \begin{enumerate} \item The previous result is similar to the one stated in Theorem 1 of \cite{MR2679767}, which however uses a slightly different functional. \item In case that $\Omega$ is not exact, the argument used in the proof of Proposition \ref{LemNonTriv1} breaks down since there is no longer a well-defined energy. Of course, $\Omega$ always admits local potentials $A_{loc}$ and the argument clearly remains valid provided one can show that the curves $\gamma_t$ stay inside the domain of $A_{loc}$. The examples in Section \ref{SecExamples} show that this may or may not be the case, see Remark \ref{RemTorusEx} \eqref{RemTorusExEnergy} and Remark \ref{RemSphere}. We are currently not aware of a condition ensuring the applicability of Proposition \ref{LemNonTriv1} in the non-exact case. \end{enumerate} \end{Bem} \section{Examples} \label{SecExamples} In this section we discuss some explicit examples on the two-dimensional torus $T^2$, on the two-dimensional sphere \(S^2\), and in the hyperbolic plane \(\mathbb{H}^2\). They illustrate the influence of different background geometries and initial conditions on the heat flow. Moreover, we believe it is helpful to discuss the abstract results from Section \ref{SecConvergence} in view of these examples. \subsection{Magnetic geodesics on the torus} \label{ExTorus} A general existence result for magnetic geo\-de\-sics on the two-dimensional flat torus has been obtained using methods from symplectic geometry in \cite{MR2036336} (Section 5). The heat flow in this specific case was studied in \cite{MR2551140}, p.457ff. Here, we want to make some further remarks concerning this example.\\ Let $C := S^1 \times \R \subset \R^3$ denote the cylinder of radius $1$. Using cylindrical coordinates $(\varphi,z) \in (-\pi,\pi) \times \R$, a curve on $C$ depending on some parameter $t$ may be represented as \begin{align*} \gamma_t &= (\cos(\varphi(s,t)),\sin(\varphi(s,t)),z(s,t)). \end{align*} The magnetic field $Z$ at $x \in C$ is defined by taking the vector product (in $\R^3$) of $v \in T_xC$ and the vector $B(x)$: \begin{align} \label{EqTorusZ} Z_x(v) &:= v \times B(x), & B=B_0(\cos(\varphi(x)),\sin(\varphi(x)),0). \end{align} Here, $B_0 \in \R$ is a constant describing the strength of the magnetic field. Since $B(x) \perp T_xC$, we clearly have $Z_x \in \mathrm{End}(T_xC)$. It follows directly from \eqref{EqTorusZ} that $\nabla^C Z = 0$ and thus, the associated two-form $\Omega$ (cf. \eqref{EqZOmega}) is closed. Finally, taking the quotient by $\Z$ in the $z$-direction, we obtain a field $Z \in \mathrm{End}(TT^2)$ enjoying the same properties. Note that the resulting induced field strength in $\Omega^2(T^2)$ is given by $\Omega_{T^2} = B_0 vol_{T^2}$. In particular, it is not exact. In contrast, it is easy to check that on $C$, we have $\Omega_{C} = -B_0\, d(z d\varphi)$, i.e. there exists a global potential $A = -B_0\, z d\varphi \in \Omega^1(C)$.\\ Before solving the flow equation, we briefly discuss the solution to the ODE \eqref{first-variation} for magnetic geodesics in case $B_0 \neq 0$. This is most easily done on the level of the universal coverings $\R^2 \rightarrow T^2$ and $\R^2 \rightarrow C$, respectively. In fact, viewing $\R^2$ as a subspace of $\R^3$, we have to determine the trajectories subject to a magnetic field of constant strength $B_0$ which is perpendicular to $\R^2$.
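In complex notation $\xi = \varphi + iz$ (anticipating the convention of \eqref{evolution-xi} below), the equation for magnetic geodesics on the universal cover becomes $\xi''=-iB_0\xi'$, which can be integrated explicitly (an elementary computation, included here for convenience):
\begin{align*}
\xi'(s) = \xi'(0)\,e^{-iB_0 s} \qquad \Longrightarrow \qquad \xi(s) = \xi(0) + \frac{i\,\xi'(0)}{B_0}\big(e^{-iB_0 s}-1\big).
\end{align*}
These are circles of radius $|\xi'(0)|/|B_0|$; note also that a solution defined on the circle of length $2\pi$ closes up precisely when $B_0 \in \Z$, which matches the values $B_0 = \pm k$ at which nontrivial limits will appear below.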
In particular, all the trajectories are circles in $\R^2$ (of radius $v/|B_0|$ for a trajectory of constant speed $v$), so they are closed and contractible. Projecting to $T^2$ or $C$, we conclude that all magnetic geodesics on these spaces subject to the magnetic field from \eqref{EqTorusZ} are closed and contractible.\\ The heat flow equation \eqref{gradient-flow} is equivalent to the following system of partial differential equations \[ \dot\varphi=\varphi''-B_0z',\qquad \dot z=z''+B_0\varphi', \] which can be rewritten in terms of the complex variable \(\xi=\varphi+iz\) as \begin{equation} \label{evolution-xi} \dot{\xi}=\xi''+iB_0 \xi'. \end{equation} The integrand of the magnetic term in the energy identity \eqref{energy-equality}, which determines the asymptotic behaviour of the system, can be computed explicitly: \[ \Omega(\dot{\gamma},\gamma')=\langle \dot{\gamma},Z(\gamma')\rangle=\langle \dot{\gamma},\gamma'\times B\rangle=B_0(z'\dot{\varphi}-\dot{z}\varphi'). \] We can integrate \eqref{evolution-xi} directly and obtain a family of solutions \begin{equation} \label{Eq_k_Mode} \xi_k=e^{iks}e^{-(B_0k+k^2)t}, \end{equation} where $k\in\Z$. The most general solution is obtained by summing over all \(k\). Similar to \cite{MR2551140} we now solve this system for different types of initial data.\\ \begin{enumerate} \item \label{ExTorusPart1} Let $k \in \Z\setminus\{0\}$. Prescribing initial conditions $(\varphi_k(s,0),z_k(s,0))=(a\cos (ks),b\sin (ks))$ for $a,b> 0$, we find the following solution: \begin{align*} \varphi_k(s,t)&=\frac{a+b}{2}\cos (ks) e^{-(kB_0+k^2)t}+\frac{a-b}{2}\cos (ks) e^{-(-kB_0+k^2)t},\\ z_k(s,t)&=\frac{a+b}{2}\sin (ks)e^{-(kB_0+k^2)t}-\frac{a-b}{2}\sin (ks) e^{-(-kB_0+k^2)t}. \end{align*} Putting $B_0=k=1$, we reproduce the solution in \cite{MR2551140}, Example 8a) and it is easy to see that the flow converges as $t \to \infty$. More generally, a brief inspection of the exponents shows that the flow (sub)-converges if and only if $|B_0| \leq |k|$. In case $|B_0| < |k|$, the flow in fact shrinks to a point and we only obtain a nontrivial limit for $B_0 = \pm k$. It is not hard to check that these limits are in fact magnetic geodesics. Explicit computation of the magnetic term in the energy identity \eqref{energy-equality} yields \begin{align*} \int_0^T\int_{S^1}\Omega(\dot{\gamma},\gamma')dsdt = -\pi abkB_0 &- \tfrac{\pi}{4}(b-a)^2kB_0 e^{-2T(k^2-kB_0)} \\ &+ \tfrac{\pi}{4}(b+a)^2kB_0 e^{-2T(k^2+kB_0)}. \end{align*} We see that in accordance with Remark \ref{RemGeneralConvergence}, this term is bounded below with respect to $T$ if and only if $|B_0|\leq |k|$, i.e. if and only if the flow (sub)-converges. \item \label{ExTorusPart2} Prescribing the initial conditions $(\varphi(s,0),z(s,0))=(s,\mu \cos(s))$ for $\mu > 0$, we find the following solution: \begin{align*} \qquad \varphi=s+\frac{\mu}{2}\sin s(e^{(B_0-1)t}-e^{-(B_0+1)t}),\qquad z=B_0t+\frac{\mu}{2}\cos s(e^{(B_0-1)t}+e^{-(B_0+1)t}). \end{align*} Note that this solution contains contributions from \eqref{Eq_k_Mode} for all $k$. It is obvious that this solution does not converge as $t\to\infty$ for any choice of $B_0 \in \R\setminus\{0\}$. Again, this is reflected in the critical term in \eqref{energy-equality} \begin{align*} \int_0^T\int_{S^1}\Omega(\dot{\gamma},\gamma')dsdt &= -2\pi B_0^2T + B_0\frac{\pi\mu^2}{4}(e^{-2T(1+B_0)}-e^{-2T(1-B_0)}), \end{align*} which is not bounded below with respect to $T$ for any choice of $B_0 \in \R\setminus\{0\}$. Note that the first term on the right hand side is multiplied by $B_0^2$.
Thus, we cannot change the asymptotic behaviour of the solution by changing the sign of \(B_0\), even if $|B_0|$ is small. \end{enumerate} One may of course produce other explicit solutions using the Fourier decomposition of $\xi$ from \eqref{Eq_k_Mode}. Comparing the findings in this particular example with the general results from Theorem \ref{ThmConvergenceOttarson} and Theorem \ref{ThmConvExactCase}, we can draw the following conclusions: \begin{Bem} \label{RemTorusEx} \begin{enumerate} \item In general, we cannot expect to obtain the (sub)convergence of the flow to a magnetic geodesic in case $\Omega$ is not exact unless the initial curve $\gamma_0$ is contractible. In fact, since all solutions in this example are contractible, the initial conditions, being in the same homotopy class, have to be contractible, too. Part (2) of the example shows an initial curve wrapping around the torus and we have seen that the corresponding flow never converges. \item Even if $\Omega$ is exact, we need additional assumptions to guarantee $|A|_{L^\infty} < \infty$ and hence (sub)-convergence of the flow to a magnetic geodesic. In fact, the flow from part \eqref{ExTorusPart2}, viewed as an element of $C^\infty(S^1 \times [0,\infty), C)$, is not subconvergent, which reflects the fact that the potential $A = -B_0\, z d\varphi$ is not essentially bounded on the non-compact cylinder $C$. \item An upper bound on $|Z|_{L^\infty}$ (such as the one used in Lemma \ref{LemZAssump}) is crucial to obtain convergence. In fact, choosing $k := 1 < B_0$ in part \eqref{ExTorusPart1} above provides an example of a divergent flow. \item Even if the flow is subconvergent, the limit need not be a magnetic geodesic. Setting $\mu := 0$ in part \eqref{ExTorusPart2}, we obtain the constant sequence of curves $\gamma_{t_m} \in C^\infty(S^1,T^2)$ for $t_m := 2\pi m$, $m \in \N$. We have already seen that $\gamma_\infty$ cannot be a magnetic geodesic because it winds one time around the torus. \item Provided $|B_0| \leq k$, part \eqref{ExTorusPart1} gives an example where Proposition \ref{LemNonTriv1} can be applied even though $\Omega$ is not exact. Moreover, we see that the condition $E(\gamma_0) \leq 0$ cannot be improved: Choosing $a=b=1$, we obtain $E(\gamma_0) = E_{kin}(\gamma_0) + E_A(\gamma_0) = \pi(k^2+kB_0)$. Thus, we find examples of a flow with positive total energy arbitrarily close to zero, which shrinks to a point. On the other hand, we also see that the aforementioned condition is not necessary since $B_0 = k$ yields an example with positive initial energy converging to a nontrivial magnetic geodesic. \label{RemTorusExEnergy} \item Part \eqref{ExTorusPart1} also illustrates the rescaling technique used in Corollary \ref{CorRescaling}: For parameter $k > 1$, we may consider the flow as map $S^1_{1/k} \times [0,\infty) \rightarrow T^2$. Rescaling it by $\frac{1}{k}$, we get a new flow, defined on $S^1_1 \times [0,\infty)$, which solves \eqref{rescaledEOM} for $\Lambda = \frac{1}{k}$. The rescaled flow converges provided $|\frac{B_0}{k}| \leq 1$ by Remark \ref{RemZ_Eins_Optimal}. This is equivalent to the condition $|B_0| \leq k$, which was obtained above by an explicit calculation. In particular, we find that convergence of the flow to magnetic geodesics may be obtained for arbitrarily large values of $|B_0|$ by suitable rescaling, in accordance with Corollary \ref{CorRescaling}.
\end{enumerate} \end{Bem} \subsection{Magnetic Geodesics on the two-dimensional sphere} \label{ExSphere} Now we study the case of a two-dimensional spherical target, \(N := S^2\subset\R^3\). Magnetic geodesics on \(S^2\) have been extensively studied in \cite{MR2788659}. A natural choice for a magnetic force is given by \(Z(\gamma')=B_0\gamma\times\gamma'\). Again, it is not hard to see that the associated 2-form is given by $\Omega = B_0 vol_{S^2}$, which is not exact. In this case, the equation for magnetic geodesics on \(S^2\) acquires the form \begin{equation} \label{magnetic-sphere} \gamma''=-|\gamma'|^2\gamma+B_0\gamma\times\gamma'. \end{equation} By a direct computation one can check that \begin{equation} \label{solution-sphere} \gamma(s)=(\sin\theta_0\cos(\frac{B_0}{\cos\theta_0}s),\sin\theta_0\sin(\frac{B_0}{\cos\theta_0}s),\cos\theta_0) \end{equation} solves \eqref{magnetic-sphere} for a fixed value of \(\theta_0\). However, note that this formula implicitly assumes that the solution is defined on a circle $S^1_r$ whose radius $r$ is an integer multiple of $\frac{\cos\theta_0}{B_0}$. The corresponding heat flow leads to the following system \begin{equation} \dot{\gamma}=\gamma''+|\gamma'|^2\gamma-B_0\gamma\times\gamma' \end{equation} with initial data \(\gamma_0\). Here, the relevant term in \eqref{energy-equality} can be expressed as \[ \Omega(\dot{\gamma},\gamma')=B_0\langle\dot{\gamma},\gamma\times\gamma'\rangle=B_0\det(\dot{\gamma},\gamma,\gamma') \] and depending on its sign, we expect that the evolution equation converges or not. Let us analyze this evolution equation for several different ansätze. \begin{enumerate} \item We use the following ansatz: \begin{equation} \label{EqSphereAnsatz1} \gamma(s,t) = (\sin\theta(t)\cos\varphi(s),\sin\theta(t)\sin\varphi(s),\cos\theta(t)) \end{equation} This ansatz leads to a system of differential equations, which can be simplified easily. From the equation for \(\gamma_3\) we obtain \[ \frac{\dot\theta}{\sin\theta}=B_0\varphi'-\varphi'^2\cos\theta. \] Equating the expressions for \(\gamma_1\) and \(\gamma_2\) yields \(\tan(\varphi)\varphi''=-\cot(\varphi)\varphi''\), which implies that \(\varphi''=0\). Hence, $\varphi(s) = c\cdot s$ and after a suitable rescaling, which only affects the value of $B_0$, we may assume \(\varphi(s)=s\). The equations for \(\gamma_i,i=1,2,3\) now lead to \begin{equation} \label{evolution-theta-sphere} \dot{\theta}=-\sin\theta\cos\theta+B_0\sin\theta = \sin\theta (B_0 - \cos\theta). \end{equation} This equation cannot be integrated directly for arbitrary \(B_0\). To understand the asymptotics of \eqref{evolution-theta-sphere}, we study the zeros of the right hand side. It can easily be checked that the right hand side vanishes for \(\theta=0,\pi,\theta_0:= \arccos(B_0)\) (the latter being defined for $|B_0| \leq 1$), and the functions $\theta(t)=0,\pi,\theta_0$ are clearly solutions of \eqref{evolution-theta-sphere}. Moreover, it is straightforward to check that \begin{align} \label{EqThetaSigns} \left. \begin{array}{c} \theta > \theta_0 \\ \theta < \theta_0 \end{array} \right\} &\Longrightarrow \left\{ \begin{array}{c} \sin\theta (B_0 - \cos\theta) > 0 \\ \sin\theta (B_0 - \cos\theta) < 0 \end{array} \right. \end{align} Thus, depending on the initial condition, the solution will converge to \(\theta_\infty=0\), \(\theta_\infty=\pi\) or, more interestingly, to \(\theta_\infty=\theta_0\), see also \cite{MR2961944}, Lemma 1.1.
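This stability picture can be confirmed by linearizing the right hand side of \eqref{evolution-theta-sphere} at its zeros: writing $f(\theta) := \sin\theta\,(B_0-\cos\theta)$ and assuming $0<|B_0|<1$ so that $\theta_0 \in (0,\pi)$ is defined, an elementary computation gives
\begin{align*}
f'(0) = B_0 - 1 < 0, \qquad f'(\pi) = -(B_0+1) < 0, \qquad f'(\theta_0) = \sin^2\theta_0 > 0,
\end{align*}
so the poles $\theta=0,\pi$ are stable equilibria of \eqref{evolution-theta-sphere}, while $\theta_0$ is unstable.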
Moreover, we see that if $\theta(0)\neq \theta_0$, then the flow equation \eqref{evolution-theta-sphere} causes $\theta(t)$ to approach one of the poles, i.e. $\lim_{t \rightarrow \infty}\theta(t) = 0, \pi$. The corresponding limit curve $\gamma_\infty$ will either be one of the poles of the sphere (and hence constant) or given by the curve \begin{equation} \gamma_\infty(s)=(\sqrt{1-B_0^2}\cos s,\sqrt{1-B_0^2}\sin s,B_0), \end{equation} which is a reparametrization of \eqref{solution-sphere}. We have seen above that unless we already start on this particular solution ($\theta(0) = \theta_0$), the flow will converge to one of the poles. Hence, \eqref{solution-sphere} represents a non-stable solution, whereas both trivial solutions are stable, at least within the limits of our ansatz \eqref{EqSphereAnsatz1}.\\ In addition, we check the critical term in \eqref{energy-equality} for this ansatz: \[ \Omega(\dot{\gamma},\gamma')=B_0\langle\dot{\gamma},\gamma\times\gamma'\rangle=B_0\varphi'\tfrac{d}{dt}(\cos\theta). \] Integrating this expression over \(S^1\) and over $[0,T]$ yields $2\pi B_0(\cos\theta(T)-\cos\theta(0))$, which remains bounded, in accordance with Remark \ref{RemGeneralConvergence} and the fact that this ansatz converges. \item We now use a different ansatz: \begin{equation} \gamma(s,t)=(\sin\theta(s)\cos\varphi(t),\sin\theta(s)\sin\varphi(t),\cos\theta(s)) \end{equation} The magnetic term now reads \(Z(\gamma')=B_0\theta'(-\sin\varphi(t),\cos\varphi(t),0)\) and the equation for \(\gamma_3\) then directly implies \(\theta''=0\). Multiplying the equation for \(\gamma_1\) with \(-\sin\varphi\), multiplying the equation for \(\gamma_2\) with \(\cos\varphi\) and adding up both contributions then yields \[ \dot{\varphi}=-B_0\frac{\theta'}{\sin\theta}. \] But this means that both sides have to be equal to a constant \(\lambda\), leading to \[ -B_0\theta'=\lambda\sin\theta. \] In addition, we know that \(\theta'\) is constant and, by periodicity of \(\theta\), equal to zero, so that the constraint forces \(\sin\theta=0\). Hence, the curve \(\gamma\) will stay at one of the poles of \(S^2\). The critical term in \eqref{energy-equality} takes the form \[ \Omega(\dot{\gamma},\gamma')=-B_0\dot{\varphi}\tfrac{d}{ds}(\cos\theta) \] and this expression again vanishes when integrated over \(S^1\). \end{enumerate} In addition to the comments at the end of Example \ref{ExTorus}, we can draw further conclusions concerning the heat flow approach from the current example on $S^2$: \begin{Bem} \label{RemSphere} \begin{enumerate} \item By \eqref{solution-sphere}, there exist nontrivial contractible periodic magnetic geodesics. However, the heat flow method is not appropriate to establish their existence since the nontrivial solution is unstable. Given a generic choice for the initial curve $\gamma_0$, the flow will converge to one of the poles. This behaviour is analogous to the properties of the ordinary heat flow for closed geodesics on $S^2$. \item The discussion of ansatz (1) also provides an example where the criterion in Proposition \ref{LemNonTriv1} for non-triviality of $\gamma_\infty$ explicitly fails. Choosing a potential $A_p$ on $S^2\setminus \{p\}$ for some point $p \in S^2$ and an initial curve $\gamma_0$ such that $\tfrac{1}{2}\int |\gamma_0'|^2ds + \int \gamma_0^\ast A_p\, ds \leq 0$, it may be shown that the flow either passes through $p$ at some time or converges to $p$. Thus, the argument in Proposition \ref{LemNonTriv1} breaks down.
\end{enumerate} \end{Bem} \subsection{Magnetic Geodesics on the hyperbolic plane} \label{ExHyperbolic} Finally, we assume that \(N\) is the hyperbolic plane \(\hyp^2\) with constant curvature \(-1\). A general existence result for magnetic geodesics on \(\hyp^2\) was recently established in \cite{MR2959932}. We choose the convention \[ \hyp^2 = \{ (\gamma_1,\gamma_2,\gamma_3) \in \R^3 \mid \gamma_1^2-\gamma_2^2-\gamma_3^2=1 \}, \] i.e.\ the axis of rotation of the hyperboloid points in the $\gamma_1$-direction. As in \cite{MR2959932}, p.6, we define a modified cross product between two vectors \(v,w\) by \begin{equation} v\tilde{\times}w=(v_3w_2-v_2w_3,v_1w_3-v_3w_1,v_1w_2-v_2w_1). \end{equation} The equation for magnetic geodesics is now given by \[ \gamma''=\gamma|\gamma'|^2+B_0\gamma\tilde{\times}\gamma' \] and as in the spherical case, an explicit solution can easily be found: \begin{equation} \label{solution-hyperbolic} \gamma(s)=(\cosh\theta_0,\sinh\theta_0\cos(\frac{B_0}{\cosh\theta_0}s),\sinh\theta_0\sin(\frac{B_0}{\cosh\theta_0}s)). \end{equation} Note that for \(B_0\to 0\), the solution shrinks to a point. This reflects the fact that there do not exist ordinary closed geodesics on \(\hyp^2\).\\ The evolution equation for magnetic geodesics then reads \begin{equation} \dot{\gamma}=\gamma''-\gamma|\gamma'|^2-B_0\gamma\tilde{\times}\gamma' \end{equation} for some given initial data \(\gamma_0\). Let us make the following ansatz \[ \gamma(s,t)=(\cosh\theta(t),\sinh\theta(t)\cos\varphi(s),\sinh\theta(t)\sin\varphi(s)). \] Similar to the spherical case this leads to the following two equations \begin{align*} \dot\theta=-\cosh\theta\sinh\theta\,\varphi'^2+B_0\sinh\theta\,\varphi',\qquad \varphi''=0. \end{align*} Thus, setting \(\varphi=s\), we obtain \[ \dot\theta=-\cosh\theta\sinh\theta+B_0\sinh\theta. \] As in the spherical case, we cannot integrate this equation directly. Thus, we again analyze the zeros of the right hand side, which are given by \(0\) and \(\theta_0 := \operatorname{arcosh} B_0\) (the latter being defined for \(B_0 \geq 1\)). Similar to \eqref{EqThetaSigns}, we moreover have \begin{align} \label{EqThetaSignsHyp} \left. \begin{array}{c} \theta > \theta_0 \\ \theta < \theta_0 \end{array} \right\} &\Longrightarrow \left\{ \begin{array}{c} \sinh\theta (B_0 - \cosh\theta) < 0 \\ \sinh\theta (B_0 - \cosh\theta) > 0 \end{array} \right. \end{align} In contrast to the spherical case, we now find that unless $\theta(0) = 0$, we have $\lim_{t\rightarrow \infty}\theta(t) = \theta_0$. Hence, depending on the initial condition, the solution will stay at $(1,0,0)$ if it starts at this point, or converges to the curve \begin{equation} \gamma_\infty=(B_0,\sqrt{B_0^2-1}\cos s,\sqrt{B_0^2-1}\sin s), \end{equation} which again coincides with \eqref{solution-hyperbolic} up to a reparametrization. \begin{Bem} In contrast to Example \ref{ExSphere}, the nontrivial solution on $\hyp^2$ is stable, at least within the limits of our ansatz. In particular, the heat flow method is appropriate for establishing the existence of the nontrivial magnetic geodesics \eqref{solution-hyperbolic}. Again, this reflects the properties of the heat flow for ordinary geodesics, where critical points are stable if the manifold has strictly negative curvature. Due to the magnetic terms in \eqref{second-variation}, we do not fully understand the role of curvature for magnetic geodesics at present. \end{Bem} \bibliographystyle{alpha}
\section{Introduction} The Higgs boson was hypothesised as a remnant of the Higgs field that was responsible for the electroweak symmetry breaking about half a century ago~\cite{higgs}. Understanding the mechanism for electroweak symmetry breaking, especially by testing for the presence or absence of the standard model (SM) Higgs boson, has been a major goal of particle physics and a central part of the Fermilab Tevatron physics program. Both the CDF and D0 collaborations have performed new combinations of multiple direct searches for the standard model Higgs boson~\cite{tevcomb}. The new searches include more data, additional channels, and improved analysis techniques compared to previous analyses. Results are derived from the complete Tevatron Run II dataset, with a measured integrated luminosity of 10 fb$^{-1}$ of proton-antiproton data. The searches are performed for assumed Higgs masses between 90 and 200 GeV/c$^2$. The global fit of the electroweak precision data, including recent top-quark and $W$ boson mass measurements from the Tevatron~\cite{mtop,mw}, constrains the Higgs mass $m_H$ to be less than 152 GeV/c$^2$ at the 95\% confidence level (CL)~\cite{ewfit}. The direct searches from LEP~\cite{lep}, the Tevatron~\cite{tevcomb}, and the LHC~\cite{cms,atlas} restrict the allowed Higgs mass to lie between 116.6 and 119.4 GeV/c$^2$ or between 122.1 and 127 GeV/c$^2$ at the 95\% CL. Recently both LHC experiments~\cite{discoverycms, discoveryatlas} observed local excesses above the background expectations for a Higgs boson mass of approximately 125 GeV/c$^2$. Much of the power of the LHC searches comes from $gg\rightarrow H$ production and Higgs boson decays to $\gamma\gamma$, $W^+W^-$, and $ZZ$, which probe the couplings of the Higgs boson to other bosons. In the allowed mass range, the Tevatron experiments are particularly sensitive to the associated production of the Higgs boson with a weak vector boson in the $b\bar b$ channel, which probes the Higgs boson's couplings to $b$ quarks. The Tevatron collider produces proton and anti-proton collisions at a center-of-mass energy of 1.96 TeV with a record luminosity of $4.3\times 10^{32}$~cm$^{-2}$s$^{-1}$. The Tevatron delivered close to 12 fb$^{-1}$ to each experiment before the shutdown on 30 September 2011, after 28 successful years of running. Both the CDF and D0 detectors are general-purpose detectors, which provide excellent tracking, lepton identification, jet finding, and missing transverse energy (\met) detection. The details can be found elsewhere~\cite{cdf,d0}. \section{Higgs Production and Decays} The dominant Higgs production processes at the Tevatron are gluon-gluon fusion ($gg\rightarrow H$) and associated production with a $W$ or $Z$ boson~\cite{higgs-xsec}. The cross sections for the production of the SM Higgs boson and its decay branching fractions are summarized in Fig.~\ref{fig:xbr-decay} as a function of the Higgs mass between 100 and 200 GeV/c$^{2}$. The cross section for $WH$ production is twice that of $ZH$ and is about a factor of 10 smaller than that of $gg\rightarrow H$. The Higgs boson decay branching fraction is dominated by $H\rightarrow b\bar b$ for the low-mass Higgs ($m_H < 135$ GeV/c$^2$) and by $H\rightarrow W^+W^-$ or $Z Z^*$ for the high-mass Higgs ($m_H>135$ GeV/c$^2$). A search for a low-mass Higgs boson in the $gg\rightarrow H\rightarrow b\bar b$ channel is extremely challenging because the $b\bar b$ QCD production rate is many orders of magnitude larger than the Higgs boson production rate.
Requiring the leptonic decay of the associated $W$ or $Z$ boson greatly improves the expected signal-to-background ratio in these channels. As a result, associated production with $H\rightarrow b\bar b$ is the most promising channel for the low-mass Higgs boson searches. For the high-mass Higgs, $H\rightarrow W^+W^-$ modes with leptonic decays provide the greatest sensitivity. The secondary channels of $H\rightarrow\gamma\gamma$, $H\rightarrow \tau^+\tau^-$, and $t\bar t H$ are also considered at the Tevatron. Finally, all the channels have to be combined to achieve the best SM Higgs sensitivity. \begin{figure}[htpb] \centerline{\psfig{file=tev_higgs_xsec.eps,width=6.7cm} \psfig{file=sm_br.eps,width=6.7cm}} \vspace*{8pt} \caption{SM Higgs production cross section at the Tevatron (left) and its decay branching ratio (right) as a function of the Higgs boson mass.\label{fig:xbr-decay}} \end{figure} \section{Search Strategies} The challenge for the standard model Higgs search at the Tevatron is that the Higgs signal is tiny compared to that of other SM processes with the same final states. The search strategies employed by the CDF and D0 collaborations are quite similar and have been evolving constantly over time. We first maximize the signal acceptance by using efficient triggers, excellent lepton identification, and powerful $b$-tagging, which can improve the signal-to-background ratio up to the 1\% level. Then we use multivariate analysis (MVA) to exploit the kinematic differences between the signal and background, which can further enhance the signal-to-background ratio up to the 10\% level in the high-score regions. The same strategies were used to help discover the single-top and diboson processes at the Tevatron, which provides solid ground for isolating a small signal from a huge background. For the low-mass $H\rightarrow b\bar b$ signatures, we look for a $b\bar b$ mass resonance produced in association with a $W$ or $Z$ boson, where the $W$ decays into $l\nu$ or the $Z$ decays into $l^+l^-$ or $\nu\bar \nu$. The $WH\rightarrow l\nu b\bar b$ channel is the most sensitive one, giving one high $P_T$ lepton, large \met, and two $b$-jets. Before $b$-tagging, the sample is dominated by $W$ + light-flavor jets, which provides ideal control data to test the background modeling. For the high-mass $H\rightarrow W^+W^-$ signatures, we look for the Higgs boson decaying into a $W^+W^-$ pair in inclusive Higgs events, which leads to many interesting final states. The most sensitive channel is the one in which both $W$ bosons decay leptonically, giving an opposite-signed dilepton pair, large \met, and possibly jets from initial state radiation or other production processes. Because of the undetected neutrinos in the final state, the Higgs mass cannot be reconstructed. We have to rely on the event kinematics to distinguish signal from background. For example, the $\Delta\phi$ between the two leptons from the Higgs decay tends to be smaller than for the background because the Higgs is a scalar particle. We can further improve the separation between signal and background by combining the $\Delta\phi$ with other kinematic variables in the event using a multivariate discriminant. \section{Recent Improvements} Since there are two $b$-quark jets from the low-mass Higgs decay, improving $b$-tagging is crucial. Both CDF and D0 use MVA $b$-tagging to exploit the decay of long-lived $B$ hadrons as displaced tracks/vertices.
The typical efficiency is about 40-70\% with a mistag rate of 1-5\% per jet. Recently CDF combined their existing $b$-taggers into a Higgs Optimized $b$-tagger (HOBIT)~\cite{hobit} using a neural network tagging algorithm, based on sets of kinematic variables sensitive to displaced decay vertices and tracks within jets with large transverse impact parameters relative to the hard-scatter vertices. Using an operating point which gives an equivalent rate of false tags, the new algorithm improves upon previous $b$-tagging efficiencies by $\approx$ 20\%. Fig.~\ref{fig:beff-mistag} compares the $b$-tag efficiency versus mistag rejection for the existing taggers and for the HOBIT $b$-tagger (black curve). The $b$-tag efficiency is calibrated using $t\bar t$ events selected in the $W$ + three or more jets sample and the $b$-enriched inclusive electron data, while the mistag rate is determined using the $W$ + one jet sample. The ratio of the $b$-tag efficiency per $b$-jet measured in data to that in the Monte Carlo is used as a scale factor to correct for differences in the Monte Carlo modeling. \begin{figure}[htpb] \centerline{\psfig{file=nim_roc_comp.eps,width=6.7cm}} \vspace*{8pt} \caption{A comparison of the purity-efficiency tradeoffs for HOBIT vs other $b$-taggers at CDF.\label{fig:beff-mistag}} \end{figure} To discriminate Higgs signal events from background, an MVA improves the background rejection, with a sensitivity gain of about 25\% compared to using a single variable alone, such as the dijet mass. We can further improve the MVA by training against different backgrounds and by splitting events into sub-channels based on S/B, e.g.\ lepton type and number of jets. CDF trains the $ZH\rightarrow llbb$ analysis separately against $t\bar t$, $Z+c\bar c$ or $c$, and diboson backgrounds to build the final discriminant. \section{Low-Mass Searches} We describe the searches for the low-mass Higgs boson at the Tevatron in some detail. \subsection{$WH\rightarrow l\nu b\bar b$} One of the golden channels for the low-mass Higgs boson search is the Higgs produced in association with a $W$ boson, with $WH\rightarrow l\nu b\bar b$~\cite{cdfwh, d0wh}. We select events with one isolated high $P_T$ lepton (electron, muon, or isolated track), large missing transverse energy, and two or three jets, of which at least one is required to be $b$-tagged as containing a weakly-decaying $B$ hadron. Events with more than one isolated lepton are rejected. For the multivariate discriminant, CDF trained a Bayesian neural network discriminant (BNN) in the $W$ + two and three jet samples for each Higgs mass, separately for each lepton type, jet multiplicity, and $b$-tagging category. For the D0 $WH\rightarrow l\nu b\bar b$ analyses, the data are split by lepton type, jet multiplicity, and the number of $b$-tagged jets, similar to CDF. The outputs of boosted decision trees (BDT), trained separately for each sample and for each Higgs boson mass, are used as the final discriminating variables. We perform a direct search for an excess of events in the signal region of the final discriminant from each event category. Fig.~\ref{fig:wh-output} shows the output of the final discriminants optimized for a 115 GeV/c$^2$ Higgs signal in the double $b$-tagged $W$ + two jets data from CDF and D0, respectively. The data and background predictions are in good agreement. The expected Higgs signals are also shown, but rescaled by a large factor.
\begin{figure}[htpb] \centerline{\psfig{file=WH_2JET_HobitTHobitT_CEMCMUPCMXISOTRKPHX_bnnwh115.eps,width=6.7cm} \psfig{file=d0_wh_mva_double.eps,width=6.7cm}} \vspace*{8pt} \caption{The final discriminants for a 115 GeV/c$^2$ Higgs signal are shown in the $W$ + two jets sample after requiring 2 $b$-tags, for CDF's BNN (left) and D0's BDT (right), respectively.\label{fig:wh-output}} \end{figure} Since there is no significant excess of signal observed in the data, we set an upper limit at 95\% CL on the Higgs production cross section times branching ratio with respect to the SM predictions as a function of Higgs mass, as shown in Fig.~\ref{fig:whlimits}. For $m_H=125$ GeV/c$^2$, CDF set an observed (expected) upper limit at 4.9(2.8) while D0 set a limit at 6.2(4.8). These single-channel limits are not yet sensitive to the SM Higgs boson on their own, and we need to combine all other channels, including both CDF and D0 results. \begin{figure}[htpb] \centerline{\psfig{file=cdf_wh_hobit_all.eps,width=6.7cm} \psfig{file=d0_wh_all.eps,width=6.7cm}} \vspace*{8pt} \caption{Observed and expected 95\% CL upper limits on SM Higgs production as a function of Higgs boson mass in the $WH\rightarrow l\nu b\bar b$ channel from CDF (left) and D0 (right), respectively.\label{fig:whlimits}} \end{figure} \subsection{$ZH\rightarrow l^+l^- b\bar b$} Another interesting channel for the low-mass Higgs boson search is $ZH\rightarrow l^+l^-b\bar b$~\cite{cdfllbb,d0llbb}. It provides a clean signature, but has a low event yield due to the small branching fractions of $Z\rightarrow e^+e^-$ and $\mu^+\mu^-$. We select events with two high $P_T$ leptons from the $Z$ decay and two or three jets. Events are further divided based on lepton type, jet multiplicity, and the number of $b$-tagged jets, similar to $WH\rightarrow l\nu b\bar b$. To increase the signal acceptance, D0 loosens the selection criteria for one of the leptons to include an isolated track not reconstructed in the muon detector or an electron from the inter-cryostat region of the D0 calorimeter. CDF uses neural networks to select loose dielectron and dimuon candidates. D0 applies a kinematic fit to optimize the reconstruction while CDF corrects jet energies for the missing $E_T$ using a neural network approach. D0 uses random forests of decision trees to provide the final discriminant for setting limits. CDF utilizes a multi-layer discriminant based on neural networks, where separate discriminant functions are used to define four separate regions of the final discriminant function. Fig.~\ref{fig:llbb-output} shows the final discriminant optimized for a Higgs signal ($m_H=115$ GeV/c$^2$) in the $b$-tagged events from CDF and the double $b$-tagged events from D0, respectively. There appears to be some excess of events in the high-score signal region, but it is not yet statistically significant. CDF set an observed (expected) upper limit at 95\% CL on the Higgs cross section times branching ratio over the standard model prediction at 7.2(3.6) while D0 set a limit at 6.9(5.9) for a Higgs mass of 125 GeV/c$^2$.
\begin{figure}[htpb] \centerline{\psfig{file=cdf_llbb_sort.eps,width=6.7cm} \psfig{file=d0_llbb_mva.eps,width=6.7cm}} \vspace*{8pt} \caption{The final discriminants are shown in the $b$-tagged $Z$ + two or three jets events from CDF (NN output, left) and the double $b$-tagged $Z$ + two jets events from D0 (RF output, right), respectively.\label{fig:llbb-output}} \end{figure} \subsection{$WH, ZH\rightarrow $\met$b\bar b$} We also search for the Higgs boson in the $ZH$ and $WH$ channels where the $Z$ decays into two neutrinos or the lepton from the $W$ decay is undetected~\cite{cdfmetbb,d0metbb}. This signature has a large signal rate as well as a large QCD-multijet background. However, the final state is relatively clean, containing two high $E_T$ jets and large missing transverse energy. We require \met$> 50 $ GeV and two $b$-tagged jets. Both CDF and D0 use a track-based missing transverse momentum calculation as a discriminant against false \met. In addition, both CDF and D0 utilize multivariate techniques, a neural network for CDF and a boosted decision tree for D0, to further discriminate against the multi-jet background. The final discriminant is obtained for a Higgs signal ($m_H= 115$ GeV/c$^2$) by combining the dijet mass, track \met, and other kinematic variables; the outputs are shown in Fig.~\ref{fig:metbb-output} for CDF and D0, respectively. The data agree well with the background predictions; CDF set an observed (expected) upper limit at 95\% CL on the Higgs cross section times branching ratio over the standard model prediction at 6.8(3.6) while D0 set a limit at 3.8(4.3) for a Higgs mass of 125 GeV/c$^2$. \begin{figure}[htpb] \centerline{\psfig{file=cdf_metbb_NN_ss.eps,width=6.7cm} \psfig{file=d0_metbb_mva_tight.eps,width=6.7cm}} \vspace*{8pt} \caption{The final discriminants are shown in the \met + two jets sample after 2 $b$-tags for CDF (left) and after the tight tag for D0 (right), respectively.\label{fig:metbb-output}} \end{figure} \section{High-Mass Searches} For the high-mass signatures, we look for the Higgs boson decaying into a $W^+W^-$ pair in inclusive Higgs events, which leads to many interesting final states~\cite{cdfww,d0ww}. The most sensitive channel is the one in which both $W$ bosons decay leptonically, giving an opposite-signed dilepton pair, large \met, and possibly jets from initial state radiation. The presence of neutrinos in the final state prevents the precise reconstruction of the Higgs boson mass. We have to rely on the event kinematics, which distinguish signal from background based on the scalar nature of the Higgs boson. We also include the processes $WH\rightarrow WW^+W^-$ and $ZH\rightarrow Z W^+W^-$, which give rise to like-sign dilepton and trilepton final states. Fig.~\ref{fig:dphicdf} shows the $\Delta\phi$ distribution of the two opposite-signed leptons in the zero-jet bin. The red line is for the signal, which prefers a smaller $\Delta\phi$ than most backgrounds. By combining the $\Delta\phi$ with other kinematic variables we obtain a multivariate discriminant, shown in Fig.~\ref{fig:dphicdf}, that improves the analysis significantly. We set a 95\% CL upper limit on the production cross section times branching ratio over the standard model prediction as a function of the tested Higgs mass after combining all $H\rightarrow W^+W^-$ channels, including the low-mass dilepton, same-sign dilepton, and trilepton channels from $WH$ and $ZH$, as shown in Fig.~\ref{fig:wwlimit}. CDF observes some deficit near 165 GeV/c$^2$ while D0 observes a broad excess, but the two results are consistent with each other.
\begin{figure}[htpb] \centerline{\psfig{file=cdf_ww_os_dphi.eps,width=6.7cm} \psfig{file=TemplateStackHWW165_HighSB.eps,width=6.7cm}} \vspace*{8pt} \caption{The $\Delta\phi$ distribution of the two opposite-signed leptons in events with no jets (left) and the final multivariate discriminant (right).\label{fig:dphicdf}} \end{figure} \begin{figure}[htpb] \centerline{\psfig{file=cdf_ww_all.eps,width=6.7cm} \psfig{file=D0_ww_all.eps,width=6.7cm}} \vspace*{8pt} \caption{Observed and expected 95\% CL upper limits on SM Higgs production as a function of Higgs boson mass in the $H\rightarrow W^+W^-$ channel from CDF (left) and D0 (right), respectively.\label{fig:wwlimit}} \end{figure} \section{Secondary Searches} Other searches are also considered for the $H\rightarrow \tau^+\tau^-$ decay~\cite{cdftau,d0tau}, the $H\rightarrow\gamma\gamma$ decay~\cite{cdfgg,d0gg}, and the $t\bar t H$ production~\cite{cdftth}. Fig.~\ref{fig:secondsearch} shows their 95\% CL upper limits on the production cross section times branching ratio with respect to the SM prediction, which are about a factor of ten above the SM expectation. But they do help to achieve the best Higgs sensitivity at the Tevatron. \begin{figure}[htpb] \centerline{\psfig{file=D0_tautau_limits.eps,width=4.7cm} \psfig{file=cdf_newlimitsBless_tth_limit.eps,width=4.7cm} \psfig{file=tevgamgambayeslimits19jun2012.eps,width=4.7cm}} \vspace*{8pt} \caption{The limits obtained from the searches for $H\rightarrow \tau^+\tau^-$ from D0 (left), $t\bar t H$ from CDF (middle), and $H\rightarrow \gamma\gamma$ from the Tevatron (right), respectively.\label{fig:secondsearch}} \end{figure} \section{Tevatron Combinations} We have searched for all possible SM Higgs production and decay modes, and set limits with respect to the nominal SM predictions. The CDF and D0 results are in good agreement, and we combine them to obtain the final Tevatron Higgs sensitivity. To check the Tevatron Higgs sensitivity, we use the log-likelihood ratios (LLR) with different signal hypotheses to test the expected sensitivity as a function of Higgs mass, as shown in Fig.~\ref{fig:LLR}. The black dots are for the background-only hypothesis, the red dots are for the signal-plus-background hypothesis, and the solid curve is for the observed data. The colored bands indicate the 1 and 2 sigma widths of the LLR distribution for the background-only hypothesis. The separation between the background-only and the signal-plus-background curves provides a measure of the search sensitivity, which is about 2 sigma for a Higgs boson mass of 125 GeV/c$^2$. The data seem consistent with the signal-plus-background hypothesis between 115 and 135 GeV/c$^2$. \begin{figure}[htpb] \centerline{\psfig{file=tevatronSMComboLLRFeb24.eps,width=6.7cm}} \vspace*{8pt} \caption{The Tevatron combined LLR distributions as a function of Higgs mass.\label{fig:LLR}} \end{figure} All of the searches for the SM Higgs boson at the Tevatron are combined together for the best sensitivity~\cite{tevcomb}. We are able to exclude the mass ranges $100<m_H<106$ GeV/c$^2$ and $147<m_H<179$ GeV/c$^2$, with comparable expected exclusions of $100<m_H<120$ GeV/c$^2$ and $141<m_H<184$ GeV/c$^2$, as shown in Fig.~\ref{fig:tevlimit}. An excess of events is observed in the mass range between 115 and 135 GeV/c$^2$, with a maximum local significance of 2.7 standard deviations (sigma) at $m_H$ = 120 GeV/c$^2$, where the expected local significance for a SM Higgs signal is 2.0 sigma.
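For reference, a local p-value $p$ is quoted here as an equivalent number of Gaussian standard deviations $z$ via the usual one-sided convention (the standard choice in such combinations, recalled here as a reminder),
\begin{equation*}
p = 1 - \Phi(z) = \tfrac{1}{2}\,\mathrm{erfc}\!\big(z/\sqrt{2}\big),
\end{equation*}
so that, e.g., $z = 2.7$ corresponds to $p \approx 3.5\times 10^{-3}$.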
When corrected for the look-elsewhere effect (LEE), which accounts for the possibility of selecting the strongest of several random excesses in the range $115<m_H<200$ GeV/c$^2$, the global significance of the excess is 2.2 sigma. We also combined the results for different decay modes to see where the excess comes from. Fig.~\ref{fig:bbwwlimit} shows the combined limits for $H\rightarrow b\bar b$ and $H\rightarrow W^+W^-$ separately. The observed limit in the $H\rightarrow b\bar b$ channel is more than 2 sigma higher than expected in the mass range between 115 and 135 GeV/c$^2$, and accounts for the majority of the excess in this mass region. \begin{figure}[htpb] \centerline{\psfig{file=tev28febsmbayeslimits.eps,width=6.7cm}} \vspace*{8pt} \caption{The Tevatron combined Higgs limit as a function of tested Higgs mass.\label{fig:tevlimit}} \end{figure} \begin{figure}[htpb] \centerline{\psfig{file=tevbbbayeslimits28feb.eps,width=6.7cm} \psfig{file=tevww28feblimits.eps,width=6.7cm}} \vspace*{8pt} \caption{The Tevatron combined Higgs limit as a function of tested Higgs mass in the $H\rightarrow b\bar b$ (left) and $H\rightarrow W^+W^-$ (right) decays separately.\label{fig:bbwwlimit}} \end{figure} Given the excess, we fitted the signal production cross section times branching ratio, normalized to the SM expectation, as a function of Higgs mass, as shown in Fig.~\ref{fig:signalstrength}. The obtained signal strength is consistent with the SM Higgs signal in the mass range between 115 and 135 GeV/c$^2$. \begin{figure}[htpb] \centerline{\psfig{file=tevatronSMComboXsecFitFeb24.eps,width=6.7cm}} \vspace*{8pt} \caption{The fitted signal strength as a function of the Higgs mass, showing that the data are consistent with an SM Higgs signal in the mass range between 115 and 135 GeV/c$^2$.\label{fig:signalstrength}} \end{figure} To further check the compatibility with an SM Higgs signal at $m_H=125$ GeV/c$^2$, we compared the LLR obtained by injecting a Higgs signal of $m_H=125$ GeV/c$^2$ into background-only pseudo-experiments (shown in Fig.~\ref{fig:injecting}) with the LLR observed in the data as shown in Fig.~\ref{fig:LLR}; the two are consistent. The distribution is quite broad due to the fact that the final discriminant is optimized not for mass resolution but for signal-background separation. Finally, it is worth noting that the Tevatron has made significant progress toward achieving SM Higgs sensitivity. The sensitivity gain obtained over time appears to scale with the added luminosity itself, instead of only with the square root of the luminosity. \begin{figure}[htpb] \centerline{\psfig{file=tevIchepMH125signalInjectionLLR_smx1.eps,width=6.7cm}} \vspace*{8pt} \caption{The expected LLR after injecting an SM Higgs signal at $m_H=125$ GeV/c$^2$ into background-only pseudo-experiments.\label{fig:injecting}} \end{figure} \section{Conclusion} In conclusion, with the full dataset and many years of hard work, the Tevatron has finally achieved SM Higgs sensitivity over most of the mass range up to 185 GeV/c$^2$. We observe an excess of events in the data compared with the background predictions, which is most significant in the mass range between 115 and 135 GeV/c$^2$, consistent with the Higgs-like particle recently observed by ATLAS and CMS. The largest local significance is 2.7 standard deviations, corresponding to a global significance of 2.2 standard deviations.
We also combine the separate searches for $H\rightarrow b\bar b$ and $H\rightarrow W^+W^-$, and find that the excess is concentrated in the $H\rightarrow b\bar b$ channel, although the results in the $H\rightarrow W^+W^-$ channel are still consistent with the possible presence of a low-mass Higgs boson. \section*{Acknowledgment} We would like to thank the organizers of the Phenomenology 2012 Symposium for a wonderful conference with excellent presentations, and the CDF and D0 collaborations for the results presented. A special thanks to Mr. Chee-Hok Lim for the invitation to prepare this write-up.
\section{Introduction} For aerodynamic flows, a passive flow control device (flap) inspired from the self-actuating covert feathers of birds has been shown to improve lift at post-stall angles of attack \citep{bramesfeld2002experimental,duan2021covert}. In particular, when the flap is mounted via a torsional spring, further aerodynamic benefits can be obtained compared with a free (zero hinge stiffness) or static configuration \citep{rosti2018passive}. These added benefits arise from rich fluid-structure interaction (FSI) between the flap and vortex dynamics \citep{nair2022fluid}. This outcome teases a question: can additional lift enhancement be achieved if the flap motion were controlled to yield more favorable flapping amplitudes and phases relative to key flow processes? To address this question, we propose a hybrid active-passive flow control method to adaptively tune the flap stiffness. That is, the flap dynamics are \emph{passively} induced by the FSI, according to the \emph{actively} modulated hinge stiffness. This hybrid approach could incur less expense as compared to a fully active control method where the flap deflection is controlled using a rotary actuator. Our focus is on the design of a control algorithm that can actuate the hinge stiffness to provide aerodynamic benefits without accounting for how these stiffness changes are implemented, and on explaining the physical mechanisms that drive these benefits. We note, however, that there are various ways of achieving stiffness modulation in practice via continuous variable stiffness actuators (VSA) \citep{wolf2015variable}, used extensively in robotics \citep{ham2009compliant}, wing morphing \citep{sun2016morphing}, \emph{etc}. A discrete VSA restricts the stiffness to vary across a fixed set of stiffness levels, but it weighs less and requires lower power \citep{diller2016lightweight}. Historically, linear approximations of fundamentally nonlinear systems have been used to design optimal controllers \citep{kim2007linear}. While these linear techniques have been effective in stabilizing separated flows at low $Re\sim \mathcal{O}(10^2)$, where the base state has a large basin of attraction, their effectiveness is compromised at larger $Re$ \citep{ahuja2010feedback}. These challenges are exacerbated by the nonlinear FSI coupling between the flap and vortex shedding of interest here. Model predictive control (MPC) uses nonlinear models to make real-time predictions of the future states to guide the control actuations. The need for fast real-time predictions necessitates the use of reduced-order models, where the control optimization problem is solved using a reduced system of equations \citep{peitz2020data}. Machine learning in fluid mechanics has provided further avenues for deriving more robust reduced nonlinear models to be used with MPC \citep{bieker2020deep,baumeister2018deep,mohan2018deep}. However, these reduced-order modeling efforts remain an area of open investigation, and would be challenging for the strongly coupled flow-airfoil-flap FSI system. We therefore utilize a model-free, reinforcement learning (RL) framework to develop our controller. RL has recently gained attention in fluid mechanics \citep{garnier2021review}, and is used to learn an effective control strategy by trial-and-error via stochastic agent-environment interactions \citep{sutton2018reinforcement}.
RL has been successfully applied to attain drag reduction \citep{rabault2019artificial,paris2021robust,fan2020reinforcement,li2022reinforcement} and shape optimization \citep{viquerat2021direct}, and to understand swimming patterns \citep{verma2018efficient,zhu2021numerical}. In this work, we develop a closed-loop feedback controller using deep RL for our proposed hybrid control approach consisting of a tunable-stiffness covert-inspired flap. We train and test this controller using high-fidelity fully coupled simulations of the airfoil-flap-flow dynamics, and demonstrate the effectiveness of the variable-stiffness control paradigm compared with the highest-performing passive (single-stiffness) case. We explain the lift-enhancement mechanisms by relating the large-amplitude flap dynamics to those of the vortex formation and shedding processes around the airfoil. \section{Methodology} \subsection{Hybrid active-passive control} \label{hybridcontrol} The problem setup is shown in Fig.~\ref{nnprob}, which consists of a NACA0012 airfoil of chord $c$ at an angle of attack of $20^\circ$ and $\reynolds = 1 {,}000$, where significant flow separation and vortex shedding occur. A flap of length $0.2c$ is hinged on the upper surface of the airfoil via a torsional spring with stiffness $\ensuremath{k_{\defl}}$, where $\ensuremath{\beta}$ denotes the deflection of the flap from the airfoil surface. In the passive control approach \citep{nair2022fluid}, $\ensuremath{k_{\defl}}$ was fixed and maximum lift was attained at $\ensuremath{k_{\defl}}=0.015$. In our hybrid active-passive control, the stiffness is a function of time, $\ensuremath{k_{\defl}}(t)$, determined by an RL-trained closed-loop feedback controller described in Sec.~\ref{rlgeneral}. While the stiffness variation is allowed to take any functional form, it is restricted to vary in $\ensuremath{k_{\defl}}(t) \in [10^{-4}, 10^{-1}]$, similar to the range of stiffness values considered in the passive control study. The mass and location of the flap are fixed at $\ensuremath{m_{\defl}}=0.01875$ and $\ensuremath{60\%}$ of the chord length from the leading edge, chosen here since they induced the maximal lift benefits in the passive (single-stiffness) configuration (\emph{cf.}\ Figs.~\ref{fvv1}--\ref{fvv4} for vorticity contours at four time instants in one periodic lift cycle for this highest-lift single-stiffness case). \begin{figure} \centering \includegraphics[scale=1]{Paper-figure0.pdf} \vspace{-0.9cm} \caption{Schematic of the problem setup and RL framework.} \label{nnprob} \end{figure} \subsection{Reinforcement learning (RL)} \label{rlgeneral} A schematic of the RL framework is shown in Fig.~\ref{nnprob}, where an agent (with a controller) interacts with an environment. At each time step, $\ensuremath{m}$, the agent observes the current state of the environment, $\state{\ensuremath{m}} \in \RR^{\ensuremath{N_{s}}}$, where $\ensuremath{N_{s}}$ is the number of states, implements an action, $\action{\ensuremath{m}} \in \RR$, and receives a reward, $\reward{\ensuremath{m}} \in \RR$. The environment then advances to a new state, $\state{\ensuremath{m}+1}$. This process is continued for $\ensuremath{M_{\tau}}$ time steps and the resulting sequence forms a trajectory, $\ensuremath{\tau}= \{\state{0},\action{0},\reward{0},\ldots,\state{\ensuremath{M_{\tau}}},\action{\ensuremath{M_{\tau}}},\reward{\ensuremath{M_{\tau}}}\}$.
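As an illustration, the sampling of a single trajectory can be sketched in Python as follows (a minimal sketch with hypothetical \texttt{env} and \texttt{policy} interfaces standing in for the FSI solver and the stochastic control policy introduced below; this is not the interface of the actual solver):
\begin{verbatim}
# Minimal sketch of trajectory sampling (hypothetical interfaces).
# `env` wraps the coupled FSI solver; `policy` implements the
# stochastic policy pi_theta(a_m | s_m) of the controller.

def sample_trajectory(env, policy, M_tau):
    state = env.reset()  # initial sensors: wake vorticity and deflection beta
    trajectory = []
    for m in range(M_tau):
        action = policy.sample(state)          # stiffness k_beta ~ pi_theta
        next_state, reward = env.step(action)  # advances N_t FSI time steps
        trajectory.append((state, action, reward))
        state = next_state
    return trajectory
\end{verbatim}
The tuples collected in this way are the $(s_m, a_m, r_m)$ entries of the trajectory $\tau$ that enter the policy-gradient update described next.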
The actions chosen by the agent to generate this trajectory are determined by the stochastic policy of the controller, $\ensuremath{\pi_{\rlweights}}(\action{\ensuremath{m}}|\state{\ensuremath{m}})$, parametrized by weights, $\ensuremath{\bm{\theta}}$. This policy outputs a probability distribution of actions, from which $\action{\ensuremath{m}}$ is sampled, $\action{\ensuremath{m}} \sim \ensuremath{\pi_{\rlweights}}(\action{\ensuremath{m}}|\state{\ensuremath{m}})$, as shown in Fig.~\ref{nnprob}. In policy-based deep RL methods, a neural network is used as a nonlinear function approximator for the policy, as shown in Fig.~\ref{nnprob}. Accordingly, $\ensuremath{\bm{\theta}}$ corresponds to the weights of the neural network. The goal in RL is to learn an optimal control policy that maximizes an objective function $\ensuremath{J}(\cdot)$, defined as the expected sum of rewards: \begin{equation} \ensuremath{J}(\ensuremath{\bm{\theta}}) = \ensuremath{\mathbb{E}}_{\ensuremath{\tau} \sim \ensuremath{\pi_{\rlweights}}} \left[ \sum_{m=0}^{\ensuremath{M_{\tau}}} \reward{\ensuremath{m}} \right]. \label{rewardssum2} \end{equation} Here, the expectation is performed over $\ensuremath{N_{\tau}}$ different trajectories sampled using the policy, $\ensuremath{\tau} \sim \ensuremath{\pi_{\rlweights}}$. The maximization problem is then solved by gradient ascent, where an approximate gradient of the objective function, $\nabla_{\ensuremath{\bm{\theta}}} \ensuremath{J}(\ensuremath{\bm{\theta}})$, is obtained from the policy gradient theorem \citep{nota2019policy}. In this work, we use the proximal policy optimization (PPO) algorithm \citep{schulman2017proximal} for computing the gradient, which is a policy-based RL method suitable for continuous control problems, as opposed to $Q$-learning methods for discrete problems \citep{sutton2018reinforcement}. PPO has been used successfully to develop RL controllers for fluid flows \citep{rabault2019artificial}, and is chosen here among other policy-based methods due to its relative simplicity in implementation, better sample efficiency, and ease of hyperparameter tuning. In our hybrid control problem, the environment is the strongly-coupled FSI solver of \cite{nair2022strongly}, where the incompressible Navier-Stokes equations for the fluid, coupled with Newton's equation of motion for the flap, are solved numerically. The state provided as an input to the controller consists of sensor measurements of the flow vorticity in the wake, $\ensuremath{\bm{\omega}}_\ensuremath{m}$, and the flap deflection, $\ensuremath{\beta}_\ensuremath{m}$. The action is the time-varying stiffness, $\action{\ensuremath{m}} = {\ensuremath{k_{\defl}}}_{\ensuremath{m}} \in \RR^+$. Similar to \cite{rabault2019artificial}, when advancing a single \emph{control} time step from $\ensuremath{m}$ to $\ensuremath{m}+1$, the flow-airfoil-flap system is simulated for $\ensuremath{N_{t}}$ \emph{numerical} time steps of the FSI solver. In this duration, the chosen value of the stiffness is kept constant. The reason for introducing these two time scales---control and numerical---is to allow the FSI system to meaningfully respond to the applied stiffness and to achieve faster learning. The reward for the lift maximization problem of our hybrid control approach is \begin{equation} \reward{\ensuremath{m}} = \frac{1}{2}{\ensuremath{\overline{C}_l}}^2_m + \ensuremath{p_1} \left( \frac{1-\ensuremath{p_2}^{\ensuremath{u}/\ensuremath{u_{max}}}}{\ensuremath{p_2}}\right).
\label{rewpenal} \end{equation} The first term is half the square of the mean lift coefficient of the airfoil, ${\ensuremath{\overline{C}_l}}_m$, evaluated over the $\ensuremath{N_{t}}$ numerical time steps. The second term, where $\ensuremath{p_1}>0$, $\ensuremath{p_2} \gg 1$ are constants whose values are given in Sec.~\ref{rlresults}, is a physics-based penalty term that provides an exponentially growing negative contribution to the reward if the flap remains undeployed for several consecutive control time steps (intuitively, one wishes to avoid periods of prolonged zero deployment angle). Accordingly, $\ensuremath{u}$ denotes the current count of consecutive control time steps for which the flap has remained undeployed, and $\ensuremath{u_{max}}$ is the maximum number of consecutive time steps that the flap may remain undeployed. The flap is deemed undeployed if $\ensuremath{\beta}_{\ensuremath{m}} < \ensuremath{\beta}_{min}$. \begin{algorithm}[b!] \footnotesize \caption{PPO-RL applied to hybrid control}\label{pporl} \begin{algorithmic}[1] \REQUIRE Set of initial conditions, \ensuremath{\mathcal{S}^0} \ENSURE Optimization parameters, $\ensuremath{\bm{\theta}}$ \STATE Initialize state, $\state{0} \sim \ensuremath{\mathcal{S}^0}$ \STATE Initiate a vector of counters for each trajectory, $\ensuremath{u}[1:\ensuremath{N_{\tau}}]\leftarrow 0$, $\counter[1:\ensuremath{N_{\tau}}]\leftarrow 0$ \FOR{$\text{iterations} \leftarrow 1,2,\ldots$} \FOR[multi-window optimization]{$\text{window} \leftarrow 1,2,\ldots, \ensuremath{k}$ \label{algokwindow}} \FOR[perform trajectory sampling]{$\ensuremath{m} \leftarrow 1,2,\ldots, \ensuremath{M_{\tau}}(=\ensuremath{M_i}/\ensuremath{k})$ \label{algotraj}} \FOR[parallel trajectory sampling]{$\text{trajectory}:j \leftarrow 1,2,\ldots, \ensuremath{N_{\tau}}$ \label{algoparallel}} \STATE $\action{\ensuremath{m}}[j]\sim \ensuremath{\pi_{\rlweights}}(\action{\ensuremath{m}}[j]| \state{\ensuremath{m}}[j])$ \STATE $\reward{\ensuremath{m}}[j], \state{\ensuremath{m}+1}[j] = FSI(\state{\ensuremath{m}}[j],\action{\ensuremath{m}}[j])$ \label{algofsi} \STATE $\counter[\text{j}] \leftarrow \counter[\text{j}] + 1$ \IF{$\ensuremath{\beta}_{\ensuremath{m}+1}[j] < \ensuremath{\beta}_{min}$} \STATE $\ensuremath{u}[\text{j}] \gets \ensuremath{u}[\text{j}] + 1$ \ENDIF \IF[episode termination]{$\ensuremath{u}[\text{j}]=\ensuremath{u_{max}}$ $||$ $\counter[\text{j}]=\ensuremath{M_i}$ \label{algoepterm}} \STATE Reset to initial condition, $\state{\ensuremath{m}+1}[j] \sim \ensuremath{\mathcal{S}^0}$ \label{algoreset} \STATE $\ensuremath{u}[\text{j}]\leftarrow 0$, $\counter[\text{j}]\leftarrow 0$ \ENDIF \ENDFOR \ENDFOR \STATE Optimize $\ensuremath{\bm{\theta}}$ using gradient ascent. \label{algooptimize} \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The RL algorithm proceeds iteratively as shown in Algorithm \ref{pporl}. Each iteration consists of sampling trajectories spanning a total of $\ensuremath{M_i}$ control time steps (lines \ref{algokwindow} and \ref{algotraj}) and using the collected data to optimize $\ensuremath{\bm{\theta}}$ (line~\ref{algooptimize}). We also define an episode as either the full set of $\ensuremath{M_i}$ time steps or, in the case where the parameters yield an undeployed flap, the time steps until $\ensuremath{u}=\ensuremath{u_{max}}$. Note that an episode and an iteration coincide only if the episode is not terminated early.
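As a minimal illustration, the reward of Eq.~\ref{rewpenal} with its undeployment counter $\ensuremath{u}$ can be evaluated as follows; this is a sketch rather than the solver implementation, with the constants set to the values reported in Sec.~\ref{rlresults}.
\begin{verbatim}
# Sketch of the reward defined above: a mean-lift term plus a penalty
# that grows exponentially with the undeployed-step counter u.
def reward(Cl_mean, u, p1=0.845, p2=1.0e4, u_max=20):
    penalty = p1 * (1.0 - p2 ** (u / u_max)) / p2  # 0 at u=0, ~ -p1 at u=u_max
    return 0.5 * Cl_mean ** 2 + penalty
\end{verbatim}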
The state is only reset after an episode terminates (line~\ref{algoreset}), which could occur within an iteration if the episode terminates early (line~\ref{algoepterm}). We also use a modified strategy to update $\ensuremath{\bm{\theta}}$ in a given iteration. In policy-based methods, the weight update step is performed after trajectory sampling. Generally, in one iteration, trajectories are collected only once. This implies that (a) typically the weights are updated once per training iteration and (b) the length of the trajectory is equal to the iteration length, $\ensuremath{M_{\tau}} = \ensuremath{M_i}$. However, in our work, we perform $\ensuremath{k}>1$ weight updates in a single iteration by sampling $\ensuremath{k}$ trajectories, each of length $\ensuremath{M_{\tau}} = \ensuremath{M_i}/ \ensuremath{k}$. This procedure is found to exhibit faster learning, since the frequent weight updates sequentially cater to optimizing \emph{shorter} temporal windows of the \emph{long}-time horizon. We therefore refer to this procedure as the long-short-term training strategy and demonstrate its effectiveness in Sec.~\ref{rlresults}. As shown in Algorithm~\ref{pporl}, each iteration is divided into $\ensuremath{k}$ optimization windows (line~\ref{algokwindow}) and $\ensuremath{\bm{\theta}}$ is updated at the end of each window (line~\ref{algooptimize}). Finally, for computing more accurate estimates of the expected values used in Eq.~\ref{rewardssum2} and for evaluating gradients, the $\ensuremath{m}$th time advancement (line~\ref{algotraj}) is performed $\ensuremath{N_{\tau}}$ times independently (line~\ref{algoparallel}) \citep{schulman2017proximal}. For accelerated training, this set of $\ensuremath{N_{\tau}}$ trajectories is sampled in parallel \citep{rabault2019accelerating,pawar2021distributed}. \section{Results} \label{rlresults} \subsection{RL and FSI parameters} The parameters of the FSI environment are the same as in \cite{nair2022strongly}, which contains the numerical convergence details. The spatial grid and time step sizes are $\Delta x/\ensuremath{c}=0.00349$ and $\Delta t/(\ensuremath{c}/\velocityscale) = 0.0004375$, respectively. For the multi-domain approach for far-field boundary conditions, five grids of increasing coarseness are used, where the finest and coarsest grids are $[-0.5,2.5]\ensuremath{c} \times [-1.5, 1.5]\ensuremath{c}$ and $[-23,25]\ensuremath{c} \times [-24, 24]\ensuremath{c}$, respectively. The airfoil leading edge is located at the origin. For the sub-domain approach, a rectangular sub-domain that bounds the physical limits of the flap displacements, $[0.23,0.7] c\times [-0.24,0.1] c$, is utilized. The FSI solver is parallelized and simulated across six processors. For the states, $\ensuremath{N_{s}} = 65$ sensor measurements are used, which measure the vorticity at $64$ locations distributed evenly across $[1,2.4] c\times [-0.6,0.1] c$ and the flap deflection, as denoted by the red markers in Fig.~\ref{nnprob}. To ensure unbiased stiffness sampling across $[10^{-4}, 10^{-1}]$, a transformation between stiffness and action is introduced: ${\ensuremath{k_{\defl}}}_{\ensuremath{m}} = 10^{\action{\ensuremath{m}}}$. Accordingly, $\action{\ensuremath{m}}$ is sampled from a normal distribution, $\mathcal{N}(\action{\ensuremath{m}},\ensuremath{\sigma})$, in the range $[-4, -1]$, so that ${\ensuremath{k_{\defl}}}_{\ensuremath{m}}$ is sampled from a log-normal distribution. The neural network consists of fully-connected layers with two hidden layers.
The size of each hidden layer is $64$ and the hyperbolic tangent (tanh) function is used for the nonlinear activations. The parameters in the reward function \eqref{rewpenal} are $\ensuremath{p_1}=0.845$, $\ensuremath{p_2}=10,000$, $\ensuremath{u_{max}}=20$ and $\ensuremath{\beta}_{min}=5^\circ$. The initial state for initializing every episode corresponds to the limit cycle oscillation solution obtained at the end of a simulation with a constant $\ensuremath{k_{\defl}}=0.015$ spanning $40$ convective time units ($\ensuremath{t}=0$ in this work denotes the instant at the end of this simulation). In advancing one control time step, $\ensuremath{N_{t}}=195$ numerical time steps of the FSI solver are performed, or approximately $0.085$ convective time units. That is, the control actuation is provided every $5\%$ of the vortex-shedding cycle. The various PPO-related parameters are a discount factor of $\ensuremath{\gamma}=0.9$, a learning rate of $\alpha=0.0003$, $\ensuremath{n_{e}}=10$ epochs and a clipping fraction of $\ensuremath{\epsilon}=0.2$. Refer to \cite{schulman2017proximal} for the details of these parameters. $\ensuremath{N_{\tau}}=3$ trajectories are sampled in parallel. The PPO-RL algorithm is implemented using the Stable-Baselines3 library \citep{stable-baselines3} in Python. We test the utility of our long-short-term strategy described in Sec.~\ref{rlgeneral} against the traditional long-term strategy. In the latter, the controller is optimized for a long-time horizon of 10 convective time units ($\ensuremath{M_i}=120$) spanning approximately 6 vortex-shedding cycles, and the weights are updated traditionally, \emph{i.e.} only once per iteration ($\ensuremath{M_{\tau}}=120, \ensuremath{k}=1$ window). On the other hand, in the long-short-term strategy, while the long-time horizon is kept the same ($\ensuremath{M_i}=120$), the optimization is performed on two shorter optimization windows ($\ensuremath{M_{\tau}}=60, \ensuremath{k}=2$). \subsection{Implementation, results, and mechanisms} To demonstrate the effectiveness of the long-short-term strategy, the evolution of the mean reward (sum of rewards divided by episode length) versus iterations for the two learning strategies, as well as for the passive case of $\ensuremath{k_{\defl}}=0.015$ (for reference), is shown in Fig.~\ref{iterations}. Firstly, we note that the evolution is oscillatory because of the stochasticity in stiffness sampling during training. Next, it can be seen that with increasing iterations, for both cases, the controller gradually learns an effective policy as the mean reward increases beyond the passive reference case. However, the long-short-term strategy is found to exhibit faster learning as well as attain a larger reward at the end of 90 iterations as compared to the long-term one. This is because splitting the long-time horizon into two shorter windows and sequentially updating the weights for each window alleviates the burden of learning an effective policy for the entire long horizon via a single weight update, as in the long-term strategy. The remainder of the results focuses on the performance of the control policy obtained after the 90th iteration of the long-short-term strategy.
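As an aside on implementation, the training configuration above maps almost directly onto Stable-Baselines3; a rough sketch follows, where \texttt{FSIEnv} is a hypothetical gym-style wrapper around the FSI solver, and only the hyperparameters listed above are taken from our setup (all other settings are illustrative).
\begin{verbatim}
import torch
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# FSIEnv is a hypothetical gym-style wrapper of the FSI solver;
# N_tau = 3 copies are sampled in parallel.
env = make_vec_env(lambda: FSIEnv(), n_envs=3)
model = PPO(
    "MlpPolicy", env,
    learning_rate=3e-4, gamma=0.9, n_epochs=10, clip_range=0.2,
    n_steps=60, batch_size=60,  # M_tau = M_i/k = 120/2 control steps
    policy_kwargs=dict(activation_fn=torch.nn.Tanh, net_arch=[64, 64]),
)
model.learn(total_timesteps=90 * 120 * 3)  # ~90 iterations of M_i steps
\end{verbatim}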
A deterministic policy is used for evaluating the true performance of the controller, where the actuation provided by the neural network is directly used as the stiffness instead of stochastically sampling a sub-optimal stiffness as during training. \begin{figure} \centering \includegraphics[scale=1]{Paper-figure1.pdf} \caption{Evolution of mean rewards with iterations for different training strategies.} \label{iterations} \end{figure} The airfoil lift for the flap-less, maximal passive control and hybrid control cases is plotted in Fig.~\ref{liftresult}. For hybrid control, the lift is plotted not only for $\ensuremath{t} \in[0, 10]$, for which the controller has been trained, but also for $\ensuremath{t} \in[10, 20]$. It can be seen that the hybrid controller is able to significantly increase the lift in the training duration and beyond. Overall, in $\ensuremath{t} \in[0, 20]$, a significant lift improvement of $136.43 \%$ is achieved as compared to the flap-less case. For comparison, the corresponding lift improvement of the best passive case is $27\%$ \citep{nair2022fluid}. The stiffness actuations outputted by the controller and the resulting flap deflection are plotted in Figs.~\ref{stiffresult} and \ref{betaresult}, respectively. It can be observed for hybrid control that the stiffness varies across four orders of magnitude (as compared to the fixed $\ensuremath{k_{\defl}}=0.015$ in passive control), often reaching its bounding values of $\ensuremath{k_{\defl}}=10^{-4}$ and $10^{-1}$. Due to these large stiffness variations, the flap oscillates with an amplitude that is more than twice that in passive control, indicating that larger amplitude flap oscillations can yield larger lift benefits when timed appropriately with key flow processes. \begin{figure} \centering \begin{subfigure}[t]{0.95\textwidth} \centering \includegraphics[scale=1]{Paper-figure2.pdf} \vspace{-0.1cm} \caption{Airfoil lift coefficient.} \label{liftresult} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[scale=1]{Paper-figure3.pdf} \vspace{-0.1cm} \caption{Hinge stiffness.} \label{stiffresult} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[scale=1]{Paper-figure4.pdf} \vspace{-0.1cm} \caption{Flap deflection.} \label{betaresult} \end{subfigure} \caption{Temporal plots for flap-less (no control), passive and hybrid flow control cases.} \end{figure} To understand the physical mechanisms driving the lift benefits for the hybrid case, we show various quantities during the first and sixth vortex shedding cycles (\emph{c.f.}, Fig.~\ref{liftresult}). These distinct cycles allow for a comparison of the transient and quasi-periodic regimes. The perfectly periodic dynamics of the passive $\ensuremath{k_{\defl}}=0.015$ case are also shown for reference. Firstly, from Fig.~\ref{onetstiff}, we can see that initially, in $\ensuremath{t/T}\in[0,0.16]$, the controller actuation is lower ($\ensuremath{k_{\defl}}= 10^{-4}$) than the constant passive actuation ($\ensuremath{k_{\defl}}=0.015$). This low stiffness prompts the hybrid flap in the first cycle to undergo a slightly larger deflection, up to $\ensuremath{\beta}\approx 50^\circ$, in Fig.~\ref{onetbeta}. The decisive actuation occurs at $\ensuremath{t/T}\approx 0.21$ when the largest $\ensuremath{k_{\defl}}=10^{-1}$ is prescribed (\emph{c.f.} Fig.~\ref{onetstiff}), which forces the flap to oscillate downwards within a short time span until $\ensuremath{t/T}=0.4$ (\emph{c.f.} Fig.~\ref{onetbeta}).
The flap then begins to rise only after the actuation is reduced back to $\ensuremath{k_{\defl}}=10^{-4}$ by $\ensuremath{t/T}=0.5$ (\emph{c.f.} Fig.~\ref{onetstiff}). For comparison, the rising and falling of the single-stiffness flap in the same duration of $\ensuremath{t/T}\in[0,0.5]$ occur gradually (\emph{c.f.} Fig.~\ref{onetbeta}). To understand the effect of such an aggressive flapping mechanism on the airfoil lift, we plot the circulation strengths of the trailing- and leading-edge vortices (TEV and LEV) in Figs.~\ref{onettev} and \ref{onetlev}, respectively. Here, $\ensuremath{\Gamma_{TEV}}$ and $\ensuremath{\Gamma_{LEV}}$ are the magnitudes of the positive and negative circulation strengths evaluated in the bounding boxes $[0.85, 1.1]c \times [-0.35, -0.1]c$ and $[0, 1.1]c \times [-0.35, 0.2]c$, respectively. It can be observed that after $\ensuremath{t/T}\approx 0.18$, when the flap oscillates strongly downwards in the first cycle, $\ensuremath{\Gamma_{TEV}}$ decreases and $\ensuremath{\Gamma_{LEV}}$ increases for the hybrid case as compared to the passive case. The overall effect on performance is that the lift of the hybrid case in the first cycle begins to increase at $\ensuremath{t/T}\approx 0.4$ after an initial dip, as seen in Fig.~\ref{onetcl}. \begin{figure} \centering \hspace{-0.55cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[scale=1]{Paper-figure5.pdf} \vspace{-0.5cm} \caption{Lift coefficient} \label{onetcl} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[scale=1]{Paper-figure6.pdf} \vspace{-0.5cm} \caption{Flap deflection} \label{onetbeta} \end{subfigure} \hspace{0.1cm} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[scale=1]{Paper-figure7.pdf} \vspace{-0.5cm} \caption{Stiffness} \label{onetstiff} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \centering \includegraphics[scale=1]{Paper-figure8.pdf} \vspace{-0.5cm} \caption{TEV strength} \label{onettev} \end{subfigure} \begin{subfigure}[b]{0.6\textwidth} \centering \includegraphics[scale=1]{Paper-figure9.pdf} \vspace{-0.5cm} \caption{LEV strength (magnitude)} \label{onetlev} \end{subfigure} \caption{Time variation of various quantities during one periodic cycle of the passive case, and during the first and sixth cycles of the hybrid case (highlighted in Fig.~\ref{liftresult}).} \label{onet} \end{figure} \begin{figure} \begin{adjustwidth}{}{0.5cm} \centering \hspace{-0.45cm} \centering \begin{subfigure}[t]{0.281\textwidth} \centering \includegraphics[scale=1]{Paper-figure10.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0 \ T$ } \label{vort61} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure11.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.27 \ T$ } \label{vort62} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure12.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.55 \ T$ } \label{vort63} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure13.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.82 \ T$ } \label{vort64} \end{subfigure} \centering \begin{subfigure}[t]{0.281\textwidth} \centering \includegraphics[scale=1]{Paper-figure14.pdf} \vspace{-0.6cm} \caption{Passive: $t=0 \ T$ } \label{fvv1} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure15.pdf} \vspace{-0.6cm} \caption{Passive: $t=0.27 \ T$ } \label{fvv2} \end{subfigure}
\begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure16.pdf} \vspace{-0.6cm} \caption{Passive: $t=0.55 \ T$ } \label{fvv3} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure17.pdf} \vspace{-0.6cm} \caption{Passive: $t=0.82 \ T$ } \label{fvv4} \end{subfigure} \end{adjustwidth} \caption{Vorticity contours for hybrid control during the 6th cycle (top row) and passive single-stiffness control (bottom row) at four time instants indicated by the markers in Fig.~\ref{onetcl}.} \label{rlvort} \end{figure} Now, as time progresses, this aggressive flapping mechanism continues, but with increasing amplitude, as observed in Fig.~\ref{betaresult}. Eventually, by the sixth cycle, the switching in stiffness occurs at the delayed time instants of $\ensuremath{t/T}\approx 0.53$ and $0.7$ (\emph{c.f.} Fig.~\ref{onetstiff}). As a consequence, the flap oscillates upwards until $\ensuremath{t/T}=0.5$ and attains a large deflection of $\ensuremath{\beta}\approx 100^\circ$ (\emph{c.f.} Fig.~\ref{onetbeta}). Then, as the stiffness switches to the high value $\ensuremath{k_{\defl}}=10^{-1}$, the flap deflection suddenly drops to $\ensuremath{\beta}\approx 5 ^\circ$ within a short time span. Similar to the first cycle, this strong downward motion mitigates the TEV and enhances the LEV, as seen in Figs.~\ref{onettev} and \ref{onetlev}, respectively, but now to a much stronger degree. To visualize these effects, vorticity contours at four time instants in the sixth cycle are plotted in Figs.~\ref{vort61}--\ref{vort64} and compared to passive control in Figs.~\ref{fvv1}--\ref{fvv4}. The TEV, which is clearly decipherable for the passive case in Fig.~\ref{fvv3}, is now limited to a much smaller size in the hybrid case in Fig.~\ref{vort63}. This is because the large angular velocity of the downward-oscillating flap sheds the TEV away quickly and restricts its growth. This downward motion further contributes to a reduced width of the separated recirculation region in the airfoil-normal direction in Fig.~\ref{vort61} as compared to passive control in Fig.~\ref{fvv1}. Due to the TEV-mitigating, LEV-enhancing and separation-width-reducing mechanisms of hybrid flow control, the airfoil attains a much higher lift by the sixth cycle as compared to the passive case (\emph{c.f.} Fig.~\ref{onetcl}). Finally, we briefly discuss the occurrence of the spikes observed in the lift signal in Fig.~\ref{liftresult} by focusing on the spike in the sixth cycle in Fig.~\ref{onetcl}, visualized via $C_p$ contours at four time instants surrounding the spike in Figs.~\ref{cp61}--\ref{cp64}. We note that the spike initiates at $\ensuremath{t/T}\approx 0.53$, when the flap attains its highest deflection (\emph{c.f.} Fig.~\ref{onetcl} and \ref{onetbeta}). Preceding this instant, as the flap oscillates upwards, it induces the fluid in its vicinity to move upstream due to the no-slip condition. When the flap comes to a sudden stop as the stiffness switches from $10^{-4}$ to $10^{-1}$ at $\ensuremath{t/T}\approx 0.53$ (\emph{c.f.} Fig.~\ref{onetstiff}), the moving fluid in the post-flap region abruptly loses its momentum. This momentum loss is manifested as a strong rise in the post-flap pressure (\emph{c.f.}, Fig.~\ref{cp62}). The fluid in the pre-flap region, however, does not experience a barrier to its upstream motion, and instead continues to roll up and build up the suction pressure.
The flap, dividing the high-pressure post-flap and low-pressure pre-flap regions, can be clearly seen in Figs.~\ref{cp62} and \ref{cp63}. This large pressure difference across the flap contributes to the large spikes in the airfoil lift. \begin{figure} \begin{adjustwidth}{}{0.5cm} \centering \hspace{-0.45cm} \centering \begin{subfigure}[t]{0.281\textwidth} \centering \includegraphics[scale=1]{Paper-figure18.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.45 \ T$ } \label{cp61} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure19.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.55 \ T$ } \label{cp62} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure20.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.64 \ T$ } \label{cp63} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[scale=1]{Paper-figure21.pdf} \vspace{-0.6cm} \caption{Hybrid: $t=0.73 \ T$ } \label{cp64} \end{subfigure} \end{adjustwidth} \caption{$\ensuremath{C_p}$ contours at four time instants in the sixth cycle of hybrid control.} \label{rlcp6} \end{figure} \section{Conclusions} A hybrid active-passive flow control method was introduced as an extension of the covert-inspired passive flow control method consisting of a torsionally mounted flap on an airfoil at post-stall conditions involving vortex shedding. This hybrid strategy consisted of actively actuating the hinge stiffness to passively control the dynamics of the flap. A closed-loop feedback controller trained using deep RL was used to provide effective stiffness actuations to maximize lift. The RL framework was described, including modifications to the traditional RL methodology that enabled faster training for our hybrid control problem. The hybrid controller provided lift improvements as high as $136\%$ and $85\%$ with respect to the flap-less airfoil and the maximal passive control (single-stiffness) cases, respectively. These lift improvements were attributed to large flap oscillations induced by stiffness variations occurring over four orders of magnitude. Detailed flow analysis revealed an aggressive flapping mechanism that led to significant TEV mitigation, LEV enhancement and a reduction of the separation region width. Finally, we remark that since the stiffness changes can be well approximated by a small number of finite jumps, a discrete VSA could be a pathway to realizing this actuation strategy. \section{Declaration of interests} The authors report no conflict of interest. \bibliographystyle{jfm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} Segmentation of medical images is a long-standing problem, with an extensive number of deep learning based methods already available nowadays. Although there are recent paradigm-shifting works in the segmentation literature, such as capsule based segmentation~\cite{lalonde2021capsules} and transformer based segmentation~\cite{chen2021transunet}, most of the medical image segmentation literature in the deep learning field is based on the standard U-Net or methodologies derived from it. In this study, we approach segmentation from a slightly different angle, where our unique clinical imaging conditions impose some constraints on the problem formulation. In many clinical scenarios, for instance, multi-modality images are necessary for a more appropriate evaluation of the clinical condition through better tissue characterization (anatomically and/or functionally). Multi-modal brain imaging, PET/CT, PET/MRI, and multi-contrast MRIs are some of the most widely used examples in this context. \begin{figure}[!ht] \centering \includegraphics[width = 0.3\textwidth]{allcontrastsgt.png} \caption{MRI contrasts (first row): fat-suppressed, water-fat, water-suppressed. Segmented tissues (second row): muscle, fat, bone and bone marrow.} \label{fig:multi_tissue} \end{figure} Despite the strengths of combining multiple modality images to characterize a clinical condition better, or to quantify it, there are further challenges to be addressed. First, handling more than one modality for image segmentation is already more challenging than handling single modality images. Second, multi-object segmentation, which is often the case in multi-modality image analysis, is another hurdle compared to single object segmentation. Third, the clinical workflow has deficiencies, and not all modalities are always available for further analysis. Missing slices or missing scans are not rare, especially in multi-contrast evaluation of MRI scans. In this study, our goal is to develop a successful segmentation strategy, based on deep networks, that accepts multi-contrast MRI scans and performs multi-tissue segmentation even when there are missing scans. To achieve our overall goal by addressing the challenges defined above, we focus on musculoskeletal (MSK) radiology examples: delineation of thigh tissues from multi-contrast MRI scans. Figure \ref{fig:multi_tissue} shows slices of such a multi-contrast MRI scan from the same patient; from left to right in the top row: fat-suppressed (MRI1), water-fat (MRI2), and water-suppressed (MRI3). Our clinical motivation comes from the fact that MSK radiology applications are critical for several diseases spanning from obesity and metabolic syndromes to cartilage quantification. For instance, according to American Cancer Society studies in 2021 \cite{society2021cancer}, some of the most effective measures for decreasing cancer risk are maintaining a healthy body weight, a healthy diet, and being physically active. Excess body weight (obesity), alcohol consumption, physical inactivity, and a poor diet are thought to be responsible for 18\% of cancer cases and 16\% of cancer deaths. Of all cancer risk factors, excess body weight is believed to be responsible for 5\% of cancers in males and 11\% of cancers in women. In this respect, sarcopenia is related to a general loss of body mass alongside excess body weight, and has a strong relation with cancer risk factors \cite{ligibel2020sarcopenia}.
In this work, we propose a systematic approach to (1) synthesize MRI contrasts, (2) train a deep learning based segmentation engine on the synthesized images, and (3) evaluate the efficacy of the segmentation model on true multi-contrast MRI images. We target segmenting thigh tissues (muscle, fat, bone and bone marrow). We also conduct an ablation study where the training data include true, synthesized, and mixed (true and synthesized) images. Comprehensive quantitative and qualitative segmentation results show that the proposed approach can be used effectively for multi-modal image analysis. This is especially useful when there is not enough medical imaging data, a typical constraint in medical imaging problems. Our major contributions are as follows: \begin{itemize} \item Application-wise, our study is the first to handle the missing contrast issue while retaining a high accuracy in segmenting multiple tissues from thigh MRI. \item Our method is generic: any deep segmentation or GAN based method can be swapped in within our framework. \item We present a comprehensive evaluation, carefully analyzing the three MRI contrasts, their relations, and their effect on the final segmentation results. \item We examine whether it is robust and feasible to train segmentation on completely synthesized and mixed data, opening new discussions about the use of completely synthesized data to obtain clinically accepted segmentation results on real MRI data. \end{itemize} \section{Related Work} There is a relatively small body of literature concerned with muscle, fat, bone and bone marrow segmentation in MSK radiology despite its clinical importance \cite{shin2021deep}. Available deep learning based studies focus on U-Net based standard segmentation methods for single or multiple tissues, but mostly on single modality MRI scans. When there are missing scans, no particular method has been presented for MSK applications. GAN (generative adversarial network) based methods are being increasingly used for several applications spanning from brain imaging to functional imaging. One interesting work utilizes a multi-modal generative adversarial network (MM-GAN) \cite{sharma2019missing}, a variant of the pix2pix \cite{isola2017image} network. The authors integrate multi-modal data from existing brain image sequences in a single forward pass of training to synthesize missing sequences. In another major work \cite{gadermayr2019domain}, the authors used the popular CycleGAN network on thigh MRI to increase the data size for segmentation purposes. Our work has some similarities with this work, but rather than focusing on the data augmentation aspect for a particular tissue, we generate the whole sequence(s), use them to train segmentation models, and explore the relationship of the MRI contrasts in an ablation study, leading us to train the complete segmentation process on synthetic MRI scans. In the pre-deep learning era, some segmentation studies are available too. It is worth mentioning that \cite{irmakci2018novel} proposed a novel affinity propagation architecture within the fuzzy connectivity framework for segmenting multi-contrast thigh MRI. The most recent work in this domain handles the lack of labels from a semi-supervised deep learning perspective \cite{anwar2020semi}, utilizing the Tiramisu network. However, the synthesis of one or more MRI contrasts remains a major challenge, and is not considered in those works.
Herein we propose a comprehensive evaluation and generic approach for handling missing MRI contrast and its effect on multi-tissue segmentation problems. \begin{figure*} \centering \includegraphics[width = 0.45\textwidth]{anygan2.png} \includegraphics[width = 0.47\textwidth]{gan_generation.png} \caption{Fat-suppressed: MRI1, water-fat: MRI2, water-suppressed: MRI3. \textbf{Left.} Generation procedure for all Synthesized MRI contrasts. (A) Synthesized generations from only R MRI1 B) Synthesized generations from only R MRI2 C) Synthesized generations from only R MRI3. \textbf{Right.} Different combinations of MRI synthesis procedure are shown where ($\leftarrow$ or $\rightarrow$) indicates synthesis. For example, R MRI1$\rightarrow$F MRI2 or F MRI1$\leftarrow$R MRI2 both indicates synthesize operation from Real contrasts.} \label{fig:any_gan2} \end{figure*} \begin{table*} \caption{Segmentation performance of Single Input Multi Output MR contrasts (5-fold cross validation) (Avg.=Average and Std.=Standard deviation) } \centering \resizebox{1\textwidth}{!}{% \begin{tabular}{@{}llllllllllllllllll@{}} \toprule \textbf{SINGLE INPUT} & \multicolumn{5}{c}{\textbf{MUSCLE}} & \multicolumn{4}{c}{\textbf{FAT}} & \multicolumn{4}{c}{\textbf{BONE}} & \multicolumn{4}{c}{\textbf{BONE MARROW}} \\ \midrule & \textbf{} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} \\ R MRI1 & Avg. & \textbf{0,9264} & 0,9831 & 0,9521 & 0,9868 & \textbf{0,8826} & 0,9793 & 0,8985 & 0,9897 & \textbf{0,8245} & 0,9985 & 0,8383 & 0,9992 & \textbf{0,8397} & 0,9994 & 0,8482 & 0,9997 \\ & Std & 0,0340 & 0,0110 & 0,0337 & 0,0097 & 0,0956 & 0,0206 & 0,0622 & 0,0126 & 0,0943 & 0,0008 & 0,0969 & 0,0005 & 0,0943 & 0,0004 & 0,1134 & 0,0002 \\ R MRI2 & Avg. & 0,9312 & 0,9846 & 0,9541 & 0,9883 & 0,9100 & 0,9870 & 0,9246 & 0,9917 & \textbf{0,9591} & 0,9997 & 0,9612 & 0,9999 & \textbf{0,9682} & 0,9999 & 0,9693 & 1,0000 \\ & Std & 0,0384 & 0,0100 & 0,0448 & 0,0078 & 0,0811 & 0,0122 & 0,0619 & 0,0087 & \textbf{0,0321} & 0,0002 & 0,0422 & 0,0001 & \textbf{0,0254} & 0,0001 & 0,0420 & 0,0001 \\ R MRI3 & Avg. & \textbf{0,9468} & 0,9884 & 0,9587 & 0,9919 & \textbf{0,9467} & 0,9925 & 0,9608 & 0,9945 & 0,8296 & 0,9985 & 0,8324 & 0,9992 & 0,8897 & 0,9996 & 0,8848 & 0,9998 \\ & Std & \textbf{0,0211} & 0,0057 & 0,0276 & 0,0061 & \textbf{0,0411} & 0,0057 & 0,0205 & 0,0058 & 0,0919 & 0,0009 & 0,1045 & 0,0007 & 0,0846 & 0,0003 & 0,1023 & 0,0002 \\ F MRI1($\leftarrow$ R MRI2) TEST ON R MRI1 & Avg. & 0,9063 & 0,9774 & 0,9805 & 0,9770 & 0,8815 & 0,9817 & 0,9112 & 0,9882 & 0,8445 & 0,9986 & 0,8665 & 0,9992 & 0,8514 & 0,9994 & 0,8527 & 0,9997 \\ & Std. & 0,0387 & 0,0130 & 0,0248 & 0,0127 & 0,0813 & 0,0157 & 0,0526 & 0,0115 & 0,0999 & 0,0008 & 0,0999 & 0,0006 & 0,0970 & 0,0004 & 0,1192 & 0,0002 \\ F MRI1($\leftarrow$ R MRI3) TEST ON R MRI1 & Avg. & \textbf{0,9239} & 0,9824 & 0,9550 & 0,9856 & 0,8920 & 0,9828 & 0,9301 & 0,9875 & 0,8206 & 0,9985 & 0,8263 & 0,9992 & 0,8362 & 0,9994 & 0,8372 & 0,9997 \\ & Std. & 0,0326 & 0,0108 & 0,0334 & 0,0100 & 0,0951 & 0,0172 & 0,0537 & 0,0139 & 0,1017 & 0,0008 & 0,1021 & 0,0005 & 0,0951 & 0,0004 & 0,1168 & 0,0002 \\ F MRI2($\leftarrow$ R MRI1) TEST ON R MRI2 & Avg. 
& 0,9189 & 0,9813 & 0,9661 & 0,9831 & 0,9049 & 0,9856 & 0,9468 & 0,9889 & \textbf{0,9163} & 0,9993 & 0,9387 & 0,9995 & 0,9443 & 0,9998 & 0,9550 & 0,9999 \\ & Std. & 0,0364 & 0,0106 & 0,0309 & 0,0095 & \textbf{0,0815} & 0,0136 & 0,0539 & 0,0104 & 0,0434 & 0,0003 & 0,0403 & 0,0003 & 0,0315 & 0,0001 & 0,0521 & 0,0001 \\ F MRI2($\leftarrow$ R MRI3) TEST ON R MRI2 & Avg. & 0,9211 & 0,9824 & 0,9422 & 0,9873 & 0,9094 & 0,9863 & 0,9312 & 0,9907 & 0,9044 & 0,9992 & 0,9061 & 0,9996 & \textbf{0,9533} & 0,9998 & 0,9547 & 0,9999 \\ & Std. & \textbf{0,0400} & 0,0106 & 0,0491 & 0,0081 & 0,0812 & 0,0129 & 0,0567 & 0,0098 & \textbf{0,0414} & 0,0003 & 0,0552 & 0,0003 & \textbf{0,0277} & 0,0001 & 0,0532 & 0,0000 \\ F MRI3($\leftarrow$ R MRI1) TEST ON R MRI3 & Avg. & 0,9386 & 0,9861 & 0,9622 & 0,9891 & \textbf{0,9411} & 0,9921 & 0,9754 & 0,9933 & \textbf{0,8089} & 0,9983 & 0,8496 & 0,9989 & \textbf{0,8796} & 0,9995 & 0,8785 & 0,9998 \\ & Std. & \textbf{0,0249} & 0,0077 & 0,0323 & 0,0075 & 0,2356 & 0,2036 & 0,2067 & 0,2037 & \textbf{0,0998} & 0,0008 & 0,1052 & 0,0006 & \textbf{0,0954} & 0,0003 & 0,1075 & 0,0002 \\ F MRI3($\leftarrow$ R MRI2) TEST ON R MRI3 & Avg. & \textbf{0,9391} & 0,9863 & 0,9677 & 0,9885 & 0,9382 & 0,9916 & 0,9709 & 0,9929 & 0,8358 & 0,9987 & 0,8370 & 0,9993 & 0,8952 & 0,9996 & 0,8891 & 0,9998 \\ & Std. & 0,0269 & 0,0075 & 0,0239 & 0,0078 & \textbf{0,0506} & 0,0064 & 0,0165 & 0,0068 & 0,0900 & 0,0006 & 0,0937 & 0,0005 & 0,0861 & 0,0003 & 0,0983 & 0,0002 \\ \bottomrule \end{tabular}} \label{tab:test_on_single_mri} \caption{Segmentation performance of Multi Input Multi Output MR contrasts (5-fold cross validation) (Avg.=Average and Std.=Standard deviation)} \centering \label{tab:test_on_multiple_mri} \resizebox{1\textwidth}{!}{% \begin{tabular}{@{}llllllllllllllllll@{}} \toprule \textbf{MULTI INPUT} & \multicolumn{5}{c}{\textbf{MUSCLE}} & \multicolumn{4}{c}{\textbf{FAT}} & \multicolumn{4}{c}{\textbf{BONE}} & \multicolumn{4}{c}{\textbf{BONE MARROW}} \\ \midrule & \textbf{} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} & \textbf{DSC.} & \textbf{ACC.} & \textbf{SENS.} & \textbf{SPEC.} \\ R MRI1 R MRI2 R MRI3 & Avg. & \textbf{0,9541} & 0,9898 & 0,9655 & 0,9927 & \textbf{0,9461} & 0,9923 & 0,9595 & 0,9944 & \textbf{0,9522} & 0,9996 & 0,9502 & 0,9998 & \textbf{0,9438} & 0,9998 & 0,9400 & 0,9999 \\ & Std & 0,0200 & 0,0056 & 0,0290 & 0,0061 & 0,0419 & 0,0062 & 0,0234 & 0,0059 & 0,0410 & 0,0003 & 0,0555 & 0,0001 & 0,0617 & 0,0002 & 0,0833 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9365 & 0,9854 & 0,9660 & 0,9878 & 0,9357 & 0,9909 & 0,9557 & 0,9933 & 0,8795 & 0,9990 & 0,9079 & 0,9993 & 0,8928 & 0,9996 & 0,8941 & 0,9998 \\ & Std. & 0,0280 & 0,0087 & 0,0338 & 0,0083 & 0,0520 & 0,0078 & 0,0288 & 0,0072 & 0,0862 & 0,0007 & 0,0784 & 0,0005 & 0,0868 & 0,0003 & 0,0971 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9370 & 0,9860 & 0,9419 & 0,9916 & 0,9341 & 0,9904 & 0,9656 & 0,9917 & 0,8913 & 0,9991 & 0,9146 & 0,9994 & 0,9145 & 0,9997 & 0,9152 & 0,9998 \\ & Std. & 0,0327 & 0,0090 & 0,0533 & 0,0067 & 0,0557 & 0,0087 & 0,0252 & 0,0092 & 0,0841 & 0,0007 & 0,0774 & 0,0005 & 0,0794 & 0,0002 & 0,0863 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI1) & Avg. 
& 0,9281 & 0,9835 & 0,9496 & 0,9878 & 0,9249 & 0,9889 & 0,9544 & 0,9915 & 0,8960 & 0,9992 & 0,9056 & 0,9996 & 0,9276 & 0,9997 & 0,9337 & 0,9999 \\ & Std. & 0,0306 & 0,0095 & 0,0424 & 0,0075 & 0,0618 & 0,0105 & 0,0363 & 0,0089 & 0,0803 & 0,0006 & 0,0788 & 0,0004 & 0,0700 & 0,0002 & 0,0850 & 0,0001 \\ F MRI1($\leftarrow$ R MRI2) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2) & Avg. & \textbf{0,9408} & 0,9868 & 0,9549 & 0,9908 & 0,9314 & 0,9900 & 0,9558 & 0,9924 & 0,9011 & 0,9992 & 0,9108 & 0,9996 & 0,9240 & 0,9997 & 0,9303 & 0,9999 \\ & Std. & 0,0292 & 0,0081 & 0,0430 & 0,0066 & 0,0582 & 0,0093 & 0,0374 & 0,0078 & 0,0765 & 0,0006 & 0,0727 & 0,0004 & 0,0773 & 0,0002 & 0,0858 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9375 & 0,9860 & 0,9526 & 0,9902 & 0,9346 & 0,9909 & 0,9552 & 0,9930 & 0,8879 & 0,9991 & 0,8919 & 0,9995 & 0,8934 & 0,9997 & 0,8876 & 0,9999 \\ & Std. & 0,0263 & 0,0076 & 0,0395 & 0,0069 & \textbf{0,0515} & 0,0072 & 0,0295 & 0,0073 & 0,0788 & 0,0006 & 0,0772 & 0,0004 & 0,0887 & 0,0003 & 0,1073 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI1) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9406 & 0,9868 & 0,9518 & 0,9911 & 0,9349 & 0,9908 & 0,9680 & 0,9919 & 0,8925 & 0,9991 & 0,8990 & 0,9995 & 0,9127 & 0,9997 & 0,9069 & 0,9999 \\ & Std. & \textbf{0,0249} & 0,0071 & 0,0412 & 0,0062 & 0,0515 & 0,0075 & 0,0257 & 0,0078 & 0,0693 & 0,0005 & 0,0734 & 0,0003 & 0,0757 & 0,0003 & 0,0857 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI1) & Avg. & 0,9288 & 0,9843 & 0,9344 & 0,9906 & 0,9254 & 0,9888 & 0,9489 & 0,9913 & \textbf{0,9061} & 0,9992 & 0,9203 & 0,9996 & \textbf{0,9375} & 0,9998 & 0,9295 & 0,9999 \\ & Std. & 0,0336 & 0,0094 & 0,0519 & 0,0064 & 0,0624 & 0,0108 & 0,0421 & 0,0096 & \textbf{0,0570} & 0,0004 & 0,0586 & 0,0003 & \textbf{0,0526} & 0,0002 & 0,0737 & 0,0001 \\ F MRI1($\leftarrow$ R MRI3) F MRI2 ($\leftarrow$ R MRI3) F MRI3 ($\leftarrow$ R MRI2) & Avg. & 0,9399 & 0,9865 & 0,9519 & 0,9909 & 0,9335 & 0,9906 & 0,9611 & 0,9924 & 0,8840 & 0,9991 & 0,8808 & 0,9996 & 0,9180 & 0,9997 & 0,9193 & 0,9999 \\ & Std. & 0,0256 & 0,0074 & 0,0384 & 0,0063 & 0,0537 & 0,0077 & 0,0275 & 0,0075 & 0,0794 & 0,0005 & 0,0892 & 0,0003 & 0,0756 & 0,0002 & 0,0882 & 0,0001 \\ \bottomrule \end{tabular}} \caption{Quality of Single Modality Thigh MRIs Synthesis with 5-fold cross validation. (Avg. = Average (Mean) and Std.=Standard deviation)} \label{tab:psnr_fid_ssim} \resizebox{1\textwidth}{!}{ \begin{tabular}{@{}llllllll@{}} \toprule & & F MRI1($\leftarrow$ R MRI2) & F MRI1($\leftarrow$ R MRI3) & F MRI2($\leftarrow$ R MRI1) & F MRI2($\leftarrow$ R MRI3) & F MRI3($\leftarrow$ R MRI1) & F MRI3($\leftarrow$ R MRI2) \\ \midrule PSNR & Avg. & 28,3153 & 27,6520 & 27,2156 & 27,9233 & \textbf{28,5335} & 28,0810 \\ & Std. & 3,3855 & \textbf{2,9039} & 3,2992 & 3,4147 & 3,2828 & 3,7948 \\ SSIM & Avg. & 0,8786 & 0,8848 & 0,8728 & 0,8827 & \textbf{0,8968} & 0,8890 \\ & Std. & \textbf{0,0496} & 0,0510 & 0,0520 & 0,0608 & 0,0601 & 0,0616 \\ FID & Avg. & 42,5333 & 41,6491 & 57,5200 & \textbf{68,3417} & 39,8365 & 54,2351 \\ & Std. & \textbf{4,4439} & 4,6984 & 10,9396 & 20,8823 & 4,4900 & 13,9198 \\ \bottomrule \end{tabular}} \end{table*} \section{METHOD} The proposed segmentation strategy includes both synthesis of missing contrasts with a generator and a segmentor (Figure~\ref{fig:any_gan2}). 
In the generation stage, we adapted the popular pix2pix \cite{isola2017image} conditional GAN method for synthesizing contrasts from real (true) contrasts. Any other GAN method could be used instead. Briefly, pix2pix uses a conditional generative adversarial network to learn a mapping from a source domain $x$ and random noise $z$ to a target domain $y$. The network is made up of two blocks, the generator ${G}$ and the discriminator ${D}$. The generator transforms the source domain ($x$) with random noise ($z$) to produce the target domain ($y$), while the discriminator learns how similar the generated target is to the real one. As shown in Figure~\ref{fig:any_gan2}, we then use all real MR contrasts (R MRI) for synthesizing (generating) the other contrasts (F MRI) using Equation \ref{eq:1} and Equation \ref{eq:2}, where $x$ is the source contrast, $y$ is the target contrast, $z$ is random noise, $\lambda$ is a hyperparameter for adjusting blurriness, and $\mathcal{L}_{L 1}$ is the mean absolute error (L1) loss: \begin{equation} \begin{aligned} \label{eq:1} \mathcal{L}_{c G A N}(G, D)=& \mathbb{E}_{x, y}[\log D(x, y)]+\\ & \mathbb{E}_{x, z}[\log (1-D(x, G(x, z)))]. \end{aligned} \end{equation} \begin{equation} \label{eq:2} G^{*}=\arg \min _{G} \max _{D} \mathcal{L}_{c G A N}(G, D)+\lambda \mathcal{L}_{L 1}(G). \end{equation} First, we condition on the source contrast Real MRI1 to generate Synthesized MRI2 (R MRI1 $\rightarrow$ F MRI2) or Synthesized MRI3 (R MRI1 $\rightarrow$ F MRI3), separately; then we condition on Real MRI2 to generate Synthesized MRI1 (R MRI2 $\rightarrow$ F MRI1) or Synthesized MRI3 (R MRI2 $\rightarrow$ F MRI3), separately. Finally, we condition on Real MRI3 to generate Synthesized MRI1 (R MRI3 $\rightarrow$ F MRI1) and Synthesized MRI2 (R MRI3 $\rightarrow$ F MRI2), obtaining all six different combinations. For the delineation of multiple tissues, we employ the commonly used standard U-Net segmentor \cite{ronneberger2015u}. We speculate that if the synthesized images are of good quality (Table \ref{tab:psnr_fid_ssim}), the overall segmentation of tissues should be accurate. In addition to training segmentation on fully synthesized, mixed, and real images, we also apply multi-contrast and single-contrast segmentation settings to validate the necessity of additional contrasts and their complementary strengths. \section{EXPERIMENTS and RESULTS} \label{sec:typestyle} \noindent\textbf{Dataset and Preprocessing:} We have used multi-contrast MRI data from the Baltimore Longitudinal Study of Aging (BLSA) \cite{ferrucci2008baltimore}. Experiments were performed on three different T1-weighted MR contrasts: fat-suppressed (MRI1), water and fat (MRI2), and water-suppressed (MRI3). These images are labeled ``real'' in our experiments to distinguish them from synthesized ones. The original data set contains 150 volumetric MRI scans from 50 subjects acquired using a 3T Philips Achieva MRI scanner (Philips Healthcare, Best, The Netherlands). The in-plane voxel size is $1 \times 1$ mm$^2$, and the slice thickness varies from 1 mm to 3 mm across scans. Details of the contrasts and other imaging parameters can be found in \cite{ferrucci2008baltimore}. Prior to our experiments, we used the non-parametric non-uniform intensity normalization technique (N4ITK) \cite{tustison2010n4itk} to remove bias, followed by an edge-preserving diffusive filter to remove noise without any distortion of the tissue structures in the MR images.
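A minimal sketch of these two preprocessing steps, assuming SimpleITK's N4 implementation and a curvature-based anisotropic diffusion filter as the edge-preserving denoiser (parameter values are illustrative, not the exact ones used in our pipeline):
\begin{verbatim}
import SimpleITK as sitk

def correct_and_denoise(path):
    img = sitk.Cast(sitk.ReadImage(path), sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(img, 0, 1, 200)      # foreground mask
    img = sitk.N4BiasFieldCorrection(img, mask)    # N4ITK bias removal
    img = sitk.CurvatureAnisotropicDiffusion(      # edge-preserving denoise
        img, timeStep=0.0625, numberOfIterations=5)
    return img
\end{verbatim}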
Finally, we applied a whitening transformation and then scaled the voxel values between $0$ and $1$. Water and fat suppression were achieved using spectral pre-saturation with inversion recovery (SPIR), with coverage from the proximal to distal ends of the femur using $80$ slices in the foot-to-head direction and a field of view (FOV) of $440 \times 296 \times 400$ mm$^3$.\\ \noindent\textbf{Network settings and Training Procedure:} pix2pix was trained for 250 epochs on 2D slices with a learning rate of 0.0001. The generator and discriminator consist of a U-Net and a PatchGAN, respectively. The best models were selected on the validation portion of the whole data set. In the segmentation stage, we optimized the network with a cross-entropy loss using the ADAM optimizer and a learning rate of 0.0001. An early stopping criterion was used. We did not use any data augmentation techniques, as we did not observe any overfitting problem during training. We performed 90 ($18\times5$ fold) experiments in the segmentation stage and 30 ($6\times5$ fold) experiments in the generation stage with 5-fold cross validation ($70\%$ training, $10\%$ validation, and $20\%$ test). All experiments were performed on Nvidia Titan-XP GPUs with 12GB memory. The proposed approach was implemented in the PyTorch framework.\\ \noindent\textbf{Quantitative evaluations:} We report our GAN results with three different metrics: the Frechet Inception Distance (FID, lower is better), the Peak Signal to Noise Ratio (PSNR, higher is better) and the Structural Similarity Index Measure (SSIM, higher is better) (Table \ref{tab:psnr_fid_ssim}). We observed that MRI3 synthesized from real MRI1 gives the best PSNR, SSIM and FID, outperforming the other synthesized MRI contrasts. However, the results are not hugely different from each other; this is likely due to the one-to-one mapping nature between MRI contrasts, whose learned maps are highly informative about each other. This is a strength that can be attributed to the power of GANs. We also summarize the segmentation results with the evaluation metrics of DICE, accuracy, sensitivity, and specificity (Table \ref{tab:test_on_single_mri} and Table \ref{tab:test_on_multiple_mri}). We analyzed the muscle, fat, bone and bone marrow segmentation results on single synthesized and real MRI contrasts (Table \ref{tab:test_on_single_mri}). Muscle and fat tissue show higher DICE scores when real MRI3 (water-suppressed) is used, while bone and bone marrow show higher DICE scores when R MRI2 (water-fat) is used. Surprisingly, synthesized MRI3 ($\leftarrow$ R MRI2), synthesized MRI3 ($\leftarrow$ R MRI1), synthesized MRI2 ($\leftarrow$ R MRI1), and synthesized MRI2 ($\leftarrow$ R MRI3) show similar results for the muscle, fat, bone and bone marrow tissues, respectively (Table \ref{tab:test_on_single_mri}), thanks to the strongly learned mappings between the contrasts. For multi-contrast input (Table \ref{tab:test_on_multiple_mri}), although segmentation trained on true MRIs shows higher DICE scores than the other strategies, images synthesized from real MRI2 and MRI3 yield DICE scores very close to the best results. Segmentation based on synthesized images showed trends similar to segmentation based on true images, even when the tissue of interest is small, such as the bone marrow, indicating that a high-quality synthesis was achieved.\\ \noindent\textbf{Qualitative evaluations:} Qualitative results are shown in Figure \ref{fig:qualitative} for the muscle, fat, bone and bone marrow tissues.
We compare some of the best synthesized MRI results (Table \ref{tab:test_on_single_mri} and Table \ref{tab:test_on_multiple_mri}) to the original MRIs. We observed that the synthesized multi-contrast MRIs were of high quality, such that even small details of soft tissues were preserved, as corroborated by the segmentation results. This observation is promising, as lack of data, missing contrasts and other data problems can be addressed with synthetic images as an intermediate step while the diagnostic path still uses true/real images, thus avoiding the concern of using synthetic images for diagnostic purposes. \begin{figure}[!ht] \centering \includegraphics[width = 0.47\textwidth]{qualitative_results.png} \caption{Comparison of fat, muscle, bone and bone marrow tissue segmentation results for a given MRI slice. (a) Original MRI, (b) F MRI1($\leftarrow$ R MRI2), F MRI2 ($\leftarrow$ R MRI3), F MRI3 ($\leftarrow$ R MRI2), (c) F MRI1($\leftarrow$ R MRI3), F MRI2 ($\leftarrow$ R MRI3), F MRI3 ($\leftarrow$ R MRI2), (d) R MRI1, R MRI2, R MRI3, (e) F MRI2 ($\leftarrow$ R MRI1), (f) F MRI2 ($\leftarrow$ R MRI3).} \label{fig:qualitative} \end{figure} \section{CONCLUSIONS} \label{sec:majhead} In this work, we conducted extensive experiments to explore the use of synthetic MRI scans for training a segmentation engine for multi-tissue analysis. We showed that an accurate segmentation model can be built solely on synthetic scans or on mixed (real + synthetic) images, with a precision level close to that of a segmentation model trained completely on true images. In addition, we have demonstrated that a multi-modality combination of scans provides better segmentation results, even when some of the modalities are synthetically generated. \\ \noindent\textbf{Acknowledgments} Our study is exempt from human subjects review as the data is publicly available and fully anonymized. This study is approved under the existing IRB at the Baltimore Longitudinal Study of Aging (BLSA) \cite{ferrucci2008baltimore}. This study is partially supported by NIH grants R01-CA246704-01 and R01-CA240639-01. We thank Ege University for letting us use their servers for running our experiments. \addtolength{\textheight}{-12cm} \bibliographystyle{IEEEbib}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Polynomial multiplication (i.e., $c(x)=a(x)\times b(x)$) is a fundamental building block for cryptographic hardware and is often identified as the bottleneck in implementing efficient circuits. The most widely deployed public key crypto systems (e.g., RSA and ECC) need polynomial multiplications \cite{RSA_ECC}. Many of the post-quantum cryptography (PQC) algorithms (e.g., NTRU-Prime, FrodoKEM, Saber, etc.) also require large integer multipliers for multiplying polynomial coefficients, utilized to perform key encapsulations and digital signatures \cite{NIST_Competition}. Another application is fully homomorphic encryption, a specific branch of cryptography that requires large integer multipliers to enable multi-party, secure-by-construction computations on the cloud \cite{cloud-computations}. There is a clear demand for large integer multipliers that perform multiplication over polynomial coefficients. \textbf{To our knowledge, today, no widely available repository of open source multiplier architectures exists}. This is the gap that our library addresses. There are several multiplication methods employed to perform multiplication over polynomial coefficients, including the schoolbook method (SBM), Karatsuba, Toom-Cook, Montgomery, and the number theoretic transform (NTT). A quick scan of the PQC algorithms involved in the NIST standardization effort \cite{NIST_Competition_Round_2} reveals that many reference implementations suggest the use of these multipliers: SBM is suggested by the authors of NTRU-Prime and FrodoKEM, the Karatsuba and Toom-Cook methods are used in Saber and NTRU, a combination of NTT and SBM is suggested for CRYSTALS-Kyber, and SBM and Montgomery are considered in Falcon. Examples of recent works employing non-digitized and digitized polynomial multiplication methods are given in \cite{Rafferty_2017,2_way_Karatsuba,Montgomery,NTT,ASIC_65nm_2014,Interleaved_modular_reduction_algorithm,homomorphic_encryption} and \cite{Xie_2018,Morales_Sandoval,Pan}, respectively. In \cite{Rafferty_2017}, an architectural evaluation of different multiplication methods (SBM, Comba, Karatsuba, Toom-Cook, Montgomery, and NTT) is performed for different polynomial sizes over a Virtex-7 FPGA platform. An improved Montgomery polynomial multiplier is presented in \cite{Montgomery} for a polynomial size of 1024 bits over a Virtex-6 FPGA. A run-time configurable and highly parallelized NTT-based polynomial multiplication architecture over Virtex-7 is discussed in \cite{NTT}. A systolic digit-serial multiplier wrapper on an Intel Altera Stratix-V FPGA is described in \cite{Xie_2018}, where digit sizes of 22 and 30 bits are considered for operand lengths of 233 and 409 bits, respectively. A digit-serial Montgomery-based wrapper is provided in \cite{Morales_Sandoval}, where a digit size of 64 is selected for the operand length of 571 bits on a Virtex-6. Similarly, a digit-serial modular multiplication based wrapper on Virtex-7 is shown in \cite{Pan}, where digit sizes of 2, 4 and 8 bits are preferred for an operand length of 2048 bits. ASIC implementations, while less frequent, also explore the polynomial multiplication design space. In \cite{2_way_Karatsuba}, different polynomial multipliers with different operand lengths are considered for area and power evaluations on a 65nm technology.
On a similar 65nm commercial node, a bit-level parallel-in-parallel-out (BL-PIPO) multiplier architecture and a modified interleaved modular reduction multiplication with a bit-serial sequential architecture are proposed in \cite{Interleaved_modular_reduction_algorithm, ASIC_65nm_2014}, respectively, for an operand length of 409 bits. For fully homomorphic encryption schemes, an optimized multi-million bit multiplier based on the Sch\"onhage-Strassen multiplication algorithm is described in \cite{homomorphic_encryption} on a 60nm technology node. Although there are several reported implementations of different multiplication methods \cite{Rafferty_2017,2_way_Karatsuba,Montgomery,NTT,ASIC_65nm_2014,Interleaved_modular_reduction_algorithm,homomorphic_encryption,Xie_2018,Morales_Sandoval,Pan}, these implementations tend to be specifically tailored for a given operand size and for a given target (e.g., high speed or low area). The difficulty is that this trade-off space is rather complicated to navigate without automation. Consequently, a common approach to assess (several) multiplication methods is required. To tackle the aforementioned limitations of the available literature and to address the need for automation, we have developed an open-source library of multipliers which we name TTech-LIB. Our library is supported by a C++ generator utility that produces -- following user specifications -- hardware descriptions of four selected multiplication methods: (a) SBM, (b) 2-way Karatsuba, (c) 3-way Toom-Cook, and (d) 4-way Toom-Cook. For the selected multiplication methods, our library also offers a digitized solution: a single parameterized digit-serial wrapper to multiply polynomial coefficients. By default, the wrapper instantiates a single SBM multiplier, but it can be replaced by any other multiplier method since the interfaces are identical between all methods. Finally, FPGA and ASIC designers can select their own multiplication method, the size of the input operands, and the digit size (only for the digitized wrapper, naturally). Moreover, for ASIC designers, there is the possibility to generate synthesis scripts for one of two synthesis tools, either Cadence Genus or Synopsys Design Compiler (DC). The user is not restricted to generating a single architecture at a time; the generator will produce multiple solutions if asked to do so, which will appear as separate Verilog (.v) files. The remainder of this work is structured as follows: The mathematical background for the selected multiplication methods is described in Section \ref{mathematical_background}. The generator architecture and the structure of the proposed TTech-LIB are provided in Section \ref{TTech_LIB}. Section \ref{sec:results} shows the experimental results and provides comparisons of the non-digitized and digitized flavours of the multiplication methods. Finally, Section \ref{conclusion} concludes the paper. \section{Mathematical background} \label{mathematical_background} In this section, we present the mathematical formulations behind polynomial multiplication. We assume the inputs are two $m$-bit polynomials and the output is a polynomial of size $2m-1$. \subsection{Non-digitized multiplication} The SBM is the traditional way to multiply two input polynomials $a(x)\times b(x)$, as shown in Eq. \ref{eq:eq_sbm}. To produce the resultant polynomial $c(x)$ by performing bit-by-bit operations, it requires $2\times m$ clock cycles, $m^2$ multiplications, and $(m-1)^2$ additions.
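As a software-level illustration of the schoolbook method of Eq. \ref{eq:eq_sbm}, the following C++ fragment multiplies two coefficient vectors. It is a minimal sketch for exposition only: it is \emph{not} taken from TTech-LIB, all identifiers are ours, and carry/reduction handling is deliberately omitted.
\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <vector>

// Schoolbook multiplication of two (non-empty) coefficient vectors,
// least-significant coefficient first. Two m-term inputs yield a
// (2m-1)-term product by accumulating the m*m partial products
// a[i]*b[j] at position i+j, i.e., the x^(i+j) term of c(x).
std::vector<std::uint64_t> sbm(const std::vector<std::uint64_t>& a,
                               const std::vector<std::uint64_t>& b) {
    std::vector<std::uint64_t> c(a.size() + b.size() - 1, 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            c[i + j] += a[i] * b[j];
    return c;
}
\end{verbatim}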
\begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:eq_sbm} c(x)=\sum_{i=0}^{m-1}\sum_{j=0}^{m-1}a\textsubscript{i}b\textsubscript{j}x^{i+j} \end{align} \end{ceqn} \end{figure} Other approaches such as the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook are more time efficient since they split the polynomials into $n$ equal parts, as shown in Eq. \ref{eq:eq_2_way_KM}. The value of $n$ for the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook multipliers is 2, 3, and 4, respectively, as their names imply. In Eq. \ref{eq:eq_2_way_KM}, the variable $k$ determines the index of the split input polynomial. For example, for a 4-way Toom-Cook multiplier, the values of $k$ are \{3, 2, 1, 0\}, meaning the input polynomial $a(x)$ becomes $a_3(x)$, $a_2(x)$, $a_1(x)$, and $a_0(x)$. \begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:eq_2_way_KM} \resizebox{0.445\textwidth}{!}{$c(x)=\underbrace{\left(\sum_{i={\frac{k \times m}{n}}}^{m-1} a\textsubscript{k}(x) + \ldots + \sum_{i=0}^{\frac{k \times m}{n}-1} a\textsubscript{0}(x)\right)}_\text{$split\, polynomial\, a(x)$} \times \underbrace{\left(\sum_{i={\frac{k \times m}{n}}}^{m-1} b\textsubscript{k}(x) + \ldots + \sum_{i=0}^{\frac{k \times m}{n}-1} b\textsubscript{0}(x)\right)}_\text{$split\, polynomial\, b(x)$}$} \end{align} \end{ceqn} \end{figure} In Eq. \ref{eq:eq_2_way_KM_1}, the expanded version of Eq. \ref{eq:eq_2_way_KM} is presented for the case of a 2-way split of the input polynomials. The straightforward computation would require four multiplications: one for the inner product yielding the polynomial $c_1(x)$, two for the computation of $c_2(x)$, and finally one for the computation of $c_0(x)$. However, $c_2(x)$ can alternatively be calculated with only one multiplication, as shown in Eq. \ref{eq:eq_2_way_KM_2}. This is the Karatsuba observation. To generate the final resultant polynomial $c(x)$, an addition of the inner products is required, as presented in Eq. \ref{eq:eq_2_way_KM_3}. Similarly, when considering the 3-way and 4-way Toom-Cook multipliers, the expanded versions of Eq. \ref{eq:eq_2_way_KM} produce nine and sixteen multiplications, respectively. These multiplications are then reduced to five and seven, respectively, using a process similar to the 2-way Karatsuba. We omit the equations for the Toom-Cook multipliers for the sake of brevity. A software-level sketch of the 2-way Karatsuba step is given below. \begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:eq_2_way_KM_1} \resizebox{0.40\textwidth}{!}{$c(x)= \underbrace{a\textsubscript{1}(x) b\textsubscript{1}(x)}_{\color{black}\text{$c_1(x)$}} + \underbrace{a\textsubscript{1}(x) b\textsubscript{0}(x) + a\textsubscript{0}(x) b\textsubscript{1}(x)}_{\color{black}\text{$c_2(x)$}} + \underbrace{a\textsubscript{0}(x) b\textsubscript{0}(x)}_{\color{black}\text{$c_0(x)$}}$} \end{align} \end{ceqn} \end{figure} \begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:eq_2_way_KM_2} \resizebox{0.42\textwidth}{!}{$c\textsubscript{2}(x) = (a\textsubscript{1}(x) + a\textsubscript{0}(x)) \times (b\textsubscript{1}(x) + b\textsubscript{0}(x)) - c\textsubscript{1}(x) - c\textsubscript{0}(x)$} \end{align} \end{ceqn} \end{figure} \begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:eq_2_way_KM_3} \resizebox{0.22\textwidth}{!}{$c(x)= c\textsubscript{0}(x) + c\textsubscript{1}(x) + c\textsubscript{2}(x)$} \end{align} \end{ceqn} \end{figure}
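To make the Karatsuba saving concrete, the sketch below (again illustrative only; it reuses the \texttt{sbm} routine above, and the identifiers are ours rather than TTech-LIB's) computes the three half-size products of Eqs. \ref{eq:eq_2_way_KM_1}--\ref{eq:eq_2_way_KM_2} and recombines them at their positional offsets $0$, $m/2$, and $m$, which is implicit in the addition of inner products in Eq. \ref{eq:eq_2_way_KM_3}.
\begin{verbatim}
// Illustrative 2-way Karatsuba step: three half-size products instead
// of four. Assumes a and b have the same even length m; coefficient
// overflow is ignored for clarity.
std::vector<std::uint64_t> karatsuba2(const std::vector<std::uint64_t>& a,
                                      const std::vector<std::uint64_t>& b) {
    const std::size_t m = a.size(), h = m / 2;
    std::vector<std::uint64_t> a0(a.begin(), a.begin() + h);
    std::vector<std::uint64_t> a1(a.begin() + h, a.end());
    std::vector<std::uint64_t> b0(b.begin(), b.begin() + h);
    std::vector<std::uint64_t> b1(b.begin() + h, b.end());

    std::vector<std::uint64_t> c0 = sbm(a0, b0);   // a0(x) * b0(x)
    std::vector<std::uint64_t> c1 = sbm(a1, b1);   // a1(x) * b1(x)
    std::vector<std::uint64_t> s(h), t(h);
    for (std::size_t i = 0; i < h; ++i) {
        s[i] = a0[i] + a1[i];
        t[i] = b0[i] + b1[i];
    }
    std::vector<std::uint64_t> c2 = sbm(s, t);     // (a0+a1)(x) * (b0+b1)(x)
    for (std::size_t i = 0; i < c2.size(); ++i)
        c2[i] -= c0[i] + c1[i];                    // Karatsuba observation

    std::vector<std::uint64_t> c(2 * m - 1, 0);    // final recombination
    for (std::size_t i = 0; i < c0.size(); ++i) c[i]         += c0[i];
    for (std::size_t i = 0; i < c2.size(); ++i) c[i + h]     += c2[i];
    for (std::size_t i = 0; i < c1.size(); ++i) c[i + 2 * h] += c1[i];
    return c;
}
\end{verbatim}
In hardware, the three calls to \texttt{sbm} correspond to the three parallel SBM instances discussed next.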
Now, let us assume that the polynomials involved in the multiplications above remain relatively large in size even after the split. Thus, SBM multipliers can be employed to resolve the partial products. For a 2-way Karatsuba multiplier of $m$-bit input polynomials, there will be 3 SBM multipliers, and each will take two polynomials of size $\frac{m}{2}$ as inputs. Each multiplier requires $\frac{m}{2}$ clock cycles to complete. If all multipliers operate in parallel, the overall computation also takes $\frac{m}{2}$ cycles. For 3-way and 4-way splits, the number of clock cycles is $\frac{m}{3}$ and $\frac{m}{4}$, respectively. Since our library is aimed at large polynomials, the 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook codes available in it actually implement the parallel SBM strategy discussed above. In fact, our non-digitized multipliers are \textbf{hybrid} multipliers. \subsection{Digitized multiplication} The digit-serial wrapper in TTech-LIB takes two $m$-bit polynomials $a(x)$ and $b(x)$ as an input and produces $c(x)$ as an output. Digits are created for polynomial $b(x)$ with a user-defined size as follows: $d=\frac{m}{n}$, where $d$ determines the total number of digits, $m$ denotes the size of the input polynomial $b(x)$, and $n$ is the size of each digit. Then, the multiplication of each created digit is performed serially with the input polynomial $a(x)$, while the final resultant polynomial $c(x)$ is produced using shift and add operations. The main difference here is that our digitized solution is serial, while the 2-, 3-, and 4-way multipliers are parallel. The required computational cost (in clock cycles) to perform one digit multiplication is $n$. Since there are $d$ digits, the overall computation takes $d\times n$ clock cycles. It is important to mention that users/designers can choose any multiplication method inside the described digit-serial wrapper as per their application requirements. We use the SBM multiplication method as the default. \section{How to access TTech-LIB} \label{TTech_LIB} The complete project files (written in C++) are freely available to everyone on our GitHub repository \cite{TTech_LIB}. A sample of pre-generated multipliers is also included in the repository. As shown in Fig. \ref{fig:figure_1}, the user settings can be customized by using a configuration file (\textit{config.xml}). The structure of the library is rather simple and includes five directories: (1) bin, (2) run, (3) src, (4) synth, and (5) vlog. After running the generator binary, the produced synthesis scripts are put in the synth directory, while the generated multipliers are put in the vlog folder. All generated multipliers have the same interface (i.e., the inputs are $clk$, $rst$, $a$, and $b$; the output is $c$). \begin{figure}[] \centering \footnotesize \includegraphics[width=2.4in,height=2.6in]{./Figures/figure1.pdf}\caption{Generator architecture and file structure of TTech-LIB} \centering \label{fig:figure_1} \end{figure} \section{Experimental Results and Comparisons} \label{sec:results} \subsection{Implementation results and evaluations} The experimental results for the non-digitized and digitized polynomial multiplication methods over NIST-defined field lengths \cite{NIST_ECC_PARAMETERS} on a 65nm technology node using Cadence Genus are provided in Table \ref{tab:table_1} and Table \ref{tab:table_2}, respectively. Moreover, the implementation results for various digit sizes of the digitized SBM multiplication method over an Artix-7 FPGA device are given in Table \ref{tab:table_3}.
In tables \ref{tab:table_1}--\ref{tab:table_2}, clock frequency (\textit{MHz}), area (in $\mu m^2$), and power (\textit{mW}) values are achieved after synthesis using Cadence Genus. Similarly, in Table \ref{tab:table_3}, clock frequency (\textit{MHz}), look-up-tables (LUTs), utilized registers (Regs) and power (\textit{mW}) values are achieved after synthesis using Vivado design tool. Finally, latency for both digitized and non-digitized multipliers (in tables \ref{tab:table_1}--\ref{tab:table_3}) is calculated using Eq. \ref{eq:latency}: \begin{figure}[ht] \begin{ceqn} \begin{align}\label{eq:latency} \resizebox{0.38\textwidth}{!}{$latency\,(\mu s) = \underbrace{\underbrace{\frac{clock\,cycles}{frequency\,(MHz)}}_{\color{blue}\text{non-digitized}} \times \,total\,digits}_{\color{blue}\text{digitized}}$} \end{align} \end{ceqn} \end{figure} \begin{table}[ht] \begin{threeparttable} \caption{Results of non-digitized multipliers for NIST recommended Elliptic curves over prime and binary fields} \label{tab:table_1} \begin{tabular}{|p{2.5cm}|p{0.8cm}|p{0.6cm}|p{0.7cm}|p{1.0cm}|p{0.7cm}|} \hline \textit{\textbf{Multiplier}} & \textit{\textbf{m}} & \textit{\textbf{Freq (MHz)}} & \textit{\textbf{latency ($\mu s$)}} & \textit{\textbf{Area ($\mu m^2$)}} & \textit{\textbf{Power (mW)}}\\\hline \multirow{10}{*}{\textbf{Schoolbook}} & P-192 & 500 & {0.382} & 32011.2 & 13.8 \\ \cline{2-6} {} & {P-224} & 486 & {0.458} & 38048.0 & 17.1 \\ \cline{2-6} {} & {P-256} & 480 & {0.531} & 48726.7 & 16.9 \\ \cline{2-6} {} & {P-384} & 444 & {0.862} & 67861.8 & 27.1 \\ \cline{2-6} {} & {P-521} & 434 & {1.198} & 100242.0 & 28.0 \\ \cline{2-6} {} & {B-163} & 500 & {0.324} & 29341.4 & 12.9 \\ \cline{2-6} {} & {B-233} & 476 & {0.487} & 39321.4 & 16.0 \\ \cline{2-6} {} & {B-283} & 454 & {0.621} & 50603.4 & 17.8 \\ \cline{2-6} {} & {B-409} & 442 & {0.923} & 73587.6 & 28.2 \\ \cline{2-6} {} & {B-571} & 413 & {1.380} & 89993.2 & 29.1 \\ \cline{1-6} \multirow{10}{*}{\textbf{2-way Karatsuba}} & P-192 & 473 & {0.202} & 41379.5 & 8.2 \\ \cline{2-6} {} & {P-224} & 469 & {0.238} & 49514.4 & 9.6 \\ \cline{2-6} {} & {P-256} & 467 & {0.274} & 59532.1 & 11.8 \\ \cline{2-6} {} & {P-384} & 420 & {0.457} & 74844.0 & 15.2 \\ \cline{2-6} {} & {P-521} & 408 & {0.639} & 105059.5 & 20.8 \\ \cline{2-6} {} & {B-163} & 487 & {0.168} & 35060.0 & 7.7 \\ \cline{2-6} {} & {B-233} & 478 & {0.244} & 52328.2 & 10.0 \\ \cline{2-6} {} & {B-283} & 455 & {0.312} & 64743.8 & 12.6 \\ \cline{2-6} {} & {B-409} & 432 & {0.474} & 84778.6 & 17.2 \\ \cline{2-6} {} & {B-571} & 418 & {0.684} & 120374.3 & 21.7 \\ \cline{1-6} \multirow{10}{*}{\textbf{3-way Toom-Cook}} & P-192 & 909 & {0.070} & 96498.4 & 44.4 \\ \cline{2-6} {} & {P-224} & 869 & {0.086} & 102470.8 & 46.9 \\ \cline{2-6} {} & {P-256} & 826 & {0.104} & 104820.9 & 49.4 \\ \cline{2-6} {} & {P-384} & 689 & {0.185} & 139375.1 & 57.2 \\ \cline{2-6} {} & {P-521} & 680 & {0.255} & 201341.2 & 80.0 \\ \cline{2-6} {} & {B-163} & 934 & {0.058} & 75085.6 & 36.0 \\ \cline{2-6} {} & {B-233} & 877 & {0.088} & 106357.7 & 49.5 \\ \cline{2-6} {} & {B-283} & 800 & {0.118} & 115188.1 & 54.5 \\ \cline{2-6} {} & {B-409} & 775 & {0.176} & 170509.0 & 78.4 \\ \cline{2-6} {} & {B-571} & 766 & {0.249} & 256604.4 & 115.9 \\ \cline{1-6} \multirow{10}{*}{\textbf{4-way Toom-Cook}} & P-192 & 900 & {0.053} & 105679.1 & 56.9 \\ \cline{2-6} {} & {P-224} & 847 & {0.066} & 125124.1 & 62.0 \\ \cline{2-6} {} & {P-256} & 826 & {0.077} & 122298.1 & 63.6 \\ \cline{2-6} {} & {P-384} & 793 & {0.121} & 241893.7 & 98.2 \\ \cline{2-6} {} & {P-521} & 767 
& {0.170} & 332534.9 & 139.4 \\ \cline{2-6} {} & {B-163} & 925 & {0.044} & 94834.1 & 49.9 \\ \cline{2-6} {} & {B-233} & 892 & {0.066} & 132080.0 & 64.2 \\ \cline{2-6} {} & {B-283} & 826 & {0.085} & 145709.3 & 70.6 \\ \cline{2-6} {} & {B-409} & 769 & {0.133} & 236989.4 & 99.0 \\ \cline{2-6} {} & {B-571} & 746 & {0.191} & 340750.8 & 148.2 \\ \cline{1-6} \end{tabular} \begin{tablenotes} \small \item \textit{m} determines the field size or length of the inputs (in bits), where `P' stands for Prime and `B' stands for Binary \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[ht] \caption{Results of digitized multipliers for NIST recommended Elliptic curves over prime and binary fields} \label{tab:table_2} {\begin{tabular}{|p{0.2cm}|p{0.8cm}|p{0.9cm}|p{0.7cm}|p{0.7cm}|p{1.0cm}|p{0.7cm}|}\hline \textit{\textbf{m}} & \textit{\textbf{digit size (n)}} & \textit{\textbf{total digits (d)}} & \textit{\textbf{Freq (MHz)}} & \textit{\textbf{latency ($\mu s$)}} & \textit{\textbf{Area ($\mu m^2$)}} & \textit{\textbf{Power (mW)}}\\\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{521$\times$521}} & 32 & 17 & 505 & 1.07 & 106956.7 & 30.9 \\ \cline{2-7} {} & 41 & 13 & 377 & 1.41 & 101538.7 & 26.1 \\ \cline{2-7} {} & 53 & 10 & 340 & 1.55 & 94752.7 & 20.0 \\ \cline{2-7} {} & 81 & 7 & 336 & 1.68 & 84321.0 & 15.4 \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{571$\times$571}} & 32 & 18 & 487 & 1.18 & 114999.8 & 36.7 \\ \cline{2-7} {} & 41 & 14 & 369 & 1.55 & 116010.3 & 28.9 \\ \cline{2-7} {} & 53 & 11 & 312 & 1.86 & 91393.9 & 18.1 \\ \cline{2-7} {} & 81 & 8 & 291 & 2.22 & 76146.8 & 14.1 \\ \hline \multirow{10}{*}{\rotatebox[origin=c]{90}{1024$\times$1024}} & 2 & 512 & 363 & 2.82 & 196131.2 & 38.0 \\ \cline{2-7} {} & 4 & 256 & 357 & 2.86 & 178581.2 & 35.1 \\ \cline{2-7} {} & 8 & 128 & 353 & 2.90 & 167536.4 & 31.5 \\ \cline{2-7} {} & 16 & 64 & 343 & 2.98 & 166533.1 & 30.2 \\ \cline{2-7} {} & 32 & 32 & 313 & 3.27 & 148489.5 & 23.0 \\ \cline{2-7} {} & 64 & 16 & 285 & 3.59 & 122257.8 & 20.8 \\ \cline{2-7} {} & 128 & 8 & 268 & 3.82 & 123164.6 & 19.9 \\ \cline{2-7} {} & 256 & 4 & 263 & 3.89 & 129542.4 & 19.5 \\ \cline{2-7} {} & 512 & 2 & 261 & 3.92 & 136292.4 & 23.1 \\ \cline{2-7} {} & 1024 & 1 & 259 & 3.95 & 177834.2 & 24.1 \\ \hline \end{tabular}} \end{table} \begin{table}[ht] \caption{FPGA based results of digitized 1024$\times$1024 SBM multiplier for different digit sizes (Artix-7)} \label{tab:table_3} {\begin{tabular}{|p{0.2cm}|p{0.4cm}|p{0.65cm}|p{0.7cm}|p{0.7cm}|p{0.6cm}|p{0.5cm}|p{0.5cm}|p{0.6cm}|}\hline \textit{\textbf{m}} & \textit{\textbf{digit size (n)}} & \textit{\textbf{total digits (d)}} & \textit{\textbf{Freq (MHz)}} & \textit{\textbf{latency ($\mu s$)}} & \textit{\textbf{LUTs}} & \textit{\textbf{Regs}} & \textit{\textbf{Carry}} & \textit{\textbf{Power (mW)}}\\\hline \multirow{5}{*}{\rotatebox[origin=c]{90}{521$\times$521}} & 32 & 17 & 33.11 & 16.43 & 6369 & 1692 & \cellcolor{LightCyan!25} 408 & 184 \\ \cline{2-9} {} & 41 & 13 & 29.15 & 18.28 & 7995 & 1681 & 416 & 192 \\ \cline{2-9} {} & 53 & 10 & 28.32 & 22.72 & 8079 & 1732 & 417 & 191 \\ \cline{2-9} {} & 64 & 9 & 34.48 & 15.12 & 6095 & 1758 & 408 & 220 \\ \cline{2-9} {} & 81 & 8 & 30.30 & 21.38 & 8207 & 1795 & 415 & 247 \\ \cline{2-9} {} & 128 & 5 & 34.84 & 14.95 & 5964 & 1881 & 424 & 220 \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{571$\times$571}} & 32 & 17 & 30.12 & 18.06 & 6397 & 1847 & 447 & 194 \\ \cline{2-9} {} & 41 & 13 & 27.17 & 19.62 & 8750 & 1834 & 455 & 192 \\ \cline{2-9} {} & 53 & 10 & 26.04 & 20.35 & 9053 & 1880 & 449 
& 187 \\ \cline{2-9} {} & 81 & 8 & 28.01 & 23.13 & 8958 & 1951 & 452 & 226 \\ \hline \multirow{10}{*}{\rotatebox[origin=c]{90}{1024$\times$1024}} & 2 & 512 & 14.22 & 72.11 & 10993 & 3634 & 1085 & 173 \\ \cline{2-9} {} & 4 & 256 & 15.89 & 64.48 & 10824 & 3384 & 928 & 172 \\ \cline{2-9} {} & 8 & 128 & 16.86 & 60.66 & 11074 & 3261 & 849 & 180 \\ \cline{2-9} {} & 16 & 64 & 17.51 & 58.48 & 10634 & 3248 & 811 & 185 \\ \cline{2-9} {} & 32 & 32 & 17.89 & 57.28 & 11371 & 3267 & 791 & 190 \\ \cline{2-9} {} & 64 & 16 & 17.95 & 57.04 & 11947 & 3330 & 792 &195 \\ \cline{2-9} {} & 128 & 8 & 18.57 & 55.14 & 12207 & 3450 & 800 & 221 \\ \cline{2-9} {} & 256 & 4 & 18.93 & 54.09 & 11367 & 3740 & 832 & 247 \\ \cline{2-9} {} & 512 & 2 & 19.12 & 53.55 & 10385 & 4295 & 896 & 226\\ \cline{2-9} {} & 1024 & 1 & 18.46 & 55.50 & 11462 & 5303 & 1024 & 235 \\ \hline \end{tabular}} \end{table} \subsubsection{ASIC non-digitized multipliers} \label{subsec:comparison_non_digitized} Our results consider NIST-defined prime (P-192 to P-521) and binary (B-163 to B-571) fields utilized in ECC-based public key cryptosystems. As the operand sizes increase, the corresponding clock frequency decreases, as shown in column three of Table \ref{tab:table_1}. The decrease in frequency leads to an increase in latency, as presented in column four of Table \ref{tab:table_1}. In addition to latency, the corresponding area and power values also increase with the increase in size of multiplier operands (see columns five and six of Table \ref{tab:table_1}). It is evident from these results that the SBM multiplier requires less hardware resources than 2-way Karatsuba, 3-way Toom-Cook, and 4-way Toom-Cook multipliers. Moreover, the 2-way Karatsuba achieves lower power values as compared to other selected multipliers. This is explained by the datapath and the composition of the different multipliers. SBM requires $2m + 2m$ bit adder, 2-way Karatsuba requires $m + m + m$ bit adder/subtracter for generating final polynomial, 3-way Toom-Cook requires fifteen $\frac{m}{3}$ bit incrementers, and 4-way Toom-Cook requires sixteen $\frac{m}{4}$ bit incrementers. There is always a trade-off between various design parameters such as area, latency, power etc. Consequently, the SBM multiplier is more useful for area constrained applications. For better latency, other multipliers are more convenient. \subsubsection{ASIC digitized multipliers} \label{subsec:comparison_sbm_digitized} For digitizing, we have selected 521, 571, and 1024 as the lengths of the input operands, as shown in column one of Table \ref{tab:table_2}. Moreover, for input lengths of 521 and 571, digit sizes of 32, 41, 53 and 81 have been adopted. For an input length of 1024 bits, digit sizes are given in powers of two, for $n$ = $2, \ldots, 1024$. Digit size $n$ and total digits $d$ are listed in columns two and three of Table \ref{tab:table_2}, respectively. It is noteworthy that the increase in digit size results in a decrease in clock frequency, as presented in column four of Table \ref{tab:table_2}. Moreover, it also translates to an increase in latency, as shown in column five of Table \ref{tab:table_2}. For the $1024\times1024$ multiplier, the obtained values for area and power show behavior similar to a parabolic curve with respect to digit size, as given in the last two columns of Table \ref{tab:table_2}. This is intuitive, as in the extreme cases of too small or too large digits, the wrapper logic becomes inefficient and may even become the bottleneck for timing. 
In summary, for an application that requires a high clock frequency, shorter digits are preferred; however, this brings a significant cost in area and power. \subsubsection{FPGA digitized multipliers} \label{subsec:comparison_sbm_digitized_fpga} As in the ASIC demonstrations (presented in Sec. \ref{subsec:comparison_sbm_digitized}), we have chosen the same input operand lengths (521, 571, and 1024) for the evaluation on an Artix-7 FPGA platform, as shown in column one of Table \ref{tab:table_3}. We have used the Xilinx Vivado Design Suite for the FPGA-based experiments. Furthermore, for input lengths of 521 and 571, digit sizes of 32, 41, 53, and 81 have been considered. For an input length of 1024 bits, digit sizes are adopted in powers of two, for $n$ = $2, \ldots, 1024$. The digit size $n$ and the total number of digits $d$ are listed in columns two and three of Table \ref{tab:table_3}, respectively. The synthesis results (clock frequency, latency, area in terms of LUTs and Regs, and power) achieved for the FPGA differ substantially from the ASIC values, as the implementation platforms are quite different. It is important to note that the frequency of the multiplier architecture increases with the increase in digit size (shown in column four of Table \ref{tab:table_3}). This trend continues until a saturation point is reached (i.e., the best possible performance in terms of clock frequency with respect to $n$); beyond the saturation point, the clock frequency decreases. The saturation point may occur at any digit size (in this work and for this experiment, it occurs at $n = 512$), and it also varies with the operand size of the multiplier, as given in Table \ref{tab:table_3}. For the other reported parameters, i.e., latency, LUTs, and power, no such saturation point can be identified, as their behavior is non-linear (see columns five, six, and nine of Table \ref{tab:table_3}). It is noteworthy that we have considered the worst-case scenario by excluding the DSP (Digital Signal Processing) blocks during synthesis. The performance of the multiplier architectures would be higher with the conventional synthesis flow, which includes DSPs. \subsubsection{Figure-of-Merit (FoM) for digitized SBM multiplier} \label{subsec:fom} A FoM is defined to perform a comparison while taking into account different design characteristics at the same time. A FoM to evaluate the latency and area parameters for both ASIC and FPGA platforms is defined in Eq. \ref{eq:latency_times_area}; the higher the FoM values, the better. Similarly, a ratio for the latency and power characteristics is calculated using Eq. \ref{eq:latency_times_power}. \begin{equation}\label{eq:latency_times_area} FoM = \frac{1}{latency\,(\mu s) \times area} \end{equation} \begin{equation}\label{eq:latency_times_power} FoM = \frac{1}{latency\,(\mu s) \times power\,(mW)} \end{equation} The calculated values of the defined FoMs for ASIC are given in figures \ref{fig:figure_2} and \ref{fig:figure_3}, where various digit sizes were considered for a $1024\times1024$ multiplier.
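As a spot check using the values of Table \ref{tab:table_2}: for $n=64$, the area--latency FoM of Eq. \ref{eq:latency_times_area} evaluates to $1/(3.59 \times 122257.8) \approx 2.3\times 10^{-6}$, which is indeed the largest value among the listed digit sizes of the $1024\times1024$ multiplier.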
\begin{figure}[ht] \centering \footnotesize \includegraphics[width=2.75in]{./Figures/figure2.pdf} \caption{Area and latency FoM for various digit sizes of a $1024\times 1024$ multiplier} \centering \label{fig:figure_2} \end{figure} \begin{figure}[ht] \centering \footnotesize \includegraphics[width=2.75in]{./Figures/figure3.pdf} \caption{Power and latency FoM for various digit sizes of a $1024\times 1024$ multiplier} \centering \label{fig:figure_3} \end{figure} For both FoMs (shown in figures \ref{fig:figure_2} and \ref{fig:figure_3}), it becomes clear that the extreme cases lead to suboptimal results. For the studied 1024 $\times$ 1024 multiplier, the variant with $n=64$ and $d=16$ presents an optimal solution. Neighboring digit sizes, such as $n=32$ and $n=128$, also give solutions very close to the optimum. As for the ASICs, the calculated values of the FoM defined in Eq. \ref{eq:latency_times_area} for the FPGA are given in Fig. \ref{fig:figure_4}, where various digit sizes were considered for a 1024$\times$1024 multiplier. To calculate FPGA area utilization, the slice flip-flops, LUTs, and carry units are the basic building blocks. Therefore, the FoM in Eq. \ref{eq:latency_times_area} can be calculated by employing different metrics of interest (e.g., slices, LUTs, registers, and carry blocks). Note that we have used FPGA slices as the area metric in Eq. \ref{eq:latency_times_area}. Fig. \ref{fig:figure_4} reveals that the FoM value for $n=512$ and $d=2$ results in an optimal solution. \begin{figure}[] \centering \footnotesize \includegraphics[width=2.75in]{./Figures/figure4.pdf} \caption{Slices and latency FoM for various digit sizes of a $1024\times 1024$ multiplier} \centering \label{fig:figure_4} \end{figure} The combined relation between frequency, latency, and power for different values of $n$ is illustrated in Fig. \ref{fig:figure_6}: as $n$ increases, the latency decreases and the frequency increases. This trend continues until the saturation point occurs (at $n=512$). \begin{figure}[t] \centering \footnotesize \includegraphics[width=2.75in]{./Figures/figure6.pdf} \caption{Frequency, latency and power analysis for various digit sizes of a $1024\times 1024$ multiplier} \centering \label{fig:figure_6} \end{figure} \subsection{Comparison to the state of the art} \label{subsec:comparisons} To perform a fair comparison with existing state-of-the-art modular multiplier architectures, we have used similar operand lengths, digit sizes, and implementation technologies (for FPGA and ASIC) as used in the corresponding solutions, shown in Table \ref{tab:table_4}. In the state-of-the-art solutions, multiplication results are given for different operand lengths; we compare our results only against the larger operands. Moreover, we use the symbol `N/A' in Table \ref{tab:table_4} where the values for design parameters (\textit{Freq}, \textit{latency} and \textit{area}) are not given.
\begin{table}[ht] \begin{threeparttable} \caption{Area and latency comparisons of non-digitized and digitized multipliers with state of the art} {\begin{tabular}{|p{0.38cm}|p{1.68cm}|p{0.7cm}|p{0.4cm}|p{0.6cm}|p{0.7cm}|p{1.57cm}|}\hline \textit{\textbf{Ref}} & \textit{\textbf{Multiplier}} & \textit{\textbf{Device}} & \textit{\textbf{m}} & \textit{\textbf{Freq (MHz)}} & \textit{\textbf{latency ($\mu s$)}} & \textit{\textbf{Area ($\mu m^2$)/LUTs}}\\\hline \multirow{3}{*}{\cite{Rafferty_2017}} & 2-way KM & V7 & {128} & 104.3 & 0.61 & 3499 \\ \cline{2-7} {} & {2-way KM} & {V7} & {256} & 74.5 & 1.71 & 7452 \\ \cline{2-7} {} & {2-way KM} & {V7} & {512} & 51.6 & 4.96 & 20474 \\ \hline \multirow{1}{*}{\cite{ASIC_65nm_2014}} & BL-PIPO & 65nm & 163 & {N/A} & {N/A} & {5328 GE} \\\hline \multirow{1}{*}{\cite{Morales_Sandoval}} & DSM (ds=64) & V6 & 571 & {258.5} & {0.03} & {10983} \\\hline \multirow{3}{*}{\cite{Pan}} & DSMM (ds=2) & V7 & 2048 & {N/A} & {N/A} & {18067} \\\cline{2-7} {} & DSMM (ds=4) & V7 & 2048 & {N/A} & {N/A} & {33734} \\\cline{2-7} {} & DSMM (ds=8) & V7 & 2048 & {N/A} & {N/A} & {62023} \\\hline \multirow{8}{*}{\textbf{TW}} & SBM & 65nm & 163 & {N/A} & {N/A} & {11727 GE} \\\cline{2-7} {} & {2-way KM} & {V7} & {128} & 167.4 & 0.38 & 2110 \\ \cline{2-7} {} & {2-way KM} & {V7} & {256} & 119.9 & 1.06 & 4318 \\ \cline{2-7} {} & {2-way KM} & {V7} & {512} & 63.8 & 4.01 & 9582 \\ \cline{2-7} {} & SBM (ds=2) & V7 & 2048 & {15.03} & {69760} & {25559} \\\cline{2-7} {} & SBM (ds=4) & V7 & 2048 & {16.6} & {15790} & {22040} \\\cline{2-7} {} & SBM (ds=8) & V7 & 2048 & {17.4} & {3760} & {23315} \\\cline{2-7} {} & SBM (ds=64) & V6 & 571 & {46.4} & {1.74} & {6181} \\\cline{1-7} \end{tabular}} \label{tab:table_4} \begin{tablenotes} \small \item \textbf{V7:} Xilinx Virtex-7, \textbf{V6:} Xilinx Virtex-6, \textbf{ds:} digit size, \textbf{TW:} this work, \textbf{DSM:} Digit Serial Montgomery multiplier based wrapper, \textbf{BL-PIPO:} Bit level parallel in parallel out multiplier using SBM multiplication method, \textbf{GE:} gate equivalents \end{tablenotes} \end{threeparttable} \end{table} Considering only the non-digitized multipliers for comparison, the 2-way Karatsuba multiplier of \cite{Rafferty_2017} over a Virtex-7 FPGA for operand sizes of 128, 256, and 512 bits presents 38\%, 39\%, and 20\% higher latency when compared to the 2-way Karatsuba multiplier generated by TTech-LIB, as shown in Table \ref{tab:table_4}. Moreover, the generated multiplier utilizes fewer hardware resources in terms of LUTs (see column seven in Table \ref{tab:table_4}) as compared to the resources (LUTs) utilized in \cite{Rafferty_2017}. On the 65nm node, the BL-PIPO multiplier of \cite{ASIC_65nm_2014} utilizes 55\% fewer hardware resources in terms of gate count as compared to our SBM multiplier generated by TTech-LIB. When the digitized flavor of polynomial multiplication is considered for comparison over different digit sizes, the digit-serial Montgomery multiplier based wrapper of \cite{Morales_Sandoval} achieves an 83\% higher clock frequency and requires 58\% less computational time as compared to our SBM-based digit-serial wrapper generated by TTech-LIB. On the other hand, the SBM-based digit-serial wrapper requires 56\% fewer hardware resources over a Virtex-6 FPGA. There is always a trade-off between performance and area parameters.
The digit-serial modular multiplication based wrapper of \cite{Pan} requires 14\% fewer FPGA LUTs for ds=2, while for the remaining digit sizes of 4 and 8 it utilizes 35\% and 63\% more FPGA LUTs as compared to the SBM wrapper generated by TTech-LIB. The frequency and latency parameters cannot be compared, as they are not given. The comparisons and discussion above show that the multipliers generated by TTech-LIB compare realistically and reasonably to state-of-the-art multiplier solutions \cite{Rafferty_2017,ASIC_65nm_2014,Morales_Sandoval,Pan}. Hence, not only can users explore various design parameters within our library, they can also benefit from implementations that are competitive with respect to the existing literature. \section{Conclusion} \label{conclusion} This work has presented an open-source library for large integer polynomial multipliers. The library contains digitized and non-digitized flavors of polynomial coefficient multipliers. For non-digitized multipliers, based on the values of various design parameters, users/designers can select among the several studied multipliers according to the needs of their targeted application. Furthermore, we have shown that for digitized multipliers, the evaluation of individual design parameters may not be comprehensive, and figures of merit are better suited to capture the characteristics of a circuit. Finally, we believe the results enabled by TTech-LIB will guide hardware designers in selecting an appropriate digit size that reaches an acceptable performance according to application requirements. This is achieved with the aid of TTech-LIB's generator, which helps a designer to quickly explore the complex design space of polynomial multipliers. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Nonlocal operators are operators whose functional values are determined by integration over a neighborhood, in contrast to differential operators which are locally determined. The integral nature of these operators allows them to describe multiscale behavior and anomalous behavior such as super- and sub-diffusion. This feature makes nonlocal models a viable alternative to models based on partial differential equations (PDEs) for a broad class of engineering and scientific applications. Such applications include subsurface transport in groundwater hydrology \citep{Benson2000,d2021analysis,Schumer2003,Schumer2001}, image processing \citep{Buades2010,Gilboa2007,DElia2021Imaging}, multiscale and multiphysics systems \citep{Alali2012,Du-Lipton-Mengesha,Askari2008}, finance \citep{Scalas2000,Sabatelli2002}, and stochastic processes \citep{Burch2014,DElia2017,Meerschaert2012,Metzler2000,Metzler2004}. The foundations of nonlocal vector calculus, based on the nonlocal gradient, divergence, and Laplace operators in multiple dimensions, were developed by \citet{Gunzburger2010}, \citet{Du2012}, \citet{du2013analysis}, and \cite{Du2013}. In these works, two frameworks were introduced: an \emph{unweighted} framework and a \emph{weighted} framework. The unweighted framework involves the two-point gradient $\cG \mathbf{u}(\mathbf{x},\mathbf{y})$ and its adjoint, the nonlocal divergence operator $\cD \mathbf{u}(\mathbf{x})$, the composition of which yields a nonlocal Laplace operator. The weighted framework is based on the one-point weighted gradient $\cG_\varrho \mathbf{u}(\mathbf{x})$ and its adjoint $\cD_\varrho \mathbf{u}(\mathbf{x})$, the weighted divergence. The one-point structure characterizing these weighted operators makes them more amenable to certain applications; see \cite{Du2018Dirichlet,lee2020nonlocal} for applications to mechanics and \cite{Du2020SPH} for an application to fluid dynamics. Rigorous analysis of important aspects pertaining to these operators was performed by \cite{Mengesha-Spector} and \citet{MENGESHA201682}. Various forms of fractional-order (hereafter referred to as \emph{fractional}) vector calculus have been developed both independently and in parallel; see for instance \citet{Meerschaert2006}, \cite{silhavy2020fractional}, and \citet{Tarasov2008}. \citet{d2020unified} showed that a widely used form of fractional vector calculus is in fact a special case of the weighted nonlocal vector calculus with a singular weight function $\varrho$ and an infinite interaction radius. In particular, it was noted that the fractional gradient and divergence are special cases of weighted nonlocal operators. Moreover, despite the fractional Laplacian having an immediate representation as $\mathcal{D} \circ \mathcal{G}$, a composition of unweighted operators, it was also shown to be represented as $\mathcal{D}_\varrho \circ \mathcal{G}_\varrho$, a composition of the weighted fractional divergence and gradient. This representation of weighted Laplace operators as unweighted diffusion operators was formally extended to the more general kernel-based nonlocal calculus in \cite{d2020unified} by deriving an associated {\it equivalence kernel.} This result reinforces ideas discovered in prior studies on equivalence kernels by \citet{Alali-preprint,mengesha2014peridynamic}, and \citet{vsilhavy2017higher} in the context of peridynamics, a nonlocal model of mechanics in which the nonlocal Navier-Lam\'e operator is represented as a nonlocal Laplace-type operator.
The aforementioned representations clearly show the major role that compositions of operators play in deriving useful nonlocal vector calculus identities. A major contribution of this work is the rigorous justification of various representations of compositions of weighted nonlocal operators. We provide conditions under which the composition of two nonlocal operators defined by principal value integrals, such as the fractional divergence and gradient, can be represented by a double principal value integral. These analytical results are utilized together with classical vector calculus identities to prove several identities for weighted nonlocal vector operators, such as \begin{equation*} \cC_\omega \circ \cG_\omega = 0, \quad \cD_\omega \circ \cC_\omega = 0, \quad \cC_\omega \circ \cC_\omega = \cG_\omega \circ \cD_\omega - \cD_\omega \circ \cG_\omega \end{equation*} for translation invariant kernels, including fractional kernels. We specify the space of functions over which such composition is possible. Another contribution is a \emph{rigorous} proof of the equivalence of weighted and unweighted nonlocal Laplace operators via the equivalence kernel. While this result was presented formally in \citet{d2020unified}, here we provide a set of conditions under which the result is valid. We verify these conditions for several important classes of kernels, including fractional kernels. We further study the properties of the equivalence kernel, which are important for establishing well-posedness for weighted nonlocal models. Finally, we combine our results to obtain a weighted fractional Helmholtz decomposition in H\"older spaces. This result utilizes the vector calculus identities proved in the first half of the paper, as well as the characterization of the equivalence kernel for fractional kernels. A nonlocal Helmholtz decomposition for unweighted operators was derived by \cite{DElia2020Helmholtz}. For weighted nonlocal operators, a Helmholtz decomposition for operators with kernels supported in the half ball was derived by \citet{lee2020nonlocal}. Their results bear resemblance to the Helmholtz decompositions derived in the present paper, but in a different setting, namely in a periodic domain and for nonlocal kernels that scale in certain limits to local operators. In contrast, we study such decompositions in $\mathbb{R}^d$ with relaxed assumptions about the decay at infinity, which hold for standard fractional operators. In another related work, \citet{petronela_poster} studied Helmholtz decompositions for nonlocal convolution operators. \noindent{\it Outline of the paper.} In Section \ref{sec:OperatorDef}, we introduce our notation and recall several relevant results for nonlocal operators in multiple dimensions with well-known kernels. We establish basic mapping properties of these operators as well. In Section \ref{sec:holder}, we focus on fractional operators and characterize their mapping properties completely for several function spaces. In Section \ref{sec:identities}, we prove several nonlocal operator identities that reflect well-established local counterparts from classical vector calculus. In Section \ref{sec:eq-kernel}, we identify a specific class of functions for which there exists an equivalence kernel such that the composition of the divergence and gradient operators corresponds to the (unweighted) negative nonlocal Laplace operator.
Finally, in Section \ref{sec:helmholtz} we combine the vector calculus identities and the characterization of the equivalence kernel to obtain a weighted fractional Helmholtz decomposition. We collect requisite properties of the hypergeometric function in Appendix \ref{apdx:HyperGeometricFxn}. \section{Definitions of Operators}\label{sec:OperatorDef} In this section, we recall the definitions of the nonlocal operators that will be used throughout the paper and identify function spaces for which these operators are defined. Furthermore, we introduce examples of nonlocal kernel functions that will be utilized in our main results. Suppose we have a radial kernel $\varrho$ satisfying \begin{equation}\label{assumption:Kernel} \begin{gathered} \varrho \in L^1_{\text{loc}}(\bbR^d)\,, \qquad \varrho \geq 0\,, \qquad \frac{\varrho(\bseta)}{|\bseta|} \text{ is nonincreasing in } |\bseta|\,, \\ \text{ and } \intdm{\bbR^d}{ \min \{1,|\bseta|^{-1} \} \varrho(\bseta) }{\bseta} < \infty\,. \tag{K} \end{gathered} \end{equation} Given positive integers $N$, $d$ and a vector field $\mathbf{u}: \mathbb{R}^d \rightarrow \mathbb{R}^N$, we define the nonlocal gradient \begin{align}\label{eq:definitions_of_ops} \cG_{\varrho}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d} \varrho (\mathbf{y}-\mathbf{x}) \frac{\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x})}{|\by-\bx|} \otimes \frac{\by-\bx}{|\by-\bx|} \, \rmd\mathbf{y} \end{align} where for any vectors ${\bf a}\in \bbR^N$ and ${\bf b}\in \bbR^d$, the tensor product ${\bf a}\otimes {\bf b}$ is the $N \times d$ matrix whose $ij^{\text{th}}$ entry is $a_ib_j$. From this definition, it is clear that $\cG_{\varrho}$ is an $N \times d$ matrix-valued map. For a given second order tensor field $\mathbf{u} : \mathbb{R}^d \rightarrow \mathbb{R}^{N \times d}$, we define the nonlocal divergence as \begin{equation}\label{eq:div-tensor} \cD_{\varrho}\mathbf{u}(\mathbf{x}) = \int_{\bbR^d} \varrho (\mathbf{y}-\mathbf{x}) \frac{\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x})}{|\by-\bx|} \frac{\by-\bx}{|\by-\bx|} \, \rmd \mathbf{y}. \end{equation} We interpret $\mathbb{R}^{N \times d}$ as a set of matrices with $N$ rows and $d$ columns. It is clear from the above definition \eqref{eq:div-tensor} that $\cD_{\varrho} \bu(\bx)$ is $\bbR^N$-valued. Finally, in the event $d=N=3$ and $\mathbf{u}:\bbR^3\to\bbR^3$, we define the nonlocal curl operator as \begin{align}\label{eq:curl-vector} \cC_{\varrho}\mathbf{u}(\mathbf{x}) = \int_{\bbR^3} \varrho (\mathbf{y}-\mathbf{x}) \frac{\by-\bx}{|\by-\bx|} \times \frac{\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x})}{|\by-\bx|} \, \rmd\mathbf{y}. \end{align} The definitions above are consistent with known corresponding definitions for scalar fields or vector fields given in, e.g., \cite{Du2013}. Indeed, in the event that $u:\bbR^d\to \bbR$ is a scalar field, the nonlocal gradient operator acting on $u$ gives the vector field $\cG_{\varrho} u$ where \begin{equation*} \cG_{\varrho} u(\mathbf{x}) = \int_{\bbR^d} \varrho (\mathbf{y}-\mathbf{x}) \frac{u(\mathbf{y})-u(\mathbf{x})}{|\by-\bx|} \frac{\by-\bx} {|\by-\bx|} \, \rmd\mathbf{y}. \end{equation*} If we identify vector fields $\mathbf{u}: \bbR^d\to \bbR^d$ as $1\times d$ matrix-valued fields, then the nonlocal divergence operator acting on the vector field is the scalar function given by \begin{align*} \cD_{\varrho}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d} \varrho (\mathbf{y}-\mathbf{x}) \frac{\by-\bx}{|\by-\bx|} \cdot \frac{\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x})}{|\by-\bx|} \, \rmd\mathbf{y}.
\end{align*} Note that in the literature on nonlocal vector calculus (see, e.g., \cite{Du2013}) these operators are commonly referred to as {\it weighted}, as opposed to their unweighted counterparts. In what follows, we denote the $i$-th partial derivative by $D_i$, and we use multi-index notation for derivatives of arbitrary order. The (classical) gradient operator $(D_1, D_2, \ldots, D_d)$ is denoted by $\grad$; its negative adjoint, the (classical) divergence operator, is denoted by $\div$; and their composition, the (classical) Laplacian $\div \grad$, is denoted by $\Delta$. For functions $\bu : \bbR^d \to \bbR^N$, for $k \in \bbN_0$ and $\alpha \in (0,1)$ we denote the H\"older spaces as $C^{k,\alpha}(\bbR^d;\bbR^N) := \{ \bu \in C(\bbR^d;\bbR^N) \, : \, \Vnorm{\bu}_{C^{k,\alpha}(\bbR^d)} < \infty \}$, where the norm is given by \begin{gather*} \| \mathbf{u} \|_{C^{k,\alpha}(\mathbb{R}^d)} = \Vnorm{\bu}_{L^{\infty}(\bbR^d)} + \sum_{|\gamma|=1}^k \Vnorm{ D^{\gamma} \mathbf{u} }_{L^\infty(\mathbb{R}^d)} + \sum_{|\gamma|=k} [ D^{\gamma} \mathbf{u} ]_{C^{0,\alpha}(\mathbb{R}^d)}, \\ \left[ \mathbf{v} \right]_{C^{0,\alpha}(\mathbb{R}^d)} = \underset{\mathbf{x},\mathbf{y} \in \mathbb{R}^d, \mathbf{x} \neq \mathbf{y}}{\text{sup}} \frac{|\mathbf{v}(\mathbf{x}) - \mathbf{v}(\mathbf{y})|}{|\mathbf{x}-\mathbf{y}|^{\alpha}}. \end{gather*} Later on in the paper we will need to consider functions in a range of H\"older spaces that depend on a parameter $s$. For all $s \in (0,1)$ and $\sigma > 0$ small, we say that \begin{equation*} \bu \in \scC^{2s+\sigma}(\bbR^d;\bbR^N) \end{equation*} if \begin{equation*} \bu \in \begin{cases} C^{0,2s+\sigma}(\bbR^d;\bbR^N)\,, &\quad \text{ when } s < 1/2\,, \\ C^{1,2s+\sigma-1}(\bbR^d;\bbR^N)\,, &\quad \text{ when } s \geq 1/2\,. \end{cases} \end{equation*} We next study the mapping properties of the nonlocal operators introduced above. To that end, we denote the class of Schwartz vector fields by $\scS(\bbR^d;\bbR^N)$: this is the space $C^{\infty}(\bbR^d;\bbR^N)$ equipped with the countable family of seminorms \begin{equation*} [\bu]_{\alpha,\beta} := \sup_{|\gamma|\leq \alpha} \sup_{\bx \in \bbR^d} |\bx|^{\beta} |D^{\gamma}\bu(\bx)|\,, \qquad \alpha, \beta \in \bbN_0\,, \quad \gamma \text{ a } d\text{-multi-index.} \end{equation*} Our next result says that although these nonlocal operators do not necessarily map $\scS(\bbR^d;\bbR^N)$ to itself, they map it to the class $C^{\infty}$, and the mapped vector fields satisfy a certain decay property. \begin{proposition}\label{decay-estimates-operator} Let $d$ and $N$ be positive integers. Let $\cZ_{\varrho} \bu(\bx)$ denote any one of the following objects: \begin{equation}\label{eq:OperatorAbbreviations} \begin{split} \cG_{\varrho} \bu(\bx)\,, &\quad \text{ for } \bu \in \scS(\bbR^d;\bbR^N)\,, \\ \cD_{\varrho} \bu(\bx)\,, &\quad \text{ for } \bu \in \scS(\bbR^d;\bbR^{N \times d})\,, \\ \cC_{\varrho} \bu(\bx)\,, &\quad \text{ for } \bu \in \scS(\bbR^d;\bbR^d) \text{ and } d=3\,.
\end{split} \end{equation} Then $\cZ_{\varrho} \bu(\bx)$ is a well-defined measurable function for all $\bx$, $\cZ_{\varrho} \bu \in C^{\infty}$, and for any $p\in [1, \infty]$ and $\gamma \in \bbN^d$, there is a constant $C$ depending on $d$, $N$ and $p$ such that \begin{equation}\label{eq:Lp-NonlocalOperator} \|D^{\gamma}\cZ_{\varrho}\bu\|_{L^{p}(\bbR^d)} \leq C\left(\|\grad D^{\gamma}\bu\|_{L^{p}(\bbR^d)}\|\varrho\|_{L^{1}(B_1(0))} + \|D^{\gamma}\bu\|_{L^{p}(\bbR^d)}\left\|{\varrho(\cdot)\over |\cdot|}\right\|_{L^1 (\bbR^d\setminus B_1(0))}\right). \end{equation} Moreover, we have the following decay estimates for the derivatives: for any $j$, $k \in \bbN$ , there exists a constant $C$ depending on $d$, $N$, $j$ and $k$ such that \begin{equation}\label{eq:DecayRate} \begin{split} |D^{\gamma}\cZ_{\varrho}\bu(\bx)| \leq C \left( \frac{[\bu]_{|\gamma|+1,j}}{|\bx|^j} \intdm{|\bh| \leq \frac{|\bx|}{2} }{ \varrho(|\bh|)}{\bh} + \frac{[\bu]_{|\gamma|,k}}{|\bx|^k} \intdm{|\bh| > \frac{|\bx|}{2} }{ \frac{\varrho(|\bh|)}{|\bh|} }{\bh} + \|D^{\gamma}\bu\|_{L^{1}(\bbR^d)}\frac{ \varrho\left( \frac{|\bx|}{2} \right)}{|\frac{\bx}{2}|} \right) \end{split} \end{equation} for all $|\bx| \geq 1$. \end{proposition} \begin{proof} First, we show that for $\bu \in \scS(\bbR^d)$, $\cZ_{\varrho} \bu(\bx)$ is well defined for any fixed $\bx \in \bbR^d$. From the definition of these operators and after change of variables, we notice that \begin{equation*} |\cZ_{\varrho} \bu(\bx)|\leq \intdm{\bbR^d}{\varrho(|\bz|)\frac{|\bu(\bx+\mathbf{z})-\bu(\bx)|}{|\mathbf{z}|}}{\mathbf{z}}. \end{equation*} Thus to show that $\cZ_{\varrho} \bu(\bx)$ is well defined, it suffices to show that the integrand on the right-hand side is integrable. This follows from the fact that the integrand can be estimated by a sum of two integrable functions, i.e. \begin{equation}\label{eq:L1Est1} \varrho(|\bz|)\frac{|\bu(\bx+\mathbf{z})-\bu(\bx)|}{|\mathbf{z}|} \leq \|\grad \bu\|_{L^{\infty}(B_1(\bx))} \varrho(\mathbf{z})\chi_{\{|\mathbf{z}|\leq 1\}}(\mathbf{z}) + 2 \|\mathbf{u}\|_{L^\infty(\mathbb{R}^d)} {\varrho(\mathbf{z})\over |\mathbf{z}|}\chi_{\{|\mathbf{z}|\geq 1\}}(\mathbf{z}). \end{equation} The fact that $\cZ_{\varrho} \bu \in C^{\infty}(\bbR^d;\bbR^d)$ follows from the observation that the operators $\cZ_{\varrho}$ commute with differentiation for vector fields in $\scS(\bbR^d)$, i.e. for any multi-index $\gamma\in \bbN_0^d$, $D^\gamma \cZ_{\varrho} \bu = \cZ_{\varrho} D^{\gamma}\bu$. This commutative property follows by induction from the relation $D_i \cZ_{\varrho} \bu = \cZ_{\varrho} D_i \bu$ for $i = 1, \ldots, d$. This, in turn, can be seen by applying the estimates \eqref{eq:L1Est1} and \begin{multline}\label{eq:L1Est2} \varrho(|\bz|)\frac{|D_i \bu(\bx+\mathbf{z})- D_i \bu(\bx)|}{|\mathbf{z}|} \\ \leq \|\grad D_i \bu\|_{L^{\infty}(B_1(\bx))} \varrho(\mathbf{z})\chi_{\{|\mathbf{z}|\leq 1\}}(\mathbf{z}) + 2 \|D_i \mathbf{u}\|_{L^\infty(\mathbb{R}^d)} {\varrho(\mathbf{z})\over |\mathbf{z}|}\chi_{\{|\mathbf{z}|\geq 1\}}(\mathbf{z}) \end{multline} in the dominated convergence theorem in order to differentiate under the integral sign. Thus to demonstrate the estimates \eqref{eq:Lp-NonlocalOperator} and \eqref{eq:DecayRate}, it suffices to check it for $\gamma =0$. 
To prove \eqref{eq:Lp-NonlocalOperator} when $1\leq p<\infty$, we have \begin{multline} \int_{\bbR^d}|\cZ_{\varrho}\bu|^p \, \rmd \bx \leq C \int_{\bbR^d}\left|\int_{B_1(0)}\varrho(|\bz|)\frac{|\bu(\bx+\mathbf{z})-\bu(\bx)|}{|\mathbf{z}|} \, \rmd \mathbf{z}\right|^p \, \rmd \bx\\ + C \int_{\bbR^d}\left|\int_{\bbR^{d}\setminus B_1(0) }\varrho(|\bz|)\frac{|\bu(\bx+\mathbf{z})-\bu(\bx)|}{|\mathbf{z}|} \, \rmd \mathbf{z}\right|^p \, \rmd\bx\,. \end{multline} Using the identity $\bu(\bx+\mathbf{z})-\bu(\bx) = \int_{0}^{1}\grad \bu(\bx +t\mathbf{z})\mathbf{z} \, \rmd t$ and applying Minkowski's integral inequality, we have that \begin{align*} \int_{\bbR^d}|\cZ_{\varrho}\bu|^p \, \rmd \bx &\leq C \int_{\bbR^d}\left|\int_{B_1(0)}\varrho(|\bz|)\int_{0}^{1}|\grad \bu(\bx +t\mathbf{z})| \, \rmd t \, \rmd \mathbf{z}\right|^p \, \rmd \bx\\ &\qquad +\int_{\bbR^d}\left|\int_{\bbR^{d}\setminus B_1(0) }\varrho(|\bz|)\frac{|\bu(\bx+\mathbf{z})-\bu(\bx)|}{|\mathbf{z}|} \, \rmd\mathbf{z}\right|^p \, \rmd\bx\\ &\leq C\left[\left(\int_{B_1(0)}\varrho(|\mathbf{z}|) \, \rmd \mathbf{z}\right)^p \|\grad \bu\|^p_{L^{p}(\bbR^d)} +\left( \int_{\bbR^{d}\setminus B_1(0) }{\varrho(|\bz|) \over |\mathbf{z}|} \, \rmd \mathbf{z} \right)^p \|\bu\|^p_{L^{p}(\bbR^d)}\right]\,. \end{align*} The estimate \eqref{eq:Lp-NonlocalOperator} in the case $p=\infty$ is similar: \begin{align*} \Vnorm{\cZ_{\varrho}\bu}_{L^{\infty}(\bbR^d)} &\leq C\left[\left(\int_{B_1(0)}\varrho(|\mathbf{z}|) \, \rmd \mathbf{z}\right) \|\grad \bu\|_{L^{\infty}(\bbR^d)} +\left( \int_{\bbR^{d}\setminus B_1(0) }{\varrho(|\bz|) \over |\mathbf{z}|} \, \rmd \mathbf{z} \right) \|\bu\|_{L^{\infty}(\bbR^d)}\right]\,. \end{align*} To estimate \eqref{eq:DecayRate}, we proceed by splitting the integral defining $\cZ_{\varrho} \bu$: \begin{equation*} \begin{split} |\cZ_{\varrho} \bu(\bx)| &\leq \intdm{|\mathbf{z}|\leq \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{|\bu(\mathbf{z} + \bx)-\bu(\bx)|}{|\mathbf{z}|} }{\mathbf{z}} + \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{|\bu(\mathbf{z}+\bx)-\bu(\bx)|}{|\mathbf{z}|} }{\mathbf{z}} \\ &\leq \intdm{|\mathbf{z}|\leq \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) |\grad \bu(\bx +t\mathbf{z})| }{\mathbf{z}} + \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{|\bu(\mathbf{z}+\bx)|}{|\mathbf{z}|} }{\mathbf{z}} \\ &\qquad + \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{|\bu(\bx)|}{|\mathbf{z}|} }{\mathbf{z}} \end{split} \end{equation*} for some $t = t(\mathbf{z}) \in [0,1]$. We use the definition of $[\bu]_{\alpha,\beta}$ to estimate: \begin{equation*} \begin{split} |\cZ_{\varrho} \bu(\bx)| &\leq \intdm{|\mathbf{z}|\leq \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{[\bu]_{1,j} }{|\bx+t\mathbf{z}|^j} }{\mathbf{z}} + \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{|\bu(\mathbf{z}+\bx)|}{ |\mathbf{z}|} }{\mathbf{z}} \\ &\qquad + \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) \frac{[\bu]_{0,k}}{|\bx|^k |\mathbf{z}|} }{\mathbf{z}}\,.
\end{split} \end{equation*} Since $\varrho(\bseta) |\bseta|^{-1}$ is nonincreasing and $|\bx + t\mathbf{z}| \geq \frac{|\bx|}{2}$ for all $\mathbf{z} \in B(0,{|\bx|\over 2})$, \begin{equation*} \begin{split} |\cZ_{\varrho} \bu(\bx)| &\leq C(j) \frac{[\bu]_{1,j} }{|\bx|^j} \intdm{|\mathbf{z}|\leq \frac{|\bx|}{2} }{ \varrho(|\mathbf{z}|) }{\mathbf{z}} + \frac{\varrho(\frac{|\bx|}{2})}{ \left( \frac{|\bx|}{2} \right) } \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ |\bu(\mathbf{z}+\bx)| }{\mathbf{z}} \\ &\qquad + \frac{[\bu]_{0,k}}{|\bx|^k} \intdm{|\mathbf{z}| > \frac{|\bx|}{2} }{ \frac{\varrho(|\mathbf{z}|)}{|\mathbf{z}|} }{\mathbf{z}}\,. \end{split} \end{equation*} Changing coordinates gives \eqref{eq:DecayRate}. \end{proof} \begin{remark} For $m \in \bbN$, denoting the class of $m$-times continuously differentiable and bounded vector fields on $\bbR^d$ by $C^m_b(\bbR^d;\bbR^d)$, the first part of the above proof implies that if $\bu \in C^{m}_b(\bbR^d;\bbR^d)$ then $\cZ_{\varrho}\bu(\bx)$ is well-defined for all $\bx \in \bbR^d$ and $\cZ_{\varrho} \bu \in C^{m-1}_b(\bbR^d;\bbR^d)$ with the estimate \[ \|D^{\gamma}\cZ_{\varrho} \bu\|_{L^{\infty}(\bbR^d)} \leq C \left[ \|\grad D^{\gamma}{\bu}\|_{L^{\infty}(\bbR^d)}\|\varrho\|_{L^{1}(B_1(0))} + \|D^{\gamma}\bu\|_{L^{\infty}(\bbR^d)}\left\|{\varrho(\cdot)\over |\cdot|}\right\|_{L^1 (\bbR^d\setminus B_1(0))} \right] \] for any $1\leq |\gamma|\leq m-1.$ \end{remark} We conclude the section by giving three examples of kernels that satisfy \eqref{assumption:Kernel} and demonstrate the decay estimates in Proposition \ref{decay-estimates-operator}. We will refer to these examples in the sequel. \begin{example}\label{ex:frac_kernel} \textbf{The Fractional Kernel} For $s \in (0,1)$ define the \textit{fractional kernel} \begin{equation}\label{eq:Def:FracKernel} \varrho_s(|\bseta|) := \frac{c_{d,s}}{|\bseta|^{d+s-1}}\,, \qquad c_{d,s} := \frac{2^s \Gamma(\frac{d+s+1}{2})}{\pi^{d/2} \Gamma(\frac{1-s}{2})}\,, \qquad \bseta \in \bbR^d \setminus \{ {\bf 0} \}\,. \end{equation} Choosing $j = d+1$ and $k = d$ in \eqref{eq:DecayRate}, and computing the integrals gives the decay rate \begin{equation*} |\cZ_{\varrho_s} \bu(\bx)| \leq \frac{C}{|\bx|^{d+s}}\,. \end{equation*} where $C$ depends on $d$, $s$ and $\bu$. \end{example} \begin{example} \textbf{The Truncated Fractional Kernel.} Let $\delta > 0$. Define the \textit{truncated fractional kernel} $\varrho_{s,\delta}$ by \begin{equation*} \varrho_{s,\delta}(|\bseta|) := c_{d,s} \frac{\chi_{B({\bf 0},\delta)} (|\bseta|) }{|\bseta|^{d+s-1}}\,, \qquad \bseta \in \bbR^d \setminus \{ {\bf 0} \}\,. \end{equation*} Using the notation $\cZ_{s, \delta}$ for $\cZ_{\varrho_{s,\delta}}$, the estimate \eqref{eq:DecayRate} corresponding to this kernel becomes \begin{equation}\label{decay-Truncated-frac-ker} \begin{split} |\cZ_{s,\delta} \bu(\bx)| &\leq \frac{C \delta^{1-s}}{|\bx|^j} \,, \qquad 2 \delta \leq |\bx| \end{split} \end{equation} for any $j \in \bbN$, where $C$ depends on $d$, $s$ and $\bu$. \end{example} \begin{example} \textbf{The Tempered Fractional Kernel.} Let $\alpha > 0$, and let $s \in (0,1)$. We define the \textit{tempered fractional kernel} \begin{equation*} \varrho_{s,\text{temp}}(|\bseta|) := \frac{\rme^{-\alpha|\bseta|}}{|\bseta|^{d+s-1}}\,, \qquad \bseta \in \bbR^d\,. \end{equation*} We abbreviate the operators $\cZ_{\varrho_{s,\text{temp}}}$ as $\cZ_{s,\text{temp}}$. The exponential decay of $\varrho_{s,\text{temp}}$ gives the resulting nonlocal derivatives rapid decay. 
To see this, we consider the three terms in \eqref{eq:DecayRate} separately. First, integrating directly we have \begin{equation}\label{eq:DecayRate:TemperedFrac:Pf1} \frac{1}{|\bx|^j} \intdm{|\bh| \leq \frac{|\bx|}{2} }{ \frac{\rme^{-\alpha|\bh|}}{|\bh|^{d+s-1}} }{\bh} = \frac{\omega_{d-1}}{|\bx|^j} \int_0^{|\bx|/2} \frac{\rme^{-\alpha r}}{r^s} \, \rmd r \leq \frac{C(d,s,\alpha) \Gamma(1-s) }{|\bx|^j} \text{ for all } |\bx| \geq 1\,. \end{equation} Next, by change of coordinates \begin{equation}\label{eq:DecayRate:TemperedFrac:Pf2} \begin{split} \frac{1}{|\bx|^k} \intdm{|\bh| > \frac{|\bx|}{2} }{ \frac{\rme^{-\alpha |\bh|}}{|\bh|^{d+s}} }{\bh} = \frac{\omega_{d-1}}{|\bx|^k} \int_{\frac{|\bx|}{2} }^{\infty} \frac{\rme^{-\alpha r}}{r^{1+s}} \, \rmd r &= \frac{2^s \omega_{d-1}}{|\bx|^{k+s}} \int_{1}^{\infty} \frac{\rme^{-\frac{\alpha |\bx|}{2} r}}{r^{1+s}} \, \rmd r \leq C \frac{\rme^{-\frac{\alpha |\bx|}{2} }}{|\bx|^{k+1+s}}\,, \end{split} \end{equation} where we have used the upper bound $\int_1^{\infty} t^{-n}\rme^{-zt} \, \rmd t \leq z^{-1} \rme^{-z}$ in the last inequality (see \cite[Equation 15.1.19]{abramowitz1988handbook}); here $C$ depends on $d$, $s$, $\alpha$ and $\bu$. Plugging estimates \eqref{eq:DecayRate:TemperedFrac:Pf1} and \eqref{eq:DecayRate:TemperedFrac:Pf2} into \eqref{eq:DecayRate} we arrive at \begin{equation}\label{eq:DecayRate:TemperedFrac} |\cZ_{s,\text{temp}} \bu(\bx)| \leq C \left( \frac{1}{|\bx|^j} + \frac{\rme^{-\frac{\alpha |\bx|}{2} }}{|\bx|^{k+1+s}} + \frac{\rme^{-\frac{\alpha |\bx|}{2}} }{|\bx|^{d+s}} \right) \,, \qquad |\bx| \geq 1\,, \qquad j\,, k \in \bbN\,. \end{equation} \end{example} \begin{remark} From the decay estimates \eqref{decay-Truncated-frac-ker} and \eqref{eq:DecayRate:TemperedFrac} corresponding to the truncated and the tempered fractional kernels, we see that $\cZ_{s, \delta}$ and $\cZ_{s,\text{temp}}$ map the Schwartz class of vector fields $\scS(\bbR^d)$ into itself. \end{remark} \begin{example} \textbf{The Characteristic Function Kernel.} Let $\delta > 0$. We define the \textit{characteristic function kernel} \begin{equation*} \varrho_{\chi,\delta}(|\bseta|) := \frac{d}{ \omega_{d-1} \delta^d} \chi_{B({\bf 0},\delta)}(|\bseta|)\,, \quad \bseta \in \bbR^d\,, \end{equation*} where $\omega_{d-1}$ denotes the surface measure of the sphere in $\bbR^d$. Using the notation $\cZ_{\chi,\delta}$ for $\cZ_{\varrho_{\chi,\delta}}$, the estimate \eqref{eq:DecayRate} corresponding to this kernel becomes \begin{equation*} \begin{split} |\cZ_{\chi,\delta} \bu(\bx)| &\leq \frac{C}{|\bx|^j} \,, \qquad 2 \delta \leq |\bx| \end{split} \end{equation*} for any $j \in \bbN$, where $C$ depends on $d$ and $\bu$. \end{example} \section{H\"older spaces and fractional vector calculus}\label{sec:holder} The mapping properties of the nonlocal operators $\cG_\varrho$, $\cD_\varrho$, and $\cC_\varrho$ depend on the kernel $\varrho$. In the case of the fractional kernel \eqref{eq:Def:FracKernel}, it is possible to characterize the mapping properties of these operators completely for several function spaces. We refer to the nonlocal gradient, divergence, and curl operators associated with the fractional kernel \eqref{eq:Def:FracKernel} as the fractional gradient, divergence, and curl, respectively, and identify them using the notation \begin{equation*} \cG_{s} := \cG_{\varrho_s}\,, \qquad \cD_{s} := \cD_{\varrho_s}\,, \qquad \cC_{s} := \cC_{\varrho_s}\,.
\end{equation*} The mapping properties of $\cG_s$ and $\cD_s$ in fractional Sobolev spaces were established in \cite{d2020unified}, and are analogous to the well-known mapping property of the fractional Laplacian $(-\Delta)^s$ in Sobolev spaces \citep{lischke2018fractional, stein2016singular}. In this section, we study the mapping properties of these operators in H\"older spaces. These properties will be used in Section \ref{sec:identities} to prove identities for the fractional vector calculus operators on larger function spaces than those available for general nonlocal operators, and in Section \ref{sec:helmholtz} to prove a Helmholtz decomposition involving fractional operators in H\"older spaces. We define these fractional operators for functions that satisfy appropriate smoothness and integrability conditions. For $\alpha \in (0,2)$, we define the weighted Lebesgue space $L^1_{\alpha}$ as \begin{equation*} L^1_{\alpha}(\bbR^d;\bbR^d) := \left\{ \bu \in L^1_{loc}(\bbR^d;\bbR^d) \, : \, \Vnorm{\bu}_{L^1_{\alpha}(\bbR^d)} := \intdm{\bbR^d}{ \frac{|\bu(\bx)|}{1+|\bx|^{d+\alpha}} }{\bx} < \infty \right\}\,. \end{equation*} Note that for any $\alpha \in (0,2)$, $L^p(\bbR^d;\bbR^d) \subset L^1_{\alpha}(\bbR^d;\bbR^d)$ for $p \in [1,\infty]$. \begin{theorem}\label{thm:MappingPropertiesOfOperators} Let $s \in (0,1)$, $N \geq 1$, and $d \geq 2$. Let $\cZ_s \bu(\bx)$ denote any of the following objects: \begin{equation*} \begin{split} \cG_s \bu(\bx)\,, &\quad \text{ for } \bu \in L^1_s(\bbR^d;\bbR^N)\,, \\ \cD_s \bu(\bx)\,, &\quad \text{ for } \bu \in L^1_s(\bbR^d;\bbR^{N \times d})\,, \\ \cC_s \bu(\bx)\,, &\quad \text{ for } \bu \in L^1_s(\bbR^d;\bbR^d) \text{ and } d =3\,. \end{split} \end{equation*} Then we have the following: \begin{itemize} \item[1)] If $\bu \in C^{0,\beta}(\bbR^d)$ for $\beta \in (s,1)$, then $\cZ_s \bu \in C^{0,\beta-s}(\bbR^d)$ with \begin{equation}\label{eq:NaturalEstimate1} \Vnorm{\cZ_s \bu}_{C^{0,\beta-s}(\bbR^d)} \leq C \Vnorm{\bu}_{C^{0,\beta}(\bbR^d)}\,. \end{equation} \item[2)] If $\bu \in C^{1,\beta}(\bbR^d)$ for $\beta \in (0,1)$ and $s < \beta$, then $\cZ_s \bu \in C^{1,\beta-s}(\bbR^d)$ with \begin{equation}\label{eq:NaturalEstimate2} \Vnorm{\cZ_s \bu}_{C^{1,\beta-s}(\bbR^d)} \leq C \Vnorm{\bu}_{C^{1,\beta}(\bbR^d)}\,. \end{equation} \item[3)] If $\bu \in C^{1,\beta}(\bbR^d)$ for $\beta \in (0,1)$ and $s > \beta$, then $\cZ_s \bu \in C^{0,\beta-s+1}(\bbR^d)$ with \begin{equation}\label{eq:NaturalEstimate3} \Vnorm{\cZ_s \bu}_{C^{0,\beta-s+1}(\bbR^d)} \leq C \Vnorm{\bu}_{C^{1,\beta}(\bbR^d)}\,. \end{equation} \end{itemize} In all estimates the constant $C$ depends only on $d$, $N$, $s$ and $\beta$.
\end{theorem} \begin{proof} To prove 1), we write (suppressing the constant $c_{d,s}$ throughout the proof) \begin{align*} |\cZ_s \mathbf{u}(\mathbf{x})| &\le \int_{\mathbb{R}^d} \frac{|\mathbf{u}(\mathbf{y}) - \mathbf{u}(\mathbf{x})|}{|\mathbf{y}-\mathbf{x}|^{d+s}} \, \rmd \mathbf{y} \\ &\leq \int_{|\mathbf{y}-\mathbf{x}| \leq R} \frac{|\mathbf{u}(\mathbf{y}) - \mathbf{u}(\mathbf{x})|}{|\mathbf{y}-\mathbf{x}|^{d+s}} \, \rmd \mathbf{y} + \int_{|\mathbf{y}-\mathbf{x}| > R} \frac{|\mathbf{u}(\mathbf{y}) - \mathbf{u}(\mathbf{x})|}{|\mathbf{y}-\mathbf{x}|^{d+s}} \, \rmd \mathbf{y} \\ &\leq \left[ \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} \int_{|\mathbf{y}-\mathbf{x}| \leq R} \frac{1}{|\mathbf{y}-\mathbf{x}|^{d+s-\beta}} \, \rmd \mathbf{y} + 2 \| \mathbf{u} \|_{L^\infty(\mathbb{R}^d)} \int_{|\mathbf{y}-\mathbf{x}| > R} \frac{1}{|\mathbf{y}-\mathbf{x}|^{d+s}} \, \rmd \mathbf{y} \\ &\leq C R^{\beta - s} \left[ \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} + C R^{-s} \| \mathbf{u} \|_{L^\infty(\mathbb{R}^d)} \end{align*} for any $R>0$. This holds for all $\mathbf{x} \in \mathbb{R}^d$, so $\cZ_s \mathbf{u} \in L^\infty(\mathbb{R}^d)$. Next, writing $\left( \otimes, \times, \cdot \right)$ for the product corresponding to the operator represented by the placeholder $\cZ_s$, we have \begin{equation*} \cZ_s \mathbf{u}(\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y}) = \int_{\mathbb{R}^d} \frac{\left( \mathbf{u}(\mathbf{x}+\mathbf{h}) - \mathbf{u}(\mathbf{x}) \right) - \left( \mathbf{u}(\mathbf{y}+\mathbf{h}) - \mathbf{u}(\mathbf{y}) \right)}{|\mathbf{h}|^{d+s}} \left( \otimes, \times, \cdot \right) \frac{\mathbf{h}}{|\mathbf{h}|} \, \rmd \mathbf{h}. \end{equation*} It is clear that for any $R > 0$ \begin{align*} |\cZ_s \mathbf{u}(\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y})| &\leq \int_{\mathbb{R}^d} \frac{| \left( \mathbf{u}(\mathbf{x}+\mathbf{h}) - \mathbf{u}(\mathbf{x}) \right) - \left( \mathbf{u}(\mathbf{y}+\mathbf{h}) - \mathbf{u}(\mathbf{y}) \right) |}{|\mathbf{h}|^{d+s}} \, \rmd \mathbf{h} \\ &= \int_{|\bh| \leq R} \cdots + \int_{|\bh| > R} \cdots \\ &=: I + II. \end{align*} To estimate $I$, we use \begin{equation*} |\mathbf{u}(\mathbf{z}+\mathbf{h}) - \mathbf{u}(\mathbf{z})| \leq [\mathbf{u}]_{C^{0,\beta}(\mathbb{R}^d)} |\mathbf{h}|^\beta \text{ for } \bz = \bx \text{ or } \by\,. \end{equation*} Therefore, \begin{align*} I &\le \left[ \bu \right]_{C^{0,\beta}(\mathbb{R}^d)} \int_{|\mathbf{h}|\leq R}\frac{1}{|\mathbf{h}|^{d+s-\beta}} \, \rmd\mathbf{h} = C \left[ \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} R^{\beta - s}. \end{align*} For $II$, we use the estimate, valid for any $\bh\in \mathbb{R}^d$, \begin{equation*} |\bu(\bx+\bh)-\bu(\by+\bh)| + |\bu(\bx)-\bu(\by)| \leq 2 [\mathbf{u}]_{C^{0,\beta} (\mathbb{R}^d)} |\bx-\by|^{\beta} \end{equation*} to obtain \begin{align*} II &\leq 2 [\mathbf{u}]_{C^{0,\beta} (\mathbb{R}^d)} \int_{|\mathbf{h}| > R } \frac{|\mathbf{x}-\mathbf{y}|^{\beta}}{|\mathbf{h}|^{d+s}} \, \rmd\mathbf{h} \\ &= C \left[ \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} \frac{|\mathbf{x}-\mathbf{y}|^\beta}{R^s}\,. \end{align*} Choosing $R = |\mathbf{x}-\mathbf{y}|$ gives \begin{align*} |\cZ_s \mathbf{u}(\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y})| &\leq I + II \leq C \left[ \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} |\mathbf{x}-\mathbf{y}|^{\beta - s}. \end{align*} Therefore, $\cZ_s \mathbf{u} \in C^{0,\beta-s} (\mathbb{R}^d)$. The proof of 2) proceeds in the same way as that of 1), but with $\nabla \mathbf{u}$ in place of $\mathbf{u}$. Here, one only needs to verify that the operator $\cZ_s$ commutes with derivatives.
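Heuristically, the commutation follows from translation invariance: writing $\tau_{t\be_i}\bu(\bx) := \bu(\bx + t\be_i)$, the definition of $\cZ_s$ gives $\cZ_s(\tau_{t\be_i}\bu) = \tau_{t\be_i}(\cZ_s\bu)$, and hence
\begin{equation*}
D_i \cZ_s \bu = \lim_{t\to 0}\frac{\tau_{t\be_i}(\cZ_s\bu) - \cZ_s\bu}{t} = \lim_{t\to 0} \cZ_s\left(\frac{\tau_{t\be_i}\bu - \bu}{t}\right) = \cZ_s (D_i\bu)\,,
\end{equation*}
provided the difference quotients converge strongly enough to pass the limit under the integral.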
The process to verify this follows identically to the process in the proof of \Cref{decay-estimates-operator}, with the estimate \eqref{eq:L1Est2} replaced with \begin{equation*} \frac{|D_i \bu(\bx+\bh)-D_i \bu(\bx)|}{|\bh|^{d+s}} \leq \chi_{ \{|\bh|\leq 1\} } \frac{ [ \grad \bu ]_{C^{0,\beta}(\bbR^d)} }{ |\bh|^{d+s-\beta} } + 2 \chi_{ \{|\bh|\geq 1\} } \frac{ \Vnorm{\grad \bu}_{L^{\infty}(\bbR^d)} }{ |\bh|^{d+s} }\,. \end{equation*} To prove 3), we assume $\mathbf{u} \in C^{1,\beta}(\mathbb{R}^d)$ for $s > \beta$. To show that $\cZ_s \mathbf{u} \in C^{0,\beta - s + 1}(\mathbb{R}^d)$, we write \begin{align*} |\cZ_s \mathbf{u}(\mathbf{x})| &\le \int_{\mathbb{R}^d} \frac{|\mathbf{u}(\mathbf{y}) - \mathbf{u}(\mathbf{x})|}{|\mathbf{y} - \mathbf{x}|^{d+s}} \, \rmd \mathbf{y} \\ &\le \|\nabla \mathbf{u}\|_{L^{\infty}(\bbR^d)} \int_{|\mathbf{y} - \mathbf{x}| < R} \frac{1}{|\mathbf{y} - \mathbf{x}|^{d+s-1}} \, \rmd \mathbf{y} + 2 \| \mathbf{u} \|_{L^\infty(\mathbb{R}^d)} \int_{|\mathbf{y} - \mathbf{x}| \ge R} \frac{1}{|\mathbf{y} - \mathbf{x}|^{d+s}} \, \rmd \mathbf{y} \\ &\le C \|\mathbf{u}\|_{C^1(\mathbb{R}^d)} \left( R^{1-s} + R^{-s} \right). \end{align*} Thus, $\cZ_s \mathbf{u} \in L^{\infty}(\mathbb{R}^d)$. Next, we have \begin{align*} |\cZ_s \mathbf{u}(\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y})| &\le \int_{\mathbb{R}^d} \frac{|(\mathbf{u}(\mathbf{x} + \mathbf{h}) - \mathbf{u}(\mathbf{x})) - (\mathbf{u}(\mathbf{y} + \mathbf{h}) - \mathbf{u}(\mathbf{y}))|} {|\mathbf{h}|^{d+s}} \, \rmd \mathbf{h} \\ &= \int_{|\mathbf{h}| \le R} \cdots + \int_{|\mathbf{h}| > R} \cdots \\ &=: I + II. \end{align*} To estimate $I$, we first write, using the mean value theorem, for some $t, t' \in (0,1)$ depending on $\bx$, $\by$, and $\bh$, \[ [\mathbf{u}(\mathbf{x}+\mathbf{h}) - \mathbf{u}(\mathbf{x})] - [\mathbf{u}(\mathbf{y}+\mathbf{h}) - \mathbf{u}(\mathbf{y})] = \nabla \mathbf{u}(\mathbf{x} + t \mathbf{h}) \cdot \mathbf{h} - \nabla \mathbf{u}(\mathbf{y} + t' \mathbf{h}) \cdot \mathbf{h}\,. \] We add and subtract the terms $\nabla \mathbf{u}(\mathbf{x}) \cdot \mathbf{h}$ and $\nabla \mathbf{u}(\mathbf{y}) \cdot \mathbf{h}$ to obtain the estimate \begin{align*} &|(\mathbf{u}(\mathbf{x}+\mathbf{h}) - \mathbf{u}(\mathbf{x})) - (\mathbf{u}(\mathbf{y}+\mathbf{h}) - \mathbf{u}(\mathbf{y}))|\\ &\le |\nabla \mathbf{u}(\mathbf{x}) - \nabla \mathbf{u}(\mathbf{y})| \, |\mathbf{h}| + |\nabla \mathbf{u}(\mathbf{x} + t \mathbf{h}) - \nabla \mathbf{u}(\mathbf{x})| \, |\mathbf{h}| + |\nabla \mathbf{u}(\mathbf{y} + t' \mathbf{h}) - \nabla \mathbf{u}(\mathbf{y})| \, |\mathbf{h}| \\ &\le |\nabla \mathbf{u}(\mathbf{x}) - \nabla \mathbf{u}(\mathbf{y})| \, |\mathbf{h}| + 2 \left[ \nabla \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} |\mathbf{h}|^{1+\beta} \\ &\le 2 [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R}^d)} \left( |\mathbf{x} - \mathbf{y}|^{\beta}|\mathbf{h}| + |\mathbf{h}|^{1+\beta} \right). \end{align*} Thus, \begin{align*} I &\le 2[\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R^d})} \left[ \int_{|\mathbf{h}| \le R} |\mathbf{x} - \mathbf{y}|^\beta \frac{1}{|\mathbf{h}|^{d+s-1}} + \frac{1}{|\mathbf{h}|^{d+s-\beta-1}} \, \rmd \mathbf{h} \right] \\ &= C [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R^d})} \left( |\mathbf{x} - \mathbf{y}|^{\beta} R^{1-s} + R^{1+\beta-s} \right).
\end{align*} To estimate $II$, note that by the mean value theorem there exist $t, t' \in (0,1)$ such that, for any $\bh$, \begin{align*} [\mathbf{u}(\mathbf{x}&+\mathbf{h}) - \mathbf{u}(\mathbf{y}+\mathbf{h})] - [\mathbf{u}(\mathbf{x}) - \mathbf{u}(\mathbf{y})] \\ &=\nabla \mathbf{u}\big(t \mathbf{x} + t \mathbf{h} + (1-t) \mathbf{y} + (1-t) \mathbf{h}\big) (\mathbf{x} - \mathbf{y}) - \nabla \mathbf{u}\big(t' \mathbf{x} + (1-t') \mathbf{y} \big) (\mathbf{x} - \mathbf{y}) \\ &= \left[ \nabla \mathbf{u} \big( \mathbf{y} + t (\mathbf{x} - \mathbf{y}) + \mathbf{h} \big) - \nabla \mathbf{u} \big( \mathbf{y} + t' (\mathbf{x} - \mathbf{y}) \big) \right](\mathbf{x} - \mathbf{y})\,. \end{align*} We now add and subtract appropriate terms to be able to write \begin{align*} [\mathbf{u}(\mathbf{x}&+\mathbf{h}) - \mathbf{u}(\mathbf{y}+\mathbf{h})] - [\mathbf{u}(\mathbf{x}) - \mathbf{u}(\mathbf{y})] \\ &= \left[ \nabla \mathbf{u} \big( \mathbf{y} + t (\mathbf{x} - \mathbf{y}) + \mathbf{h} \big) - \nabla \mathbf{u} \big( \mathbf{y} + t' (\mathbf{x} - \mathbf{y}) \big) \right](\mathbf{x} - \mathbf{y})\\ &=\left[ \nabla \mathbf{u} \big( \mathbf{y} + t (\mathbf{x} - \mathbf{y}) + \mathbf{h} \big) -\nabla \mathbf{u} \big( \mathbf{x} + t (\mathbf{x} - \mathbf{y}) +\mathbf{h} \big) \right] (\mathbf{x} - \mathbf{y})\\ &+\left[\nabla \mathbf{u} \big( \mathbf{x} + t (\mathbf{x} - \mathbf{y}) +\mathbf{h} \big)-\nabla \mathbf{u}(\mathbf{x} + \bh)\right](\mathbf{x}-\mathbf{y})\\ &+\left[\nabla \mathbf{u}(\mathbf{x}+\mathbf{h}) - \nabla \mathbf{u}(\bx)\right](\mathbf{x}-\mathbf{y})\\ &+\left[\nabla \mathbf{u}(\mathbf{x}) -\nabla \mathbf{u} (\mathbf{y}) \right](\mathbf{x}-\mathbf{y})\\ &+\left[\nabla \mathbf{u}(\mathbf{y}) - \nabla \mathbf{u}(\mathbf{y} + t'(\mathbf{x}-\mathbf{y}))\right](\mathbf{x}-\mathbf{y}). \end{align*} Therefore, estimating each term as before using the H\"older continuity of $\nabla \mathbf{u}$, we have \[ |[\mathbf{u}(\mathbf{x}+\mathbf{h}) - \mathbf{u}(\mathbf{y}+\mathbf{h})] - [\mathbf{u}(\mathbf{x}) - \mathbf{u}(\mathbf{y})]| \leq C \left[ \nabla \mathbf{u} \right]_{C^{0,\beta}(\mathbb{R}^d)} \left( |\mathbf{y} - \mathbf{x}|^{\beta+1} + |\mathbf{x} - \mathbf{y}| |\mathbf{h}|^{\beta} \right). \] Thus, \begin{equation*} II \le C [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R}^d)} \left( \int_{|\mathbf{h}| > R} \frac{|\mathbf{x}-\mathbf{y}|^{1+\beta}}{|\mathbf{h}|^{d+s}} + \frac{|\mathbf{x} - \mathbf{y}|}{|\mathbf{h}|^{d+s-\beta}} \, \rmd \mathbf{h} \right). \end{equation*} Since $s > \beta$, the integrals above converge, and so \begin{equation*} II \le C [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R}^d)} \left( \frac{|\mathbf{x} - \mathbf{y}|^{1+\beta}}{R^s} + \frac{|\mathbf{x} - \mathbf{y}|}{R^{s-\beta}} \right). \end{equation*} Putting $I$ and $II$ together, we obtain that for any $R>0$ and $\mathbf{x},\mathbf{y}\in \mathbb{R}^d$ \begin{align*} |\cZ_s \mathbf{u} (\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y})| &\le I + II \\ &\le C [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R^d})} \left( |\mathbf{x} - \mathbf{y}|^{\beta} R^{1-s} + R^{1+\beta-s} + \frac{|\mathbf{x} - \mathbf{y}|^{1+\beta}}{R^s} + \frac{|\mathbf{x} - \mathbf{y}|}{R^{s-\beta}} \right). \end{align*} Choosing $R = |\mathbf{x} - \mathbf{y}|$ gives us \begin{equation*} | \cZ_s \mathbf{u}(\mathbf{x}) - \cZ_s \mathbf{u}(\mathbf{y}) | \le C [\nabla \mathbf{u}]_{C^{0,\beta}(\mathbb{R}^d)} |\mathbf{x} - \mathbf{y}|^{\beta - s + 1}, \end{equation*} completing the proof.
\end{proof} \section{Vector calculus identities for nonlocal operators}\label{sec:identities} This section is devoted to the proof of several operator identities whose local, classical counterparts are well-established, but which have not been fully investigated for the nonlocal operators considered in this work. To handle the potential singularity along the diagonal $\bx=\by$, we first prove analogous identities for ``truncated'' operators, and then recover the desired identities in the vanishing-truncation limit. The limit passage is justified by Theorem \ref{thm:OperatorsWellDefdForSmoothFxns}, proved at the beginning of this section, which allows us to establish the validity of the operator identities for bounded $C^2$ functions. We define the truncated operators below; note that in the nonlocal literature (see, e.g., \cite{Delia2013}) ``truncated'' operators usually correspond to ``compactly supported'' kernels, whereas in our usage below the truncation is performed in a neighborhood of $\mathbf{x}$, i.e., we remove from the domain of integration a small ball centered at $\mathbf{x}$. Let $\veps > 0$. The truncated gradient, divergence and curl operators are defined as \begin{align*} \cG_{\varrho,\veps}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d \setminus B(\bx,\veps)} \varrho (|\mathbf{y}-\mathbf{x}|) \frac{(\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x}))}{|\by-\bx|} \otimes \frac{\by-\bx}{|\by-\bx|} \, \rmd\mathbf{y}\,, \qquad \bu : \bbR^d \to \bbR^N\,,\\ \cD_{\varrho,\veps}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d \setminus B(\bx,\veps)} \varrho (|\mathbf{y}-\mathbf{x}|) \frac{(\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x}))}{|\by-\bx|} \frac{\by-\bx}{|\by-\bx|} \, \rmd\mathbf{y}\,, \qquad \bu : \bbR^d \to \bbR^{N \times d}\,, \\ \cC_{\varrho,\veps}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d \setminus B(\bx,\veps)} \varrho (|\mathbf{y}-\mathbf{x}|) \frac{\by-\bx}{|\by-\bx|} \times \frac{(\mathbf{u}(\mathbf{y})-\mathbf{u}(\mathbf{x}))}{|\by-\bx|} \, \rmd\mathbf{y}\,, \quad \bu : \bbR^d \to \bbR^d \text{ and } d = 3\,. \end{align*} We use the notation $\cZ_{\varrho,\veps} \bu(\bx)$ in exactly the same way as in Proposition \ref{decay-estimates-operator}. Note that for $\bu \in L^{\infty}(\bbR^d)$, we have that \begin{equation*} | \cZ_{\varrho,\veps} \bu(\bx)| \leq 2 \Vnorm{\bu}_{L^{\infty}(\bbR^d)} \intdm{|\bh| \geq \veps }{\frac{\varrho(|\bh|)}{|\bh|} }{\bh}. \end{equation*} Thus, for any fixed $\veps > 0$ all three operators are well-defined. The next theorem shows that each compatible composition of two of the nonlocal operators is well-defined and equals the limit of the corresponding truncated compositions. \begin{theorem}\label{thm:OperatorsWellDefdForSmoothFxns} Let $d \geq 2$ and $N \geq 1$.
Let $\cY_{\varrho} \circ \cZ_{\varrho} \bu(\bx)$ denote any of the following compositions of operators: \begin{equation*} \begin{split} \cG_{\varrho} \circ \cG_{\varrho} u(\bx)\,, &\qquad u : \bbR^d \to \bbR\,,\\ \cD_{\varrho} \circ \cG_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^N\,,\\ \cC_{\varrho} \circ \cG_{\varrho} u(\bx)\,, &\qquad u : \bbR^d \to \bbR \text{ and } d = 3\,,\\ \cG_{\varrho} \circ \cD_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^{N\times d}\,,\\ \cD_{\varrho} \circ \cD_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^{d \times d}\,,\\ \cC_{\varrho} \circ \cD_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^{d \times d} \text{ and } d = 3\,,\\ \cG_{\varrho} \circ \cC_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^d \text{ and } d = 3\,,\\ \cD_{\varrho} \circ \cC_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^d \text{ and } d = 3\,,\\ \cC_{\varrho} \circ \cC_{\varrho} \bu(\bx)\,, &\qquad \bu : \bbR^d \to \bbR^d \text{ and } d = 3\,.\\ \end{split} \end{equation*} If either \begin{enumerate} \item[1)] $\bu \in C^2_b(\bbR^d)$, or \item[2)] $\varrho = \varrho_s$ and $\bu \in L^1_{2s}(\bbR^d) \cap \scC^{2s+\sigma}(\bbR^d)$ for $\sigma > 0$ sufficiently small, \end{enumerate} then $\cY_{\varrho} \circ \cZ_{\varrho} \bu(\bx)$ is a bounded function. Furthermore, we have \begin{equation*} \cY_{\varrho} \circ \cZ_{\varrho} \bu(\bx) = \lim_{\veps, \veps' \to 0} \cY_{\varrho,\veps} \circ \cZ_{\varrho,\veps'} \bu(\bx)\,, \end{equation*} where $ \cY_{\varrho,\veps}$, $ \cZ_{\varrho,\veps'}$ denote the relevant truncated forms of the operators. \end{theorem} \begin{proof} For any $\veps$, $\veps' > 0$ we have \begin{equation*} \begin{split} |\cY_{\varrho,\veps} \circ \cZ_{\varrho,\veps'} \bu(\bx)| &\leq \intdm{\bbR^d \setminus B({\bf 0},\veps)} { \varrho( |\bh| ) \frac{|\cZ_{\varrho,\veps'} \bu(\bx+\bh) - \cZ_{\varrho,\veps'} \bu(\bx)| }{|\bh|} }{\bh}\,. \end{split} \end{equation*} We will use the Lebesgue Dominated Convergence Theorem, deriving the relevant estimates for the function \begin{equation*} \Upsilon_{\veps,\veps'}(\bx,\bh) := \chi_{\bbR^d \setminus B({\bf 0},\veps)}(\bh)\, \varrho(|\bh|) \frac{|\cZ_{\varrho,\veps'} \bu(\bx+\bh) - \cZ_{\varrho,\veps'} \bu(\bx)| }{|\bh|}\,. \end{equation*} Specifically, we will show that there exists a function $\Upsilon(\bh)$ such that $\Upsilon \in L^1(\bbR^d)$ and \begin{equation*} |\Upsilon_{\veps,\veps'}(\bx,\bh)| \leq |\Upsilon(\bh)| \quad \text{ for all } \bx\,, \bh \in \bbR^d\,, \qquad \text{ for all } \veps,\veps' > 0\,. \end{equation*} First we prove the theorem for case 1). We have \begin{equation*} \begin{split} \Upsilon_{\veps,\veps'}(\bx,\bh) \leq \varrho(|\bh|) \min \left\{ \Vnorm{\grad \cZ_{\varrho,\veps'} \bu}_{L^{\infty}(\bbR^d)} \,, \frac{ 2 \Vnorm{\cZ_{\varrho,\veps'} \bu}_{L^{\infty}(\bbR^d)} }{|\bh|} \right\}\,. \end{split} \end{equation*} Therefore, it suffices to show that there exist constants $b_1$ and $b_2$ independent of $\veps'$ such that \begin{equation}\label{eq:CurlOfCurl:SmoothFxns:Pf1} \Vnorm{\cZ_{\varrho,\veps'} \bu}_{L^{\infty}(\bbR^d)} \leq b_1\,, \qquad \Vnorm{\grad \cZ_{\varrho,\veps'} \bu}_{L^{\infty}(\bbR^d)} \leq b_2\,, \qquad \text{ for all } \veps' >0\,, \end{equation} and the proof will be complete by setting $\Upsilon(\bh) = \varrho(|\bh|) \min \left\{ b_2 \,, \frac{2 b_1 }{|\bh|} \right\} $.
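Indeed, with this choice $\Upsilon \in L^1(\bbR^d)$ by \eqref{assumption:Kernel}:
\begin{equation*}
\intdm{\bbR^d}{ \varrho(|\bh|) \min \left\{ b_2\,, \frac{2 b_1}{|\bh|} \right\} }{\bh} \leq b_2 \intdm{|\bh| \leq 1}{ \varrho(|\bh|) }{\bh} + 2 b_1 \intdm{|\bh| > 1}{ \frac{\varrho(|\bh|)}{|\bh|} }{\bh} < \infty\,.
\end{equation*}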
To prove \eqref{eq:CurlOfCurl:SmoothFxns:Pf1} we proceed analogously to \eqref{eq:L1Est1}: by the mean value theorem, \begin{equation*} \begin{split} |\cZ_{\varrho,\veps'} \bu(\bx)| &\leq \intdm{\veps' \leq |\bz|\leq 1 }{ \varrho(|\bz|) \frac{|\bu(\bx+\bz)-\bu(\bx)|}{|\bz|} }{\bz} + \intdm{|\bz| > 1 }{ \varrho(|\bz|) \frac{|\bu(\bx+\bz)-\bu(\bx)|}{|\bz|} }{\bz} \\ &\leq \intdm{|\bz|\leq 1 }{ \varrho(|\bz|) \frac{\Vnorm{\grad \bu}_{L^{\infty}(\bbR^d)}\, |\bz|}{|\bz|} }{\bz} + \intdm{|\bz| > 1 }{ \varrho(|\bz|) \frac{|\bu(\bx+\bz)|+|\bu(\bx)|}{|\bz|} }{\bz} \\ &\leq \Vnorm{\grad \bu}_{L^{\infty}(\bbR^d)} \intdm{|\bz|\leq 1 }{ \varrho(|\bz|) }{\bz} + 2 \Vnorm{\bu}_{L^{\infty}(\bbR^d)} \intdm{|\bz| > 1 }{ \frac{\varrho(|\bz|)}{|\bz|} }{\bz} =: b_1 < \infty\,. \end{split} \end{equation*} The estimate for $\grad \cZ_{\varrho,\veps'} \bu$ follows along the same lines, since the operators $\cZ_{\varrho,\veps'}$ commute with derivatives. Therefore, the theorem is proved for case 1). For case 2) and for $s < 1/2$, we need to show that the function \begin{equation*} \Upsilon_{\veps,\veps'}(\bx,\bh) = \chi_{\bbR^d \setminus B({\bf 0},\veps)}(\bh)\, c_{d,s} \frac{|\cZ_{s,\veps'} \bu(\bx+\bh) - \cZ_{s,\veps'} \bu(\bx)| }{|\bh|^{d+s}} \end{equation*} is bounded by an $L^1$ function $\Upsilon(\bh)$. If we can show the existence of constants $b_1$ and $b_2$ independent of $\veps'$ such that \begin{equation}\label{eq:Composition:FractionalOp:Proof1} \Vnorm{\cZ_{s,\veps'} \bu}_{L^{\infty}(\bbR^d)} \leq b_1\,, \qquad [\cZ_{s,\veps'} \bu]_{C^{0,s+\sigma}(\bbR^d)} \leq b_2\,, \end{equation} then we have the upper bound by an $L^1$ function \begin{equation*} \Upsilon_{\veps,\veps'}(\bx,\bh) \leq c_{d,s} \left( \chi_{ \{ |\bh|\leq 1 \} } \frac{[\cZ_{s,\veps'} \bu]_{C^{0,s+\sigma}(\bbR^d)}}{|\bh|^{d-\sigma}} + 2 \chi_{ \{ |\bh| > 1 \} } \frac{\Vnorm{\cZ_{s,\veps'} \bu}_{L^{\infty}(\bbR^d)}}{|\bh|^{d+s}} \right)\,, \end{equation*} and the proof in the case 2) with $s < 1/2$ will be complete. The existence of $b_1$ and $b_2$ in \eqref{eq:Composition:FractionalOp:Proof1} can be shown by following the proof of the estimate \eqref{eq:NaturalEstimate1} line by line, with $\bu$ replaced by $\cZ_{s,\veps'} \bu$ and $\beta = 2s+\sigma$. The case 2) with $s \geq 1/2$ is proved in the same way, following instead the proof of the estimate \eqref{eq:NaturalEstimate3} line by line. \end{proof} \subsection{The Curl of the Gradient is Zero} The following proposition is a nonlocal analogue of the vector calculus identity $\curl \grad u = {\bf 0}$. \begin{proposition}\label{prop:CurlOfGrad:SmoothFxns} The identity \begin{equation}\label{eq:CurlOfGrad:SmoothFxns} \cC_{\varrho} \circ \cG_{\varrho} u(\bx) = {\bf 0} \end{equation} holds for all $\bx \in \bbR^d$ if either \begin{enumerate} \item[1)] $u \in C^2_b(\bbR^d)$ with $d=3$, or \item[2)] $\varrho = \varrho_s$ and $u \in L^1_{2s}(\bbR^d) \cap \scC^{2s+\sigma}(\bbR^d)$ for $\sigma > 0$ sufficiently small. \end{enumerate} \end{proposition} This can be shown immediately by applying \Cref{thm:OperatorsWellDefdForSmoothFxns} to the following theorem for the corresponding truncated operators. \begin{theorem}\label{thm:CurlOfGrad:Truncated} For any $u \in L^{\infty}(\bbR^d)$ and for any $\veps$, $\veps' > 0$ \begin{equation}\label{eq:CurlOfGrad:Truncated} \cC_{\varrho,\veps} \circ \cG_{\varrho,\veps'} u (\mathbf{x}) = - \cC_{\varrho,\veps'} \circ \cG_{\varrho,\veps} u (\mathbf{x}) \end{equation} for all $\bx \in \bbR^d$.
\end{theorem} \begin{proof} Unpacking the operator $\cC_{\varrho,\veps} \circ \cG_{\varrho,\veps'} u$ and changing coordinates, \begin{equation*} \begin{split} \cC_{\varrho,\veps} &\circ \cG_{\varrho,\veps'} u \\ &= \intdm{\bbR^d \setminus B({\bf 0},\veps) }{\varrho(|\bh|) \frac{\bh}{|\bh|} \times \frac{\cG_{\varrho,\veps'} u(\bx+\bh)-\cG_{\varrho,\veps'} u(\mathbf{x})}{|\bh|} }{\bh} \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps) } \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bh+\bw) - u(\bx+\bh) \big) \frac{\bw}{|\bw|} \, \rmd \bw \\ &\qquad -\int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bw) - u(\bx) \big) \frac{\bw}{|\bw|} \, \rmd \bw \Bigg) \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps) } \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \\ & \qquad \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bh+\bw) - u(\bx+\bh) -u(\bx+\bw) + u(\bx) \big) \frac{\bw}{|\bw|} \, \rmd \bw \Bigg) \, \rmd \bh\,. \end{split} \end{equation*} We are justified in using linearity of the integral in the last equality, since $\bw \mapsto \varrho(|\bw|) \frac{|u(\bx+\bw)-u(\bx)|}{|\bw|}$ is in $ L^1(\bbR^d \setminus B({\bf 0},\veps))$ for any $u \in L^{\infty}(\bbR^d)$, for any $\veps > 0$ and for any $\bx \in \bbR^d$. Thus we obtain, {\small \begin{equation*} \cC_{\varrho,\veps} \circ \cG_{\varrho,\veps'} u = \int_{\bbR^d \setminus B({\bf 0},\veps) } \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bh+\bw) - u(\bx+\bh) -u(\bx+\bw) + u(\bx) \big) \frac{\bh}{|\bh|} \times \frac{\bw}{|\bw|} \, \rmd \bw \, \rmd \bh\,. \end{equation*} } The last expression in the double integral is majorized by \begin{equation}\label{eq:CurlOfGrad:Majorizer} C \chi_{ \{|\bh| \geq \veps\} } \chi_{ \{|\bw| \geq \veps'\} } \Vnorm{u}_{L^{\infty}(\bbR^d)} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \in L^1( \bbR^d \times \bbR^d)\,. \end{equation} Therefore, we can use Fubini's theorem and interchange the order of integration: {\small \begin{equation*} \cC_{\varrho,\veps} \circ \cG_{\varrho,\veps'} u = \int_{\bbR^d \setminus B({\bf 0},\veps') } \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bh+\bw) - u(\bx+\bh) -u(\bx+\bw) + u(\bx) \big) \frac{\bh}{|\bh|} \times \frac{\bw}{|\bw|} \, \rmd \bh \, \rmd \bw\,. \end{equation*} } Now, we use the antisymmetry of the cross product, $\ba \times \bfb = - \, \bfb \times \ba$, and ``re-pack'' the integrals to obtain the result: {\small \begin{equation*} \begin{split} & \cC_{\varrho,\veps} \circ \cG_{\varrho,\veps'} u \\ &= - \int_{\bbR^d \setminus B({\bf 0},\veps') } \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \big(u(\bx+\bh+\bw) - u(\bx+\bh) -u(\bx+\bw) + u(\bx) \big) \frac{\bw}{|\bw|} \times \frac{\bh}{|\bh|} \, \rmd \bh \, \rmd \bw \\ &= - \int_{\bbR^d \setminus B({\bf 0},\veps') } \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \big(u(\bx+\bh+\bw)-u(\bx+\bw) \big) \frac{\bh}{|\bh|} \, \rmd \bh \\ &\qquad \qquad - \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \big(u(\bx+\bh)-u(\bx) \big) \frac{\bh}{|\bh|} \, \rmd \bh \Bigg) \, \rmd \bw\\ &= - \cC_{\varrho,\veps'} \circ \cG_{\varrho,\veps} u(\bx)\,.
\end{split} \end{equation*}} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:CurlOfGrad:SmoothFxns}] Use Theorem \ref{thm:OperatorsWellDefdForSmoothFxns} to take the limit as $\veps$, $\veps' \to 0$ on both sides of \eqref{eq:CurlOfGrad:Truncated}: \begin{equation*} \cC_{\varrho} \circ \cG_{\varrho} u = - \cC_{\varrho} \circ \cG_{\varrho} u\,, \end{equation*} whence $\cC_{\varrho} \circ \cG_{\varrho} u = {\bf 0}$. \end{proof} \subsection{The Divergence of the Curl is Zero} We proceed as in the previous subsection to prove a nonlocal vector calculus analogue of the identity $\div \curl \bu = 0$. \begin{proposition}\label{prop:DivOfCurl:SmoothFxns} The identity \begin{equation}\label{eq:DivOfCurl:SmoothFxns} \cD_{\varrho} \circ \cC_{\varrho} \bu(\bx) = 0 \end{equation} holds for all $\bx \in \bbR^d$ if either \begin{enumerate} \item[1)] $\bu \in C^2_b(\bbR^d;\bbR^d)$ with $d=3$, or \item[2)] $\varrho=\varrho_s$ and $\bu \in L^1_{2s}(\bbR^d;\bbR^d) \cap \scC^{2s+\sigma}(\bbR^d;\bbR^d)$ with $d=3$ and for $\sigma > 0$ sufficiently small. \end{enumerate} \end{proposition} This can be shown immediately by applying \Cref{thm:OperatorsWellDefdForSmoothFxns} to the following theorem for the corresponding truncated operators. \begin{theorem}\label{thm:DivOfCurl:Truncated} For any $\bu \in L^{\infty}(\bbR^d;\bbR^d)$ with $d=3$ and for any $\veps$, $\veps' > 0$ \begin{equation}\label{eq:DivOfCurl:Truncated} \cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu (\mathbf{x}) = - \cD_{\varrho,\veps'} \circ \cC_{\varrho,\veps} \bu (\mathbf{x}) \end{equation} for all $\bx \in \bbR^d$. \end{theorem} \begin{proof} Unpacking the operator $\cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu$ and changing coordinates, {\small \begin{equation*} \begin{split} &\cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu \\ &= \intdm{\bbR^d \setminus B({\bf 0},\veps) }{\varrho(|\bh|) \frac{\cC_{\varrho,\veps'} \bu(\bx+\bh)-\cC_{\varrho,\veps'} \bu(\mathbf{x})}{|\bh|} \cdot \frac{\bh}{|\bh|} }{\bh} \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps) } \frac{\varrho(|\bh|)}{|\bh|} \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times \Big( \bu(\bx+\bh+\bw) - \bu(\bx+\bh) \Big) \, \rmd \bw \\ &\qquad -\int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times \Big(\bu(\bx+\bw) - \bu(\bx) \Big) \, \rmd \bw \Bigg) \cdot \frac{\bh}{|\bh|} \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps) } \frac{\varrho(|\bh|)}{|\bh|} \Bigg(\int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) -\bu(\bx+\bw) + \bu(\bx) \big) \, \rmd \bw \Bigg) \cdot \frac{\bh}{|\bh|} \, \rmd \bh\,. \end{split} \end{equation*}} We are justified in using linearity of the integral in the last equality, since $\bw \mapsto \varrho(|\bw|) \frac{|\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)|}{|\bw|}$ is in $L^1(\bbR^d \setminus B({\bf 0},\veps))$ for any $\bu \in L^{\infty}(\bbR^d)$, for any $\veps > 0$ and for any $\bx \in \bbR^d$. Thus we obtain {\small \begin{equation*} \cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu = \int_{\bbR^d \setminus B({\bf 0},\veps) } \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \times \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) - \bu(\bx+\bw) + \bu(\bx) \big) \Bigg) \cdot \frac{\bh}{|\bh|} \, \rmd \bw \, \rmd \bh\,.
\end{equation*} } The last expression in the double integral is majorized by \begin{equation}\label{eq:DivOfCurl:Majorizer} C \chi_{ \{|\bh| \geq \veps\} } \chi_{ \{|\bw| \geq \veps'\} } \Vnorm{\bu}_{L^{\infty}(\bbR^d)} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \in L^1( \bbR^d \times \bbR^d)\,. \end{equation} Therefore, we can use Fubini's theorem and interchange the order of integration: {\small \begin{equation*} \cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu = \int_{\bbR^d \setminus B({\bf 0},\veps') } \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \times \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) - \bu(\bx+\bw) + \bu(\bx) \big) \Bigg) \cdot \frac{\bh}{|\bh|} \, \rmd \bh \, \rmd \bw\,. \end{equation*}} Now, we use the identity $(\ba \times \bfb) \cdot \bc = (\bfb \times \bc) \cdot \ba = - (\bc \times \bfb) \cdot \ba$, and ``re-pack'' the integrals to obtain the result: {\small \begin{equation*} \begin{split} &\cD_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \bu \\ &= - \int_{\bbR^d \setminus B({\bf 0},\veps') } \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bh}{|\bh|} \times \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) - \bu(\bx+\bw) + \bu(\bx) \big) \Bigg) \cdot \frac{\bw}{|\bw|} \, \rmd \bh \, \rmd \bw \\ &= - \int_{\bbR^d \setminus B({\bf 0},\veps') } \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \big( \bu(\bx+\bh+\bw)-\bu(\bx+\bw) \big) \, \rmd \bh \\ &\qquad \qquad - \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \big( \bu(\bx+\bh)-\bu(\bx) \big) \, \rmd \bh \Bigg) \cdot \frac{\bw}{|\bw|} \, \rmd \bw = - \cD_{\varrho,\veps'} \circ \cC_{\varrho,\veps} \bu(\bx)\,. \end{split} \end{equation*}} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:DivOfCurl:SmoothFxns}] Use Theorem \ref{thm:OperatorsWellDefdForSmoothFxns} to take the limit as $\veps$, $\veps' \to 0$ on both sides of \eqref{eq:DivOfCurl:Truncated}: \begin{equation*} \cD_{\varrho} \circ \cC_{\varrho} \bu = - \cD_{\varrho} \circ \cC_{\varrho} \bu\,, \end{equation*} whence $\cD_{\varrho} \circ \cC_{\varrho} \bu = 0$. \end{proof} \subsection{Curl of Curl Identity} We again proceed by computing the composition of the curl operator with itself in the truncated case and then using Theorem \ref{thm:OperatorsWellDefdForSmoothFxns} to prove that the same identity holds in the limit. \begin{proposition}\label{prop:CurlOfCurl:SmoothFxns} The identity \begin{equation}\label{eq:goal} \cC_{\varrho} \circ \cC_{\varrho}\mathbf{u}(\mathbf{x}) = \cG_{\varrho} \circ \cD_{\varrho} \mathbf{u}(\mathbf{x}) - \cD_{\varrho} \circ \cG_{\varrho} \mathbf{u}(\mathbf{x}) \end{equation} holds for all $\bx \in \bbR^d$ if either \begin{enumerate} \item[1)] $\bu \in C^2_b(\bbR^d;\bbR^d)$ with $d=3$, or \item[2)] $\varrho=\varrho_s$ and $\bu \in L^1_{2s}(\bbR^d;\bbR^d) \cap \scC^{2s+\sigma}(\bbR^d;\bbR^d)$ with $d=3$ and for $\sigma > 0$ sufficiently small. \end{enumerate} \end{proposition} This can be shown immediately by applying \Cref{thm:OperatorsWellDefdForSmoothFxns} to the following version for the corresponding truncated operators.
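Before stating the truncated version, we recall for orientation the classical identity $\curl \curl \bu = \grad \div \bu - \Delta \bu$; identity \eqref{eq:goal} is its exact nonlocal analogue, with $-\cD_{\varrho} \circ \cG_{\varrho}$ playing the role of the Laplacian. Indeed, in view of the equivalence result of Section \ref{sec:eq-kernel}, \eqref{eq:goal} may be rewritten as
\begin{equation*}
\cC_{\varrho} \circ \cC_{\varrho} \bu = \cG_{\varrho} \circ \cD_{\varrho} \bu + (-\Delta)_{\varrho} \bu \qquad \text{whenever } -\cD_{\varrho} \circ \cG_{\varrho} \bu = (-\Delta)_{\varrho} \bu\,.
\end{equation*}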
\begin{theorem}\label{thm:CurlOfCurl:Truncated} For any $\bu \in L^{\infty}(\bbR^d;\bbR^d)$ with $d=3$ and for any $\veps$, $\veps' > 0$ \begin{equation}\label{eq:CurlCurlId:Truncated} \cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}) = \cG_{\varrho,\veps'} \circ \cD_{\varrho,\veps} \mathbf{u}(\mathbf{x}) - \cD_{\varrho,\veps} \circ \cG_{\varrho,\veps'} \mathbf{u}(\mathbf{x}) \end{equation} for all $\bx \in \bbR^d$. \end{theorem} \begin{proof} We require the following ``triple product'' identity: \begin{equation}\label{eq:triple_product} \mathbf{a} \times (\bfb \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c}) \bfb - (\mathbf{a} \cdot \bfb) \mathbf{c}. \end{equation} Unpacking the operator $\cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'} \mathbf{u}$ using the definition of $\cC_{\varrho,\veps}$ and changing coordinates, \begin{equation*} \begin{split} \cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}) &= \cC_{\varrho,\veps} (\cC_{\varrho,\veps'}\mathbf{u})(\mathbf{x}) \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \varrho (|\bh|) \frac{\bh}{|\bh|} \times \frac{(\cC_{\varrho,\veps'}\mathbf{u}(\bx+\bh)-\cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}))}{|\bh|} \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \\ &\qquad \Bigg( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \, \rmd \bw \\ &\qquad \qquad \qquad \qquad - \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bw)-\mathbf{u}(\mathbf{x})) \, \rmd\bw \Bigg) \, \rmd\bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \times \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \\ &\qquad \Bigg( \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) - \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bw)-\mathbf{u}(\mathbf{x})) \Bigg) \, \rmd \bw \, \rmd \bh. \end{split} \end{equation*} We are justified in using linearity of the integral in the last equality, since $\bw \mapsto \varrho(|\bw|) \frac{|\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)|}{|\bw|}$ is in $L^1(\bbR^d \setminus B({\bf 0},\veps))$ for any $\bu \in L^{\infty}(\bbR^d)$, for any $\veps > 0$ and for any $\bx \in \bbR^d$. Some of these terms do not depend on $\bw$, so we can write \begin{align*} \cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}) &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho(|\bh|)}{|\bh|} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho(|\bw|)}{|\bw|} \left[\frac{\bh}{|\bh|} \times \left( \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \right)\right. \\ &\qquad \qquad - \left.\frac{\bh}{|\bh|} \times \left( \frac{\bw}{|\bw|} \times (\mathbf{u}(\bx+\bw)-\mathbf{u}(\mathbf{x})) \right)\right] \, \rmd \bw \, \rmd \bh.
\end{align*} Now we use the identity \eqref{eq:triple_product} to write \begin{align*} \cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}) = \int_{\bbR^d \setminus B({\bf 0},\veps)} & \frac{\varrho (|\bh|)}{|\bh|} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \\ &\left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \right) \frac{\bw}{|\bw|} \\ - &\left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \\ - &\left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \right) \frac{\bw}{|\bw|} \\ + &\left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \Bigg] \, \rmd \bw \, \rmd \bh \\ = \int_{\bbR^d \setminus B({\bf 0},\veps)} & \frac{\varrho (|\bh|)}{|\bh|} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \\ &\left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \right) \frac{\bw}{|\bw|} \\ - &\left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \right) \frac{\bw}{|\bw|} \\ - &\left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \\ + &\left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \Bigg] \, \rmd \bw \, \rmd \bh\,. \end{align*} Since the last expression in the double integral is majorized by \begin{equation}\label{eq:CurlOfCurl:Majorizer} C \chi_{ \{|\bh| \geq \veps\} } \chi_{ \{|\bw| \geq \veps'\} } \Vnorm{\bu}_{L^{\infty}(\bbR^d)} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \in L^1( \bbR^d \times \bbR^d)\,, \end{equation} we can use linearity of the double integral to separate the first two terms from the last two. This gives {\small \begin{align*} \cC_{\varrho,\veps} \circ \cC_{\varrho,\veps'}\mathbf{u}(\mathbf{x}) = & \int_{\bbR^d \setminus B({\bf 0},\veps)} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \\ & \qquad \left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \right) \frac{\bw}{|\bw|} - \left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \right) \frac{\bw}{|\bw|} \Bigg] \, \rmd \bw \, \rmd \bh \\ &\quad - \int_{\bbR^d \setminus B({\bf 0},\veps)} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \\ &\qquad \left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) - \left( \frac{\bh}{|\bh|} \cdot \frac{\bw}{|\bw|} \right) (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \Bigg] \, \rmd \bw \, \rmd \bh \\ &=: I \, (\text{first double integral above}) - II \, (\text{second double integral above}).
\end{align*} } Now, we use the vector identity \begin{equation*} (\ba \cdot \bfb) \bc = (\bc \otimes \ba) \bfb \end{equation*} and write {\small \begin{equation}\label{eq:CurlOfCurl:Pf1} \begin{split} II &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \\ & \qquad \left( (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \otimes \frac{\bw}{|\bw|} \right) \frac{\bh}{|\bh|} - \left( (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \otimes \frac{\bw}{|\bw|} \right) \frac{\bh}{|\bh|} \Bigg] \, \rmd \bw \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho (|\bh|)}{|\bh|} \Bigg[ \left( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)) \otimes \frac{\bw}{|\bw|} \, \rmd \bw \right) \frac{\bh}{|\bh|} \\ &\qquad - \left( \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} (\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)) \otimes \frac{\bw}{|\bw|} \, \rmd \bw \right) \frac{\bh}{|\bh|} \Bigg] \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho (|\bh|)}{|\bh|} \Bigg[ \cG_{\varrho,\veps'} \bu(\bx+\bh) \frac{\bh}{|\bh|} - \cG_{\varrho,\veps'} \bu(\bx) \frac{\bh}{|\bh|} \Bigg] \, \rmd \bh \\ &= \cD_{\varrho,\veps} \circ \cG_{\varrho,\veps'} \bu(\bx)\,. \end{split} \end{equation}} Using linearity of the inner integral is again justified since the double integrand of $II$ is majorized by the function in \eqref{eq:CurlOfCurl:Majorizer}. Lastly, the double integrand of $I$ is also majorized by the function in \eqref{eq:CurlOfCurl:Majorizer}. Therefore, using Fubini's theorem and linearity of the integral, {\small \begin{equation}\label{eq:CurlOfCurl:Pf2} \begin{split} I &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bh)-\bu(\bx+\bw) + \bu(\bx) ) \Bigg] \frac{\bw}{|\bw|} \, \rmd \bw \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps)} \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bw)) \right) \\ &\qquad \qquad \qquad - \left( \frac{\bh}{|\bh|} \cdot (\bu(\bx+\bh) - \bu(\bx) ) \right) \Bigg] \frac{\bw}{|\bw|} \, \rmd \bw \, \rmd \bh \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps')} \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \frac{\varrho (|\bh|)}{|\bh|} \left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bw)) \right) \\ &\qquad \qquad \qquad - \frac{\varrho (|\bh|)}{|\bh|} \left( \frac{\bh}{|\bh|} \cdot (\bu(\bx+\bh) - \bu(\bx) ) \right) \Bigg] \frac{\bw}{|\bw|} \, \rmd \bh \, \rmd \bw \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} \Bigg[ \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho (|\bh|)}{|\bh|} \left( \frac{\bh}{|\bh|} \cdot (\mathbf{u}(\bx+\bh+\bw)-\mathbf{u}(\bx+\bw)) \right) \, \rmd \bh \\ &\qquad \qquad \qquad - \int_{\bbR^d \setminus B({\bf 0},\veps)} \frac{\varrho (|\bh|)}{|\bh|} \left( \frac{\bh}{|\bh|} \cdot (\bu(\bx+\bh) - \bu(\bx) ) \right) \, \rmd \bh \Bigg] \frac{\bw}{|\bw|} \, \rmd \bw \\ &= \int_{\bbR^d \setminus B({\bf 0},\veps')} \frac{\varrho (|\bw|)}{|\bw|} \Big[ \cD_{\varrho,\veps} \bu(\bx+\bw) - \cD_{\varrho,\veps} \bu(\bx) \Big] \frac{\bw}{|\bw|} \, \rmd \bw \\ &= \cG_{\varrho,\veps'} \circ \cD_{\varrho,\veps} \bu(\bx)\,.
\end{split} \end{equation}} Putting together \eqref{eq:CurlOfCurl:Pf1} and \eqref{eq:CurlOfCurl:Pf2} gives us the theorem. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:CurlOfCurl:SmoothFxns}] Follows from Theorem \ref{thm:OperatorsWellDefdForSmoothFxns} by passing to the limit as $\veps$, $\veps' \to 0$ in \eqref{eq:CurlCurlId:Truncated}. \end{proof} \section{Equivalence Kernel}\label{sec:eq-kernel} In this section we rigorously show that for bounded $C^2$ functions, there exists an equivalence kernel for which the composition of the divergence and gradient operators corresponds to the (unweighted) nonlocal Laplace operator, i.e. $-\cD_{\varrho} \circ \cG_{\varrho}=(-\Delta)_\varrho$. Furthermore, we use the kernel examples described in Section \ref{sec:OperatorDef} to illustrate our equivalence result. \begin{theorem}\label{thm:EquivalenceKernel} Let $d$ and $N$ be positive integers. Suppose $\varrho$ is a radial kernel that satisfies \eqref{assumption:Kernel}, and suppose that \begin{equation}\label{eq:KernelFullIntegrability} \frac{\varrho(|\bseta|)}{|\bseta|} \in L^1(\bbR^d)\,. \tag{K-INT} \end{equation} Then for functions $\bu \in C^2_b(\bbR^d;\bbR^N)$ the formula \begin{equation}\label{eq:Diffusion} -\cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) = \frac{1}{2}\intdm{\bbR^d}{ \varrho_{\text{eq}}(|\by|) \frac{(2\bu(\bx)-\bu(\bx+\by)-\bu(\bx-\by))}{|\by|^2} }{\by} \end{equation} holds, where the measurable function $\varrho_{\text{eq}}$ is defined as \begin{equation}\label{eq:EquivalenceKernel} \varrho_{\text{eq}}(|\bseta|) := |\bseta|^{d} \intdm{\bbR^d}{ \frac{\varrho(|\bseta| |\bz|)}{|\bz|} \frac{\varrho(|\bseta| |\be_1 -\bz|)}{|\be_1-\bz|} { \frac{\be_1-\bz}{|\be_1 - \bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}\,, \quad |\bseta| > 0\,. \end{equation} \end{theorem} Henceforth we define the operator appearing in \eqref{eq:Diffusion} as \begin{equation}\label{eq:DiffusionDefinition} (-\Delta)_{\varrho}\bu(\bx) := \frac{1}{2}\intdm{\bbR^d}{ \varrho_{\text{eq}}(|\by|) \frac{(2\bu(\bx)-\bu(\bx+\by)-\bu(\bx-\by))}{|\by|^2} }{\by}\,. \end{equation} \begin{proof}[Proof of Theorem \ref{thm:EquivalenceKernel}] Unpacking the operator $\cD_{\varrho} \circ \cG_{\varrho} \bu$ and changing coordinates, \begin{equation*} \begin{split} &\cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) \\ &= \intdm{\bbR^d }{\varrho(|\bh|) \frac{\cG_{\varrho} \bu(\bx+\bh)-\cG_{\varrho} \bu(\mathbf{x})}{|\bh|} \frac{\bh}{|\bh|} }{\bh} \\ &= \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \Bigg(\int_{\bbR^d} \frac{\varrho(|\bw|)}{|\bw|} \Big( \bu(\bx+\bh+\bw) - \bu(\bx+\bh) \Big) \otimes \frac{\bw}{|\bw|} \, \rmd \bw \\ &\qquad -\int_{\bbR^d} \frac{\varrho(|\bw|)}{|\bw|} \Big(\bu(\bx+\bw) - \bu(\bx) \Big) \otimes \frac{\bw}{|\bw|} \, \rmd \bw \Bigg) \frac{\bh}{|\bh|} \, \rmd \bh \\ &= \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \Bigg( \int_{\bbR^d} \frac{\varrho(|\bw|)}{|\bw|} \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) -\bu(\bx+\bw) + \bu(\bx) \big) \otimes \frac{\bw}{|\bw|} \, \rmd \bw \Bigg) \frac{\bh}{|\bh|} \, \rmd \bh\,. \end{split} \end{equation*} We are justified in using linearity of the integral in the last equality, since by \eqref{eq:KernelFullIntegrability} $\bw \mapsto \varrho(\bw) \frac{|\mathbf{u}(\bx+\bw)-\mathbf{u}(\bx)|}{|\bw|} \in L^1(\bbR^d)$ for any $\bu \in L^{\infty}(\bbR^d)$, for any $\veps > 0$ and for any $\bx \in \bbR^d$. 
Using the vector identity $(\ba \otimes \bfb) \bc = (\bfb \cdot \bc) \ba$ brings us to \begin{equation*} \cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) = \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \big(\bu(\bx+\bh+\bw) - \bu(\bx+\bh) - \bu(\bx+\bw) + \bu(\bx) \big) \, \rmd \bw \, \rmd \bh\,. \end{equation*} The expression in the double integral is majorized by \begin{equation}\label{eq:DivOfGrad:Majorizer} C \Vnorm{\bu}_{L^{\infty}(\bbR^d)} \frac{\varrho (|\bh|)}{|\bh|} \frac{\varrho (|\bw|)}{|\bw|}\,, \end{equation} which belongs to $L^1( \bbR^d \times \bbR^d)$ by Tonelli's theorem. Therefore, Fubini's theorem is justified in the following splitting of the integrand: \begin{equation*} \begin{split} \cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) &= \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \bu(\bx+\bh+\bw) \, \rmd \bw \, \rmd \bh \\ &\qquad - \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \left( \left[ \int_{\bbR^d} \frac{\varrho(|\bw|)}{|\bw|} \frac{\bw}{|\bw|} \, \rmd \bw \right] \cdot \frac{\bh}{|\bh|} \right) \bu(\bx+\bh) \, \rmd \bh \\ &\qquad - \int_{\bbR^d} \frac{\varrho(|\bw|)}{|\bw|} \left( \left[ \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\bh}{|\bh|} \, \rmd \bh \right] \cdot \frac{\bw}{|\bw|} \right) \bu(\bx+\bw) \, \rmd \bw \\ &\qquad + \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \, \rmd \bw \, \rmd \bh \, \bu(\bx) \,. \end{split} \end{equation*} The inner integrals on the second and third lines are both zero, since the respective integrands are odd. The last line is zero for the same reason. Therefore, we can subtract any multiple of the last line from $\cD_{\varrho} \circ \cG_{\varrho} \bu(\bx)$. Combining this fact, along with splitting the integral and changing coordinates, gives \begin{equation*} \begin{split} \cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) &= \frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \bu(\bx+\bh+\bw) \, \rmd \bw \, \rmd \bh \\ &\qquad +\frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \bu(\bx+\bh+\bw) \, \rmd \bw \, \rmd \bh \\ &= \frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \bu(\bx+\bh+\bw) \, \rmd \bw \, \rmd \bh \\ &\qquad +\frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \bu(\bx-\bh-\bw) \, \rmd \bw \, \rmd \bh \\ &= \frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\bw|)}{|\bw|} \Bigg( \frac{\bw}{|\bw|} \cdot \frac{\bh}{|\bh|} \Bigg) \\ &\qquad \qquad \cdot \big( \bu(\bx+\bh+\bw) + \bu(\bx-\bh-\bw) - 2 \bu(\bx) \big)\, \rmd \bw \, \rmd \bh\,. 
\end{split} \end{equation*} Now, we iterate the integrals and introduce the coordinate change $\by = \bw+\bh$: \begin{equation*} \begin{split} \cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) &= \frac{1}{2} \int_{\bbR^d} \int_{\bbR^d} \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\by-\bh|)}{|\by-\bh|} { \frac{\by-\bh}{|\by-\bh|} \cdot \frac{\bh}{|\bh|} } \big( \bu(\bx+\by) + \bu(\bx-\by) - 2 \bu(\bx) \big) \, \rmd \by \, \rmd \bh\,. \end{split} \end{equation*} We can interchange the order of integration, since the integrand remains majorized by \eqref{eq:DivOfGrad:Majorizer}. So we have \begin{equation*} -\cD_{\varrho} \circ \cG_{\varrho} \bu(\bx) = \frac{1}{2} \int_{\bbR^d} \varrho_{\text{eq}}(\by) \frac{ \big( 2 \bu(\bx) - \bu(\bx+\by) - \bu(\bx-\by) \big) }{|\by|^2} \, \rmd \by \,, \end{equation*} where \begin{equation*} \varrho_{\text{eq}}(\by) = |\by|^2 \intdm{\bbR^d}{ \frac{\varrho(|\bh|)}{|\bh|} \frac{\varrho(|\by-\bh|)}{|\by-\bh|} { \frac{\by-\bh}{|\by-\bh|} \cdot \frac{\bh}{|\bh|} } }{\bh}\,. \end{equation*} In order to conclude with the formula \eqref{eq:EquivalenceKernel}, we will show that $\varrho_{\text{eq}}$ actually depends only on $|\by|$. For any $\by \neq {\bf 0}$, let $\bw = \frac{\bh}{|\by|}$ and change coordinates: \begin{equation*} \varrho_{\text{eq}}(\by) = |\by|^{d} \intdm{\bbR^d}{ \frac{\varrho(|\by| |\bw|)}{|\bw|} \frac{\varrho(|\by| |\frac{\by}{|\by|} -\bw|)}{|\frac{\by}{|\by|}-\bw|} { \frac{\frac{\by}{|\by|}-\bw}{|\frac{\by}{|\by|}-\bw|} \cdot \frac{\bw}{|\bw|} } }{\bw}\,. \end{equation*} Let $\bR(\by)$ be the rotation such that $\bR \frac{\by}{|\by|} =\be_1$, where $\be_1 = (1,0,\ldots, 0)$. Then letting $\bz = \bR \bw$ and changing coordinates gives \begin{equation*} \varrho_{\text{eq}}(\by) = |\by|^{d} \intdm{\bbR^d}{ \frac{\varrho(|\by| |\bz|)}{|\bz|} \frac{\varrho(|\by| |\be_1 -\bz|)}{|\be_1-\bz|} { \frac{\be_1-\bz}{|\be_1 - \bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}\,. \end{equation*} \end{proof} The previous theorem relies heavily on the assumption \eqref{eq:KernelFullIntegrability}, which does not hold for singular kernels such as $\varrho_s$. Nevertheless, a pointwise equivalence kernel can be defined, as we show in the next lemma. \begin{lemma}\label{lma:EquivalenceKernel:Singular} Suppose that a radial kernel $\varrho$ satisfies \eqref{assumption:Kernel}, and define the function \begin{equation*} \Psi(r) := \int_r^{\infty} \frac{\varrho(\theta)}{\theta} \, \rmd \theta\,, \qquad r > 0\,, \end{equation*} which defines a map $\Psi: (0,\infty) \to (0,\infty)$ thanks to \eqref{assumption:Kernel}. Suppose in addition that $\Psi$ satisfies \begin{equation}\label{assumption:EquivalenceKernel} r^{d-1} \Psi(r) \in L^1_{\text{loc}}([0,\infty))\,, \qquad \Psi \in C^2((0,\infty))\,. \tag{K-EQ} \end{equation} Then a pointwise equivalence kernel can be defined in the following way: \begin{equation*} \varrho_{\text{eq}}(|\bseta|) := \lim\limits_{\veps,\veps' \to 0} \varrho_{\text{eq},\veps,\veps'}(|\bseta|) \end{equation*} for any $|\bseta| > 0$, where the measurable function $\varrho_{\text{eq},\veps,\veps'}$ is defined for $\veps > 0$ and $\veps' > 0$ as \begin{equation}\label{eq:pvEquivalenceKernel} \varrho_{\text{eq},\veps,\veps'}(|\bseta|) := |\bseta|^{d} \intdm{\bbR^d}{ \chi_{\{|\be_1-\bz| > \veps' \}} \chi_{\{|\bz| > \veps \}} \frac{\varrho(|\bseta| |\bz|)}{|\bz|} \frac{\varrho(|\bseta| |\be_1 -\bz|)}{|\be_1-\bz|} { \frac{\be_1-\bz}{|\be_1 - \bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}.
\end{equation} \end{lemma} \begin{proof} To begin, we split the integral. For any $\veps > 0$, define the sets \begin{equation*} A_{\veps,1} := \{ \bz \, : \, \veps \leq |\bz| \leq \frac{1}{2} \}\,, \quad A_{\veps,2} := \{ \bz \, : \, \veps \leq |\be_1-\bz| \leq \frac{1}{2} \}\,, \quad A_{1} := \{ \bz \, : \, \frac{1}{2} \leq |\bz| \text{ and } \frac{1}{2} \leq |\be_1-\bz| \}\,. \end{equation*} Then \begin{equation*} \varrho_{\text{eq},\veps,\veps'}(|\bseta|) = \int_{A_{\veps,1}} \cdots + \int_{A_{\veps',2}} \cdots + \int_{A_{1}} \cdots\,. \end{equation*} Clearly the third integral converges absolutely. Letting $\by = \be_1 - \bz$, a change of coordinates gives \begin{equation*} \begin{split} &\intdm{A_{\veps',2}}{ \frac{\varrho(|\bseta| \, |\be_1-\bz|)}{|\be_1-\bz|} \frac{\varrho(|\bseta| \, |\bz|)}{|\bz|} { \frac{\be_1-\bz}{|\be_1-\bz|} \cdot \frac{\bz}{|\bz|} } }{\bz} \\ &\quad = \intdm{\bbR^d}{ \chi_{ \{ \frac{1}{2} \geq |\be_1-\bz| \geq \veps' \} } \cdot \frac{\varrho(|\bseta| \, |\be_1-\bz|)}{|\be_1-\bz|} \frac{\varrho(|\bseta| \, |\bz|)}{|\bz|} { \frac{\be_1-\bz}{|\be_1-\bz|} \cdot \frac{\bz}{|\bz|} } }{\bz} \\ &\quad = \intdm{\bbR^d}{ \chi_{ \{ \frac{1}{2} \geq |\by| \geq \veps' \} } \cdot \frac{\varrho(|\bseta| \, |\by|)}{|\by|} \frac{\varrho(|\bseta| \, |\be_1-\by|)}{|\be_1-\by|} { \frac{\by}{|\by|} \cdot \frac{\be_1-\by}{|\be_1-\by|} } }{\by} \\ &\quad = \intdm{A_{\veps',1}}{ \frac{\varrho(|\bseta| \, |\be_1-\bz|)}{|\be_1-\bz|} \frac{\varrho(|\bseta| \, |\bz|)}{|\bz|} { \frac{\be_1-\bz}{|\be_1-\bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}\,. \end{split} \end{equation*} Thus it suffices to show that, for any fixed $|\bseta| > 0$, \begin{align}\label{eq:EquivalenceKernel:Proof1} \begin{split} \sup_{\veps > 0} \big| \wt{\varrho}_{\text{eq},\veps,1}(|\bseta|) \big| < \infty\,, \quad \text{where} \quad \wt{\varrho}_{\text{eq},\veps,1}(|\bseta|) := \intdm{\bbR^d}{ \chi_{ \{ \frac{1}{2} \geq |\bz| \geq \veps \} } \cdot \frac{\varrho(|\bseta| \, |\be_1-\bz|)}{|\be_1-\bz|} \frac{\varrho(|\bseta| \, |\bz|)}{|\bz|} { \frac{\be_1-\bz}{|\be_1-\bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}\,. \end{split} \end{align} We assume $\veps < 1/4$ from here on. Note that for any fixed $\delta > 0$ and for $\ba \in \{ {\bf 0}, \be_1 \}$ \begin{equation*} \grad_{\bz} \Psi( \delta |\ba-\bz| ) = \frac{\varrho(\delta |\ba-\bz|)}{\delta |\ba-\bz|} \cdot \delta \frac{\ba-\bz}{|\ba-\bz|}\,. \end{equation*} Thus, writing $\delta = |\bseta|$ and integrating by parts, \begin{equation*} \begin{split} \wt{\varrho}_{\text{eq},\veps,1}(\delta) &= \intdm{\{ \frac{1}{2} \geq |\bz| \geq \veps \}}{ { \grad_{\bz} \Psi( \delta |\be_1-\bz| ) \,\cdot\, \grad_{\bz} \Psi( \delta |\bz| ) } }{\bz} \\ &= - \intdm{\{ \frac{1}{2} \geq |\bz| \geq \veps \}}{ \Delta_{\bz} \Psi( \delta |\be_1-\bz| ) \, \Psi( \delta |\bz| ) }{\bz} - \intdm{ \{|\bz| = \veps\} }{ \grad_{\bz} \Psi(\delta|\be_1-\bz|) \cdot \frac{\bz}{|\bz|} \, \Psi(\delta |\bz|) }{\sigma(\bz)} \\ &\quad + \intdm{ \{|\bz| = 1/2\} }{ \grad_{\bz} \Psi(\delta|\be_1-\bz|) \cdot \frac{\bz}{|\bz|} \, \Psi(\delta |\bz|) }{\sigma(\bz)}\,. \end{split} \end{equation*} Note that $\Psi \in C^2((0,\infty))$ and the argument $\delta |\be_1 - \bz|$ lives in a bounded set bounded away from $0$. Note also that $\Psi(\delta |\bz|) \in L^1_{\text{loc}}(\bbR^d)$ by \eqref{assumption:EquivalenceKernel}. Therefore the first and third integrals are both finite and bounded uniformly in $\veps$.
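More precisely, since $|\be_1 - \bz| \in [1/2, 3/2]$ for $|\bz| \leq 1/2$ and $\Psi \in C^2((0,\infty))$, there is a constant $C(\delta)$ such that $|\Delta_{\bz} \Psi(\delta|\be_1-\bz|)| \leq C(\delta)$ on $\{|\bz| \leq 1/2\}$, and therefore
\begin{equation*}
\left| \intdm{\{ \frac{1}{2} \geq |\bz| \geq \veps \}}{ \Delta_{\bz} \Psi( \delta |\be_1-\bz| ) \, \Psi( \delta |\bz| ) }{\bz} \right| \leq C(\delta) \intdm{|\bz| \leq \frac{1}{2}}{ \Psi(\delta|\bz|) }{\bz} = C(\delta)\, \omega_{d-1} \int_0^{1/2} r^{d-1} \Psi(\delta r) \, \rmd r < \infty
\end{equation*}
by \eqref{assumption:EquivalenceKernel}; the integral over $\{|\bz|=1/2\}$ is bounded similarly.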
As for the second integral, a change of variables gives \begin{multline} C \intdm{ \{|\bz| = \veps\} }{ \grad_{\bz} \Psi(\delta|\be_1-\bz|) \cdot \frac{\bz}{|\bz|} \, \Psi(\delta |\bz|) }{\sigma(\bz)} \\ = C\intdm{\bbS^{d-1}}{ \veps^{d-1} \Psi(\delta \veps) \frac{\varrho(\delta |\be_1-\veps \bw|)}{|\be_1-\veps \bw|} { \frac{\be_1-\veps \bw}{|\be_1 - \veps \bw|} \cdot \bw } }{\sigma(\bw)} =: I\,. \end{multline} Now for $\bw \in \bbS^{d-1}$ and for $\veps \in [0,1/4)$ define the function $h_{\bw,\delta}(\veps) := \Psi(\delta |\be_1 - \veps \bw|)$. For any choice of $\bw$, we have that $\veps \mapsto h_{\bw,\delta}(\veps)$ is $C^2$ and its derivatives are uniformly bounded (the bound is also uniform with respect to $\bw$). Thus we can write $I$ as \begin{equation*} I = C\intdm{\bbS^{d-1}}{ \veps^d \Psi(\delta \veps) \frac{h_{\bw,\delta}'(\veps)}{\veps} }{\sigma(\bw)}\,. \end{equation*} Note that $h_{\bw,\delta}'(0) = \varrho(\delta) { \be_1 \cdot \bw }$ and thus $\intdm{\bbS^{d-1}}{h_{\bw,\delta}'(0)}{\sigma(\bw)} = 0$. We then see by applying the mean value theorem that \begin{equation*} I = C\intdm{\bbS^{d-1}}{ \veps^d \Psi(\delta \veps) \frac{h_{\bw,\delta}'(\veps)-h_{\bw,\delta}'(0)}{\veps} }{\sigma(\bw)} = O \left( \veps^d \Psi(\delta \veps) \right)\,. \end{equation*} We claim that $\lim\limits_{\veps \to 0} \veps^d \Psi(\delta \veps) = 0$, from which \eqref{eq:EquivalenceKernel:Proof1} follows. To see the claim, note that since $\Psi$ is nonincreasing, \begin{equation*} \frac{1-2^{-d}}{d}\, \veps^d \Psi(\delta \veps) = \Psi(\delta \veps) \int_{\veps/2}^{\veps} r^{d-1} \, \rmd r \leq \int_{\veps/2}^{\veps} r^{d-1} \Psi(\delta r) \, \rmd r \to 0 \quad \text{ as } \veps \to 0\,, \end{equation*} by continuity of the integral, since $r^{d-1}\Psi(\delta r) \in L^1_{\text{loc}}([0,\infty))$ by \eqref{assumption:EquivalenceKernel}. \end{proof} By the calculations in the proof of Theorem \ref{thm:EquivalenceKernel} it follows that \begin{equation}\label{eq:pvDiffusion} \cD_{\varrho,\veps} \circ \cG_{\varrho,\veps'} \bu(\bx) = \frac{1}{2} \intdm{\bbR^d}{ \varrho_{\text{eq},\veps,\veps'}(|\by|) \frac{2 \bu(\bx) - \bu(\bx+\by) -\bu(\bx-\by) }{|\by|^2} }{\by} \text{ for } \veps\,, \veps' > 0\,. \end{equation} Unfortunately it is unclear whether the limit as $\veps$, $\veps' \to 0$ can be taken for general kernels satisfying \eqref{assumption:EquivalenceKernel}, even if $\bu$ is smooth. However, for specific examples of $\varrho$ we can show that the integrand on the right-hand side of \eqref{eq:pvDiffusion} is bounded by an $L^1$ function uniformly in $\veps$ and $\veps'$. Then the limit can be taken on both sides of \eqref{eq:pvDiffusion} by \Cref{thm:OperatorsWellDefdForSmoothFxns} and by the Lebesgue Dominated Convergence theorem to conclude that formula \eqref{eq:Diffusion} holds for any $\bu \in C^2_b(\bbR^d)$. The following examples illustrate the situation. \begin{example}\label{example:equivalence:FractionalKernel} Direct calculation shows that the fractional kernel $\varrho_s(|\bseta|)$ satisfies the conditions of \Cref{lma:EquivalenceKernel:Singular}. Moreover, \eqref{eq:pvEquivalenceKernel} for this particular kernel becomes $ \varrho_{s,\text{eq},\veps,\veps'}(|\bseta|) = \frac{C_{d,s,\veps,\veps'}}{|\bseta|^{d+2s-2}}\,,$ where the family of constants $C_{d,s,\veps,\veps'}$ is given by \[ C_{d,s,\veps,\veps'} := (c_{d,s})^2 \intdm{\bbR^d}{ \chi_{\{|\be_1-\bz| > \veps' \}} \chi_{\{|\bz| > \veps \}} \frac{1}{|\bz|^{d+s}} \frac{1}{|\be_1-\bz|^{d+s}} { \frac{\be_1-\bz}{|\be_1 - \bz|} \cdot \frac{\bz}{|\bz|} } }{\bz}.
\] By the same line of reasoning as in the proof of \Cref{lma:EquivalenceKernel:Singular} we see that the constants $C_{d,s,\veps,\veps'}$ converge to a constant $C_{d,s}$ as $\veps$, $\veps' \to 0$. Using the Fourier transform (see \cite{d2020unified}), it follows that $C_{d,s} = \frac{2^{2s} s \Gamma(\frac{d}{2}+s)}{\pi^{d/2} \Gamma(1-s)}$. We can therefore conclude that \eqref{eq:Diffusion} holds. We summarize this result in the following proposition: \begin{proposition}\label{prop:FractionalLaplaceIsDivGrad} Let $s \in (0,1)$. Suppose that either $\bu \in C^2_b(\bbR^d;\bbR^N)$, or $\bu \in L^1_{2s}(\bbR^d;\bbR^N) \cap \scC^{2s+\sigma}(\bbR^d;\bbR^N)$ for some small $\sigma > 0$. Then the function $(-\Delta)_{\varrho_s} \bu(\bx)$ defined in \eqref{eq:DiffusionDefinition} coincides with the \textit{fractional Laplacian} \begin{equation*} (-\Delta)^s \bu(\bx) := C_{d,s} \int_{\bbR^d} \frac{2 \bu(\bx)-\bu(\bx+\by) - \bu(\bx-\by)}{|\by|^{d+2s}} \, \rmd \by\,, \qquad \bx \in \bbR^d\,. \end{equation*} Put another way, \begin{equation*} -\cD_s \circ \cG_s \bu(\bx) = (-\Delta)^s \bu(\bx) \text{ for every } \bx \in \bbR^d\,. \end{equation*} \end{proposition} \begin{proof} If $\bu$ is in either of these function spaces, the limit as $\veps,\veps' \to 0$ can be taken on the left-hand side of \eqref{eq:pvDiffusion} by \Cref{thm:OperatorsWellDefdForSmoothFxns}. The limit on the right-hand side will follow by the Lebesgue Dominated Convergence theorem. First note that $C_{d,s,\veps,\veps'}$ is bounded by some constant $\wt{C}(d,s)$. Then the integrand is majorized by \begin{equation*} 4 \wt{C}(d,s) \chi_{ \{|\by| < 1 \} } \frac{ \sum_{|\gamma| = 2} \Vnorm{D^{\gamma} \bu}_{L^{\infty}(\bbR^d)} }{|\by|^{d+2s-2}} + 4 \wt{C}(d,s) \chi_{ \{|\by| \geq 1 \} } \frac{ \Vnorm{\bu}_{L^{\infty}(\bbR^d)} }{|\by|^{d+2s}} \end{equation*} in case 1, or by \begin{equation*} 2 \wt{C}(d,s) \chi_{ \{|\by| < 1 \} } \frac{ [\bu]_{C^{0,2s+\sigma}(\bbR^d)} }{|\by|^{d-\sigma}} + 4 \wt{C}(d,s) \chi_{ \{|\by| \geq 1 \} } \frac{ \Vnorm{\bu}_{L^{\infty}(\bbR^d)} }{|\by|^{d+2s}} \end{equation*} in case 2 with $s< 1/2$. In case 2 with $s \geq 1/2$ we have the bound \begin{equation*} \begin{split} \left| \frac{2 \bu(\bx) - \bu(\bx+\by) - \bu(\bx-\by)}{|\by|^{d+2s}} \right| =& \left| \chi_{ \{|\by| < 1 \} } \frac{ \int_0^1 \big( \grad \bu(\bx-t\by) - \grad \bu(\bx+t\by) \big) \by \, \rmd t }{|\by|^{d+2s}} \right.\\ &\left. + \chi_{ \{|\by| \geq 1 \} } \frac{2 \bu(\bx) - \bu(\bx+\by) - \bu(\bx-\by)}{|\by|^{d+2s}} \right| \\ \leq& \chi_{ \{|\by| < 1 \} } \frac{ [\grad \bu]_{C^{0,2s+\sigma-1}(\bbR^d)} }{|\by|^{d-\sigma}} + 4 \chi_{ \{|\by| \geq 1 \} } \frac{ \Vnorm{\bu}_{L^{\infty}(\bbR^d)} }{|\by|^{d+2s}}\,. \end{split} \end{equation*} In all cases the bounding function is in $L^1(\bbR^d)$, and the proof is complete. \end{proof} \end{example} \begin{example} The truncated fractional kernel $\varrho_{s,\delta}(|\bseta|)$ does not satisfy the conditions of \Cref{lma:EquivalenceKernel:Singular}. Nevertheless, when $d=1$ the formula \eqref{eq:pvEquivalenceKernel} holds for almost every $\eta \neq 0$. This can be seen directly by computing the equivalence kernel: \begin{equation*} \varrho_{s,\delta,\text{eq},\veps,\veps'}(|\eta|) = \frac{(c_{1,s})^2}{|\eta|^{1+2s-2}} \intdm{\bbR}{ \chi_{ \{ \veps< |z| < \frac{\delta}{|\eta|} \} } \chi_{ \{ \veps' < |z-1| < \frac{\delta}{|\eta|} \} } \frac{ 1-z}{|1-z|^{2+s}} \cdot \frac{z}{|z|^{2+s}} }{z}\,, \qquad \eta \neq 0\,. \end{equation*} The integral can be computed explicitly.
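Before presenting the closed form, the stabilisation of the truncated integral as $\veps, \veps' \to 0$ can be checked numerically. The following is a minimal quadrature sketch (assuming \texttt{scipy}; the values $s=0.3$ and $\delta/|\eta|=2$ are illustrative only, and we take $\veps' = \veps$), which integrates over the three subintervals whose endpoints avoid the singular points $z=0$ and $z=1$:
\begin{verbatim}
from scipy.integrate import quad

s, ratio = 0.3, 2.0   # illustrative values of s and of the ratio delta/|eta| > 1

def integrand(z):
    # (1-z)/|1-z|^(2+s) * z/|z|^(2+s), the integrand of the display above
    return (1 - z) / abs(1 - z)**(2 + s) * z / abs(z)**(2 + s)

def inner_integral(eps):
    # {eps < |z| < ratio} intersected with {eps < |z-1| < ratio} equals
    # (1-ratio, ratio) minus small excluded windows around z = 0 and z = 1
    pieces = [(1 - ratio, -eps), (eps, 1 - eps), (1 + eps, ratio)]
    return sum(quad(integrand, a, b, limit=200)[0] for a, b in pieces)

for eps in [1e-2, 1e-3, 1e-4, 1e-5]:
    print(eps, inner_integral(eps))   # stabilises (up to quadrature error)
\end{verbatim}
The individual pieces grow like $\veps^{-s}$, but their signed contributions cancel, so the printed values approach a finite limit, consistent with the explicit formula derived next.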
Let ${}_2 F_1(a,b;c;z)$ denote the \textit{hypergeometric function}; see \cite[Equation 15.1.1]{abramowitz1988handbook} and \Cref{apdx:HyperGeometricFxn} for the definition. The derivative identity \eqref{eq:HypergeoDerivFormula1} implies that the function \begin{equation*} F_s(x) := \begin{cases} - \frac{1}{s(-x)^s} \text{}_2 F_1(-s,1+s;1-s;x)\,, & \quad x \leq 0\,, \\ \frac{1}{s(1-x)^s} \text{}_2 F_1(-s,1+s;1-s;1-x)\,, & \quad 0 < x < 1\,, \\ \frac{1}{s(x-1)^s} \text{}_2 F_1(-s,1+s;1-s;1-x)\,, & \quad 1 < x \,, \\ \end{cases} \end{equation*} satisfies $F_s'(x) = \frac{ 1-x}{|1-x|^{2+s}} \cdot \frac{x}{|x|^{2+s}} $ for all $x \in \bbR \setminus \{0,1\}$. Therefore, {\small \begin{align*} & \varrho_{s,\delta,\text{eq},\veps,\veps'}(|\eta|)\\ &= \frac{(c_{1,s})^2}{|\eta|^{1+2s-2}} \begin{cases} F_s(-\veps) - F_s(1-\frac{\delta}{|\eta|}) + F_s(1-\veps') - F_s(\veps) + F_s(\frac{\delta}{|\eta|}) - F_s(1+\veps')\,, & \quad \frac{\delta}{|\eta|} > 1\,, \\ F_s(\frac{\delta}{|\eta|}) - F_s(1-\frac{\delta}{|\eta|})\,, & \quad \frac{1}{2} < \frac{\delta}{|\eta|} < 1\,, \\ 0\,, & \quad \frac{1}{2} \geq \frac{\delta}{|\eta|}\,. \end{cases} \end{align*} } Now we compute the limit as $\veps$, $\veps' \to 0$. To do this, we need the following limits of $F_s$. \begin{theorem}\label{thm:PVofFs} For $s \in (0,1)$, \begin{equation}\label{eq:AntiDer:Limitat0} \lim\limits_{\veps \to 0} \big( F_s(\veps) - F_s(-\veps) \big) = \kappa_s := \begin{cases} \frac{\Gamma(1-s) \Gamma(-s)}{s \Gamma(-2s)}\,, &\quad \text{ if } s \neq 1/2\,, \\ 0\,, &\quad \text{ if } s = 1/2\,, \end{cases} \end{equation} and \begin{equation}\label{eq:AntiDer:Limitat1} \lim\limits_{\veps \to 0} \big( F_s(1+\veps) - F_s(1-\veps) \big) = 0\,. \end{equation} \end{theorem} See \Cref{apdx:HyperGeometricFxn} for the proof. An immediate corollary is the explicit formula for $\varrho_{s,\delta,\text{eq}}$. \begin{corollary} For all $|\eta| \neq 0$ and $|\eta| \neq \delta$, \begin{equation}\label{eq:Truncated:DefnOfEquivKernel} \varrho_{s,\delta,\text{eq}}(|\eta|) = \lim\limits_{\veps, \veps' \to 0} \varrho_{s,\delta,\text{eq},\veps,\veps'}(|\eta|) = \frac{(c_{1,s})^2 }{|\eta|^{1+2s-2}} G_{s} \left( \frac{\delta}{|\eta|} \right) \,, \end{equation} where $G_s : (0,\infty) \setminus \{1\} \to \bbR$ is defined as \begin{equation}\label{eq:DefnOfRemainder} G_s(x) := \begin{cases} 0\,, & \quad 0 < x \leq \frac{1}{2}\,, \\ F_s(x) - F_s(1-x)\,, & \quad \frac{1}{2} < x < 1 \,, \\ F_s(x) - F_s(1-x) - \kappa_s \,, & \quad 1 < x \,. \\ \end{cases} \end{equation} \end{corollary} We now investigate properties of $\varrho_{s,\delta,\text{eq}}$ that are desirable for applications. To do this, we need the following results concerning $G_s$. \begin{theorem}\label{thm:LimitsOfGs} For every $s \in (0,1)$ and for every $\tau > 0$, the function $G_s \big|_{(0,\infty) \setminus (1-\tau,1+\tau) }$ is continuous and bounded. Moreover, \begin{equation}\label{eq:LimOfGAtInfty} \lim\limits_{x \to \infty} G_s(x) = \frac{2\Gamma(1-s) \Gamma(1+2s)}{s\Gamma(1+s) } - \kappa_s \end{equation} and \begin{equation}\label{eq:LimOfGAt1} \lim\limits_{x \to 1} |x-1|^s G_s(x) = \frac{2}{s}\,. \end{equation} \end{theorem} See \Cref{apdx:HyperGeometricFxn} for the proof. \begin{theorem}[Properties of $\varrho_{s,\delta,\text{eq}}$] Let $\delta > 0$ and $s \in (0,1)$. Then $\varrho_{s,\delta,\text{eq}}$ is finite and differentiable for all $\eta \in \bbR \setminus \{-\delta,0,\delta \}$.
At $|\eta| = \delta$ the function has a singularity of order $s$; that is, \begin{equation}\label{eq:TruncKernel:LimAt1} \lim\limits_{|\eta| \to \delta} |\eta|^{1+2s-2} \left| \frac{\delta}{|\eta|} -1 \right|^{s} \varrho_{s,\delta,\text{eq}}(|\eta|) = \frac{2 (c_{1,s})^2}{s}. \end{equation} Additionally, $\varrho_{s,\delta,\text{eq}}$ is compactly supported with $\supp \varrho_{s,\delta,\text{eq}} = \overline{B(0,2\delta)}$, \begin{equation}\label{eq:TruncEquivKernel:Nonnegative} \varrho_{s,\delta,\text{eq}}(|\eta|) \geq 0 \text{ for all } \eta \in \bbR \setminus \{ -\delta, 0, \delta \}\,, \end{equation} and \begin{equation}\label{eq:TruncEquivKernel:Integrable} \varrho_{s,\delta,\text{eq}} \in L^1(\bbR)\,. \end{equation} Moreover, $\varrho_{s,\delta,\text{eq}}$ is consistent with $\varrho_{s,\text{eq}}$; that is, for every fixed $|\eta| > 0$ \begin{equation}\label{eq:TruncEquivKernel:LimAsDeltaToInf} \lim\limits_{\delta \to \infty} \varrho_{s,\delta,\text{eq}}(|\eta|) = \frac{C_{1,s}}{ |\eta|^{1+2s-2}} = \varrho_{s,\text{eq}}(|\eta|)\,. \end{equation} \end{theorem} \begin{proof} The smoothness and compact support of $\varrho_{s,\delta,\text{eq}}$ are apparent from the definition, and \eqref{eq:TruncKernel:LimAt1} follows easily from \eqref{eq:LimOfGAt1}. To see \eqref{eq:TruncEquivKernel:Nonnegative}, we recall that $F_s'(x) = \frac{1-x}{|1-x|^{2+s}} \cdot \frac{x}{|x|^{2+s}}$, and therefore $F_s(x)$ is increasing for $x \in (0,1)$; thus $\varrho_{s,\delta,\text{eq}}(|\eta|) \geq 0$ for $\delta < |\eta| < 2 \delta$. Next, for $t \in (0,\delta)$ \begin{equation*} \frac{d}{dt} \Big( F_s \Big( \frac{\delta}{t} \Big) - F_s \Big( 1-\frac{\delta}{t} \Big) - \kappa_s \Big) = 2 \frac{1-\frac{\delta}{t}}{|1-\frac{\delta}{t}|^{2+s}} \cdot \frac{\frac{\delta}{t} }{ |\frac{\delta}{t}|^{2+s} } \cdot \Big( \frac{-\delta}{t^2} \Big) > 0\,. \end{equation*} To see that $\varrho_{s,\delta,\text{eq}}(|\eta|) \geq 0$ for $0 < |\eta| < \delta$ it suffices to show that \begin{equation}\label{eq:TruncKernel:LimAt0} \lim\limits_{|\eta| \to 0} |\eta|^{1+2s-2} \varrho_{s,\delta,\text{eq}}(|\eta|) = (c_{1,s})^2 \left( \frac{2 \Gamma(1-s) \Gamma(1+2s)}{s \Gamma(1+s)} - \kappa_s \right) = C_{1,s} \,, \end{equation} where $C_{1,s} = \frac{2^{2s} s \Gamma(\frac{1}{2}+s) }{\pi^{1/2} \Gamma(1-s) }$ was defined in \Cref{example:equivalence:FractionalKernel}. The first equality follows from \eqref{eq:LimOfGAtInfty}, and the second equality follows from well-known identities satisfied by the Gamma function; these calculations are in \Cref{apdx:HyperGeometricFxn}. Since $C_{1,s}$ is clearly a positive number, we have established \eqref{eq:TruncEquivKernel:Nonnegative}. Now we prove \eqref{eq:TruncEquivKernel:Integrable}. By a change of variables and by definition of the support of $G_s$, \begin{equation*} \begin{split} \intdm{\bbR}{\big| \varrho_{s,\delta,\text{eq}}(|\eta|) \big| }{\eta} &= 2 (c_{1,s})^2 \int_0^{\infty} \left| G_s \left( \frac{\delta}{\eta} \right) \right| \eta^{2-2s} \, \frac{\rmd \eta}{\eta} \\ &= 2 (c_{1,s})^2 \delta^{2-2s} \int_0^{\infty} \frac{| G_s(r) |}{r^{2-2s}} \, \frac{\rmd r}{r} \\ &= C(s,\delta) \int_{1/2}^{\infty} \frac{| G_s(r) |}{r^{3-2s}}\, \rmd r\,. \end{split} \end{equation*} Since $G_s$ is continuous, by \eqref{eq:LimOfGAt1} there exists a small $\tau > 0$ such that $|G_s(r)| \leq \frac{4}{s} |r-1|^{-s}$ for all $r \in (1-\tau,1+\tau)$.
Therefore, since $G_s$ is bounded outside $(1-\tau,1+\tau)$, \begin{equation*} \begin{split} \int_{1/2}^{\infty} \frac{| G_s(r) |}{r^{3-2s}}\, \rmd r &\leq C \int_{ (\frac{1}{2},\infty) \setminus (1-\tau,1+\tau) } \frac{1}{r^{3-2s}}\, \rmd r + C \int_{(1-\tau,1+\tau) } \frac{1}{|r-1|^{s}} \frac{1}{r^{3-2s}}\, \rmd r < \infty \,. \end{split} \end{equation*} Thus \eqref{eq:TruncEquivKernel:Integrable} is proved. Finally, \eqref{eq:TruncEquivKernel:LimAsDeltaToInf} follows from the definition \eqref{eq:Truncated:DefnOfEquivKernel}, \eqref{eq:LimOfGAtInfty}, and the second equality in \eqref{eq:TruncKernel:LimAt0}. \end{proof} The properties of $\varrho_{s,\delta,\text{eq}}$ just established allow us to conclude that the formula \eqref{eq:Diffusion} holds. \end{example} \begin{example} The tempered fractional kernel $\varrho_{s,\text{temp}}(|\bseta|)$ satisfies the conditions of \Cref{lma:EquivalenceKernel:Singular}. Upper and lower bounds for $d=1$ are calculated in \cite{Olson2020CSRI}. Furthermore, we can show the following equivalence of energy spaces. \begin{theorem} For $s \in (0,1)$ and $\alpha > 0$, there exists $C = C(d,s,\alpha)$ such that \begin{equation*} \begin{split} \frac{1}{C} & \iintdm{\bbR^d}{\bbR^d}{\rme^{-\alpha|\bx-\by|} \frac{|\bu(\bx)-\bu(\by)|^2}{|\bx-\by|^{d+2s}}}{\by}{\bx} \\ &\leq \iintdm{\bbR^d}{\bbR^d}{ \varrho_{s,\text{temp},\text{eq}}(|\bx-\by|) \frac{|\bu(\bx)-\bu(\by)|^2}{|\bx-\by|^2}}{\by}{\bx} \\ &\leq C \iintdm{\bbR^d}{\bbR^d}{\rme^{-\alpha|\bx-\by|} \frac{|\bu(\bx)-\bu(\by)|^2}{|\bx-\by|^{d+2s}}}{\by}{\bx} \end{split} \end{equation*} for every $\bu \in \scS(\bbR^d;\bbR^d)$. \end{theorem} The proof uses techniques that are outside the scope of this paper, and so it will be reported elsewhere. \end{example} \begin{example} The kernel defined in terms of the characteristic function \begin{equation*} \varrho_{\chi,\delta}(|\bseta|) := \frac{d}{ \omega_{d-1} \delta^d} \chi_{B({\bf 0},\delta)}(|\bseta|) \end{equation*} satisfies \eqref{eq:KernelFullIntegrability}, and so \eqref{eq:Diffusion} holds immediately. Moreover, when $d=1$ we can find the equivalence kernel explicitly. A straightforward calculation shows that \begin{equation*} \varrho_{\chi,\delta,\text{eq}}(|\eta|) = \begin{cases} \frac{2 |\eta|}{\delta^2} \log \left( \frac{ \frac{\delta}{|\eta|} }{ |1-\frac{\delta}{|\eta|}| } \right)\,, & 0<|\eta| < 2\delta\,, \\ 0\,, & |\eta| \geq 2\delta\,. \end{cases} \end{equation*} Thus, $\varrho_{\chi,\delta,\text{eq}}$ is a nonnegative, integrable function. \end{example} \section{Helmholtz Decomposition for Fractional Operators}\label{sec:helmholtz} In this section we combine the vector calculus identities proved in Section \ref{sec:identities} and the characterization of the equivalence kernel proved in Section \ref{sec:eq-kernel} to obtain a weighted fractional Helmholtz decomposition in H\"older spaces. Thus, we restrict our attention to the case of the fractional kernel $\varrho_s$ and utilize the results for H\"older spaces in Section \ref{sec:holder}. First, we state the following result, whose proof can be obtained by using \cite[Theorem 2.8]{bucur2016some}. \begin{theorem}\label{thm:FundSoln} Let $s \in (0,1)$ and let $\sigma > 0$ be sufficiently small. Suppose $\bu \in \scC^{2s+\sigma}(\bbR^d;\bbR^d)$ with $d>2s$, and suppose $\bu$ is compactly supported.
Define the constant \begin{equation*} \kappa_{d,s} := \frac{ \Gamma(\frac{d}{2}-s) }{2^{2s} \pi^{d/2} \Gamma(s) }\,, \end{equation*} and define the function \begin{equation*} \Phi_s(\bsxi) := \frac{\kappa_{d,s}}{|\bsxi|^{d-2s}}\,. \end{equation*} Then $\Phi_s$ is the fundamental solution of $(-\Delta)^s$ in the following sense: define the function \begin{equation*} \bv(\bx) := \Phi_s \ast \bu(\bx)\,, \qquad \bx \in \bbR^d\,. \end{equation*} Then $\bv$ belongs to $\scC^{2s+\sigma}(\bbR^d;\bbR^d)$, $\bv$ has the ``behavior at infinity'' \begin{equation*} \Vnorm{\bv}_{L^1_{2s}(\bbR^d)} = \intdm{\bbR^d}{ \frac{|\bv(\bx)|}{ 1+|\bx|^{d+2s} } }{\bx} < \infty\,, \end{equation*} and both in the distributional sense and pointwise in $\bbR^d$ \begin{equation*} (-\Delta)^s \bv(\bx) = \bu(\bx)\,. \end{equation*} \end{theorem} We can now state the main theorem of this section. \begin{theorem}\label{thm:Helmholtz:Potentials} Let $0<s<1$. Suppose that $\bu \in \scC^{2s+\sigma}(\bbR^d;\bbR^d)$ with $d=3$, for some sufficiently small $\sigma > 0$. Suppose also that $\bu$ is compactly supported with $\supp \bu \subset B({\bf 0},R)$ for some $R >0$. Then there exist functions $\psi$ and $\bw$ belonging to $L^1_s(\bbR^d) \cap C^{0,s+\sigma}(\bbR^d)$ and $L^1_s(\bbR^d;\bbR^d) \cap C^{0,s+\sigma}(\bbR^d;\bbR^d)$ respectively such that \begin{equation}\label{eq:Helmholtz:Potentials} \bu(\bx) = \cG_s \psi(\bx) - \cC_s \bw(\bx) \quad \text{ for all } \bx \in \bbR^d\,. \end{equation} \end{theorem} \begin{proof} By \Cref{thm:FundSoln} \begin{equation*} \bu(\bx) = (-\Delta)^s \left[ \Phi_s \ast \bu(\bx) \right]\,. \end{equation*} Note that $\bu \in L^1_{2s}(\bbR^d;\bbR^d)$ since $\bu$ is continuous with compact support. By \Cref{prop:CurlOfCurl:SmoothFxns} and \Cref{prop:FractionalLaplaceIsDivGrad} we then have \begin{equation}\label{Helmholtz:Potentials:Proof1} \bu(\bx) = \cG_s \circ \cD_s \left[ \Phi_s \ast \bu(\bx) \right] - \cC_s \circ \cC_s \left[ \Phi_s \ast \bu(\bx) \right]\,. \end{equation} Define \begin{equation}\label{Helmholtz:Potentials:Proof2} \begin{split} \psi(\bx) &:= \cD_s [ \Phi_s \ast \bu ] (\bx)\,, \\ \bw(\bx) &:= \cC_s [ \Phi_s \ast \bu ] (\bx)\,. \end{split} \end{equation} Thus the formula \eqref{eq:Helmholtz:Potentials} will be established if we can show that $\psi$ and $\bw$ are well-defined functions. To this end, note that both $\psi$ and $\bw$ are of the form $\cZ_s [\Phi_s \ast \bu]$. First, these functions belong to $C^{0,s+\sigma}(\bbR^d)$ by \Cref{thm:MappingPropertiesOfOperators} since $\Phi_s \ast \bu \in \scC^{2s+\sigma}(\bbR^d)$ by \Cref{thm:FundSoln}. Moreover, both $\psi$ and $\bw$ also belong to $L^1_{s}(\bbR^d;\bbR^d)$, being in $L^{\infty}(\bbR^d)$. Thus, by part 1) of Theorem \ref{thm:MappingPropertiesOfOperators}, the functions $\psi$ and $\bw$ are well-defined. \end{proof} \section*{Acknowledgments} M. D'Elia and M. Gulian are partially supported by the U.S. Department of Energy, Office of Advanced Scientific Computing Research under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project (DE-SC0019453). They are also supported by Sandia National Laboratories (SNL). SNL is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract {DE-NA0003525}.
This paper, SAND2021-15379, describes objective technical results and analysis. Any subjective views or opinions that might be expressed in this paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. T. Mengesha's research is supported by NSF grant DMS-1910180.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Recent years have witnessed considerable development in the study of multiparameter quantum deformations from both the algebraic and the differential geometric points of view. These have also found profound applications in many diverse areas of Mathematical Physics. Despite the intensive and successful development of the mathematical theory of multiparameter quantum deformations or quantum groups, various important aspects still need thorough investigation. Besides, all quantum groups seem to have a natural coloured extension, thereby defining corresponding coloured quantum groups. It is the aim of this paper to address some of the key issues involved.\\ Two parameter deformations provide an obvious step in constructing generalisations of single parameter deformations. Besides being mathematically interesting in their own right, two parameter quantum groups serve as very good examples in generalising physical theories based on quantum group symmetry. ${GL}_{p,q}(2)$ and $\GLhh'two$ are well-known examples of two parameter quantum and Jordanian deformations of the space of $2 \times 2$ matrices. Just as both these quantum groups are of great significance in building up various mathematical and physical theories, it is worthwhile to look for other possible examples, including `coloured' ones, which might play a fundamental role in future research. We wish to focus our attention on a new two parameter quantum group [1], ${G}_{r,s}$, which sheds light on some of the above-mentioned issues. ${G}_{r,s}$ is a quasitriangular Hopf algebra generated by five elements, four of which form a Hopf subalgebra isomorphic to ${GL}_{q}(2)$, while the fifth generator relates ${G}_{r,s}$ to ${GL}_{p,q}(2)$. \\ The ${G}_{r,s}$ quantum group, which is the basis of our investigation, is defined in Section II. In Section III, we give a new Jordanian analogue of ${G}_{r,s}$, denoted ${G}_{m,k}$, and establish a homomorphism with $\GLhh'two$. Both ${G}_{r,s}$ and ${G}_{m,k}$ admit a natural coloured extension, and this is given in Section IV. Section V generalises the contraction procedure to the case of coloured quantum groups and discusses various homomorphisms. In Section VI, we make concluding remarks and discuss the possible physical significance of our results. Throughout this paper, we shall endeavour to refrain from too many technical details, which can be found in the appropriate references. \section{Two parameter $q$-deformations} The quantum group ${G}_{r,s}$ was defined in [1] as a quasitriangular Hopf algebra with two deformation parameters $r$ and $s$, generated by five elements $a$, $b$, $c$, $d$ and $f$. The generators $a$, $b$, $c$, $d$ of this Hopf algebra form a subalgebra, in fact a Hopf subalgebra, which coincides exactly with the single parameter dependent ${GL}_{q}(2)$ quantum group when $q=r^{-1}$. Moreover, the two parameter dependent ${GL}_{p,q}(2)$ can also be realised through the generators of this ${G}_{r,s}$ Hopf algebra, provided the sets of deformation parameters $(p,q)$ and $(r,s)$ are related to each other in a particular fashion. This new algebra can, therefore, be used to realise both the ${GL}_{q}(2)$ and ${GL}_{p,q}(2)$ quantum groups. Alternatively, this ${G}_{r,s}$ structure can be considered as a two parameter quantisation of the classical ${GL}({2})\otimes {GL}({1})$ group. The first four generators of ${G}_{r,s}$, i.e.
$a$, $b$, $c$, $d$, correspond to the ${GL}({2})$ group at the classical level and the remaining generator $f$ is related to ${GL}({1})$. In fact, ${G}_{r,s}$ can also be interpreted as a quotient of a multiparameter $q$-deformation of $GL(3)$. \par The elements of ${G}_{r,s}$ can be conveniently arranged in the matrix $T=\left( \begin{smallmatrix}a&b&0\\c&d&0\\0&0&f\end{smallmatrix} \right)$, and the coproduct and counit are $\Delta (T)=T\dot{\otimes} T$, $\varepsilon (T)={\bf 1}$. It should be mentioned that the quantum determinant $\delta = D f$ (where $D = ad-{r}^{-1} bc$) is group-like but not central. The above block diagonal form of the $T$-matrix is particularly convenient for understanding the related schematics. The ${G}_{r,s}$ $R$-matrix is given in [1,2], and the most general Hopf algebra generated by this $R$-matrix is the multiparameter $GL(3)$ with the $T$-matrix of the form $\left( \begin{smallmatrix}a&b&x_{1}\\c&d&x_{2}\\v_{1}&v_{2}&f\end{smallmatrix} \right)$. It can be shown that factoring out the two-sided Hopf ideal generated by $x_{1},x_{2}$ yields the inhomogeneous multiparameter $IGL(2)$. Furthermore, if one factors out yet another two-sided Hopf ideal, generated by the elements $v_{1},v_{2}$, what one obtains is precisely the ${G}_{r,s}$ Hopf algebra. The relation of ${G}_{r,s}$ with various known $q$-deformed groups can be exhibited as \[ \begin{array}{ccccc} & & GL_{Q}(3) & &\\ & & \Big\downarrow\vcenter{\rlap{$\mathcal{Q}$}} & & \\ & & IGL_{Q}(2) & & \\ & & \Big\downarrow\vcenter{\rlap{$\mathcal{Q}$}} & & \\ GL(2)\otimes GL(1) & \stackrel{\mathcal{L}}{\longleftarrow} & {G}_{r,s} & \stackrel{\mathcal{F}}{\longrightarrow} & {GL}_{p,q}(2)\\ & &\Big\downarrow\vcenter{\rlap{$\mathcal{S}$}} & & \\ & & {GL}_{q}(2) & & \end{array} \] where $\mathcal{Q}$, $\mathcal{F}$, $\mathcal{S}$ and $\mathcal{L}$ denote the Quotient, Hopf algebra homomorphism, (Hopf) Subalgebra and (classical) Limit respectively. $GL_{Q}(3)$ denotes the multiparameter $q$-deformed $GL(3)$ and $IGL_{Q}(2)$ is the inhomogeneous multiparameter $q$-deformation of $GL(2)$. Motivated by its rich structure, the authors have recently studied this quantum group in detail [2]. As an initial step in the further understanding of ${G}_{r,s}$, the authors have derived the dual algebra explicitly and showed that it is isomorphic to the single parameter deformation of $gl(2) \oplus gl(1)$, with the second parameter appearing in the costructure. In [2], the authors have also constructed a differential calculus on ${G}_{r,s}$, which in turn provides a realisation of the calculus on ${GL}_{p,q}(2)$. \par \section{Two parameter $h$-deformations} Jordanian deformations (also known as $h$-deformations) of Lie groups and Lie algebras have attracted a lot of attention in recent years. A peculiar feature of this deformation is that the corresponding $R$-matrix is triangular, i.e. $R_{12}R_{21}=1$. These deformations are called `Jordanian' due to the Jordan normal form of the $R$-matrix. It was shown in [3] that, up to isomorphism, ${GL}_{q}(2)$ and ${GL}_{h}(2)$ are the only possible distinct deformations (with central determinant) of the group $GL(2)$. In [4], an interesting observation was made that the $h$-deformations could be obtained from the $q$-deformations by a singular limit of a similarity transformation, and this was generalised to multiparameter deformations as well as to higher dimensions, i.e. the space of $n \times n$ quantum matrices [5].
For the purpose of the current investigation, the authors have applied the contraction procedure to ${G}_{r,s}$ to obtain a new Jordanian quantum group ${G}_{m,k}$ [6]. It turns out that this new structure is also related to other known Jordanian quantum groups.\\ The ${G}_{m,k}$ quantum group can be defined as a triangular Hopf algebra generated by the elements of the $T$-matrix $\left( \begin{smallmatrix}a&b&0\\c&d&0\\0&0&f\end{smallmatrix} \right)$. The elements $a$, $b$, $c$, $d$ form a subalgebra whose commutation relations coincide exactly with those of the single parameter Jordanian ${GL}_{h}(2)$ for $m=h$. This is exactly analogous to the $q$-deformed case, where the first four elements of ${G}_{r,s}$ form the ${GL}_{q}(2)$ Hopf subalgebra. Again, the remaining fifth element $f$ generates the $GL(1)$ group, as it did in the $q$-deformed case, and the second parameter appears only through the cross commutation relations between the $GL_{m}(2)$ and $GL(1)$ elements. Therefore, ${G}_{m,k}$ can also be considered as a two parameter Jordanian deformation of the classical $GL(2)\otimes GL(1)$ group. Furthermore, ${G}_{m,k}$ also provides a realisation of the two parameter Jordanian $\GLhh'two$. Besides, it may be interpreted as a quotient of the multiparameter Jordanian deformation of $GL(3)$, denoted $GL_{J}(3)$, as well as of the inhomogeneous $IGL(2)$, denoted $IGL_{J}(2)$. This can be represented as follows \[ \begin{array}{ccccc} & & GL_{J}(3) & &\\ & & \Big\downarrow\vcenter{\rlap{$\mathcal{Q}$}} & & \\ & & IGL_{J}(2) & & \\ & & \Big\downarrow\vcenter{\rlap{$\mathcal{Q}$}} & & \\ GL(2)\otimes GL(1) & \stackrel{\mathcal{L}}{\longleftarrow} & {G}_{m,k} & \stackrel{\mathcal{F}}{\longrightarrow} & \GLhh'two\\ & &\Big\downarrow\vcenter{\rlap{$\mathcal{S}$}} & & \\ & & {GL}_{h}(2) & & \end{array} \] where the maps $\mathcal{Q}$, $\mathcal{F}$, $\mathcal{S}$ and $\mathcal{L}$ are as before. \section{Coloured Extensions} The standard quantum group relations can be extended by parametrising the corresponding generators using some continuous `colour' variables and redefining the associated algebra and coalgebra in a way that all Hopf algebraic properties remain preserved [1,7,8]. For the case of a single parameter quantum deformation of ${GL}({2})$ (with deformation parameter $r$), its `coloured' version [1] is given by the $R$-matrix, denoted $R_{r}^{\lambda,\mu}$, which satisfies \[ R_{12}^{\lambda,\mu}R_{13}^{\lambda,\nu}R_{23}^{\mu,\nu} = R_{23}^{\mu,\nu}R_{13}^{\lambda,\nu}R_{12}^{\lambda,\mu} \] the so-called `Coloured' Quantum Yang-Baxter Equation (CQYBE). It should be stressed at this point that the coloured $R$-matrix provides a nonadditive-type solution $R^{\lambda,\mu} \neq R(\lambda - \mu)$ of the Yang-Baxter equation, which is in general multicomponent; the parameters $\lambda$, $\mu$, $\nu$ are regarded as `colour' parameters. Such solutions were first discovered in the study of integrable models [9]. This gives rise to the coloured $RTT$ relations \[ R_{r}^{\lambda,\mu}T_{1\lambda}T_{2\mu}=T_{2\mu}T_{1\lambda} R_{r}^{\lambda,\mu} \] (where $T_{1\lambda}=T_{\lambda}\dot{\otimes} {\bf 1}$ and $T_{2\mu}={\bf 1}\dot{\otimes} T_{\mu}$) in which the entries of the $T$ matrices carry colour dependence. The coproduct and counit for the coalgebra structure are given by $\Delta (T_{\lambda})=T_{\lambda}\dot{\otimes} T_{\lambda}$, $\varepsilon (T_{\lambda})={\bf 1}$ and depend only on one colour parameter.
By contrast, the algebra structure is more complicated, with generators of two different colours appearing simultaneously in the algebraic relations. The full Hopf algebraic structure can be constructed and results in a coloured extension of the quantum group. Since $\lambda$ and $\mu$ are continuous variables, this implies that the coloured quantum group has an infinite number of generators.\\ The above coloured generalisation of the FRT formalism was given by Kundu and Basu-Mallick [1,10], and that of the Drinfeld-Jimbo formulation of quantised universal enveloping algebras has been given by Bonatos, Quesne {\sl et al.} [11]. In the context of knot theory, Ohtsuki [12] introduced some coloured quasitriangular Hopf algebras, which are characterised by the existence of a coloured universal $R$-matrix, and he applied his theory to $U_{q}sl(2)$. Coloured generalisations of quantum groups can also be understood as an application of the twisting procedure, in a manner similar to the multiparameter generalisation of quantum groups. Jordanian deformations also admit coloured extensions [7]. The associated $R$-matrix satisfies the CQYBE and is `colour' triangular, i.e. $R_{12}^{\lambda,\mu}=({R_{21}^{\mu,\lambda}})^{-1}$, a coloured extension of the notion of triangularity. \subsection*{Coloured Extension of ${G}_{r,s}$ : $\Grss'$} The coloured extension of ${G}_{r,s}$ proposed in [1] has only one deformation parameter $r$ and two colour parameters $s$ and $s'$. The second deformation parameter of the uncoloured case now plays the role of a colour parameter. In such a coloured extension, the first four generators $a$,$b$,$c$,$d$ are kept independent of the colour parameters, while the fifth generator $f$ is now parameterised by $s$ and $s'$. The matrices of generators are \[ T_{s}=\begin{pmatrix}a&b&0\\c&d&0\\0&0&f_{s}\end{pmatrix}\quad , \quad T_{s'}=\begin{pmatrix}a&b&0\\c&d&0\\0&0&f_{s'}\end{pmatrix} \] From the $RTT$ relations, one observes that the commutation relations between $a$,$b$,$c$,$d$ are as before, but $f_{s}$ and $f_{s'}$ now satisfy two colour copies of the relations satisfied by $f$ in the uncoloured ${G}_{r,s}$. In addition, the relation $[f_{s},f_{s'}]=0$ holds. The associated coloured $R$-matrix, denoted $R_{r}^{s,s'}$, satisfies the CQYBE \[ R_{12}(r;s,s')R_{13}(r;s,s'')R_{23}(r;s',s'') = R_{23}(r;s',s'')R_{13}(r;s,s'')R_{12}(r;s,s') \] and the corresponding coloured quantum group is denoted $\Grss'$. \subsection*{Coloured Extension of ${G}_{m,k}$ : $\Gmkk'$} Similar to the case of ${G}_{r,s}$, we have proposed [13] a coloured extension of the Jordanian quantum group ${G}_{m,k}$. The first four generators remain independent of the colour parameters $k$ and $k'$, whereas the generator $f$ is parameterised by $k$ and $k'$. Again, the second deformation parameter $k$ of the uncoloured case now plays the role of a colour parameter, and the $T$-matrices are \[ T_{k}=\begin{pmatrix}a&b&0\\c&d&0\\0&0&f_{k}\end{pmatrix}\quad , \quad T_{k'}=\begin{pmatrix}a&b&0\\c&d&0\\0&0&f_{k'}\end{pmatrix} \] The commutation relations between $a$,$b$,$c$,$d$ remain unchanged, whereas $f_{k}$ and $f_{k'}$ satisfy two colour copies of the relations satisfied by $f$ in the uncoloured ${G}_{m,k}$. In addition, the relation $[f_{k},f_{k'}]=0$ holds. The associated coloured $R$-matrix, denoted $R_{m}^{k,k'}$, is a solution of the CQYBE \[ R_{12}(m;k,k')R_{13}(m;k,k'')R_{23}(m;k',k'') = R_{23}(m;k',k'')R_{13}(m;k,k'')R_{12}(m;k,k') \] and is colour triangular.
The corresponding coloured Jordanian quantum group is denoted $\Gmkk'$. \section{Contractions and Homomorphisms} The $R$-matrix of the Jordanian (or $h$-) deformation can be viewed as a singular limit of a similarity transformation on the $q$-deformed $R$-matrix [4]. Let $g(\eta)$ be a matrix dependent on a contraction parameter $\eta$, which is itself a function of one of the deformation parameters of the $q$-deformed algebra. This can be used to define a transformed $q$-deformed $R$-matrix \[ R_{h} = (g^{-1} \otimes g^{-1})R_{q}(g \otimes g) \] The $R$-matrix of the Jordanian deformation is then obtained by taking a limiting value of the parameter $\eta$. Even though the contraction parameter $\eta$ is undefined in this limit, the new $R$-matrix is finite and gives rise to a new quantum group structure through the $RTT$ relations. For example, in the contraction process which takes $GL_q(2)$ to $GL_h(2)$, the contraction matrix is \[ g(\eta) = \left( \begin{array}{cc} 1 & \eta \\ 0 & 1 \end{array} \right) \] where $\eta = \frac{h}{1-q}$ with $h$ a new free parameter. Such transformations have proved to be powerful tools in establishing various connections between the $q$- and the $h$-deformed quantum groups, which were previously obscure. In the context of the quantum groups under consideration in the present paper, the contraction procedure was successfully applied [6] to the ${G}_{r,s}$ quantum group of Section II to obtain the Jordanian ${G}_{m,k}$ given in Section III. Furthermore, the multiparameter Jordanian $GL_{J}(3)$, and hence the multiparameter inhomogeneous $IGL_{J}(2)$, were also obtained by contracting their respective $q$-deformed counterparts [14]. \par The Hopf algebra homomorphism $\mathcal{F}$ from ${G}_{r,s}$ to ${GL}_{p,q}(2)$, which provides a realisation of the latter, is given by \[ {\mathcal{F}}: {G}_{r,s}\longmapsto {GL}_{p,q}(2) \] \[ {\mathcal{F}}\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\longmapsto \left(\begin{array}{cc}a'&b'\\c'&d'\end{array}\right)= f^{N}\left(\begin{array}{cc}a&b\\c&d\end{array}\right) \] The elements $a'$,$b'$,$c'$ and $d'$ are the generators of ${GL}_{p,q}(2)$ and $N$ is a fixed non-zero integer. The relation between the deformation parameters $(p,q)$ and $(r,s)$ is given by \[p = {r}^{-1} s^{N} \quad , \quad \quad q = {r}^{-1} s^{-N}\] A Hopf algebra homomorphism \[ {\mathcal{F}}: {G}_{m,k}\longmapsto \GLhh'two \] of exactly the same form as in the $q$-deformed case exists between the generators of ${G}_{m,k}$ and $\GLhh'two$, provided that the two sets of deformation parameters $(h,h')$ and $(m,k)$ are related via the equations \[ h = -m + Nk \quad , \quad \quad h' = -m - Nk \] Note that for vanishing $k$, one gets the one parameter case. In addition, using the above realisation together with the coproduct, counit and antipode axioms for the ${G}_{m,k}$ algebra and the respective homomorphism properties, one can easily recover the standard coproduct, counit and antipode for $\GLhh'two$. Thus, the Jordanian $\GLhh'two$ group can in fact be reproduced from the newly defined Jordanian ${G}_{m,k}$. It is curious to note that if we write $p=e^{h}$, $q=e^{h'}$, $r=e^{m}$ and $s=e^{k}$, then the relations between the parameters in the $q$-deformed case and the $h$-deformed case are identical.
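The singular nature of the contraction can be made concrete with a small symbolic computation. The following sketch (assuming \texttt{sympy}; the chosen $GL_{q}(2)$ $R$-matrix is one common convention, and other conventions differ by transpositions and normalisations) conjugates $R_{q}$ by $g\otimes g$ with $\eta=\frac{h}{1-q}$ and takes the limit $q\rightarrow 1$ entrywise. Although $\eta$ itself diverges, every entry of the transformed matrix remains finite:
\begin{verbatim}
import sympy as sp
from sympy import kronecker_product as kron

q, h = sp.symbols('q h', positive=True)

# GL_q(2) R-matrix in one common convention (an assumption; conventions vary)
R_q = sp.Matrix([[q, 0, 0, 0],
                 [0, 1, q - 1/q, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, q]])

g = sp.Matrix([[1, h/(1 - q)], [0, 1]])   # contraction matrix, eta = h/(1-q)
G = kron(g, g)

# entrywise limit q -> 1 of the similarity-transformed R-matrix
R_h = (G.inv() * R_q * G).applyfunc(lambda e: sp.limit(sp.cancel(e), q, 1))
print(R_h)   # finite, with entries built from h

# triangularity check: R_12 R_21 = 1, where R_21 = P R_12 P
P = sp.Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
assert sp.simplify(R_h * (P * R_h * P)) == sp.eye(4)
\end{verbatim}
With this convention the limit comes out as the Jordanian $R$-matrix with rows $(1,-h,h,h^{2})$, $(0,1,0,-h)$, $(0,0,1,h)$, $(0,0,0,1)$; since the similarity transformation preserves the quantum Yang-Baxter equation for every $q$, the limiting matrix satisfies it as well.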
The systematics of the uncoloured quantum groups discussed here can be summarised in the following commutative diagram \[ \begin{CD} GL_{Q}(3) @>\mathcal{Q}>> IGL_{Q}(2) @>\mathcal{Q}>> {G}_{r,s} @>\mathcal{F}>> {GL}_{p,q}(2)\\ @V{\mathcal{C}}VV @V{\mathcal{C}}VV @VV{\mathcal{C}}V @VV{\mathcal{C}}V \\ GL_{J}(3) @>>\mathcal{Q}> IGL_{J}(2) @>>\mathcal{Q}> {G}_{m,k} @>>\mathcal{F}> \GLhh'two \end{CD} \] where $\mathcal{Q}$, $\mathcal{C}$ and $\mathcal{F}$ denote the quotient, contraction and the Hopf algebra homomorphism. The contraction procedure discussed above has been successfully applied [13] to the case of coloured quantum groups, yielding new coloured Jordanian deformations. We apply to $R_{r}^{\lambda,\mu}$, the coloured $R$-matrix for $q$-deformed ${GL}({2})$, the transformation \[ (g\otimes g)^{-1}R_{r}^{\lambda,\mu}(g\otimes g) \] where $g$ is the two dimensional transformation matrix $\left( \begin{smallmatrix}1&\eta\\0&1\end{smallmatrix} \right)$ and $\eta$ is chosen to be $\eta=\frac{m}{1-r}$. In the limit $r\rightarrow 1$, we obtain a new $R$-matrix, $R_{m}^{\lambda,\mu}$, which is a coloured $R$-matrix for a Jordanian deformation of ${GL}({2})$. The contraction is then also used to obtain the coloured extension $\Gmkk'$ of ${G}_{m,k}$ from the coloured extension $\Grss'$ of ${G}_{r,s}$. The $R$-matrix $R_{m}^{k,k'}$ is obtained as the contraction limit of the $R$-matrix for the coloured extension of ${G}_{r,s}$ via the transformation \[ R_{m}^{k,k'} = \lim_{r\rightarrow 1}(G\otimes G)^{-1} R_{r}^{s,s'}(G\otimes G) \] where \[ G=\begin{pmatrix}g&0\\0&1\end{pmatrix};\quad g=\begin{pmatrix}1&\eta\\0&1\end{pmatrix},\quad \eta=\frac{m}{1-r} \] The Hopf algebra homomorphism from $\Grss'$ to $GL_{r}^{\lambda,\mu}(2)$ \[ \mathcal{F}_{N}: \Grss'\longmapsto GL_{r}^{\lambda,\mu}(2) \] is given by \[ \mathcal{F}_{N}:\begin{pmatrix}a&b\\c&d\end{pmatrix}\longmapsto \begin{pmatrix}a'_{\lambda}&b'_{\lambda}\\c'_{\lambda}&d'_{\lambda} \end{pmatrix}=f_{s}^{N}\begin{pmatrix}a&b\\c&d\end{pmatrix} \] \[ \mathcal{F}_{N}:\begin{pmatrix}a&b\\c&d\end{pmatrix}\longmapsto \begin{pmatrix}a'_{\mu}&b'_{\mu}\\c'_{\mu}&d'_{\mu} \end{pmatrix}=f_{s'}^{N}\begin{pmatrix}a&b\\c&d\end{pmatrix} \] where $N$ is a fixed non-zero integer and the sets of colour parameters $(s,s')$ and $(\lambda,\mu)$ are related through the quantum deformation parameter $r$ by \[ s=r^{2N\lambda} \quad , \quad s'=r^{2N\mu} \] The primed generators $a'_{\lambda}$, $b'_{\lambda}$,$c'_{\lambda}$,$d'_{\lambda}$ and $a'_{\mu}$,$b'_{\mu}$,$c'_{\mu}$,$d'_{\mu}$ belong to $GL_{r}^{\lambda,\mu}(2)$ whereas the unprimed ones $a$,$b$,$c$,$d$,$f_{s}$ and $f_{s'}$ are generators of $\Grss'$.
If we now denote the generators of $GL_{m}^{\lambda,\mu}(2)$ by $a'_{\lambda}$,$b'_{\lambda}$,$c'_{\lambda}$,$d'_{\lambda}$ and $a'_{\mu}$,$b'_{\mu}$,$c'_{\mu}$,$d'_{\mu}$ and the generators of $\Gmkk'$ by $a$,$b$,$c$,$d$,$f_{k}$ and $f_{k'}$, then a Hopf algebra homomorphism from $\Gmkk'$ to $GL_{m}^{\lambda,\mu}(2)$ \[ \mathcal{F}_{N}: \Gmkk'\longmapsto GL_{m}^{\lambda,\mu}(2) \] is of exactly the same form \[ \mathcal{F}_{N}:\begin{pmatrix}a&b\\c&d\end{pmatrix}\longmapsto \begin{pmatrix}a'_{\lambda}&b'_{\lambda}\\c'_{\lambda}&d'_{\lambda} \end{pmatrix}=f_{k}^{N}\begin{pmatrix}a&b\\c&d\end{pmatrix} \] \[ \mathcal{F}_{N}:\begin{pmatrix}a&b\\c&d\end{pmatrix}\longmapsto \begin{pmatrix}a'_{\mu}&b'_{\mu}\\c'_{\mu}&d'_{\mu} \end{pmatrix}=f_{k'}^{N}\begin{pmatrix}a&b\\c&d\end{pmatrix} \] The sets of colour parameters $(k,k')$ and $(\lambda,\mu)$ are related to the Jordanian deformation parameter $m$ by \[ Nk=-2m\lambda \quad , \quad Nk'=-2m\mu \] and $N$, again, is a fixed non-zero integer. The schematics of our analysis for the coloured quantum groups are represented in the diagram \[ \begin{CD} {G}_{r,s} @>\mathcal{E}>> \Grss' @>\mathcal{F}>> GL_{r}^{\lambda,\mu}(2)\\ @V{\mathcal{C}}VV @VV{\mathcal{C}}V @VV{\mathcal{C}}V\\ {G}_{m,k} @>>\mathcal{E}> \Gmkk' @>>\mathcal{F}> GL_{m}^{\lambda,\mu}(2) \end{CD} \] where $\mathcal{C}$, $\mathcal{F}$ and $\mathcal{E}$ denote the contraction, Hopf algebra homomorphism and coloured extension respectively. In both of the commutative diagrams above, the objects at the top level are the $q$-deformed ones and the corresponding Jordanian counterparts are shown at the bottom level. \section{Conclusions} In the present work, we have obtained a new Jordanian quantum group $G_{m,k}$ by contraction of the $q$-deformed quantum group $G_{r,s}$. We then used this new structure to establish quantum group homomorphisms with other known two parameter quantum groups at the Jordanian level. At the same time, we also showed that such homomorphisms commute with the contraction procedure. Our analysis is then set in the wider context of coloured quantum groups. We give a coloured generalisation of the contraction procedure and obtain new coloured Jordanian quantum groups. A careful study of the properties of both $G_{r,s}$ and $G_{m,k}$ leads to their respective coloured extensions. Furthermore, we show that the homomorphisms of the uncoloured case naturally extend to the coloured case. \par The physical interest in studying ${G}_{r,s}$ lies in the observation that, when endowed with a $\ast$-structure, this quantum group specialises to a two parameter quantum deformation of $SU(2) \otimes U(1)$, which is precisely the gauge group for the theory of electroweak interactions. Since gauge theories have an obvious differential geometric description, the study of differential calculus [2] provides insight into constructing a $q$-gauge theory based on ${G}_{r,s}$. It would also be of significance to generalise the formalism of differential calculus to the case of coloured quantum groups and explore possible physical applications. \par \section*{Acknowledgments} D.P. is grateful to the organisers of the Symposium and would like to thank Prof. David Radford and Dr. Gustav Delius for useful comments. The authors have also benefited from discussions with Prof. Vlado Dobrev and Dr. Preeti Parashar.\par \section*{References} [1] B. Basu-Mallick, hep-th/9402142; {\sl Intl. J. Mod. Phys.} {\bf A10}, 2851 (1995).\par [2] D. Parashar and R. J.
McDermott, Kyoto University preprint RIMS-1260 (1999), math.QA/9901132.\par [3] B. A. Kupershmidt, {\sl J. Phys.} {\bf A25}, L1239 (1992).\par [4] A. Aghamohammadi, M. Khorrami and A. Shariati, {\sl J. Phys.} {\bf A28}, L225 (1995).\par [5] M. Alishahiha, {\sl J. Phys.} {\bf A28}, 6187 (1995).\par [6] D. Parashar and R. J. McDermott, math.QA/9909001, {\sl Czech. J. Phys.}, in press.\par [7] P. Parashar, {\sl Lett. Math. Phys.} {\bf 45}, 105 (1998).\par [8] C. Quesne, {\sl J. Math. Phys.} {\bf 38}, 6018 (1997); {\em ibid} {\bf 39}, 1199 (1998).\par [9] V. V. Bazhanov and Yu. G. Stroganov, {\sl Theor. Math. Phys.} {\bf 62}, 253 (1985).\par [10] A. Kundu and B. Basu-Mallick, {\sl J. Phys.} {\bf A27}, 3091 (1994); B. Basu-Mallick, {\sl Mod. Phys. Lett.} {\bf A9}, 2733 (1994).\par [11] D. Bonatos {\sl et al.}, {\sl J. Math. Phys.} {\bf 38}, 369 (1997); C. Quesne, q-alg/9705022.\par [12] T. Ohtsuki, {\sl J. Knot Theor. Its Rami.} {\bf 2}, 211 (1993).\par [13] D. Parashar and R. J. McDermott, math.QA/9911194, {\sl J. Math. Phys.}, in press.\par [14] R. J. McDermott and D. Parashar, math.QA/9909045, {\sl Czech. J. Phys.}, in press.\par \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Solidification and crystallisation processes are present in various natural phenomena as well as in a large number of material production technologies such as semiconductor crystal growth from the melt, alloy metallurgy, etc. Usually the melt used for the production of solid material is not a pure substance but rather a solution containing some dissolved dopants or impurities. Often the solid material grown from the solution has a non-uniform distribution of the dissolved substance, although the original solution was uniform. This non-uniformity is caused by the difference between the equilibrium concentrations of solute in the liquid and solid phases. Thus, if the equilibrium concentration of solute in a crystal is lower than in the melt, only a fraction of the solute is incorporated from the melt into the growing crystal, while the remaining part is repelled by the solidification front as it advances into the liquid phase \cite{Hurle}. This effect causes axial segregation of the solute, usually concentrated in a thin, diffusion-controlled boundary layer adjacent to the solidification front. Axial segregation can be strongly influenced by the melt convection. According to the original work by Burton, Prim and Slichter (BPS) \cite{BPS}, a sufficiently strong convection towards the crystallisation front reduces the thickness of the segregation boundary layer and hence the solute concentration entering the crystal. Such a concept of the solute boundary layer has been widely accepted to interpret the effect of melt flow on the solute distribution in various crystal growth configurations \cite{CamelFavier83,CamelFavier86,Garandetetal}. The BPS approach, originally devised for a rotating-disk flow modelling an idealised Czochralski growth configuration, supposes the melt to be driven towards the solidification front by a radially diverging flow. However, in many cases, as, for instance, in a flow rotating over a disk at rest \cite{Schlichting}, in a flow driven by a rotating \cite{Davidson} or a travelling \cite{Yesil04} magnetic field, as well as in the natural convection above a concave solidification front in the vertical Bridgman growth process \cite{ChangBrown83}, the melt is driven away from the solidification front in its central part by a radially converging flow. Though several extensions of the BPS solution exist (e.g. \cite{Wilson78,Wheeler80,HurleSeries,Cartwright}), the possibility of a reversed flow, directed away from the crystallisation front, has not yet been considered in that context. In this work, we show that the BPS approach becomes invalid for converging flows because the effective boundary layer thickness, which is the basic concept of the BPS theory, is defined by an integral that diverges for a flow directed away from the solidification front. The divergence can formally be avoided by restricting the space occupied by the melt above the solidification front to a layer of finite depth, but for higher melt velocities this solution becomes physically inconsistent, too. Next, we consider a solidification front in the form of a disk of finite radius immersed in a melt with a strong converging flow and show that such a flow results in a logarithmic solute segregation along the solidification front, with a peak at the symmetry axis. An analytical solution is obtained by an original technique using a Laplace transform. The advantages of this solution are its simple analytical form as well as its high accuracy, which has been verified by comparison with numerical solutions.
The simulation of dopant transport is an important aspect of crystal growth modelling \cite{Hirtz,Lan}, and various numerical approaches are used for it. However, a numerical approach is always limited in the sense that it provides only particular solutions while the basic relations may remain hidden. Besides, the numerical solution often requires considerable computer resources when a high spatial resolution is necessary, which is particularly the case for thin solute boundary layers. It has been shown, \textit{e.g.}, by Vartak and Derby \cite{Vartak} that an insufficient resolution of the solute boundary layer may lead to numerically converged but nevertheless inaccurate results. The paper is organised as follows. In Section 2 we discuss the BPS-type approach and show its inapplicability to converging flows. The simple model problem of radial segregation along a disk of finite radius in a strong converging flow is described in Section 3, and an analytical solution for the concentration distribution on the disk surface is obtained in Section 4. Summary and conclusions are presented in Section 5. \section{Breakdown of BPS-type solutions} Consider a simple solidification model consisting of a flat, radially unbounded solidification front advancing at velocity $v_{0}$ into a half-space occupied by the melt, which is a dilute solution characterised by the solute concentration $C.$ The latter is assumed to be uniform and equal to $C_{\infty}$ sufficiently far away from the solidification front. Solute is transported in the melt by both diffusion, with a coefficient $D$, and the melt convection, with a velocity field $\vec{v}$. At the solidification front, assumed to be at thermodynamic equilibrium, the ratio of solute concentrations in the solid and liquid phases is given by the equilibrium partition coefficient $k$. In the absence of convection, the repelled solute concentrates in a boundary layer with the characteristic thickness $\delta_{0}=D/v_{0}$. We consider in the following the usual case of a momentum boundary layer much larger than the solute boundary layer, \textit{i.e.}, a high Schmidt number $\mathit{Sc}=\nu/D\gg1$, where $\nu$ is the kinematic viscosity of the melt. The basic assumption of the BPS approach is that the lateral segregation is negligible and thus the solute transport is affected only by the normal velocity component. The latter is approximated in the solute boundary layer by a power series expansion in the distance $z$ from the solidification front as $v(z)\approx\frac{1}{2}v''(0)z^{2}.$ Then the equation governing the concentration distribution in the solute boundary layer may be written in dimensionless form as \begin{equation} -(1+\mathit{Pe}z^{2})\frac{dC}{dz}=\frac{d^{2}C}{dz^{2}}, \label{eq:BPS} \end{equation} where $\mathit{Pe}=\frac{v''(0)\delta_{0}^{3}}{2D}$ is the local P\'{e}clet number based on the characteristic boundary layer thickness $\delta_{0}$, which is used as the length scale here, while the concentration is scaled by $C_{\infty}.$ The boundary conditions for the uniformly mixed melt core and the solid-liquid interface take the form $\left.C\right|_{z\rightarrow\infty}\rightarrow1$ and \begin{equation} \left[(1-k)C+\frac{dC}{dz}\right]_{z=0}=0.
\label{BPS.bnd} \end{equation} The solution of this problem is \begin{equation} C(z)=1+A\int_{z}^{\infty}\exp\left(-t-\frac{\mathit{Pe}}{3}t^{3}\right)\,dt, \label{BPS.sol} \end{equation} where the constant $A=\frac{1-k}{1-(1-k)\Delta(\mathit{Pe})}$ is obtained from (\ref{BPS.bnd}) in terms of $\Delta(\mathit{Pe})= \int_{0}^{\infty}\exp\left(-t-\frac{\mathit{Pe}}{3}t^{3}\right)\,dt$, which, according to the relation $C'(0)=\frac{C(\infty)-C(0)}{\Delta(\mathit{Pe})}$, represents an effective dimensionless thickness of the solute boundary layer. Eventually, the concentration at the solidification front is obtained as $C(0)=\left[1-(1-k)\Delta(\mathit{Pe})\right]^{-1}$. This is the central result of the BPS approach: only the effective thickness of the solute boundary layer, defined by the local velocity profile, is needed to find the solute concentration at the solidification front for a given uniform concentration in the bulk of the melt. However, it is important to note that this solution is limited to $\mathit{Pe}\geq0$: it becomes invalid for $\mathit{Pe}<0$, when the flow is directed away from the solidification front, because both the integral in Eq. (\ref{BPS.sol}) and $\Delta(\mathit{Pe})$ diverge in this case. The goal of this study is to find out what happens to the solute distribution when the flow is directed away from the solidification front and the BPS solution breaks down. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{fig1.eps} \caption{ \label{cap:sketch1} Sketch of a radially unbounded flat layer with solidification and melting fronts at bottom and top, respectively. } \end{figure} The divergence in the BPS model for $\mathit{Pe}<0$ is obviously related to the unbounded interval of integration, which can be avoided by taking into account the finite axial size of the system. The simplest such model, shown in Fig. \ref{cap:sketch1}, is provided by a flat, radially unbounded layer between two disks separated by a distance $2H$. The upper and lower disks represent the melting and solidification fronts, respectively, and the molten zone proceeds upwards with velocity $v_{0}.$ There is a forced convection in the melt with the axial velocity $v(z)$, which is assumed to satisfy impermeability and no-slip boundary conditions. There is also a radial velocity component following from the incompressibility constraint, which, however, is not relevant as long as a radially uniform concentration distribution is considered. Here we choose $H$ as the length scale so that the boundaries are at $z=\pm1$. At the upper boundary, there is a constant solute flux due to the melting, at velocity $v_{0}$, of the feed rod with the given uniform concentration $C_{0}$ \[ \left.\mathit{Pe}_{0}(C-C_{0})+\frac{dC}{dz}\right|_{z=1}=0. \] Note that this boundary condition, which follows from mass conservation, does not formally satisfy the local thermodynamic equilibrium relating the solute concentrations in the solid and liquid phases. In order to ensure equilibrium concentrations at the melting front, it would be necessary to also take into account the diffusion in the solid phase, which, however, is neglected here. Such an approximation is justified by the smallness of the corresponding diffusion coefficient.
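As a brief numerical illustration of the classical BPS result above, the effective thickness $\Delta(\mathit{Pe})$ and the front concentration $C(0)=\left[1-(1-k)\Delta(\mathit{Pe})\right]^{-1}$ are easily evaluated by quadrature. The following minimal sketch (assuming \texttt{scipy}; the partition coefficient $k=0.1$ is illustrative only) reproduces the pure-diffusion limit $C(0)=1/k$ at $\mathit{Pe}=0$; for $\mathit{Pe}<0$ the cubic term makes the integrand grow without bound and the integral diverges, as discussed above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def delta(Pe):
    # Delta(Pe) = int_0^infty exp(-t - Pe t^3/3) dt, convergent only for Pe >= 0
    return quad(lambda t: np.exp(-t - Pe * t**3 / 3), 0, np.inf)[0]

def c_front(Pe, k=0.1):
    # BPS concentration at the solidification front, scaled by C_infinity
    return 1.0 / (1.0 - (1.0 - k) * delta(Pe))

for Pe in [0.0, 0.1, 1.0, 10.0]:
    print(Pe, delta(Pe), c_front(Pe))   # Pe = 0 gives Delta = 1, C(0) = 1/k
\end{verbatim}
Stronger convection towards the front (larger $\mathit{Pe}$) shrinks $\Delta(\mathit{Pe})$ and drives $C(0)$ towards the bulk value, in agreement with the discussion following Eq. (\ref{BPS.sol}).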
\begin{figure} \centering \includegraphics[width=0.75\columnwidth]{fig2.eps} \caption{ \label{cap:BPSm} Modified effective boundary layer thickness $\mathit{Pe}_{0}\Delta(\mathit{Pe}_{0},\mathit{Pe}_{1})-1$ at the solidification front for a horizontal liquid layer of finite height with the flow away from the solidification front, versus the P\'{e}clet number $\mathit{Pe}_{1}$ of melt stirring, at various P\'{e}clet numbers $\mathit{Pe}_{0}$ based on the solidification rate.} \end{figure} At the lower boundary, coinciding with the moving solidification front, the boundary condition is \begin{equation} \left.(1-k)\mathit{Pe}_{0}C+\frac{dC}{dz}\right|_{z=-1}=0, \label{bnd:BPS-m} \end{equation} where $\mathit{Pe}_{0}=v_{0}H/D$ is the P\'{e}clet number based on the solidification velocity. The radially uniform concentration distribution, depending only on the axial coordinate $z$, is governed by \begin{equation} \left(-\mathit{Pe}_{0}+\mathit{Pe}_{1}v(z)\right) \frac{dC}{dz}=\frac{d^{2}C}{dz^{2}}, \label{eq:BPS-m} \end{equation} where $\mathit{Pe}_{1}$ is the P\'{e}clet number of convection. The solution of the above equation is \begin{equation} C(z)=A+B\int_{-1}^{z}\exp\left[-\mathit{Pe}_{0}(t+1) +\mathit{Pe}_{1}\int_{-1}^{t}v(\tau)\,d\tau\right]\, dt. \label{sol:BPS-m} \end{equation} The boundary condition (\ref{bnd:BPS-m}) yields $B=-A(1-k)\mathit{Pe}_{0}$, while the remaining unknown constant $A$ is determined from the condition at the upper boundary. However, for our purposes it is sufficient to express $A$ in terms of the concentration at the solidification front: $A=C(-1).$ Then Eq. (\ref{sol:BPS-m}) allows us to relate the concentrations at the melting and solidification fronts \begin{equation} C(-1)=C(1)\left[1-(1-k)\mathit{Pe}_{0} \Delta(\mathit{Pe}_{0},\mathit{Pe}_{1})\right]^{-1}, \label{c0-m} \end{equation} where \begin{equation} \Delta(\mathit{Pe}_{0},\mathit{Pe}_{1})=\int_{-1}^{1}\exp \left[-\mathit{Pe}_{0}(t+1)+\mathit{Pe}_{1}\int_{-1}^{t}v(\tau)\,d\tau\right]\,dt \label{delta-gen.mod} \end{equation} is the effective solute boundary layer thickness defined by the relation $\left.\frac{dC}{dz}\right|_{z=-1}=\frac{C(1)-C(-1)} {\Delta(\mathit{Pe}_{0},\mathit{Pe}_{1})}$ following from Eqs. (\ref{bnd:BPS-m}) and (\ref{c0-m}). This effective boundary layer thickness at the solidification front is plotted in Fig. \ref{cap:BPSm} for a model velocity distribution $v(z)=\left(1-z^{2}\right)^{2}$. The effective boundary layer thickness increases with the convection, but the increase is relatively weak until $\mathit{Pe}_{1}$ becomes comparable to $\mathit{Pe}_{0}$. At this point, the effective thickness starts to grow nearly exponentially. Although the effective boundary layer thickness is now bounded for any finite value of $\mathit{Pe}_{1}$, regardless of its sign, which defines the flow direction, the obtained solution is not free of singularities. First, note that the concentration at the solid-liquid interface becomes singular when the solute boundary layer becomes so thick that $\mathit{Pe}_{0}\Delta(\mathit{Pe}_{0},\mathit{Pe}_{1})=(1-k)^{-1}$, resulting in a zero denominator in Eq. (\ref{c0-m}). Second, for larger $\mathit{Pe}_{1}$ the denominator in Eq. (\ref{c0-m}) becomes negative, implying a negative concentration at the solidification front, which is an obvious physical inconsistency.
Thus, the obtained solution is applicable only for sufficiently weak converging flows and breaks down as the velocity of the melt flow away from the solidification front becomes comparable to the growth rate at $\mathit{Pe}_{1}\sim\mathit{Pe}_{0}.$ \section{A disk of finite radius with a strong converging flow} \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{fig3.eps} \caption{ \label{cap:sketch2} Sketch of the solidification front presented by a disk of radius $R_{0}$ in a converging flow.} \end{figure} The assumption underlying both the classical BPS approach and that of the previous section is that the radial segregation is negligible. The simplest physical model which could account for radial segregation is presented by a solidification front in the form of a disk of finite radius $R_{0}$, with the melt occupying the half-space above it, as shown in Fig. \ref{cap:sketch2}. For simplicity, the velocity distribution in the melt is assumed to be that of a liquid rotating above a disk at rest. In this case, contrary to the classical BPS problem of a rotating disk, the flow is radially converging rather than diverging. Thus, within the solute boundary layer, assumed as usual to be thin relative to the momentum boundary layer, the radial and axial velocity components can be approximated as \[ v_{r}\approx-\frac{1}{2}v_{z}''(0)rz, \qquad v_{z}\approx\frac{1}{2}v_{z}''(0)z^{2}. \] Here we choose the thickness of the solute boundary layer based on the axial melt velocity as the length scale \begin{equation} d_{0}=(2D/v_{z}''(0))^{1/3}, \label{eq:d0} \end{equation} and assume the stirring of the melt to be so strong that the advancement of the solidification front with the growth velocity $v_{0}$ is slow compared to the characteristic melt flow in the solute boundary layer. The last assumption implies that the local P\'{e}clet number based on the growth rate is small: $\tilde{\mathit{Pe}}_{0}=v_{0}d_{0}/D\ll1.$ Then the problem is defined by a single dimensionless parameter, the dimensionless radius $R=R_{0}/d_{0}=R_{0}(2D/v_{z}''(0))^{-1/3},$ which may be regarded as a P\'{e}clet number based on the external length scale $R_{0}$ and the internal velocity scale $v_{0}=v_{z}''(0)d_{0}^{2}/2.$ The governing dimensionless equation is \begin{equation} z\left(z\frac{\partial C}{\partial z}-r\frac{\partial C}{\partial r}\right) =\frac{1}{r}\frac{\partial}{\partial r} \left(r\frac{\partial C}{\partial r}\right) +\frac{\partial^{2}C}{\partial z^{2}}, \label{eq:C-rbnd} \end{equation} where the radial diffusion term will be neglected, as usual, for the boundary layer solution to be obtained in the following. Sufficiently far away from the solidification front, a well-mixed melt is assumed with a uniform dimensionless concentration $C_{0}=1.$ The boundary condition at the solidification front \[ \left.\tilde{\mathit{Pe}}_{0}(1-k)C+\frac{\partial C} {\partial z}\right|_{z=0}=0, \] for $\tilde{\mathit{Pe}}_{0}\ll1$ suggests searching for the concentration in the form \begin{equation} C\approx C_{0}+\tilde{\mathit{Pe}}_{0}(1-k)C_{1}, \label{eq:C1def} \end{equation} where $C_{1}$ is the deviation of the concentration from its uniform core value $C_{0}=1$, with a characteristic magnitude $\tilde{\mathit{Pe}}_{0}(1-k)\ll1$. Then the boundary condition for $C_{1}$ takes the form $\left.\frac{\partial C_{1}}{\partial z}\right|_{z=0}=-1,$ while $C$ is substituted by $C_{1}$ in Eq. (\ref{eq:C-rbnd}) which, compared to the original BPS Eq.
(\ref{eq:BPS}), has an extra term accounting for the radial advection, whereas both the axial advection term due to the solidification speed and the radial diffusion term have been neglected. Note that, on the one hand, the radial advection term is indeed important: without it we recover the BPS case, which was shown above to have no bounded solution. On the other hand, for the radial advection term to be significant, the solute distribution has to be radially nonuniform. However, searching for a self-similar solution in the form $C_{1}(r,z)=r^{\alpha}F(zr^{\beta})$ leads only to the radially uniform solution with $\alpha=\beta=0$. This implies that a possible solution has to incorporate the radial length scale $R$. Additional difficulties with finding similarity solutions are caused by the explicit appearance of $r$ in Eq. (\ref{eq:C-rbnd}). Both these facts suggest the substitution $\tau=-\ln(r)$ that transforms Eq. (\ref{eq:C-rbnd}) into \begin{equation} z\left(z\frac{\partial C}{\partial z} +\frac{\partial C}{\partial\tau}\right) =\frac{\partial^{2}C}{\partial z^{2}} \label{eq:C-tau} \end{equation} with the radial diffusion term neglected as mentioned above. Since the transformed equation does not explicitly contain $\tau,$ $C(\tau,z)$ being a solution implies that $C(\tau-\tau_{0},z)$ is also a solution. Consequently, $\tau$ can be replaced by $\tau-\tau_{0},$ where $\tau_{0}=-\ln(R)$ and thus $\tau=\ln(R/r)$. Note that $\tau=0$ corresponds to the rim of the disk while $\tau\rightarrow\infty$ to the symmetry axis. \section{Solution by Laplace transform} Equation (\ref{eq:C-tau}) can be solved efficiently by a Laplace transform, providing asymptotic solutions of the solute distribution along the solidification front for both small and large $\tau$. The Laplace transform defined as $\bar{C}(s,z)=\int_{0}^{\infty}C_{1}(\tau,z)e^{-s\tau}d\tau$ transforms Eq. (\ref{eq:C-tau}) into \[ z\left(z\frac{d\bar{C}}{dz}+s\bar{C}\right)=\frac{d^{2}\bar{C}}{dz^{2}}, \] where $s$ is a complex transformation parameter, while the boundary condition at the solidification front takes the form $\left.\frac{\partial\bar{C}}{\partial z}\right|_{z=0}=-\frac{1}{s}.$ A bounded solution of this problem is $\bar{C}(s,z)=cU\left(\frac{s}{3},\frac{2}{3},\frac{z^{3}}{3}\right),$ where $U(a,b,x)$ is the confluent hypergeometric function \cite{Abramowitz}. The constant $c$ is determined from the boundary condition at the solidification front as $c=\frac{3^{-2/3}}{s}\frac{\Gamma(s/3)}{\Gamma(2/3)}$. At the solidification front we obtain \[ \bar{C}(s,0)=\frac{3^{-2/3}}{s}\frac{\Gamma(1/3)}{\Gamma(2/3)} F\left(\frac{s}{3};\frac{1}{3}\right), \] where \begin{equation} F(p;a)=\frac{\Gamma(p)}{\Gamma(p+a)}. \label{eq:ffun} \end{equation} The concentration distribution along the solidification front is then given by the inverse Laplace transform \[ C_{1}(\tau,0)=\frac{1}{2\pi i}\int_{b-i\infty}^{b+i\infty}e^{s\tau}\bar{C}(s,0)\,ds. \] The solution for small $\tau$ follows from the asymptotic expansion of $F(p;a)$ for $\left|p\right|\gg1$, which can be written as \[ F\left(\frac{s}{3};\frac{1}{3}\right)= \sum_{j=0}^{\infty}f_{j}\left(\frac{1}{3}\right)\left(\frac{s}{3}\right)^{-j-1/3}, \] where $f_{j}(a)$ are the coefficients of the asymptotic expansion $F(p;a)=p^{-a}\sum_{j=0}^{\infty}\frac{f_{j}(a)}{p^{j}}$, which can be found efficiently by the following approach. We start with the basic relation $F(p;a)=(1+a/p)F(p+1;a)$ resulting from (\ref{eq:ffun}).
The asymptotic expansion of both sides of this relation can be written as \begin{equation} \sum_{j=0}^{\infty}\frac{f_{j}(a)}{p^{j}}=\sum_{j=0}^{\infty} \frac{f_{j}(a)}{p^{j}}g_{j}(p;a), \label{eq:renorm} \end{equation} where $g_{j}(p;a)=\left(1+ap^{-1}\right)\left(1+p^{-1}\right)^{-(a+j)} =\sum_{l=0}^{\infty}\frac{g_{j,l}(a)}{p^{l}}$ with the expansion coefficients \[ g_{j,l}(a)= \left\{ \begin{array}{cc} 1, & l=0\\ \frac{(-1)^{l}}{l!}(a+j)_{l-1}(j+(l-1)(1-a)), & l>0 \end{array} \right., \] defined by use of Pochhammer's symbol $(p)_{n}=\frac{\Gamma(p+n)}{\Gamma(p)}$. Substituting the above expansion back into Eq. (\ref{eq:renorm}) and comparing the terms with equal powers of $p$ we obtain $f_{j}(a)=\sum_{l=0}^{j}f_{l}(a)g_{l,j-l}(a),$ which, due to $g_{l,0}=1$, simplifies to $\sum_{l=0}^{j-1}f_{l}(a)g_{l,j-l}(a)=0.$ Upon replacing $j$ by $j+1$ and taking into account $g_{j,1}(a)=-j$, the latter relation results in \begin{equation} f_{j}(a)=\frac{1}{j}\sum_{l=0}^{j-1}f_{l}(a)g_{l,j+1-l}(a), \label{eq:recurs} \end{equation} defining $f_{j}(a)$ recursively for $j>0.$ In order to apply this recursion we need $f_{0}(a)$, which can be shown to be independent of $a$; therefore $f_{0}(a)=1$ because $f_{0}(0)=1.$ Eventually, we obtain \begin{equation} C_{1}(\tau,0)=\frac{3^{2/3}}{\Gamma(2/3)}\sum_{j=0}^{\infty}d_{j}\tau^{j+1/3}, \label{sol:powser} \end{equation} where $d_{j}=\frac{3^{j-1}f_{j}(1/3)}{(1/3)_{j+1}}$. This means that the radial solute segregation along the solidification front near the rim is characterised by the leading term $C_{1}(r,0)\approx\frac{3^{2/3}}{\Gamma(2/3)}\ln^{1/3}(R/r)$. The first 9 coefficients of the series expansion (\ref{sol:powser}) calculated analytically by \textit{Mathematica} \cite{Mathematica} are shown in Table \ref{cap:table}. The convergence of the obtained power-series solution is limited to $\tau\leq\lim_{j\rightarrow\infty}\sqrt{-\frac{d_{j}}{d_{j+2}}}\approx2.09$ \cite{Hinch}. \begin{table} \begin{center}\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $j$& $0$& $1$& $2$& $3$& $4$& $5$& $6$& $7$& $8$\tabularnewline \hline $d_{j}$& $1$& $\frac{1}{4}$& $\frac{1}{28}$& $-\frac{1}{120}$& $-\frac{1}{390}$& $\frac{1}{960}$& $\frac{121}{383040}$& $-\frac{71}{443520}$& $-\frac{19}{403200}$ \tabularnewline \hline \end{tabular}\end{center} \caption{\label{cap:table}First 9 coefficients of the series expansion (\ref{sol:powser}) calculated analytically. } \end{table} The Laplace transform also yields the asymptotic solution for $\tau\gg1$, determined by the singularity of the image at $s=0$, where \[ \frac{1}{s}F\left(\frac{s}{3};\frac{1}{3}\right) \approx\frac{3}{\Gamma(1/3)}\frac{1}{s^{2}}\left(1+\frac{s}{3} \left(\psi(1)-\psi\left(\frac{1}{3}\right)\right)\right), \] which straightforwardly leads to \begin{equation} C_{1}(r,0)\approx c_0 \left(\ln(R/r)+c_1\right), \label{eq:C1r} \end{equation} where $c_{0}=\frac{3^{1/3}}{\Gamma\left(2/3\right)}\approx1.0651,$ $c_{1}=\frac{1}{3}\left(\psi\left(1\right)-\psi\left(\frac{1}{3}\right)\right)= \ln\sqrt{3}+\frac{\pi}{6\sqrt{3}}\approx0.8516,$ and $\psi(x)$ is the Psi (Digamma) function \cite{Abramowitz}. This solution, plotted versus $\tau=\ln(R/r)$ in Fig. \ref{cap:powser}, is seen to match both the numerical solution and the exact power-series solution (\ref{sol:powser}) surprisingly well already at $\tau>1$. The numerical solution of Eq. (\ref{eq:C-tau}) is obtained by a Chebyshev collocation method with an algebraic mapping to a semi-infinite domain for $z$ and a Crank-Nicolson scheme for $\tau$ \cite{Canuto}.
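
The recursion (\ref{eq:recurs}) is straightforward to implement in exact rational arithmetic. The following Python sketch (an independent illustration, not the \textit{Mathematica} code used for Table~\ref{cap:table}) reproduces the coefficients $d_{j}$ and compares the truncated series (\ref{sol:powser}) with the asymptotic solution (\ref{eq:C1r}) at an intermediate $\tau$ inside the radius of convergence:

\begin{verbatim}
# Exact evaluation of the recursion (eq:recurs) for f_j(a) and of the
# series coefficients d_j of Eq. (sol:powser); illustrative sketch only.
from fractions import Fraction
from math import gamma, log, pi, sqrt

def poch(x, n):                      # Pochhammer symbol (x)_n
    r = Fraction(1)
    for i in range(n):
        r *= x + i
    return r

def g(j, l, a):                      # expansion coefficients g_{j,l}(a)
    if l == 0:
        return Fraction(1)
    return (-1)**l * poch(a + j, l - 1) * (j + (l - 1)*(1 - a)) \
           / poch(Fraction(1), l)    # poch(1, l) = l!

a, jmax = Fraction(1, 3), 8
f = [Fraction(1)]                    # f_0(a) = 1
for j in range(1, jmax + 1):
    f.append(sum(f[l] * g(l, j + 1 - l, a) for l in range(j)) / j)

d = [Fraction(3)**(j - 1) * f[j] / poch(a, j + 1) for j in range(jmax + 1)]
print(d)                 # 1, 1/4, 1/28, -1/120, -1/390, ... as in Table 1

tau = 1.5                # inside the convergence radius ~2.09
series = 3**(2/3)/gamma(2/3) * sum(float(dj)*tau**(j + 1/3)
                                   for j, dj in enumerate(d))
c0 = 3**(1/3)/gamma(2/3)
c1 = log(sqrt(3.0)) + pi/(6.0*sqrt(3.0))
print(series, c0*(tau + c1))   # agree to within about one per cent
\end{verbatim}
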
\begin{figure} \centering \includegraphics[width=0.75\columnwidth]{fig4.eps} \caption{ \label{cap:powser} Solute distribution along the solidification front from the rim versus $\tau=\ln(R/r)$ resulting from different approximations in comparison to the numerical and exact solutions of Eq. (\ref{eq:C-tau}).} \end{figure} Note that the solution (\ref{eq:C1r}), describing the solute concentration increasing along the solidification front as $\sim\ln(R/r)$, is not applicable at the symmetry axis $r=0$ where it becomes singular. This apparent singularity is due to the neglected radial diffusion term in Eq. (\ref{eq:C-rbnd}), which becomes significant in the vicinity of the symmetry axis, at distances comparable to the characteristic thickness of the solute boundary layer (\ref{eq:d0}), corresponding to a dimensionless radius of $r\sim 1.$ The radial diffusion, becoming effective at $r\lesssim1$, is expected to limit the concentration peak to $\sim\ln(R).$ The asymptotic solution for the solute boundary layer forming around the symmetry axis, which will be published elsewhere because of its length and complexity, yields for $R\gg1$ the peak value of the concentration perturbation at the symmetry axis \begin{equation} C_{1}(0,0)\approx c_{0}(\ln(R)+c_{1})-c_{r}, \label{eq:C1-peak} \end{equation} where $c_{r}\approx0.3772.$ The concentration distribution along the solidification front in the vicinity of the symmetry axis is shown in Fig. \ref{cap:cncola_r}. As can be seen, the solution approaches the finite value (\ref{eq:C1-peak}) at the symmetry axis, while the asymptotic solution (\ref{eq:C1r}) represents a good approximation for $r\gtrsim 2.$ This solution is obtained numerically by a Chebyshev collocation method \cite{Canuto} applied to Eq. (\ref{eq:C-rbnd}) with the asymptotic boundary conditions $\left.r\frac{\partial C_{1}}{\partial r}\right|_{r\rightarrow\infty}= \left.z\frac{\partial C_{1}}{\partial z}\right|_{z\rightarrow\infty}=-c_{0}$ supplied by the outer asymptotic solution. This defines the solution in the corner region at the symmetry axis up to an arbitrary constant, which is determined by matching with the outer analytic asymptotic solution and yields the constant $c_r$ appearing in Eq. (\ref{eq:C1-peak}). Note that in the described asymptotic approximation the difference $C_1(r,0)-C_1(0,0)$ shown in Fig. \ref{cap:cncola_r} is a function of $r$ only, while the dependence on $R$ is contained entirely in $C_1(0,0)$ defined by Eq. (\ref{eq:C1-peak}). \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{fig5.eps} \caption{\label{cap:cncola_r} Concentration perturbation relative to its peak value (\ref{eq:C1-peak}) along the solidification front in the vicinity of the symmetry axis together with the corresponding asymptotic solution (\ref{eq:C1r}).} \end{figure}
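
For orientation, the constants entering Eqs. (\ref{eq:C1r}) and (\ref{eq:C1-peak}) and the resulting peak values are easily evaluated; the short sketch below (assuming SciPy for the digamma function; the values of $R$ are arbitrary samples) illustrates the slow logarithmic growth of the peak with the dimensionless radius:

\begin{verbatim}
# Constants of Eqs. (eq:C1r), (eq:C1-peak) and the peak perturbation
# C_1(0,0) for a few sample aspect ratios R = R0/d0; sketch only.
from math import gamma, log
from scipy.special import digamma

c0 = 3**(1/3)/gamma(2/3)                  # ~ 1.0651
c1 = (digamma(1.0) - digamma(1/3))/3.0    # ~ 0.8516
cr = 0.3772                               # matching constant of the
                                          # inner (axis) solution
for R in (10.0, 100.0, 1000.0):
    print(R, c0*(log(R) + c1) - cr)       # C_1(0,0) of Eq. (eq:C1-peak)
\end{verbatim}
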
This resulted in a radially uniform solute distribution which, however, breaks down as the velocity of the melt flow away from the solidification front becomes comparable to the growth rate. This suggested that a sufficiently strong radially converging melt flow is incompatible with a radially uniform concentration distribution and, consequently, radial solute segregation is unavoidable in such flows. Thus, we next analysed the radial solute segregation caused by a strong converging melt flow over a solidification front modeled by a disk of finite radius $R_{0}$. We obtained an analytic solution showing that the radial solute concentration at the solidification front depends on the cylindrical radius $r$ as $\sim\ln^{1/3}\left(R_{0}/r\right)$ and $\sim\ln\left(R_{0}/r\right)$ close to the rim of the disk and at large distances away from it, respectively. It is important to note that these scalings do not imply any singularity at the axis $r=0$. Instead, the concentration perturbation takes the finite value (\ref{eq:C1-peak}) at the centre of the finite-radius disk. It has to be stressed that, according to our analysis, the radial segregation is larger by a factor $\ln(R_{0}/d_{0})$ than that suggested by a simple order-of-magnitude or dimensional analysis (\textit{e.g.}\ Eq. (\ref{eq:C1def})). Thus, for converging flows the concentration at the solidification front is determined not only by the local velocity distribution but also by the ratio of internal and external length scales, which appears as a logarithmic correction factor to the result of a corresponding scaling analysis. The main conclusion is that flows converging along the solidification front, in contrast to diverging ones, cause a radial solute segregation with a logarithmic concentration peak at the symmetry axis, which might be an undesirable feature for crystal growth applications. \section{Acknowledgements} Financial support from Deutsche Forschungsgemeinschaft in the framework of the Collaborative Research Centre SFB 609 and from the European Commission under grant No. G1MA-CT-2002-04046 is gratefully acknowledged.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Semisupervised methods inevitably invoke some assumption that links the marginal distribution $p(x)$ of the features $X$ to the regression function $f(x) = \mathbb{E}[Y|X = x]$ of the label $Y$. The most common assumption is the {\em cluster assumption}, in which it is assumed that $f$ is very smooth wherever $p$ exhibits clusters \citep{seeger,rigollet07,LW:nips07,ASingh:unlabeled}. In the special case where the clusters are manifolds, this is called the {\em manifold assumption} \citep{LW:nips07,belkin_niyogi,partha}. A generalization of the cluster and manifold assumptions is that the regression function is smooth with respect to some density-sensitive distance. Several recent papers propose using a density-based metric or diffusion distance for semisupervised learning \citep{orlitsky,diff_maps,bousquet04}. In this paper, we analyze semisupervised inference under this generalized assumption. Singh, Nowak and Zhu \citeyearpar{ASingh:unlabeled}, Lafferty and Wasserman \citeyearpar{LW:nips07} and Nadler et al.\ \citeyearpar{nadler09} have shown that the degree to which unlabeled data improves performance is very sensitive to the cluster and manifold assumptions. In this paper, we introduce {\em adaptive semisupervised inference}. We define a parameter $\alpha$ that controls the sensitivity of the distance metric to the density, and hence the strength of the semisupervised assumption. When $\alpha = 0$ there is no semisupervised assumption, that is, there is no link between $f$ and $p$. When $\alpha = \infty$ there is a very strong semisupervised assumption. We use the data to estimate $\alpha$ and hence we adapt to the appropriate assumption linking $f$ and $p$. This paper makes the following contributions: (a) we propose a semisupervised learner that uses a density-sensitive kernel and show that it provides better performance than any supervised learner if the density support set has a small condition number, and (b) we show that it is possible to adapt to the degree of semisupervisedness using a data-dependent choice of a parameter that controls the sensitivity of the distance metric to the density. This ensures that the semisupervised learner never performs worse than a supervised learner even if the assumptions fail to hold. Preliminary simulations, to be reported in future work, confirmed that our proposed estimator adapts well to $\alpha$ and has good risk both when the semisupervised smoothness assumption holds and when it fails. {\em Related Work.} There are a number of papers that discuss conditions under which semisupervised methods can succeed or that discuss metrics that are useful for semisupervised methods. These include \citet{bousquet04}, \citet{SSL_TR}, \citet{nadler09}, \citet{orlitsky} and references therein. However, to the best of our knowledge, there are no papers that explicitly study adaptive methods that allow the data to choose the strength of the semisupervised assumption. {\em Outline.} This paper is organized as follows. In Section~\ref{sec:setup} we define a set of joint distributions ${\cal P}_{XY}(\alpha)$ indexed by $\alpha$. In Section~\ref{sec:est}, we define a density-sensitive estimator $\widehat f_\alpha$ of $f$, assuming that $(f, p) \in {\cal P}_{XY}(\alpha)$. We find finite sample bounds on the error of $\widehat f_\alpha$ and we investigate the dependence of this error on $\alpha$. In Section~\ref{sec:adap}, we show that cross-validation can be used to adapt to $\alpha$. We conclude in Section~\ref{sec:disc}.
\section{Definitions} \label{sec:setup} We consider the collection of joint distributions ${\cal P}_{XY} (\alpha) = {\cal P}_X \times {\cal P}_{Y|X}$ indexed by a density-sensitivity parameter $\alpha$ as follows. $X, Y$ are random variables, $X$ is supported on a compact domain ${\cal X} \subset \mathbb{R}^d$, and $Y$ is real-valued. The marginal density $p(x) \in [\lambda_0,\Lambda_0]$ is bounded over its support $\{x:p(x)>0\}$, where $0<\lambda_0,\Lambda_0<\infty$. Also, let the conditional density be $p(y|x)$ with variance bounded by $\sigma^2$, and the conditional label mean or regression function be $f(x)= \mathbb{E}[Y|X=x]$, with $|f(x)|\leq M$. We say that $(p, f) \in {\cal P}_{XY}(\alpha)$ if these functions satisfy the properties described below. Before stating the properties of $f$ and $p$, we define a distance metric with density sensitivity $\alpha$. {\bf Density-sensitive distance:} We consider the following distance with density sensitivity $\alpha \in [0,\infty)$ between two points $x_1, x_2 \in {\cal X}$, which is a modification of the definition in \citet{orlitsky}: \begin{equation} D_\alpha(x_1,x_2) = \inf\limits_{\gamma\in\Gamma(x_1,x_2)} \int\limits_0^{L(\gamma)} \frac1{p(\gamma(t))^\alpha} dt, \end{equation} where $\Gamma(x_1,x_2)$ is the set of all continuous finite curves from $x_1$ to $x_2$ with unit speed everywhere and $L(\gamma)$ is the length of curve $\gamma$ (i.e. $\gamma(L(\gamma))=x_2$). Notice that large $\alpha$ makes points connected by high-density paths closer, while $\alpha=0$ corresponds to Euclidean distance. Our first assumption is that the regression function $f$ is smooth with respect to the density-sensitive distance: {\bf A1) Semisupervised smoothness:} The regression function $f(x) = \mathbb{E}[Y|X=x]$ is $\beta$-smooth with respect to the density-sensitive distance $D_\alpha$, i.e. there exist constants $C_1,\beta>0$ such that for all $x_1,x_2 \in {\cal X}$ $$ |f(x_1) - f(x_2)| \leq C_1 \ \Bigl[D_\alpha(x_1,x_2)\Bigr]^\beta. $$ In particular, if $\alpha=0$ and $\beta=1$, this corresponds to Lipschitz smoothness. Our second assumption is that the density function $p$ is smooth with respect to Euclidean distance over the support set. Recall that the {\em support} of $p$ is $S = \{x:\ p(x) > 0\}$. {\bf A2) Density smoothness:} The density function $p(x)$ is H\"{o}lder $\eta$-smooth with respect to Euclidean distance if it has $\lfloor \eta \rfloor$ derivatives and there exists a constant $C_2>0$ such that for all $x_1,x_2 \in S$ $$ |p(x_1) - T^{\lfloor \eta \rfloor}_{x_2}(x_1)| \leq C_2 \ \|x_1-x_2\|^\eta, $$ where $\lfloor \eta \rfloor$ is the largest integer such that $\lfloor \eta \rfloor < \eta$, and $T^{\lfloor \eta \rfloor}_{x_2}$ is the Taylor polynomial of degree $\lfloor \eta \rfloor$ around the point $x_2$. The {\em condition number} of a set $S$ with boundary $\partial S$ is the largest real number $\tau>0$ such that every $x$ with $d(x,\partial S) \leq \tau$ has a unique projection onto the boundary of $S$. Here, $d(x,\partial S) = \inf_{z\in \partial S}||x-z||$. When $\tau$ is large, $S$ cannot be too thin, the boundaries of $S$ cannot be too curved, and $S$ cannot get too close to being self-intersecting. If $S$ consists of more than one connected component, then a large $\tau$ also means that the connected components cannot be too close to each other. Let $\tau_0$ denote the smallest condition number of the support sets $S$ of all $p\in\mathcal{P}_X$.
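
Although none of our analysis depends on it, the behavior of $D_\alpha$ is easy to visualize numerically: on a discretized density, the infimum over curves becomes a shortest-path problem with density-weighted edge lengths. The following Python sketch (an illustration of the definition only; the grid, the blob density and the four-neighbor connectivity are arbitrary choices) computes $D_\alpha$ by Dijkstra's algorithm:

\begin{verbatim}
# Grid approximation of the density-sensitive distance D_alpha:
# neighboring grid points are joined by edges of length h / p^alpha
# (density taken as the average over the two endpoints), and D_alpha
# is the resulting shortest-path distance. Illustrative sketch only.
import heapq
import numpy as np

def d_alpha(p, alpha, source, h=1.0):
    # p: 2-d array of density values on a grid with spacing h;
    # returns approximate D_alpha from `source` to every grid point
    n, m = p.shape
    dist = np.full((n, m), np.inf)
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and p[a, b] > 0:
                w = h / (0.5*(p[i, j] + p[a, b]))**alpha
                if d + w < dist[a, b]:
                    dist[a, b] = d + w
                    heapq.heappush(pq, (d + w, (a, b)))
    return dist

# two high-density blobs in a low-density background
p = 0.1*np.ones((40, 40))
p[5:15, 5:15] = 2.0
p[25:35, 25:35] = 2.0
for alpha in (0.0, 1.0, 2.0):
    print(alpha, d_alpha(p, alpha, (10, 10))[30, 30])
\end{verbatim}

With $\alpha=0$ the result is the (grid) Euclidean distance, while for large $\alpha$ points inside the high-density blobs become much closer to each other than to points across the low-density region; this is exactly the behavior exploited by assumption A1, and the plug-in estimator defined in the next section applies the same idea with $p$ replaced by a density estimate.
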
We shall see that semisupervised inference outperforms supervised inference when $\tau_0$ is small. Additionally, we assume that $S$ has at most $K<\infty$ connected components. In the supervised setting, we assume access to $n$ labeled data ${\cal L} = \{X_i, Y_i\}^n_{i=1}$ drawn i.i.d. from ${\cal P}_{XY}(\alpha)$, and in the semi-supervised setting, we assume access to $m$ additional unlabeled data ${\cal U} = \{X_i\}^m_{i=1}$ drawn i.i.d. from ${\cal P}_{X}$. As usual, we write $a_n = O(b_n)$ if $|a_n/b_n|$ is bounded for all large $n$. Similarly, $a_n = \Omega(b_n)$ if $|a_n/b_n|$ is bounded away from 0 for all large $n$. We write $a_n \asymp b_n$ if $a_n = O(b_n)$ and $a_n = \Omega(b_n)$. \section{Density-Sensitive Inference} \label{sec:est} Let $K(x)$ be a symmetric non-negative function and let $K_h(x) = K(\|x\|/h)$. Let \begin{equation} \hat p_m(x) = \frac{1}{m}\sum_{i=1}^m \frac{1}{h_m^d} K_{h_m}(x-X_i) \end{equation} be the kernel density estimator of $p$ with bandwidth $h_m$, based on the unlabeled data. Define the support set estimate $\hat S = \{x:\hat p_m(x) > 0\}$ and the empirical boundary region $$ \widehat{\mathcal{R}}_{\hat\partial S} = \left\{x:\inf\limits_{z\in\partial \widehat{S}} \|x-z\|_2 < 2\delta_m\right\}, $$ where $\delta_m=2c_2\sqrt{d}\left((\log^2m)/m\right)^{\frac{1}{d}}$ for some constant $c_2>0$. Now define a plug-in estimate of the $D_\alpha$ distance as follows: $$ \hat D_{\alpha,m}(x_1,x_2) = \inf\limits_{\gamma\in\hat\Gamma(x_1,x_2)}\int\limits_0^{L(\gamma)} \frac1{\hat p_m(\gamma(t))^\alpha} dt, $$ where $\hat \Gamma(x_1,x_2) = \{\gamma \in \Gamma(x_1,x_2): \forall t \in [0,L(\gamma)] \ \gamma(t) \in \hat S \setminus \hat{\mathcal{R}}_{\hat \partial S}\}$, and $\hat D_{\alpha,m}(x_1,x_2) = \infty$ if $\hat \Gamma(x_1,x_2) = \emptyset$. We consider the following semisupervised learner, which uses a kernel that is sensitive to the density. In the following definitions we take, for simplicity, $K(x) = I(||x|| \leq 1)$. {\bf Semisupervised kernel estimator:} \begin{equation} \widehat f_{h,\alpha}(x) = \frac{\sum^n_{i=1}Y_i K_h\left(\widehat D_{\alpha,m}(x,X_i)\right)} {\sum^n_{i=1}K_h\left(\widehat D_{\alpha,m}(x,X_i)\right)}. \end{equation} \subsection{Performance upper bound for semisupervised estimator} \label{sec:SSL_UB} The following theorem characterizes the performance of the density-sensitive semisupervised kernel estimator. \begin{thm} \label{thm:SSL_UB} Assume $\lambda_0>1+c_0$ for some constant $c_0>0$ \footnote{This assumption is more restrictive than necessary, and a more general statement can be obtained by introducing a rescaling factor in the definition of the density-sensitive distance.} and let $\epsilon_m=c_1 (\log m)^{-1/2}$ for a constant $c_1 >0$ and $\delta_m=2c_2\sqrt{d}\left((\log^2m)/m\right)^{\frac{1}{d}}$ for some constant $c_2>0$. If $\tau_0\in(3\delta_m, \infty)$ and $h > (2c_4/(\tau_0^{d-1}(\lambda_0-\epsilon_m)^\alpha))$ where $c_4>0$ is a constant, then for large enough $m$ \begin{align*} & \sup\limits_{(p,f)\in\mathcal{P}_{XY}(\alpha)} \mathbb{E}_{n,m}\left\{\int (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \leq\\ & \hspace{3cm} (M^2+\sigma^2)\left(\frac{1}{m}+3 c_3 2^d\Lambda_0 \frac{\delta_m}{\tau_0}\right) \\ &\hspace{3cm}+ \left[h \left(\frac{\lambda_0+\epsilon_m}{\lambda_0}\right)^\alpha \right]^{2\beta}\\ &\hspace{3cm}+ \frac{K(M^2/e+2\sigma^2)}{n}. \end{align*} \end{thm} The proof of Theorem~\ref{thm:SSL_UB} is given in section~\ref{sec:SSL_UB_proof}.
The first term is negligible when the amount of unlabeled data $m$ is large. The second term is the bias and the third term is the variance. If the bandwidth $$h \asymp \frac1{\delta_m^{d-1}\lambda_0^\alpha}$$ and $\alpha \asymp \log m$ is large enough, then the density-sensitive semisupervised kernel estimator is able to achieve an integrated MSE rate of $O(n^{-1})$ for all joint distributions in ${\cal P}_{XY} (\alpha)$ supported on sets with condition number $\tau_0 >3\delta_m$. \subsection{Performance lower bound for any supervised estimator} We now establish a lower bound on the performance of any supervised estimator. \begin{thm} Assume $d\geq2$ and $\alpha>0$. There exists a constant $c_5>0$ depending only on $d$ so that if $\tau_0\leq c_5 n^{-\frac{1}{d-1}}$, then \begin{align*} \inf_{\widehat{f}}\sup_{(p,f)\in\mathcal{P}_{XY}(\alpha)} \mathbb{E}_n\int(\widehat{f}(x)-f(x))^2 dP(x) = \Omega(1), \end{align*} where the inf is over all supervised estimators. \label{thm:SL_LB} \end{thm} Coupled with Theorem~\ref{thm:SSL_UB}, the results state that if the condition number of the support set is small, $3\delta_m < \tau_0 \leq c_5 n^{-\frac1{d-1}}$, and $\alpha$ is large enough, then the density-sensitive semi-supervised estimator outperforms any supervised learning algorithm in terms of the integrated MSE rate. A complete proof of Theorem~\ref{thm:SL_LB} is given in the appendix. Here we provide some intuition regarding the proof strategy. We construct a set of joint distributions over $X$ and $Y$ that depends on $n$, and apply Assouad's Lemma. Intuitively, we need to take advantage of the decreasing condition number $\tau_0$. This is because if $\tau_0$ were kept fixed, then as $n$ increases the semi-supervised assumption would reduce to familiar Euclidean smoothness. So, we construct the distributions as follows. We split the unit cube in $\mathbb{R}^d$ into two rectangle sets with a small gap in between, and let the marginal density $p$ be uniform over these sets. Then we add a series of ``bumps'' between the two rectangles, as shown schematically in Figure \ref{fig:lb}. Over one of the sets we set $f\equiv M$, and over the other we set $f\equiv -M$. The number of bumps increases with $n$, implying that the condition number must decrease. The sets are designed specifically so that the condition number can be lower bounded easily as a function of $n$. In essence, as $n$ increases these boundaries become space-filling, so that there is a region where the regression function could be $M$ or $-M$, and it is not possible to tell which with only labeled data. \begin{figure} \centering \includegraphics[width=0.49\textwidth,clip=true,trim=6.6cm 4cm 6.5cm 3.3cm]{ssl_aistats_lowerbnd_schematic_mod.pdf} \vspace{-25pt} \caption{A two-dimensional cross-section of the support of a marginal density $p$ used in the proof of Theorem~\ref{thm:SL_LB}.} \label{fig:lb} \end{figure} \section{Adaptive Semisupervised Inference} \label{sec:adap} In section~\ref{sec:SSL_UB}, we established a bound on the integrated mean square error of the density-sensitive semisupervised kernel estimator. The bound is achieved by using an estimate $\hat D_\alpha$ of the density-sensitive distance. However, this requires knowing the density-sensitivity parameter $\alpha$, along with other parameters. It is critical to choose $\alpha$ (and $h$) appropriately; otherwise we might incur a large error if the semisupervised assumption does not hold or holds with a different density sensitivity value $\alpha$.
The following result shows that we can adapt to the correct degree of semisupervisedness $\alpha$ if cross-validation is used to select the appropriate $\alpha$ and $h$. This implies that the estimator gracefully degrades to a supervised learner if the semisupervised assumption (sensitivity of the regression function to the marginal density) does not hold ($\alpha = 0$). For any $f$, define the risk $R(f) = \mathbb{E}[(f(X)-Y)^2]$ and the excess risk ${\cal E}(f) = R(f) - R(f^*) = \mathbb{E}[(f(X)-f^*(X))^2]$, where $f^*$ is the true regression function. Let ${\cal H}$ be a finite set of bandwidths and let ${\cal A}$ be a finite set of values for $\alpha$. Divide the data into training data $T$ and validation data $V$. For notational simplicity, let both sets have size $n$. Let ${\cal F} = \{\hat f^T_{\alpha,h}\}_{\alpha \in {\cal A}, h\in {\cal H}}$ denote the semisupervised kernel estimators trained on data $T$ using $\alpha \in {\cal A}$ and $h \in {\cal H}$. For each $\hat f_{\alpha,h}^T\in {\cal F}$ let $\hat R^V (\hat f^T_{\alpha,h}) = n^{-1}\sum^n_{i=1}(\hat f^T_{\alpha,h}(X_i)-Y_i)^2$, where the sum is over $V$. Let $Y_i = f(X_i) + \epsilon_i$ with $\epsilon_i \stackrel{i.i.d}{\sim} {\cal N}(0,\sigma^2)$. Also, we assume that $|f(x)|, |\hat f^T_{\alpha,h}(x)| \leq M$, where $M>0$ is a constant.\footnote{ Note that the estimator can always be truncated if necessary.} \begin{thm} \label{thm:crossval} Let ${\cal F} = \{\hat f^T_{\alpha,h}\}_{\alpha \in {\cal A}, h\in {\cal H}}$ denote the semisupervised kernel estimators trained on data $T$ using $\alpha \in {\cal A}$ and $h \in {\cal H}$. Use the validation data $V$ to pick $$ (\hat \alpha,\hat h) = \arg\min_{(\alpha \in {\cal A},h\in {\cal H})} \hat R^V(\hat f^T_{\alpha,h}) $$ and define the corresponding estimator $\hat f_{\hat \alpha,\hat h}$. Then, for every $0 < \delta < 1$, \begin{align*} \mathbb{E}[{\cal E}(\hat f_{\hat \alpha,\hat h})] \leq \frac1{1-a} &\left[\min_{\alpha\in {\cal A}, h \in {\cal H}} \mathbb{E}[{\cal E}(\hat f_{\alpha,h})]\right.\\ &\hspace{0.7cm}+ \left.\frac{\log(|{\cal A}||{\cal H}|/\delta)}{nt} \right] + 4\delta M^2, \end{align*} where $0<a<1$ and $0<t < 15/(38(M^2+\sigma^2))$ are constants, and $\mathbb{E}$ denotes expectation over everything that is random. \end{thm} The proof is given in the appendix. In practice, both ${\cal H}$ and ${\cal A}$ may be taken to be of size $n^a$ for some $a>0$. Then we can approximate the optimal $h$ and $\alpha$ with sufficient accuracy to achieve the optimal rate. Setting $\delta = 1/(4 M^2n)$, we then see that the penalty for adaptation is $\frac{\log(|{\cal A}||{\cal H}|/\delta)}{nt} + 4\delta M^2 = O(\log n /n)$ and hence introduces only a logarithmic term. \section{Discussion} \label{sec:disc} Semisupervised methods are very powerful but, like all methods, they only work under certain conditions. We have shown that, when the support of the distribution is somewhat irregular (i.e., the boundary of the support of the density has a small condition number), semi-supervised methods can attain better performance. Specifically, we demonstrated that a semi-supervised kernel estimator that uses a density-sensitive distance can outperform any supervised estimator in such cases. We introduced a family of estimators indexed by a parameter $\alpha$. This parameter controls the strength of the semi-supervised assumption. We showed that the behavior of the semi-supervised method depends critically on $\alpha$.
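
In practice, the data-dependent choice of $(\alpha, h)$ in Theorem~\ref{thm:crossval} is a plain grid search over the validation risk; a minimal Python sketch is given below (the \texttt{train} argument, which stands in for fitting $\hat f^T_{\alpha,h}$ on the training half, is hypothetical):

\begin{verbatim}
# Selection of (alpha, h) by validation risk, as in Theorem 3.
# `train(alpha, h)` is a hypothetical routine returning the fitted
# predictor \hat f^T_{alpha,h}; illustrative sketch only.
import numpy as np

def select(X_va, Y_va, alphas, hs, train):
    best, best_risk = None, np.inf
    for alpha in alphas:
        for h in hs:
            f = train(alpha, h)
            risk = np.mean((f(X_va) - Y_va)**2)   # \hat R^V(\hat f^T)
            if risk < best_risk:
                best, best_risk = (alpha, h), risk
    return best
\end{verbatim}
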
Finally, we showed that cross-validation can be used to automatically adapt to $\alpha$, so that $\alpha$ does not need to be known. Hence, our method takes advantage of the unlabeled data when the semi-supervised assumption holds, but does not add extra bias when the assumption fails. Preliminary simulations confirm that our proposed estimator adapts well to $\alpha$ and has good risk both when the semi-supervised smoothness assumption holds and when it fails. We will report these results in future work. The analysis in this paper can be extended in several ways. First, it is possible to use other density-sensitive metrics such as the diffusion distance \citep{wasserman08spectral}. Second, it is possible to relax the assumption that the density $p$ is strictly bounded away from 0 on its support. Finally, other estimators besides kernel estimators can be used. We will report on these extensions elsewhere. \section{Proof of Theorem~\ref{thm:SSL_UB}} \label{sec:SSL_UB_proof} Here we prove Theorem~\ref{thm:SSL_UB} stated in section~\ref{sec:SSL_UB} (repeated below for convenience), using some results given in the appendix. \begin{thm} Assume $\lambda_0>1+c_0$ for some constant $c_0>0$ \footnote{This assumption is more restrictive than necessary, and a more general statement can be obtained by introducing a rescaling factor in the definition of the density-sensitive distance.} and let $\epsilon_m=c_1 (\log m)^{-1/2}$ for a constant $c_1 >0$ and $\delta_m=2c_2\sqrt{d}\left((\log^2m)/m\right)^{\frac{1}{d}}$ for some constant $c_2>0$. If $\tau_0\in(3\delta_m, \infty)$ and $h > (2c_4/(\tau_0^{d-1}(\lambda_0-\epsilon_m)^\alpha))$ where $c_4>0$ is a constant, then for large enough $m$ \begin{align*} & \sup\limits_{(p,f)\in\mathcal{P}_{XY}(\alpha)} \mathbb{E}_{n,m}\left\{\int (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \leq\\ & \hspace{3cm} (M^2+\sigma^2)\left(\frac{1}{m}+3 c_3 2^d\Lambda_0 \frac{\delta_m}{\tau_0}\right) \\ &\hspace{3cm}+ \left[h \left(\frac{\lambda_0+\epsilon_m}{\lambda_0}\right)^\alpha \right]^{2\beta}\\ &\hspace{3cm}+ \frac{K(M^2/e+2\sigma^2)}{n}. \end{align*} \end{thm} \begin{proof} Let $\mathcal{G}_m$ be the indicator of the event when the unlabeled sample is such that $\sup\limits_{x\in S\backslash\mathcal{R}_{\partial S}}|p(x)-\widehat{p}_m(x)|\leq\epsilon_m$ and $\partial\widehat{S}\subset\mathcal{R}_{\partial S}$. From Theorem \ref{thm:densityest}, \begin{align*} \mathbb{E}_{n,m}\left\{(1-\mathcal{G}_m)\int (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\\leq \frac{1}{m}(M^2+\sigma^2). \end{align*} We can write \begin{align*} &\mathbb{E}_{n,m}\left\{\mathcal{G}_m\int (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\ &= \mathbb{E}_{n,m}\left\{\mathcal{G}_m\int_{S_m^*} (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\ &+ \mathbb{E}_{n,m}\left\{\mathcal{G}_m\int_{S\backslash S_m^*} (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\}, \end{align*} where $S_m^*$ is as defined in Proposition \ref{thm:dbdest2}. For the boundary region we have \begin{align*} &\mathbb{E}_{n,m}\left\{\mathcal{G}_m\int\limits_{S\backslash S_m^*} (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\ &\leq (M^2+\sigma^2) P(S\backslash S_m^*)\\ &\leq \Lambda_0(M^2+\sigma^2) \mathop{\mathrm{Leb}}(S\backslash S_m^*), \end{align*} where $\mathop{\mathrm{Leb}}$ denotes the Lebesgue measure.
Since the radius of curvature of $\partial S$ is at least $\tau_0$, and $\tau_0>3\delta_m$, we have by Proposition \ref{thm:condnumarea}, \begin{align*} \mathop{\mathrm{Leb}}(S\backslash S_m^*) &\leq \mathop{\mathrm{Vol}}(\partial S) \frac{\left(\tau_0+3\delta_m\right)^d-\tau_0^d}{\tau_0^{d-1}}\\ &\leq c_3\left[\left(1+\frac{3\delta_m}{\tau_0}\right)^d-1\right]\\ &\leq c_3 \sum\limits_{i=1}^{d} \binom{d}{i} \frac{3\delta_m}{\tau_0}\\ &\leq 3 c_3 2^d \frac{\delta_m}{\tau_0}, \end{align*} where $\mathop{\mathrm{Vol}}$ denotes the $(d-1)$-dimensional volume on $\partial S$. So \begin{align*} \mathbb{E}_{n,m}\left\{\mathcal{G}_m\int\limits_{S\backslash S_m^*} (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\ \leq 3 c_3 2^d\Lambda_0(M^2+\sigma^2) \frac{\delta_m}{\tau_0}. \end{align*} Following the derivation in Chapter 5 of \citet{gyorfi2002nonparametric}, we have \begin{align*} &\mathbb{E}_{n}\left\{\mathcal{G}_m\int\limits_{S_m^*} (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\&\leq \mathcal{G}_m C_1^2 \sup\limits_{x\in S_m^*} \sup\limits_{x'\in S\cap S_{x,h}^{\widehat{D}_{\alpha,m}}} D_\alpha(x,x')^{2\beta}\\ &+\mathcal{G}_m\frac{M^2/e+2\sigma^2}{n} \mathcal{N}\left(S_m^*,\widehat{D}_{\alpha,m},\frac{h}{2}\right), \end{align*} where $S_{x,h}^{\widehat{D}_{\alpha,m}}=\{x': \widehat{D}_{\alpha,m}(x,x')\leq h\}$, and $\mathcal{N}$ denotes the covering number. Note that since $\widehat{\Gamma}(x,x')=\emptyset \Rightarrow \widehat{D}_{\alpha,m} = \infty$, we will always have $(x,x')\in\Psi$ if $x'\in S\cap S_{x,h}^{\widehat{D}_{\alpha,m}}$ (and, of course, the same applies when $x'\in S_m^*\cap S_{x,h/2}^{\widehat{D}_{\alpha,m}}$). So we can apply Proposition \ref{thm:dbdest2} to give \begin{align*} \mathcal{G}_m \sup\limits_{x\in S_m^*} \sup\limits_{x'\in S\cap S_{x,h}^{\widehat{D}_{\alpha,m}}} D_\alpha(x,x')^{2\beta} \leq \left[h \left(\frac{\lambda_0+\epsilon_m}{\lambda_0}\right)^\alpha \right]^{2\beta} \end{align*} and \begin{align*} \mathcal{G}_m \mathcal{N}\left(S_m^*,\widehat{D}_{\alpha,m},\frac{h}{2}\right) &\leq \mathcal{G}_m \mathcal{N}\left(S_m^*,d_{S_m^*},\frac{h(\lambda_0-\epsilon_m)^\alpha}{2}\right), \end{align*} where the $d_{S_m^*}$ distance is the length of the shortest path between two points restricted to $S_m^*$, as defined in the appendix. Clearly $S_m^*$ has condition number at least $\tau_0-3\delta_m>0$. If $S_m^*$ has exactly one connected component, then Proposition \ref{thm:geodesicdiam} combined with the assumption that $h > (2c_4/(\tau_0^{d-1}(\lambda_0-\epsilon_m)^\alpha))$ implies that any single point of $S_m^*$ forms an $h(\lambda_0-\epsilon_m)^\alpha/2$-covering, so $$ \mathcal{N}\left(S_m^*,d_{S_m^*},\frac{h(\lambda_0-\epsilon_m)^\alpha}{2}\right)=1. $$ Since $S_m^*$ can have at most $K$ connected components, we can repeat the same argument for each component and conclude that $$ \mathcal{N}\left(S_m^*,d_{S_m^*},\frac{h(\lambda_0-\epsilon_m)^\alpha}{2}\right)\leq K. $$ So, \begin{align*} &\mathbb{E}_{n,m}\left\{\int (\widehat{f}_{h,\alpha}(x)-f(x))^2 dP(x)\right\} \\&\leq (M^2+\sigma^2)\left(\frac{1}{m}+3 c_3 2^d\Lambda_0 \frac{\delta_m}{\tau_0}\right) \\ &+ \left[h \left(\frac{\lambda_0+\epsilon_m}{\lambda_0}\right)^\alpha \right]^{2\beta}\\ &+ \frac{K(M^2/e+2\sigma^2)}{n}. \end{align*} \end{proof} \subsubsection*{Acknowledgments} This research is supported in part by AFOSR under grants FA9550-10-1-0382 and FA95500910373 and NSF under grants IIS-1116458 and DMS-0806009. \section{SSL upper bound} To prove Theorem~\ref{thm:SSL_UB}, we also need the following two results.
\begin{prop}\label{thm:condnumarea} Let $\mathcal{X}$ be a compact subset of $\mathbb{R}^d$, and $T>0$. Then for any $\tau\in(0,T)$, for all sets $S\subseteq\mathcal{X}$ with condition number at least $\tau$, $\mathop{\mathrm{Vol}}(\partial S)\leq c_3/\tau$ for some $c_3$ independent of $\tau$, where $\mathop{\mathrm{Vol}}$ is the $d-1$-dimensional volume. \end{prop} \begin{proof} Let $\{z_i\}_{i=1}^N$ be a minimal Euclidean $\tau/2$-covering of $\partial S$, and $B_i=\{x:\|x-z_i\|_2\leq \tau/2\}$. Let $T_i$ be the tangent plane to $\partial S$ at $z_i$. Then using the argument made in the proof of Lemma 4 in \citet{genovese2010minimax}, \begin{align*} \mathop{\mathrm{Vol}}(B_i\cap\partial S)&\leq C_1 \mathop{\mathrm{Vol}}(B_i\cap T_i) \frac{1}{\sqrt{1-(\tau/2)^2/\tau^2}}\\ &\leq C_2 \tau^{d-1} \end{align*} for some constants $C_1$ and $C_2$ independent of $\tau$. Since $\mathcal{X}$ is compact, \begin{align*} \mathcal{N}(\partial S,\|\cdot\|_2,\tau/2)&\leq C \left(\frac{1}{\tau}\right)^d \end{align*} for some constant $C$ depending only on $\mathcal{X}$ and $T$, where $\mathcal{N}$ denotes the covering number (note that even though $\partial S$ is a $d-1$ dimensional set, we can't claim $\mathcal{N}(\partial S, \|\cdot\|_2, \tau)=O(\tau^{-(d-1)})$, since $\partial S$ can become space-filling as $\tau\rightarrow0$). So \begin{align*} \mathop{\mathrm{Vol}}(\partial S) &\leq \sum\limits_{i=1}^N \mathop{\mathrm{Vol}}(B_i\cap\partial S)\\ &\leq C_2 \tau^{d-1} \mathcal{N}(\partial S,\|\cdot\|_2,\tau/2)\\ &\leq C_2C \tau^{-1} \end{align*} and the result follows with $c_3=C_2C$. \end{proof} \begin{prop}\label{thm:geodesicdiam} Let $\mathcal{X}$ be a compact subset of $\mathbb{R}^d$, and $T>0$. Then for any $\tau\in(0,T)$, for all compact, connected sets $S\subseteq\mathcal{X}$ with condition number at least $\tau$, $\sup\limits_{u,v\in S} d_{S} (u,v)\leq c_4 \tau^{1-d}$ for some $c_4$ independent of $\tau$. \end{prop} \begin{proof} First consider the quantity \begin{align*} \sup\limits_{u,v\in\partial S} d_{S} (u,v). \end{align*} Since $\partial S\subseteq S$, clearly \begin{align*} \sup\limits_{u,v\in\partial S} d_{S} (u,v) \leq \sup\limits_{u,v\in\partial S} d_{\partial S} (u,v). \end{align*} Since $\partial S$ is closed, there must exist $u^*,v^*\in\partial S$ such that \begin{align*} \sup\limits_{u,v\in\partial S} d_{\partial S} (u,v) = d_{\partial S}(u^*,v^*). \end{align*} Let $\{z_i\}_{i=1}^{N}$ be a minimal $\tau$-covering of $\partial S$ in the $d_{\partial S}$ metric. Let $\{\widetilde{z}_i\}_{i=1}^{\widetilde{N}}\subseteq\{z_i\}_{i=1}^{N}$ such that $d_{\partial S}(u^*,\widetilde{z}_1)\leq\tau$, $d_{\partial S}(v^*,\widetilde{z}_{\widetilde{N}})\leq\tau$, and for any $1\leq i\leq\widetilde{N}-1$, $d_{\partial S}(\widetilde{z}_{i},\widetilde{z}_{i+1})\leq2\tau$. Then \begin{align*} d_{\partial S}(u^*,v^*) &\leq d_{\partial S}(u^*,\widetilde{z}_{1}) + d_{\partial S}(v^*,\widetilde{z}_{\widetilde{N}}) \\ &+ \sum\limits_{i=1}^{\widetilde{N}-1} d_{\partial S}(\widetilde{z}_{i},\widetilde{z}_{i+1})\\ &\leq 2\tau\widetilde{N}. \end{align*} So, \begin{align*} d_{\partial S}(u^*,v^*) &\leq 2 \tau \mathcal{N}(\partial S, d_{\partial S}, \tau). \end{align*} By Proposition 6.3 in \citet{niyogi2006finding} (or see Lemma 3 in \citet{genovese2010minimax}), if $x,y\in\partial S$ such that $\|x-y\|_2=a\leq\tau/2$, then $d_{\partial S}(x,y)\leq\tau-\tau\sqrt{1-(2a)/\tau}$. In particular, if $\|x-y\|_2\leq\tau/2$, then $d_{\partial S}(x,y)\leq\tau$. 
So any Euclidean $\tau/2$-covering of $\partial S$ is also a $\tau$-covering in the $d_{\partial S}$ metric. Then we have \begin{align*} \sup\limits_{u,v\in\partial S} d_{S} (u,v) &\leq d_{\partial S}(u^*,v^*) \\ &\leq 2 \tau \mathcal{N}(\partial S, d_{\partial S}, \tau)\\ &\leq 2 \tau \mathcal{N}(\partial S, \|\cdot\|_2, \tau/2)\\ &\leq C\tau \left(\frac{1}{\tau}\right)^d\\ &= C \tau^{1-d} \end{align*} for some constant $C$ depending only on $\mathcal{X}$ and $T$ (note that, as in the proof of Proposition \ref{thm:condnumarea}, even though $\partial S$ is a $(d-1)$-dimensional set, we can't claim $\mathcal{N}(\partial S, \|\cdot\|_2, \tau)=O(\tau^{-(d-1)})$, since $\partial S$ can become space-filling as $\tau\rightarrow0$). Now let $u^\dag,v^\dag\in S$ be such that \begin{align*} \sup\limits_{u,v\in S} d_S (u,v)=d_S(u^\dag,v^\dag), \end{align*} which must exist since $S$ is compact. Let $u^\ddag,v^\ddag\in\partial S$ be the (not necessarily unique) projections of $u^\dag$ and $v^\dag$ onto $\partial S$. Clearly the line segment connecting $u^\dag$ and $u^\ddag$ is fully contained in $S$, and the same applies to $v^\dag$ and $v^\ddag$. So \begin{align*} d_S(u^\dag,v^\dag) &\leq d_S(u^\dag,u^\ddag) + d_S(u^\ddag,v^\ddag) + d_S(v^\ddag,v^\dag)\\ &\leq \|u^\dag-u^\ddag\|_2 + \|v^\dag-v^\ddag\|_2 + d_{\partial S}(u^*,v^*)\\ &\leq 2\mathop{\mathrm{diam}}(\mathcal{X}) + C \tau^{1-d}, \end{align*} and setting $c_4=2T^{d-1}\mathop{\mathrm{diam}}(\mathcal{X})+C$, the result follows. \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} If a system has a first-order phase transition, at the transition temperature it can exist in a mixed state, where two different bulk phases are separated by an interface. The free energy carried by the interface per unit area is the interface tension $\sigma$. Because at the transition temperature the free energy densities of the bulk phases are equal, the free energy of the mixed state is higher than that of either of the pure phases by an amount $F_s = \sigma A$, where $A$ is the area of the interface. In numerical simulations using the canonical ensemble at the transition temperature, the configurations containing the mixed phase are suppressed by the Boltzmann factor $e^{-F_s/T}$. When the volume of the system is increased, the suppression of the mixed state increases exponentially with the area of the interface -- hence also the time it takes for the system to tunnel from one pure phase to another increases exponentially. Recently, Berg and Neuhaus \cite{Berg91} introduced a powerful new method, the multicanonical algorithm, which avoids the exponential slowing down by artificially enhancing the probability of the configurations with an interface. The tunnelling time increases only polynomially with the linear size $L$. In this method the individual spin updates generally depend on the total energy of the system in a non-linear fashion. This effectively prevents the use of vector or parallel coding (except if one runs many lattices in parallel) and of cluster update algorithms. In the microcanonical demon Monte Carlo approach, developed a decade ago by M.~Creutz \cite{Creutz83}, one uses additional variables, demons, to transfer energy around the system. The total energy of the system plus the demons is absolutely conserved. However, by periodically updating the demon energies according to the Boltzmann distribution the method reduces to the canonical procedure. In this work I present an algorithm which combines these two methods: first, the system is updated {\em microcanonically} with a set of demons, and second, the demons are refreshed with a {\em multicanonical} heat bath. The demons act as a buffer, isolating the actual system from the multicanonical heat bath, thus enabling one to choose an optimal microcanonical update step for a particular problem. The update can be a highly vectorizable and parallelizable local update or a cluster update, or some combination of these. As an example, I apply the cluster version of the algorithm to the 2-dimensional 7-state Potts model. The Potts models have become standard tools for high-precision Monte Carlo studies of first-order phase transitions. The 2-dimensional $q$-state (2d$q$s) Potts model \cite{Potts62,Wu82} is defined by the partition function \begin{equation} Z(\beta) = \sum_{\{s\}} \exp[-\beta E(s)] \end{equation} \begin{equation} E(s) = \sum_{(i,j)} (1-\delta(s_i,s_j)), {\hspace{0.5 cm}}\h s_i = 1,\ldots,q \, . \end{equation} When $q>4$, the transition is of first order. The infinite volume transition temperature is $\beta_c = 1/T_c =\log(1+\sqrt{q})$. Besides the fact that many infinite volume quantities are exactly known, rigorous finite-size scaling (FSS) predictions by C.~Borgs {et al.\ } \cite{Borgs90} offer a quantitative method for studying the approach to the asymptotic regime. I chose the 2d7s Potts model for comparison with the recent standard multicanonical calculation by W.~Janke {et al.\ } \cite{Janke92}, and the canonical one by A.~Billoire {et al.\ } \cite{Billoire92}.
The lattice sizes in this work were $V = L^2 = 20^2$, $32^2$, $64^2$ and $128^2$; for the largest volume, two separate simulations were performed. This article is divided into three parts: in the first section I discuss how the standard multicanonical approach is generalized to the two-step demon algorithm, and how it can be used to obtain an estimate of the density of states and, through this, of all thermodynamical quantities. The next section describes the actual update algorithms: the microcanonical cluster update and the multicanonical demon refresh. I present a method which enables the `slow' part of the multicanonical update to be performed in $\propto\sqrt{V}$ steps. The results of the 2d7s Potts model simulations are reported in the third section. The tunnelling time was found to increase like $L^{1.82(3)}$, which is better than the standard multicanonical method ($L^{2.65}$). Where appropriate, the thermodynamical measurements of the $128^2$ lattices fully agree with the rigorous FSS predictions of ref.~\cite{Borgs90}, and all the measurements are consistent with the multicanonical MC data of ref.~\cite{Janke92}. \section{The Multicanonical Demon Algorithm} Close to the transition temperature the canonical probability distribution $p_\beta (E)$ develops a double-peak structure, and one definition of the transition temperature $T^L_c=1/\beta^L_c$ itself is the temperature at which the two peaks have equal height (fig.~{1}). Following refs.~\cite{Berg91,Janke92}, the peak locations are denoted by $E^L_1$ and $E^L_2$, and the probability density is normalized to $p^L_\beta(E^L_1) = p^L_\beta(E^L_2) = 1$. Denoting the minimum of $p^L_\beta$ between the peaks by $p^L_{\min}$, the interface tension is \cite{Binder82}: \begin{equation} \sigma = - \lim_{L\rightarrow\infty} \frac{\log p^L_{\min}}{2L}\,. \la{tension} \end{equation} Because of the periodic boundary conditions, the configurations corresponding to the minimum of the probability distribution have two interfaces separated by $\sim L/2$; hence the factor 2 in eq.~\nr{tension}. In the following the $L$-dependence of the above quantities is mostly suppressed from the notation. In the standard `direct' multicanonical method, one usually aims at a roughly constant probability density in the domain between the peaks: $p_W(E) = 1,\, E_1 \le E \le E_2$. This can be achieved by substituting the Boltzmann weight with a weight function $W(E)$: \begin{equation} p_W(E) \propto n_S(E)\, e^{-W(E)} \, , \la{mcprob} \end{equation} where $n_S(E)$ is the number of states at energy $E$. The requirement that the probability density be constant implies that $W(E) \propto \log n_S(E) = S(E)$ when $ E_1 \le E \le E_2 $; and $W(E) = \beta_c E + \mbox{\it const.}$ otherwise. Because $n_S(E)$ is what we are trying to compute in the first place, we have to use an approximate $W(E)$ instead -- obtained, for example, with finite-size scaling, canonical simulations, or previous multicanonical simulations. The measured $p_W$ is then reweighted with $e^{W(E)}$ to produce $n_S(E)$, from which all quantities of interest can be calculated -- also the improved $W(E)$. Now we want to apply multicanonical ideas to a system consisting of the original spin system and a system of demons. Heuristically, it is clear that the spin system in eq.~\nr{mcprob} can be substituted with any other system, in particular with this composite system.
Denoting the weight function by $G$, the probability density can be written as a joint distribution in the spin system energy $E_S$ and the demon energy $E_D$: \begin{equation} p_G(E_S,E_D) \propto n_S(E_S)n_D(E_D)\,e^{-G(E_T)} \, , \la{mdprob} \end{equation} where $n_S(E_S)$ and $n_D(E_D)$ are the spin system and the demon density of states, respectively, and $E_T=E_S+E_D$ is the total energy. In the most general case the weight $G$ is a function of both $E_S$ and $E_T$, but when $E_T$ is fixed, we want $p_G$ to reduce to the microcanonical distribution $n_S(E_S)n_D(E_T-E_S)$, implying that $G$ can only be a function of $E_T$. When $E_S$ is fixed, $p_G$ reduces to the multicanonical distribution for the demons: $n_D(E_D)\,e^{-G(E_S+E_D)}$. Thus the probability distribution \nr{mdprob} is preserved in a generic two-step process of a microcanonical spin update and a multicanonical demon update, provided that both of the update steps separately satisfy detailed balance. Note that if $G(E)=\beta E$, both the demon and the spin system will have canonical distributions. After the Monte Carlo simulation has produced a sample of the distribution \nr{mdprob}, $n_S$ can be solved from it. Although not strictly necessary, it is advantageous to use the known demon density of states: with $N_D$ demons with discrete energy states $0,1,\ldots$, \begin{equation} n_D(E_D) = \frac{(N_D - 1 + E_D)!}{(N_D-1)!\,E_D!} \, . \la{demden} \end{equation} Because eq.~\nr{mdprob} is valid separately for each $E_D$, $n_S$ can be expressed as a linear combination \begin{equation} n_S(E_S) \propto \sum_{E_D} A_{E_D;E_S}\, \frac{p_G(E_S,E_D)}{n_D(E_D)\,e^{-G(E_T)}}\,\, ,{\hspace{0.5 cm}} \sum_{E_D} A_{E_D;E_S} = 1 \, . \la{lincomb} \end{equation} The multipliers $A_{E_D;E_S}$ should be chosen to minimize the error in $n_S$. For each $E_S$ separately, the uncertainty in the (measured) $p_G$ is given by $\delta p_G = \sqrt{\bar p_G/N_{E_S}}$, where $\bar p_G$ is the probability distribution in the limit of infinite statistics and $N_{E_S}$ is the number of measurements with this $E_S$. (This is not true for the full distribution $p_G(E_S,E_D)$ because of the correlations between successive measurements; however, for fixed $E_S$, each measurement of $E_D$ is completely independent, as explained below). Minimizing the resulting error in eq.~\nr{lincomb}, we finally obtain \begin{equation} n_S(E_S) \propto \frac{p_S(E_S)} {\sum_{E_D} n_D(E_D)\,e^{-G(E_T)}}\, , {\hspace{0.5 cm}} p_{S}(E_S) \equiv \sum_{E_D} p_G(E_S,E_D) \, . \la{denstate} \end{equation} Note that the final result depends only on the spin system distribution $p_S(E_S)$, not on the demon energy distribution. The distribution $p_S$ corresponds to the standard multicanonical distribution eq.~\nr{mcprob} with the weight \begin{equation} e^{-W(E_S)} = \sum_{E_D}n_D(E_D)\,e^{-G(E_S+E_D)} \, . \la{weights} \end{equation} Quite generally, if we want to simulate a system with a non-canonical weight function $W(E)$, then by inverting eq.~\nr{weights} we obtain the corresponding $G(E)$. With the two-step update, the function $G$ will produce exactly the same distribution as $W$ with a direct update. Instead of aiming at a flat distribution of $E_S$, it is now more natural to try to `flatten' the $E_T$ distribution. Then the optimal $G(E_T)$ equals $S_T(E_T)$, the entropy of the total system.
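
For a given $G$, the weight $W$ of eq.~\nr{weights} is easily evaluated numerically. The following Python sketch (an illustration only, with $G$ assumed to be supplied as a vectorized function and the demon energy sum truncated at some maximal energy) uses the exact demon density of states \nr{demden} in log space to avoid overflow:

\begin{verbatim}
# Effective multicanonical weight W(E_S) induced by a demon weight
# G(E_T), eq. (weights), with the exact demon density of states,
# eq. (demden); log-space summation for numerical stability. Sketch.
import numpy as np
from scipy.special import gammaln

def W(E_S, G, N_D, E_D_max):
    E_D = np.arange(E_D_max + 1)
    log_nD = gammaln(N_D + E_D) - gammaln(N_D) - gammaln(E_D + 1.0)
    x = log_nD - G(E_S + E_D)        # G assumed vectorized over E_T
    m = x.max()
    return -(m + np.log(np.exp(x - m).sum()))
\end{verbatim}
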
In the actual runs described here the initial estimate of $G(E_T)$ was obtained (for the lattices $\le 64^2$) from short runs with $G(E_T) = \beta E_T$; for the $128^2$ lattice, finite-size scaling was used to scale up the function used in the $64^2$ simulation. The $64^2$ and $128^2$ lattices required one further refinement run to obtain the final weight function; in the end two different weights were used for the $128^2$ lattice. To simplify the calculation, $E_T$ was restricted to the range $E_T^{\min}\le E_T\le E_T^{\max}$, where the limits were chosen such that the expectation values of $E_S$, as a function of $E_T$, bracket the peak locations $E_1$ and $E_2$ (fig.~{1}): \begin{equation} \langle E_S\rangle(E_T^{\min}) < E_1 < E_2 < \langle E_S\rangle(E_T^{\max}). \la{probrange} \end{equation} The functional form of $G(E_T)$ is not crucial, as long as it is accurate enough; here a continuous piecewise linear form was used. The required accuracy increases with the volume of the system -- the weight $G(E_T)$ is an extensive quantity (or rather $G(E_T)-\beta_c E_T \propto L$), but if $G(E_T)$ is `wrong' by an amount of, say, $\log 2$, the probability $p_G(E_T)$ will be changed by a factor of 2. The left part of fig.~{2} shows the joint distribution from the $64^2$ lattice, using $E_S$ and $E_T$ as independent variables. As can be seen, $E_S$ and $E_T$ follow each other very closely -- the length and the width of the ridge behave like $ L^2$ and $L$, respectively. The demon energy varies very little, meaning that the microcanonical temperature $\partial S(E)/\partial E$ is almost constant, as it should be in the phase coexistence region. The right part is the same distribution, `canonized' by reweighting it with $e^{G(E_T) - \beta_c E_T}$, where $\beta_c$ is the transition temperature for this lattice (table~\ref{table2}). An interesting variation of the algorithm can be obtained by restricting $E_T$ to a discrete set of values $E_0,\,\ldots,\,E_N$, sufficiently dense so that the neighboring $E_S$ distributions have large enough overlaps. This is a microcanonical version of the `simulated tempering' method, presented by Marinari and Parisi~\cite{Marinari92}. \section{The Update Algorithms} \subsection{The Microcanonical Cluster Update} In the simulations reported here one update cycle consists of one microcanonical spin + demon update sweep followed by an energy measurement and a multicanonical demon update. As the microcanonical step I used the Swendsen-Wang variation of the microcanonical cluster algorithm presented recently by M.~Creutz \cite{Creutz92}. As opposed to the standard procedure, the demons are located on the {\em links} of the lattice, instead of being connected to the spins. A link is activated only if the spins have the same value at each end of it {\em and} the demon does not carry enough energy to frustrate the link. Clusters are grown over the activated links and each cluster is flipped to a random spin value. Finally, the demon energy is increased or decreased by the amount of the corresponding link energy decrease or increase. On a $d$-dimensional lattice, this method requires $d\times V$ demons. After each update cycle the demon locations are shuffled. If the shuffling were not performed, one would construct exactly the same clusters during the next update cycle -- assuming that the demon refresh, described in the next subsection, is also skipped.
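
For concreteness, one full microcanonical cluster sweep of this kind can be summarized in a short sketch. The Python fragment below (a plain union-find illustration with one demon per link, not the vectorized implementation used in the actual simulations) conserves $E_S+E_D$ exactly:

\begin{verbatim}
# One microcanonical Swendsen-Wang sweep with link demons: a link is
# activated iff its end spins agree AND its demon has energy 0 (it
# cannot pay the unit cost of frustrating the link); clusters over
# activated links get random new spin values and each link demon
# absorbs the energy change of its link. Illustrative sketch only.
import numpy as np

def find(par, x):
    while par[x] != x:
        par[x] = par[par[x]]          # path halving
        x = par[x]
    return x

def sweep(spin, demon, q, rng):
    L = spin.shape[0]
    links = [((i, j), ((i + 1) % L, j)) for i in range(L) for j in range(L)] \
          + [((i, j), (i, (j + 1) % L)) for i in range(L) for j in range(L)]
    rng.shuffle(demon)                # reassign demons to new links
    par = {s: s for s in np.ndindex(L, L)}
    for d, (x, y) in enumerate(links):
        if spin[x] == spin[y] and demon[d] == 0:
            par[find(par, x)] = find(par, y)      # activated link
    old = spin.copy()
    newval = {}
    for s in np.ndindex(L, L):
        r = find(par, s)
        if r not in newval:
            newval[r] = rng.integers(q)           # random cluster spin
        spin[s] = newval[r]
    for d, (x, y) in enumerate(links):
        # demon pays for newly frustrated links, absorbs satisfied ones
        demon[d] += int(old[x] != old[y]) - int(spin[x] != spin[y])
    return spin, demon
\end{verbatim}

Here \texttt{spin} is an $L\times L$ integer array, \texttt{demon} an integer array of length $2L^2$, and \texttt{rng} a random number generator; a call such as \texttt{sweep(spin, demon, 7, np.random.default\_rng())} performs one microcanonical cycle (without the multicanonical refresh).
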
The shuffling does not need to be perfect: here it was done with random offsets and step lengths when picking the demons from the demon array. The actual cluster search was performed with the Hoshen-Kopelman cluster-finding algorithm \cite{Hoshen76}. Note that it is also possible to perform local updates with the same demons; the demons can be left on the links or moved to the spins -- in the latter case only half (or $1/d$) of the demons are used during one sweep. The probability distribution \nr{mdprob} is unaffected, as long as the total number of demons remains the same. \begin{table} \center \begin{tabular}{lrrl} \cen{$L$} & iterations &\cen{$\tau_L$}& \cen{$\sigma$}\\ \hline 20 & 2 500 000 & 320(5) & 0.0189(3) \\ 32 & 5 000 000 & 821(15) & 0.0169(2) \\ 64 & 6 000 000 & 2700(81) & 0.0147(4) \\ $128_a$ & 9 000 000 & 10720(520) & 0.01302(17) \\ $128_b$ & 6 000 000 & 10520(620) & 0.01306(21) \\ \hline \end{tabular} \caption[1]{The tunnelling time and the interface tension from the 2d7s Potts model simulations.\la{table1}} \end{table} \subsection{The Multicanonical Demon Update} The multicanonical demon refresh is greatly facilitated by the fact that each demon is an independent degree of freedom and the demon density of states is known. The most straightforward way to perform the demon update is to touch each demon with the multicanonical heat bath: the demon $i$ is assigned a new value with the weight $\exp [-G(E_T-E_D^{i,{\rm old}}+E^{i,{\rm new}}_D)]$. For a continuous demon energy this would be the best method, even though it is a non-vectorizable process with $\sim N_D$ steps. However, since the demon energy is now discrete, there exists a method which enables a major part of the demon update to be performed in $\sim\sqrt{N_D}$ (non-vectorizable) steps. First, a new total demon energy $E^{\rm new}_D$ is calculated with a {\em global} heat-bath update: let $x$ be a random number drawn uniformly between 0 and 1; the new demon energy is then the smallest $E_D^{\rm new}$ satisfying \begin{equation} x \le \frac{\sum_{E'=0}^{E_D^{\rm new}} n_D(E')e^{-G(E'+E_S)}} {\sum_{E''=0}^{\infty} n_D(E'')e^{-G(E''+E_S)}} \, , \la{heatbath} \end{equation} where $G(E_T) = \infty$, when $E_T < E_T^{\min}$ or $E_T > E_T^{\max}$. This guarantees that the demon energy at fixed $E_S$ is free from autocorrelations, justifying the use of the multinomial distribution prior to eq.~\nr{denstate}. A new demon state with energy $E_D^{\rm new}$ can then be constructed from the old one by adding or subtracting energy from randomly selected demons. However, care has to be taken to ensure proper counting of states: energy is added or subtracted unit by unit, and the demon to be changed is chosen according to the respective probabilities \begin{equation} p^i_+ \equiv p^i_{E_D \rightarrow E_D+1} = \frac{E_D^i + 1}{N_D+E_D} \, , {\hspace{0.5 cm}}\h p^i_- \equiv p^i_{E_D \rightarrow E_D-1} = \frac{E_D^i}{E_D} \, . \la{demonadd} \end{equation} To prove that eq.~\nr{demonadd} is correct, it is sufficient to show that starting from the state $E_D = 0$, $p_+$ produces with equal probability all the states with the same energy.
Let us construct a specific demon state $\omega$ with an energy $E_\omega$: by repeatedly applying $p_+$, the probability of a particular sequence of additions $\{i\}$ leading to this state becomes \begin{eqnarray} p_{\{i\}} = \prod_{\{i\}} p^i_+ & = & \frac{(N_D-1)!}{(N_D-1+E_\omega)!} \,(1!)^{N_1}\,(2!)^{N_2}\,(3!)^{N_3} \ldots \,\, , \\ E_\omega & = & N_1 + 2N_2 + 3N_3 + \ldots \,\, , \la{sequence} \end{eqnarray} where $N_e$ is the number of demons with energy $e$. Because the numbers $N_e$ are characteristic of the state $\omega$ and not of the sequence $\{i\}$, all the sequences leading to this state have the same probability. Then the probability $p_\omega$ of producing the state $\omega$ is obtained by multiplying $p_{\{i\}}$ with the number of sequences $N^\omega_{\rm seq.}$ (= number of permutations) leading to this state, \begin{equation} N^\omega_{\rm seq.} = \frac{E_\omega!} {(1!)^{N_1}\,(2!)^{N_2}\,(3!)^{N_3}\ldots}\, , \end{equation} giving $p_\omega = 1/n_D(E_\omega)$ [eq.~\nr{demden}]. This is just what we want -- all the states with the same energy are equally probable, and the sum of the probabilities is 1. It is obvious that $p_-$ is the probability of the inverse process. Because the old demon state can be understood as an intermediate step in the energy addition/subtraction sequence, we can start building the new state directly from it. Remember that during one update cycle only additions or only subtractions are performed. Because the average energy change in one cycle is $\sim\sqrt{V}$, only that many demons need to be updated; the demon shuffling after each cluster update takes care of proper mixing. By careful use of auxiliary arrays, the sums in eq.~\nr{heatbath} can also be performed in $\sim\sqrt{V}$ steps, but this was not fully implemented in the simulations. An important technical question is how to implement the demon selection according to the probabilities of eq.~\nr{demonadd}. Two simple methods were tested: first, one can employ an appropriate accept/reject step at a random demon location -- this is a true $\propto\sqrt{N_D}$ method, but the poor acceptance rate (8\% for the 2d7s Potts model) makes this rather slow in practice. In the second method one constructs a pointer array $a$, which has an entry for the location of each {\em unit} of the demon energy. The array $a$ has $E_D$ elements, of which $E_D^i$ are pointing to the demon $i$. When $E_D \rightarrow E_D+1$, the demon is chosen as follows: one generates a random integer $i$ in the range $[1, N_D+E_D]$; if $i\le N_D$, the demon $i$ is selected, and if $i > N_D$, one chooses the demon $a_i$. By construction, this gives just the correct probability $p^i_+$. The demon address is then added to the array $a$, and $E_D$ is increased. The process is repeated until $E_D^{\rm new}$ is reached. The subtraction of energy is performed analogously: now one selects a random pointer $a_i$, decreases the energy of the demon $a_i$, and sets $a_i = 0$. The bottleneck in this method is the initial generation of the pointer array, which requires $\sim{N_D}$ steps; however, it is a vectorizable process, and since $E_D$ has to be measured anyway, it can be performed with little overhead. In the tests the pointer array method was found to be 5--8 times faster than the accept/reject procedure, so it was chosen for the actual simulations.
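The following Python sketch illustrates the pointer-array bookkeeping (again an illustrative reimplementation, not the code used in the runs). One detail differs from the description above: instead of setting a used pointer $a_i = 0$, the sketch deletes it by swapping it with the last entry, which changes the bookkeeping but not the selection probabilities of eq.~\nr{demonadd}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def add_energy(demons, target_ED):
    # raise the total demon energy to target_ED one unit at a time,
    # giving the unit to demon i with probability (E_D^i + 1)/(N_D + E_D)
    ND = len(demons)
    pointers = list(np.repeat(np.arange(ND), demons))  # one entry per unit
    ED = len(pointers)
    while ED < target_ED:
        j = rng.integers(ND + ED)          # uniform over N_D + E_D slots
        i = j if j < ND else pointers[j - ND]
        demons[i] += 1
        pointers.append(i)
        ED += 1

def subtract_energy(demons, target_ED):
    # remove uniformly chosen energy units, i.e. demon i loses a unit
    # with probability E_D^i / E_D
    pointers = list(np.repeat(np.arange(len(demons)), demons))
    ED = len(pointers)
    while ED > target_ED:
        k = rng.integers(ED)               # pick a random energy unit
        demons[pointers[k]] -= 1
        pointers[k] = pointers[-1]         # swap-delete instead of a_i = 0
        pointers.pop()
        ED -= 1

demons = np.zeros(1000, dtype=np.int64)    # assumed: N_D = 1000 demons
add_energy(demons, 250)                    # e.g. towards a heat-bath E_D^new
subtract_energy(demons, 180)
\end{verbatim}
Drawing the random slot $j$ uniformly from the $N_D+E_D$ possibilities and treating $j \le N_D$ as `demon $j$ directly' reproduces exactly $p^i_+ = (E_D^i+1)/(N_D+E_D)$, since demon $i$ owns one direct slot and $E_D^i$ pointer slots.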
On the $128^2$ lattice, the whole measurement and demon update cycle used less than $6\%$ of the CPU time, the rest being taken up by the cluster operations, which contain the only non-vectorizable $\propto V$ loop. Using a Cray X-MP, one full cycle took $\sim 3.9\,\mu$s per spin. \section{Results} The performance of the algorithm is best measured by the tunnelling time, which is defined as in ref.~\cite{Janke92}: four times the tunnelling time, $4\tau_L$, is the average number of updates needed for the system to get from $E^L_1$ to $E^L_2$ and back. This definition gives values comparable to the standard autocorrelation time. The times are listed in table~\ref{table1}, and shown in fig.~{3}. The fit to the three largest lattices gives $\tau_L = 1.49(17)\times L^{1.82(3)}$. This scales better than the standard multicanonical method \cite{Janke92}, which has $\tau_L = 0.082(17)\times L^{2.65(5)}$. In fact, this is even better than the {\em optimal} scaling given by the random-walk picture: at first-order transitions, the energy gap behaves as $V$, but the width of the system energy distribution with fixed total energy increases only like $\sqrt{V}$. Assuming an ideal update algorithm, this is also the average change in the system energy during one sweep. The system has to random-walk across the gap in order to tunnel from one phase to another, and this takes $(\mbox{gap/step})^2 \sim V = L^2$ sweeps. The discrepancy is largely due to the shift of $E^L_1/V$ and $E^L_2/V$ (see figs.~{1} and {8}), which have a finite-size dependence $\sim 1/L$. If we ignore these and calculate $\tau_L$ from tunnellings between the infinite-volume energy density values $e_1 = 0.4453\ldots$, $e_2 = 0.7986\ldots$, the tunnelling times of the smaller lattices become shorter and we obtain a scaling law $\tau_L = 0.66(7)\times L^{1.97(3)}$, which is compatible with the random-walk limit. However, I chose to present the former result in order to enable the comparison with the previous calculation. The quality of the scaling fit is modest ($\chi^2 = 6.2$ for 2 d.o.f.), and since the `physical' correlation length is of order $\sim 30$ \cite{Billoire92}, it is quite plausible that the true scaling law will be different. The scaling functions are plotted in fig.~{3}. In order to compare with the canonical update algorithm, I utilized the results of the 2d7s Potts model simulations by A.~Billoire et al.~\cite{Billoire92}. Their work has high-statistics data from 5 different volumes between $16^2$ and $64^2$. I made a finite-size fit to the autocorrelation times with the heuristic function $\tau_L = a L^\alpha\,e^{2\sigma L}$, where the interface tension $\sigma$ had the fixed value $0.01174$ (see the next paragraph). The parameter values given by the fit are $a=1.01(15)$ and $\alpha=2.31(4)$, with $\chi^2=4.2$ for 3 d.o.f. (Without the exponential factor, the best power-law fit gives $\chi^2=56$ for 3 d.o.f.) The function is plotted in fig.~{3}. The simulations in ref.~\cite{Billoire92} were performed with the one-hit Metropolis algorithm, which has a notoriously long autocorrelation time; this was balanced by a fast multispin coding. With a heat-bath update the autocorrelation time could be $\sim$ 5--8 times shorter, bringing the low end of the line close to the level of the multicanonical lines. On the $128^2$ lattice the multicanonical cluster method is $\sim$ 3 times faster than the standard multicanonical, which again is $\sim$ 5--40 times faster than the canonical method.
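For illustration, scaling fits of this kind can be reproduced with off-the-shelf least squares; the sketch below fits the power law to the tunnelling times of table~\ref{table1} in log space. The averaging of the two $128^2$ runs and the simplified error treatment are my own assumptions, so the output will only roughly match the quoted $\tau_L = 1.49(17)\times L^{1.82(3)}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# tunnelling times of the three largest lattices from table 1
# (the two 128^2 runs averaged for simplicity)
Ls   = np.array([32.0, 64.0, 128.0])
tau  = np.array([821.0, 2700.0, 10620.0])
dtau = np.array([15.0, 81.0, 400.0])

# power-law ansatz tau_L = a * L^alpha, fitted in log space;
# relative errors on tau become absolute errors on log(tau)
def model(logL, loga, alpha):
    return loga + alpha * logL

(loga, alpha), cov = curve_fit(model, np.log(Ls), np.log(tau),
                               sigma=dtau / tau, absolute_sigma=True)
print("a = %.2f, alpha = %.2f +- %.2f"
      % (np.exp(loga), alpha, np.sqrt(cov[1, 1])))

# the canonical data of Billoire et al. would instead be fitted with
# tau_L = a * L^alpha * exp(2 * sigma * L) at fixed sigma = 0.01174
\end{verbatim}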
The interface tension was measured with the Binder method \cite{Binder82}, eq.~\nr{tension}. The minima and maxima of the canonical probability distributions were found by fitting a parabola close to the extrema; the results depend only very weakly on the fitting range, provided the range is large enough. All the error analysis was done by jackknifing the data to 50 sets. The measured values of $\sigma$ are shown in fig.~{4}, together with the measurements of ref.~\cite{Janke92}. Using a common FSS fit of the form \begin{equation} -\frac{1}{2L}\log p^L_{\min} = \sigma + \frac{c}{L} \la{sigmafit} \end{equation} to the three largest volumes, we obtain the result $\sigma = 0.01174(19)$ with $c=0.169(11)$. The result agrees well with ref.~\cite{Janke92} ($\sigma = 0.0121(5)$), but is seven standard deviations away from the exact infinite-volume value $\sigma = 0.010396\ldots$, recently calculated by Borgs and Janke \cite{Borgs92b}. The cited errors are only statistical, and the large difference between the values is obviously caused by the violations of the FSS formula \nr{sigmafit}. This is supported by the absence of a `flat bottom' around $p^L_{\min}$; this flat part corresponds to the variations of the distance between the two interfaces, and its absence implies that the interaction between the two interfaces is still non-negligible. The value of $\sigma$ is at least a factor of 6 smaller than the results of refs.~\cite{Potvin89,Kajantie89,Rummukainen91}, obtained with an unrelated method. In these calculations the interface was stabilized by using different temperatures on the different sides of the lattice, thus forcing one side into the disordered phase and the other into the ordered one. The interface tension was measured as a function of the temperature difference, and the final answer was obtained by extrapolating the results to the transition temperature. This method has also been applied to $N_\tau = 2$ pure gauge QCD \cite{Kajantie90,Huang90}, and the results agree with the values obtained with the Binder method \cite{Janke92,Grossman92}. The apparent failure of this method in the case of the 2d7s Potts model is probably due to the overly strong pinning effect caused by the temperature difference. In 2d, the amplitude of the interface fluctuations is large, $\sim\sqrt{L/\sigma T}$. These fluctuations are strongly suppressed by the temperature difference, which should then be very small; however, in that case the two-phase configuration would be lost, unless extremely large volumes are used. \begin{table} \center \begin{tabular}{lllll} \cen{$L$} & \cen{$\beta$(equal height)}&\cen{$\beta(C_{\max})$}& \cen{$\beta(B_{\min})$}&\cen{$\beta$(equal weight)}\\ \hline 20 &1.28474(13) &1.28443(12)&1.28444(13) &1.2939(3) \\ 32 &1.28976(7) &1.28953(7) &1.28776(7) &1.29379(7) \\ 64 &1.29241(4) &1.29235(3) &1.29194(3) &1.29360(4) \\ $128_a$ &1.293251(15)&1.293234(15)&1.293140(15)&1.293567(16)\\ $128_b$ &1.293242(19)&1.293227(19)&1.293133(19)&1.293560(19)\\ \hline \end{tabular} \caption[1]{Measured pseudotransition temperatures.\la{table2}} \end{table} Finally, let us compare the transition temperature measurements to the exact finite-size expansions \cite{Borgs90,Lee91}.
Common definitions for the pseudotransition temperature are the locations of the maximum of the heat capacity $C = \beta^2/V\, (\langle E^2\rangle -\langle E\rangle^2)$ and the minimum of the Binder parameter $V_L =\frac{1}{3}(1-\langle E'^4\rangle /\langle E'^2\rangle^2)$, where $E' = E-2V$ in order to comply with the definition of energy used in \cite{Borgs90,Janke92}. The differences between the known infinite-volume transition temperature and the measured values are plotted in fig.~{5}; also shown are the exact lowest order FSS corrections. On the $128^2$ lattices the FSS correction is within the error bars of the measured values, whereas the $64^2$ lattice is still off by $\sim 2\sigma$. Figure {6} shows the behaviour of $V_L$ as a function of $\beta$. Still another way to find the transition temperature is the `equal weight' method \cite{Borgs92}, where $\beta_w^L$ is defined as the temperature at which the relative probabilistic weights of the ordered and disordered states are $q=7$ and 1, respectively: \begin{equation} q = W_O/W_D = \sum_{E<E'} p_{\beta_w^L}(E)/\sum_{E\ge E'} p_{\beta_w^L}(E), \la{eqweight} \end{equation} where $E'$ is the energy at the minimum of $p_{\beta}$ at the temperature at which the two peaks have equal height. The measurements of $\beta_c$ are shown in fig.~{7}. The FSS corrections in this case are only exponential. An FSS ansatz of the form $\beta_w^L = \beta_w + a\,e^{-bL}$ was fitted to the data, with the results $a=0.0012(12)$, $b=0.05(3)$, and $\beta_w = 1.293562(14)$. The fit gives exactly the correct $\beta_c$; however, $\beta_w$ is almost completely determined by the $128^2$ data, and the effect of the values of $a$ and $b$ on the value of $\beta_w$ is negligible. The various transition temperature measurements are listed in table~\ref{table2}. The locations of the maxima of the canonical probability distribution $p_\beta$ are shown in fig.~{8}. The FSS fits to the three largest volumes give the infinite-volume results $e_{1,\infty} = 0.4421(24)$ and $e_{2,\infty} = 0.7981(17)$, which agree fairly well with the exact values $0.44539\ldots$ and $0.79867\ldots$. The latent heat is the difference of these two values: $\Delta e = 0.3559(29)$ (exact $0.35327\ldots$). Even though the difference between the heat capacity of the disordered phase ($C_{\rm D}$) and of the ordered phase ($C_{\rm O}$) is exactly known, the actual pure phase values are not. An estimate can be obtained by employing the FSS relation of refs.~\cite{Lee91,Janke92}: $C_{\rm O} = C_{L,\max} - V (\Delta s/2)^2 + 0.0038\ldots + {\cal O}(V^{-1})$, where $\Delta s = \beta_c \Delta E/V$ is the entropy difference between the two phases. The result is shown in fig.~{9}; the FSS fit yields an estimate of $C_{\rm O} = 44.4 \pm 2.2$. Because of the subtraction of two terms of order $V$, the errors grow rapidly when the volume is increased. Again, this result is consistent with the value $47.5 \pm 2.5$ of ref.~\cite{Janke92}. \section{Conclusions} I have presented a new hybrid-like algorithm, which combines a microcanonical spin system update with demons and a multicanonical demon update. Like the direct multicanonical method, this algorithm does not suffer from the exponential slowing down at first-order phase transitions. In the 2d7s Potts model simulations, the tunnelling time was found to increase like $\tau_L \sim L^{1.82(3)}$ with lattices up to $L^2 = 128^2$.
Where appropriate, the measurements were compared with the analytical finite-size scaling formulas by Borgs et al.~\cite{Borgs90}; within the statistical errors, the $128^2$ lattice was found to be in complete agreement with the order $1/V$ FSS predictions. Also, all results are fully compatible with those of ref.~\cite{Janke92}. However, the common FSS ansatz for the interface tension, $\sigma_L = \sigma_\infty + c/L$, fails to produce the correct infinite-volume value. This is probably due to the interactions between the two interfaces, which are ignored by the FSS ansatz. Clearly, a better FSS function is needed. The main advantage of the multicanonical demon algorithm is that it offers considerable freedom in the choice of the algorithm. In addition to simply using either local or cluster updates, one can also adjust the number of demons and the number of microcanonical updates before each multicanonical step. For example, if one has a very fast local microcanonical algorithm, it might be preferable to interleave many microcanonical updates for each demon refresh, and to use a large number of demons ($N_D > V$) in order to allow large fluctuations in the system energy during the microcanonical phase -- the demon refresh can still be performed in $N_D\times\mbox{fast} + \sqrt{N_D}\times\mbox{slow}$ operations. The method can also be generalized to magnetic transitions by using demons carrying magnetization; in this case the cluster algorithm cannot be used. \section*{Acknowledgments} I am grateful to Leo K\"arkk\"ainen, A.~Irb\"ack, S.~Gupta and W.~Janke for helpful discussions. The simulations were performed with Sun ELC workstations and Cray X-MPs at CERN and at the Centre for Scientific Computing, Helsinki.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Koml\'os' lemma (see \cite{K67}, \cite{S86} and \cite{DS94}) is a classical result on the convergence of random variables that can be used as a substitute for compactness. It has turned out to be very useful, similarly as the Bolzano-Weierstrass theorem, and has become a workhorse of stochastic analysis in the past decades. In this paper, we generalise this result to work directly with non-negative martingales and convergence in probability simultaneously at all finite stopping times. Let us briefly explain this in more detail. Koml\'os' subsequence theorem states that given a bounded sequence $(f_n)^\infty_{n=1}$ of random variables in $L^1(P)$ there exists a random variable $f \in L^1(P)$ and a subsequence $(f_{n_k})^\infty_{k=1}$ such that the C\'esaro-means of any subsequence $(f_{n_{k_j}})^\infty_{j=1}$ converge almost surely to $f$. It quickly follows that there exists a sequence $(\tilde{f}_n)^\infty_{n=1}$ of convex combinations $\tilde{f}_n\in \conv (f_n, f_{n+1}, \dots)$ that converges to $f$ almost surely; we refer to this statement as Koml\'os' lemma. Replacing the almost sure convergence by the concept of \emph{Fatou convergence}, F\"ollmer and Kramkov \cite{FK97} obtained the following variant of Koml\'os' lemma for stochastic processes. Given a sequence $(M^n)^\infty_{n=1}$ of non-negative martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ starting at $M^n_0=1$, there exists a sequence $(\overline{M}^n)^\infty_{n=1}$ of convex combinations $\overline{M}^n \in \conv (M^n, M^{n+1}, \dots)$ and a non-negative c\`adl\`ag supermartingale $\overline{X}=(\overline{X}_t)_{0 \leq t \leq 1}$ starting at $\overline{X}_0=1$ such that $\overline{M}^n$ is Fatou convergent along the rationals $\mathbb{Q}\cap[0,1]$ to $\overline{X}$ in the sense that \begin{align*} \overline{X}_t &= \varlimsup_{q \in \mathbb{Q}\cap [0,1],\,q \downarrow t} \varlimsup_{n\to\infty} \overline{M}^n_q=\varliminf_{q \in \mathbb{Q}\cap [0,1],\,q \downarrow t} \varliminf_{n\to\infty} \overline{M}^n_q,\qquad\text{\text{$P$-a.s.}}, \end{align*} for all $t \in [0,1)$ and $\overline{X}_1=\lim_{n\to\infty} \overline{M}^n_1$. In this paper, we are interested in a different version of Koml\'os' lemma for non-negative martingales in the following sense. Given the sequence $(M^n)^\infty_{n=1}$ of non-negative martingales as above and a finite stopping time $\tau$, defining $f_n:=M^n_\tau$ gives a sequence of non-negative random variables that is bounded in $L^1(P)$. By Koml\'os' lemma there exist convex combinations $\widetilde{M}^n\in \conv (M^n, M^{n+1}, \dots)$ such that $\widetilde{M}^n_\tau$ converges in probability to some random variable $f_\tau$. The question is then whether we can find \emph{one} sequence $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations $\widetilde{M}^n \in \conv (M^n, M^{n+1}, \dots)$ and a stochastic process $X=(X_t)_{0 \leq t \leq 1}$ such that $\widetilde{M}^n_\tau$ converges to $X_\tau$ in probability for \emph{all} finite stopping times $\tau$. Our first main result (Theorem \ref{c1}) shows that this is possible and that the limiting process $X=(X_t)_{0 \leq t \leq 1}$ is an \emph{optional strong supermartingale}. These supermartingales have been introduced by Mertens~\cite{M72} and are optional processes that satisfy the supermartingale inequality for all finite stopping times.
This indicates that optional strong supermartingales are the natural processes for our purpose to work with, and in Theorem \ref{t1} we expand our convergence result from martingales $(M^n)_{n=1}^\infty$ to optional strong supermartingales $(X^n)_{n=1}^\infty$. In dynamic optimisation problems our results can be used as a substitute for compactness (compare, e.g., \cite{DS99}, \cite{FK97}, \cite{KS01}, \cite{KZ04}, \cite{S04}). Here the martingales $M^n$ are usually a minimising sequence of density processes of equivalent martingale measures for the dual problem or, as in \cite{DS99} and \cite{FK97}, the wealth processes of self-financing trading strategies. At a fixed stopping time the convergence in probability can always be strengthened to almost sure convergence by simply passing to a subsequence. By means of a counter-example (Proposition \ref{Ex2}) we show that this is not possible for all stopping times simultaneously. Conversely, one can ask what the smallest class of stochastic processes is that is closed under convergence in probability at all finite stopping times and contains all bounded martingales. Our second contribution (Theorem \ref{t2}) is to show that this is precisely the class of all optional strong supermartingales, provided the underlying probability space is sufficiently rich to support a Brownian motion. As the limiting strong supermartingale of a sequence of martingales in the sense of convergence in probability at all finite stopping times is in general no longer a semimartingale, we need to restrict the integrands to be predictable finite variation processes $\varphi=(\varphi_t)_{0 \leq t \leq 1}$ to come up with a similar convergence result for stochastic integrals in Proposition \ref{p:SI}. For this, we need to extend our convergence result to ensure the convergence of the left limit processes $(X^n_-)_{n=1}^\infty$ in probability at all finite stopping times to a limiting process $X^{(0)}=(X^{(0)}_t)_{0\leq t\leq 1}$ as well, after possibly passing once more to convex combinations. It turns out that $X^{(0)}$ is a \emph{predictable strong supermartingale} that does in general \emph{not} coincide with the left limit process $X_-$ of the limiting optional strong supermartingale $X$. The notion of a predictable strong supermartingale has been introduced by Chung and Glover \cite{CG79} and refers to predictable processes that satisfy the supermartingale inequality for all \emph{predictable} stopping times. Using as index set the \emph{Alexandroff double arrow space} $\widetilde{I}=[0,1]\times \{0,1\}$ instead of the time interval $I=[0,1]$, we can merge both limiting strong supermartingales into one supermartingale $X=(X_{\tilde{t}})_{\tilde{t} \in \widetilde{I}}$ indexed by $\widetilde{I}$. Our motivation for studying these questions comes from portfolio optimisation under transaction costs in mathematical finance. While for the problem without transaction costs the solution to the dual problem is always attained as a Fatou limit, the dual optimiser under transaction costs is in general a truly l\`adl\`ag optional strong supermartingale. So we expect our results to appear naturally whenever one optimises over non-negative martingales that are not uniformly integrable or stable under concatenation, and they might find other applications as well. The paper is organised as follows. We formulate the problem and state our main results in Section \ref{sec:2}. The proofs are given in Sections \ref{sec:3}, \ref{sec:5}, \ref{sec:6} and \ref{sec:7}.
Section \ref{sec:4} provides the counter-example showing that our convergence results cannot be strengthened to almost sure convergence. \section{Formulation of the problem and main results}\label{sec:2} Let $(\Omega, \mathcal{F}, P)$ be a probability space and $L^0(P)=L^0(\Omega, \mathcal{F}, P)$ the space of all real-valued random variables. As usual we equip $L^0(P)$ with the topology of convergence in probability and denote by $L^0_+(P)=L^0(\Omega, \mathcal{F}, P;\mathbb{R}_+)$ its positive cone. We call a subset $A$ of $L^0(P)$ bounded in probability or simply bounded in $L^0(P)$, if \mbox{$\lim_{m\to\infty} \sup_{f \in A} P (|f|>m)=0.$} Koml\'os' subsequence theorem (see \cite{K67} and \cite{S86}) states the following. \begin{theorem}\label{kssthm} Let $(f_n)^\infty_{n=1}$ be a bounded sequence of random variables in $L^1(\Omega, \mathcal{F}, P)$. Then there exists a subsequence $(f_{n_k})^\infty_{k=1}$ and a random variable $f$ such that the C\'esaro means $\frac{1}{J}\sum_{j=1}^J f_{n_{k_j}}$ of any subsequence $(f_{n_{k_j}})^\infty_{j=1}$ converge $P$-almost surely to $f$, as $J\to\infty$. \end{theorem} In applications this result is often used in the following variant that we also refer to as Koml\'os' lemma (compare Lemma A.1 in \cite{DS94}). \begin{corollary}\label{kl} Let $(f_n)^\infty_{n=1}$ be a sequence of non-negative random variables that is bounded in $L^1(P)$. Then there exists a sequence $(\tilde{f}_n)^\infty_{n=1}$ of convex combinations $$\tilde{f}_n \in \conv (f_n, f_{n+1}, \dots)$$ and a non-negative random variable $f \in L^1 (P)$ such that $\tilde{f}_n\xrightarrow{\text{$P$-a.s.}}f$. \end{corollary} As has been illustrated by the work of Kramkov and Schachermayer \cite{KS01} and \v Zitkovi\'c \cite{Z10} (see also \cite{S04}), Koml\'os' lemma can be used as a substitute for compactness, e.g.~in the derivation of minimax theorems for Lagrange functions, where the optimisation is typically over convex sets. Replacing the $P$-almost sure convergence by the concept of \emph{Fatou convergence}, F\"ollmer and Kramkov \cite{FK97} used Koml\'os' lemma to come up with a similar convergence result for stochastic processes. For this, we equip the probability space $(\Omega, \mathcal{F}, P)$ with a filtration $\mathbb{F}=(\mathcal{F}_t)_{0 \leq t \leq 1}$ satisfying the usual conditions of right continuity and completeness and let $(M^n)^\infty_{n=1}$ be a sequence of non-negative martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ starting at $M^n_0=1$. For all unexplained notations from the general theory of stochastic processes and stochastic integration, we refer to the book of Dellacherie and Meyer \cite{DM82}. The construction of the Fatou limit by F\"ollmer and Kramkov can be summarised as in the following proposition. \begin{proposition}[Lemma 5.2 of \cite{FK97}]\label{p:Fatou} Let $(M^n)^\infty_{n=1}$ be a sequence of non-negative martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ starting at $M^n_0=1$. Then there exists a sequence $(\overline{M}^n)^\infty_{n=1}$ of convex combinations $$\overline{M}^n \in \conv (M^n, M^{n+1}, \dots)$$ and non-negative random variables $Z_q$ for $q \in \mathbb{Q} \cap [0,1]$ such that \begin{itemize} \item[\bf{1)}] $\overline{M}^n_q \xrightarrow{\text{\text{$P$-a.s.}}} Z_q$ for all $q \in \mathbb{Q}\cap [0,1]$.
\item[\bf{2)}] The process $\overline{X}=(\overline{X}_t)_{0 \leq t \leq 1}$ given by \begin{equation} \text{$\overline{X}_t:=\lim_{q \in \mathbb{Q} \cap [0,1],\, q \downarrow t} Z_q\quad$ and $\quad\overline{X}_1=Z_1$}\label{def:Fatou} \end{equation} is a c\`adl\`ag supermartingale. \item[\bf{3)}] The process $\overline{X}=(\overline{X}_t)_{0 \leq t \leq 1}$ is the \emph{Fatou limit} of the sequence $(\overline{M}^n)^\infty_{n=1}$ along $\mathbb{Q} \cap [0,1]$, i.e. $$ \overline{X}_t=\varlimsup_{q \in \mathbb{Q} \cap [0,1],\, q \downarrow t} \varlimsup_{n \to \infty} \overline{M}^n_q = \varliminf_{q \in \mathbb{Q} \cap [0,1],\, q \downarrow t}\varliminf_{n \to \infty} \overline{M}^n_q, \quad\text{\text{$P$-a.s.}},\quad\text{and}\quad\overline{X}_1=\lim_{n\to\infty}\overline{M}^n_1. $$ \end{itemize} \end{proposition} Here it is important to note that $\lim_{q\in\mathbb{Q} \cap[0,1],\,q\downarrow t}$ denotes the limit to $t$ through all $q \in \mathbb{Q} \cap [0,1]$ that are \emph{strictly} bigger than $t$. Therefore we do not have in general that $\overline{X}_t=\lim_{n\to \infty}\overline{M}^n_t$ for $t\in[0,1)$, not even for $t\in \mathbb{Q} \cap [0,1]$, as is illustrated in the simple example below. \begin{example}\label{Ex0} Let $(Y_n)^\infty_{n=1}$ be a sequence of random variables with $Y_n$ taking values in $\{0,n\}$ such that $P[Y_n=n]=\frac{1}{n}$ and define a sequence $(M^n)^\infty_{n=1}$ of martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ by $$M^n_t= 1 + (Y_n-1) \mathbbm{1}_{\rrbracket\frac{1}{2}(1+\frac{1}{n}), 1\rrbracket}(t).$$ Then $M^n_t$ converges in probability to $\mathbbm{1}_{\llbracket0,\frac{1}{2}\rrbracket} (t)$ for each $t \in[0,1]$. However, the c\`adl\`ag Fatou limit is $\overline{X}_t=\mathbbm{1}_{\llbracket0,\frac{1}{2}\llbracket}(t)$. \end{example} The convergence, of course, also fails at stopping times in general. This motivates us to ask for a different extension of Koml\'os' lemma to non-negative martingales in the following sense. Let $(M^n)^\infty_{n=1}$ be again a sequence of non-negative martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ starting at $M^n_0=1$ and $\tau$ a finite stopping time. Then defining $f_n:=M^n_\tau$ gives a sequence $(f_n)^\infty_{n=1}$ of non-negative random variables that are bounded in $L^1(P)$. By Koml\'os' lemma there exist convex combinations $\widetilde{M}^n \in \conv (M^n, M^{n+1}, \dots)$ and a non-negative random variable $f_\tau$ such that $$\widetilde{M}^n_\tau=:\tilde{f}_n \xrightarrow{\text{$P$-a.s.}} f_\tau.$$ The questions are then: \begin{itemize} \item[\bf{1)}] Can we find \emph{one} sequence $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations $$\widetilde{M}^n \in \conv (M^n, M^{n+1}, \dots)$$ such that, for \emph{all} finite stopping times $\tau$, we have \begin{equation}\label{q1} \widetilde{M}^n_\tau \xrightarrow{\text{$P$-a.s.}} f_\tau \end{equation} for some random variables $f_\tau$ that may depend on the stopping times $\tau$? \item[\bf{2)}] If {\bf 1)} is possible, can we find a stochastic process $X=(X_t)_{0 \leq t \leq 1}$ such that $X_\tau=f_\tau$ for all finite stopping times $\tau$? \item[\bf{3)}] If such a process $X=(X_t)_{0 \leq t \leq 1}$ as in {\bf 2)} exists, what kind of process is it? \end{itemize} Let us start with the last question. If such a process $X=(X_t)_{0 \leq t \leq 1}$ exists, it follows from Fatou's lemma that it is (up to optional measurability) an optional strong supermartingale.
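Indeed, suppose that $X_\tau = P\text{-}\lim_{n\to\infty}\widetilde{M}^n_\tau$ for all finite stopping times $\tau$. For stopping times $\sigma \leq \tau$ we can pass to a subsequence that converges almost surely at both $\sigma$ and $\tau$; the martingale property of the convex combinations $\widetilde{M}^n$ and the conditional version of Fatou's lemma then yield
\begin{equation*}
X_\sigma = \lim_{n\to\infty} \widetilde{M}^n_\sigma = \lim_{n\to\infty} E\big[\widetilde{M}^n_\tau \,\big|\, \mathcal{F}_\sigma\big] \geq E\Big[\liminf_{n\to\infty} \widetilde{M}^n_\tau \,\Big|\, \mathcal{F}_\sigma\Big] = E[X_\tau \,|\, \mathcal{F}_\sigma],
\end{equation*}
which is the supermartingale inequality at stopping times; the integrability of $X_\tau$ also follows from Fatou's lemma, since $E[X_\tau] \leq \liminf_{n\to\infty} E[\widetilde{M}^n_\tau] = 1$.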
\begin{definition} A real-valued stochastic process $X=(X_t)_{0\leq t\leq 1}$ is called an \emph{optional strong supermartingale}, if \begin{itemize} \item[\textbf{1)}] $X$ is optional. \item[\textbf{2)}] $X_\tau$ is integrable for every $[0,1]$-valued stopping time $\tau$. \item[\textbf{3)}] For all stopping times $\sigma$ and $\tau$ with $0\leq\sigma\leq\tau\leq 1$ we have $$X_\sigma\geq E[X_\tau|\mathcal{F}_\sigma].$$ \end{itemize} \end{definition} These processes have been introduced by Mertens \cite{M72} as a generalization of the notion of a c\`adl\`ag (right continuous with left limits) supermartingale that one is usually working with. Indeed, by the optional sampling theorem each c\`adl\`ag supermartingale is an optional strong supermartingale, but not every optional strong supermartingale has a c\`adl\`ag modification. For example, every {\it deterministic} decreasing function $(X_t)_{0 \leq t \leq 1}$ is an optional strong supermartingale, but there is little reason why it should be c\`adl\`ag. However, by Theorem 4 in Appendix I in \cite{DM82}, every optional strong supermartingale is indistinguishable from a l\`adl\`ag (left and right limits) process and so we can assume without loss of generality that all optional strong supermartingales we consider in this paper are l\`adl\`ag. Similarly to the Doob-Meyer decomposition in the c\`adl\`ag case, every optional strong supermartingale $X$ has a unique decomposition \begin{equation} X=M-A\label{eq:MD} \end{equation} into a local martingale $M$ and a non-decreasing predictable process $A$ starting at $0$. This decomposition is due to Mertens \cite{M72} (compare also Theorem 20 in Appendix I in \cite{DM82}) and therefore called \emph{Mertens decomposition}. Note that, under the usual conditions of completeness and right continuity of the filtration, we can and do choose a c\`adl\`ag modification of the local martingale $M$ in \eqref{eq:MD}. On the other hand, the non-decreasing process $A$ is in particular l\`adl\`ag. For l\`adl\`ag processes $X=(X_t)_{0 \leq t \leq 1}$ we denote by $X_{t+}:= \lim_{h \searrow 0} X_{t+h}$ and $X_{t-}:=\lim_{h \searrow 0} X_{t-h}$ the right and left limits and by $\Delta_+X_t:=X_{t+} - X_t$ and $\Delta X_t:=X_t - X_{t-}$ the right and left jumps. We also use the convention that $X_{0-}=0$ and $X_{1+} = X_1$. After these preparations we have now everything in place to formulate our main results. The proofs will be given in the Sections \ref{sec:3}, \ref{sec:5}, \ref{sec:6} and \ref{sec:7}. \begin{theorem}\label{c1} Let $(M^n)^\i_{n=1}$ be a sequence of non-negative c\`adl\`ag martingales $M^n=(M^n_t)_{0\leq t \leq 1}$ starting at $M^n_0=1$. Then there is a sequence $(\widetilde{M}^n)_{n=1}^\infty$ of convex combinations $$\widetilde{M}^n \in \conv (M^n,M^{n+1}, \ldots)$$ and a non-negative optional strong supermartingale $X=(X_t)_{0\leq t\leq 1}$ such that, for every $[0,1]$-valued stopping time $\tau$, we have that \begin{equation}\label{M2} \widetilde{M}^n_\tau\overset{P}{\longrightarrow} X_\tau. \end{equation} \end{theorem} Combining the above with a similar convergence result for predictable finite variation processes by Campi and Schachermayer \cite{CS06} allows us to extend our convergence result to optional strong supermartingales by using the Mertens decomposition. Theorem \ref{c1} is thus only a special case of the following result. \begin{theorem}\label{t1} Let $(X^n)^\i_{n=1}$ be a sequence of non-negative optional strong supermartingales $X^n=(X^n_t)_{0 \leq t \leq 1}$ starting at $X^n_0=1$.
Then there is a sequence $(\widetilde{X}^n)_{n=1}^\infty$ of convex combinations $$\widetilde{X}^n \in \conv (X^n,X^{n+1}, \ldots)$$ and a non-negative optional strong supermartingale $X=(X_t)_{0 \leq t \leq 1}$ such that, for every $[0,1]$-valued stopping time $\tau$, we have convergence in probability, i.e. \begin{equation}\label{C2} \widetilde{X}^n_\tau\overset{P}{\longrightarrow} X_\tau. \end{equation} \end{theorem} Note that the convergence \eqref{C2} is \emph{topological}. It corresponds to the weak topology that is generated on the space of optional processes by the topology of $L^0(P)$ and all evaluation mappings $e_\tau (X) (\omega):= X_{\tau(\omega)}(\omega)$ that evaluate an optional process $X=(X_t)_{0 \leq t \leq 1}$ at a finite stopping time $\tau$. By the optional cross section theorem this topology is Hausdorff. Given Theorem \ref{c1} and Theorem \ref{t1} above, one can ask conversely what the smallest class of stochastic processes is that is closed under convergence in probability at all finite stopping times and contains the set of bounded martingales. Here the next result shows that this is precisely the class of optional strong supermartingales. \begin{theorem}\label{t2} Let $X=(X_t)_{0 \leq t \leq 1}$ be an optional strong supermartingale and suppose that its stochastic base $(\Omega,\mathcal{F},\mathbb{F},P)$ is sufficiently rich to support a Brownian motion $W=(W_t)_{0\leq t\leq 1}$. Then there is a sequence of bounded c\`adl\`ag martingales $(M^n)^\i_{n=1}$ such that, for every $[0,1]$-valued stopping time $\tau$, we have convergence in probability, i.e. \begin{equation}\label{eq:t2} M^n_\tau\stackrel{\text{$P$}}\longrightarrow X_\tau. \end{equation} \end{theorem} We thank N.~Perkowski and J.~Ruf for pointing out to us that they have independently obtained a similar result to Theorem \ref{t2} for c\`adl\`ag supermartingales in Proposition 5.9 of \cite{PR} by taking several limits successively. Moreover, we would like to thank J.~Ruf for insisting on a clarification of an earlier version of Theorem \ref{t2}, which led us to a correction of the statement (convergence in probability in \eqref{eq:t2} as opposed to almost sure convergence) as well as to a more detailed proof. Let us now turn to the theme of stochastic integration. By Theorem \ref{c1} the limit of a sequence $(M^n)^\infty_{n=1}$ of martingales in the sense of \eqref{M2} will in general no longer be a semimartingale. In order to come up with a similar convergence result for stochastic integrals $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} M^n=\int\varphi dM^n$, we therefore need to restrict the choice of integrands $\varphi=(\varphi_t)_{0 \leq t \leq 1}$ to predictable finite variation processes. As we shall explain in more detail in Section \ref{sec:7} below, this allows us to define stochastic integrals \mbox{$\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X=\int\varphi dX$} with respect to optional strong supermartingales $X=(X_t)_{0 \leq t \leq 1}$ pathwise, since $X$ is l\`adl\`ag. These integrals coincide with the usual stochastic integrals, if $X=(X_t)_{0 \leq t \leq 1}$ is a semimartingale. For a general predictable, finite variation process $\varphi$, the stochastic integral $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X$ depends not only on the values of the integrator $X$ but also explicitly on that of its left limits $X_-$ (see \eqref{def:SI:2} below).
As a consequence, in order to obtain a satisfactory convergence result for the integrals $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X^n$ to a limit $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X$ we have to take special care of the left limits of the integrators. (The convergence of stochastic integrals is crucially needed in applications in mathematical finance, where the integrals correspond to the gains from trading by using self-financing trading strategies.) More precisely: Given the convergence $\widetilde{X}^n_\tau \stackrel{P}{\longrightarrow} X_\tau$ as in \eqref{C2} at all $[0,1]$-valued stopping times $\tau$ of a sequence $(\widetilde{X}^n)^\infty_{n=1}$ of optional strong supermartingales, do we have the convergence of the left limits \begin{equation}\label{eq:cll} \widetilde{X}^n_{\sigma-} \stackrel{P}{\longrightarrow} X_{\sigma-} \end{equation} for all $[0,1]$-valued stopping times $\sigma$ as well? For \emph{totally inaccessible} stopping times $\sigma$, we are able to prove that \eqref{eq:cll} indeed holds. \begin{proposition}\label{prop:ti} Let $(X^n)^{\infty}_{n=1}$ and $X$ be non-negative optional strong supermartingales $(X^n_t) _{0\leq t \leq 1}$ and $(X_t) _{0\leq t \leq 1}$ such that \begin{align*} X^n_q\stackrel{P}{\longrightarrow}X_q \end{align*} for every rational number $q\in [0,1]$. Then \begin{align*} X^n_{\tau-}\stackrel{P}{\longrightarrow}X_{\tau-} \end{align*} for all $[0,1]$-valued \emph{totally inaccessible} stopping times $\tau$. \end{proposition} At accessible stopping times $\sigma$, the convergence $\widetilde{X}^n_\tau\stackrel{P}{\longrightarrow}X_{\tau}$ for all finite stopping times $\tau$ does not necessarily imply the convergence \eqref{eq:cll} of the left limits $\widetilde{X}^n_{\sigma-}$. Moreover, even if the left limits $\widetilde{X}^n_{\sigma-}$ converge to some random variable $Y$ in probability, it may happen that $Y \neq X_{\sigma-}$. In order to take this phenomenon into account, we need to consider two processes $X^{(0)}=(X^{(0)}_t)_{0 \leq t \leq 1}$ and $X^{(1)}=(X^{(1)}_t)_{0 \leq t \leq 1}$ that correspond to the limiting processes of the left limits $\widetilde{X}^n_-$ and the processes $\widetilde{X}^n$ themselves or, alternatively, replace the time interval $I=[0,1]$ by the set $\widetilde{I}=[0,1]\times\{0,1\}$ with the lexicographic order. The set $\widetilde{I}$ is motivated by the \emph{Alexandroff double arrow space}. Equipping the set $\widetilde{I}$ with the lexicographic order simply means that we split every point $t\in[0,1]$ into a left and a right point $(t,0)$ and $(t,1)$, respectively, such that $(t,0) < (t,1),$ that $(t,0) \leq (s,0)$ if and only if $t\leq s$ and that $(t,1) < (s,0)$ if and only if $t<s$. Then we can merge both processes $X^{(0)}=(X^{(0)}_t)_{0 \leq t \leq 1}$ and $X^{(1)}=(X^{(1)}_t)_{0 \leq t \leq 1}$ into one process \begin{equation}\label{eq:ADAS} X_{\tilde{t}} = \begin{cases} X^{(0)}_t &:\tilde{t} = (t,0),\\ X^{(1)}_t &:\tilde{t} = (t,1) \end{cases} \end{equation} for $\tilde{t}\in \widetilde{I}$, which is by \eqref{eq:relt3} below a supermartingale indexed by $\tilde{t}\in \widetilde{I}$. As the limit of the left limits, the process $X^{(0)}=(X^{(0)}_t)_{0 \leq t \leq 1}$ will be predictable and it will turn out that it is even a predictable strong supermartingale.
We refer to the article of Chung and Glover \cite{CG79} (see the second remark following the proof of Theorem 3 on page 243) as well as Definition 3 in Appendix I of the book of Dellacherie and Meyer \cite{DM82} for the subsequent concept. \begin{definition}\label{def:pred} A real-valued stochastic process $X=(X_t)_{0 \leq t \leq 1}$ is called a \emph{predictable strong supermartingale}, if \begin{itemize} \item[\textbf{1)}] $X$ is predictable. \item[\textbf{2)}] $X_{\tau}$ is integrable for every $[0,1]$-valued \emph{predictable} stopping time $\tau$. \item[\textbf{3)}] For all \emph{predictable} stopping times $\sigma$ and $\tau$ with $0 \leq \sigma \leq \tau \leq 1$ we have $$X_\sigma \geq E[X_\tau | \mathcal{F}_{\sigma-}].$$ \end{itemize} \end{definition} After these preparations we are able to extend Theorem \ref{t1} to hold also for left limits. \begin{theorem}\label{t3} Let $(X^n)^\infty_{n=1}$ be a sequence of non-negative optional strong supermartingales starting at $X^n_0=1$. Then there is a sequence $(\widetilde{X}^n)^\infty_{n=1}$ of convex combinations $\widetilde{X}^n\in\conv(X^n, X^{n+1}, \dots)$, a non-negative optional strong supermartingale $X^{(1)}=(X^{(1)}_t)_{0 \leq t \leq 1}$ and a non-negative predictable strong supermartingale $X^{(0)}=(X^{(0)}_t)_{0 \leq t \leq 1}$ such that \begin{align} &\widetilde{X}^n_{\tau} \stackrel{P}{\longrightarrow} X^{(1)}_\tau, \label{eq:t3:1} \\ &\widetilde{X}^n_{\tau-} \stackrel{P}{\longrightarrow} X^{(0)}_\tau, \label{eq:t3:2} \end{align} for \emph{all} $[0,1]$-valued stopping times $\tau$ and we have that \begin{align} X^{(1)}_{\tau-}\geq X^{(0)}_{\tau} \geq E[X^{(1)}_{\tau} | \mathcal{F}_{\tau-}] \label{eq:relt3} \end{align} for all $[0,1]$-valued \emph{predictable} stopping times $\tau$. \end{theorem} With the above we can now formulate the following proposition. Note that, since $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} \widetilde{X}^n \in \conv (\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X^n, \varphi \stackrel{\mbox{\tiny$\bullet$}}{} X^{n+1}, \dots)$, part 2) is indeed an analogous result to Theorem \ref{t1} for stochastic integrals. \begin{proposition}\label{p:SI} Let $(X^n)^\infty_{n=1}$ be a sequence of non-negative optional strong supermartingales $X^n=(X^n_t)_{0\leq t\leq 1}$ starting at $X^n_0 = 1$. Then there exist convex combinations $\widetilde{X}^n \in \conv (X^n, X^{n+1}, \dots)$ as well as an optional and a predictable strong supermartingale $X^{(1)}$ and $X^{(0)}$ such that \begin{itemize} \item[\bf{1)}] $\widetilde{X}^n_\tau \stackrel{P}{\longrightarrow} X^{(1)}_\tau$ and $\widetilde{X}^n_{\tau-} \stackrel{P}{\longrightarrow} X^{(0)}_\tau$ for all $[0,1]$-valued stopping times $\tau.$ \item[\bf{2)}] For all predictable processes $\varphi=(\varphi_t)_{0 \leq t \leq 1}$ of finite variation, we have that $$\varphi \stackrel{\mbox{\tiny$\bullet$}}{} \widetilde{X}^n_\tau \stackrel{P}{\longrightarrow} \int^\tau_0 \varphi^c_u d X^{(1)}_u + \sum_{0 < u \leq \tau} \Delta \varphi_u (X^{(1)}_\tau - X^{(0)}_u) + \sum_{0 \leq u < \tau} \Delta_+ \varphi_u (X^{(1)}_\tau - X^{(1)}_u)$$ for all $[0,1]$-valued stopping times $\tau$, where $\varphi^c$ denotes the continuous part of $\varphi$, i.e. 
\begin{equation} \varphi^c_t:= \varphi_t - \sum_{0 < u \leq t} \Delta \varphi_u - \sum_{0 \leq u < t} \Delta_+ \varphi_u \quad \text{for} \quad t \in [0,1].\label{def:cont} \end{equation} \end{itemize} \end{proposition} \section{Proof of Theorems \ref{c1} and \ref{t1}}\label{sec:3} The basic idea for the proof of Theorem \ref{c1} is to consider the Fatou limit $\overline{X}=(\overline{X}_t)_{0 \leq t \leq 1}$ as defined in \eqref{def:Fatou}. Morally speaking, $\overline{X}=(\overline{X}_t)_{0 \leq t \leq 1}$ should also be the limit of the sequence $(\overline{M}^n)^\infty_{n=1}$ in the sense of \eqref{M2}. However, as we illustrated in the simple Example \ref{Ex0}, things may be more delicate. While convergence in probability may fail at some finite stopping times, the next lemma shows that we always have one-sided $P$-almost sure convergence. \begin{lemma}\label{A1} Let $\overline{X}$ and $(\overline{M}^n)^\i_{n=1}$ be as in Proposition \ref{p:Fatou}. Then we have that \begin{equation}\label{chrisA1} (\overline{M}^n_\tau -\overline{X}_\tau)^- \xrightarrow{\text{$P$-a.s.}} 0,\quad \mbox{as} \ n\to\i, \end{equation} for all $[0,1]$-valued stopping times $\tau$, where $x^-=\max\{-x,0\}$. \end{lemma} \begin{proof} Let $\sigma_k$ be the $k$-th dyadic approximation of the stopping time $\tau$, i.e. \begin{equation} \sigma_k:=\inf\{t\in D_k~|~t>\tau \}\wedge 1,\label{A1.1} \end{equation} where $D_k = \{j2^{-k} | j=0,\dots,2^k\}$. As $\overline{M}^n$ is a martingale, we have $\overline{M}^n_{\tau} = E [\overline{M}^n_{\sigma_k} | \mathcal{F}_{\tau}]$, for every $n\in \mathbb{N}$, and therefore $$\varliminf_{n\to\infty}\overline{M}^n_{\tau}= \varliminf_{n\to\infty} E[\overline{M}^n_{\sigma_k}|\mathcal{F}_\tau] \geq E [\varliminf_{n\to\infty} \overline{M}^n_{\sigma_k} | \mathcal{F}_{\tau}] = E [Z_{\sigma_k} | \mathcal{F}_{\tau}]$$ for all $k$ by Fatou's lemma, where $Z_q$ is defined in Proposition \ref{p:Fatou}, for every $q\in\mathbb{Q}\cap[0,1]$. Since $Z_{\sigma_k} \to \overline{X}_{\tau}$ $P$-a.s.~and in $L^1(P)$ by backward supermartingale convergence (see Theorem V.30 and the proof of Theorem IV.10 in \cite{DM82} for example), we obtain that $$\varliminf_{n\to\infty} \overline{M}^n_{\tau} \geq\overline{X}_{\tau},$$ which proves \eqref{chrisA1}. \end{proof} For any sequence $(\widehat{M}^n)^\infty_{n=1}$ of convex combinations $$\widehat{M}^n \in \conv (\overline{M}^n, \overline{M}^{n+1}, \dots)$$ we can use the one-sided convergence \eqref{chrisA1} to show in the next lemma that at any given stopping time $\tau$, we either have the convergence of $\widehat{M}^n_\tau$ to $\overline{X}_\tau$ in probability or there exists a sequence $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations $$\widetilde{M}^n \in \conv (\widehat{M}^n, \widehat{M}^{n+1}, \ldots)$$ and a non-negative random variable $Y$ such that $\widetilde{M}^n_\tau \stackrel{P}{\longrightarrow} Y$. In the latter case, $Y \geq \overline{X}_\tau$ and $E[Y] > E[\overline{X}_\tau]$, as we shall now show. \begin{lemma}\label{A2} Let $\overline{X}$ and $(\overline{M}^n)^\i_{n=1}$ be as in Proposition \ref{p:Fatou}, let $\tau$ be a $[0,1]$-valued stopping time and $(\widehat{M}^n)^\infty_{n=1}$ a sequence of convex combinations $\widehat{M}^n \in \conv (\overline{M}^n, \overline{M}^{n+1}, \dots)$.
Then we have either \begin{equation}\label{A2.1} (\widehat{M}^n_\tau - \overline{X}_\tau)^+ \stackrel{P}{\longrightarrow} 0, \ \mbox{as} \ n\to\i, \end{equation} with $x^+=\max\{x,0\}$ or there exists a sequence $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations \begin{equation*} \widetilde{M}^n \in\conv (\widehat{M}^n,\widehat{M}^{n+1},\ldots) \subseteq \conv (\overline{M}^n, \overline{M}^{n+1}, \dots) \end{equation*} and a non-negative random variable $Y$ such that \begin{equation}\label{A2.3} \widetilde{M}^n_\tau \stackrel{P}{\longrightarrow} Y, \qquad \mbox{as}\ n\to\i, \end{equation} and \begin{equation}\label{A2.4} E[Y] > E[\overline{X}_{\tau}]. \end{equation} \end{lemma} \begin{proof} If \eqref{A2.1} does not hold, there exists $\alpha >0$ and a subsequence, still denoted by $(\widehat{M}^n)^\infty_{n=1}$ and again indexed by $n$, such that \begin{equation*} P(\widehat{M}^n_\tau -\overline{X}_\tau >\alpha)\geq \alpha \end{equation*} for all $n$. Since $E[\widehat{M}^n_\tau]=1$, there exists by Koml\'os' lemma a sequence $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations $\widetilde{M}^n \in \conv (\widehat{M}^n, \widehat{M}^{n+1}, \dots)$ and a non-negative random variable $Y$ such that \eqref{A2.3} holds. To see \eqref{A2.4}, we observe that, for each $\varepsilon>0$, \begin{equation*} \mathbbm{1}_{\{\widehat{M}^n_\tau \geq \overline{X}_\tau-\varepsilon\}} \stackrel{P}{\longrightarrow} 1, \quad \mbox{as} \ n\to\i, \end{equation*} by \eqref{chrisA1}. From the inequality \begin{equation*} \widehat{M}^n_\tau\mathbbm{1}_{A_n} \geq \overline{X}_\tau \mathbbm{1}_{A_n} + \alpha \mathbbm{1}_{A_n}, \end{equation*} where $A_n:=\{\widehat{M}^n_\tau \geq \overline{X}_\tau +\alpha\}$, we obtain \begin{equation*} \widehat{M}^n_\tau \mathbbm{1}_{\{\widehat{M}^n_\tau\geq \overline{X}_\tau-\varepsilon\}} \geq\overline{X}_\tau \mathbbm{1}_{\{\widehat{M}^n_\tau\geq\overline{X}_\tau-\varepsilon\}} +\alpha\mathbbm{1}_{A_n}. \end{equation*} Now taking the convex combinations leading to $\widetilde{M}^n$ and then $$\widetilde{Y}^n \in\conv (\alpha \mathbbm{1}_{A_n}, \alpha\mathbbm{1}_{A_{n+1}},\ldots)$$ such that $\widetilde{Y}^n \stackrel{P}{\longrightarrow} \widetilde{Y}$, as $n\to \i$, we derive \begin{equation}\label{A2.5} Y \geq \overline{X}_\tau +\widetilde{Y}-\varepsilon \end{equation} by passing to limits. Since $|\widetilde{Y}^n |\le 1$ and $E[\widetilde{Y}^n]\geq\alpha^2$, we deduce from Lebesgue's theorem that $\widetilde{Y}^n \stackrel{L^1(P)}{\longrightarrow} \widetilde{Y}$, as $n\to\i$, and $E[\widetilde{Y}]\geq\alpha^2$. Therefore \eqref{A2.5} implies that \begin{equation*} E[Y] \geq E[\overline{X}_\tau] +E[\widetilde{Y}]-\varepsilon \geq E[\overline{X}_\tau] +\alpha^2-\varepsilon \end{equation*} for each $\varepsilon>0$ and hence \eqref{A2.4} by sending $\varepsilon\to 0$. \end{proof} By the previous lemma we either already have the convergence of $\widehat{M}^n_\tau$ to $\overline{X}_\tau$ in probability at a given stopping time $\tau$ or we can use Koml\'os' lemma once again to find convex combinations $\widetilde{M}^n \in \conv (\widehat{M}^n, \widehat{M}^{n+1}, \dots)$ and a random variable $Y$ such that $\widetilde{M}^n_\tau \stackrel{P}{\longrightarrow} Y$.
The next lemma shows that we can exhaust this latter phenomenon by a countable number of stopping times $(\tau_m)^\infty_{m=1}$ and that we can use the random variables $Y_m:=P\mbox{-}\lim_{n\to\infty} \widetilde{M}^n_{\tau_m}$ to redefine the c\`adl\`ag supermartingale $\overline{X}$ at the stopping times $\tau_m$ to obtain a limiting process $\widetilde{X}=(\widetilde{X}_t)_{0\leq t\leq 1}$. The limiting process $\widetilde{X}$ will be an optional strong supermartingale and we can relate the loss of mass $Y_m- \overline{X}_{\tau_m}$ to the right jumps $\Delta_+ \widetilde{A}_{\tau_m}$ of the predictable part of the Mertens decomposition $\widetilde{X} = \widetilde{M} - \widetilde{A}.$ \begin{lemma}\label{lE6} In the setting of Proposition \ref{p:Fatou} let $(\tau_m)_{m=1}^\infty$ be a sequence of $[0,1]\cup\{\infty\}$-valued stopping times with disjoint graphs, i.e.~$\llbracket \tau_m\rrbracket \cap \llbracket \tau_k \rrbracket = \emptyset$ for $m\ne k$. Then there exists a sequence $(\widetilde{M}^n)_{n=1}^\infty$ of convex combinations $\widetilde{M}^n\in\conv(\overline{M}^n,\overline{M}^{n+1},\ldots)$ such that, for each $m\in\mathbb{N}$, the sequence $(\widetilde{M}^n_{\tau_m})^\i_{n=1}$ converges $P$-a.s.~to a random variable $Y_m$ on $\{\tau_m < \infty \}$. The process $\widetilde{X}=(\widetilde{X}_t)_{0\le t\le 1}$ given by \begin{equation}\label{eq:lsup} \widetilde{X}_t(\omega) = \left\{ \begin{array}{cl} Y_{m}(\omega) &:\text{$t=\tau_m(\omega)<\infty$ and $m\in\mathbb{N}$}, \\ \overline{X}_t(\omega) &:\text{elsewhere} \end{array}\right. \end{equation} is an optional strong supermartingale with the following properties: \begin{itemize} \item[\bf{1)}] $\widetilde{X}_+=\overline{X}$, where $\widetilde{X}_+$ denotes the process of the right limits of $\widetilde{X}$. \item[\bf{2)}] Denoting by $\widetilde{X} =\widetilde{M}-\widetilde{A}$ the Mertens decomposition of $\widetilde{X}$ we have \begin{equation}\label{E6} \widetilde{X}_{\tau_m} -\overline{X}_{\tau_m} = - \Delta_+ \widetilde{X}_{\tau_m} = \Delta_+\widetilde{A}_{\tau_m} := \widetilde{A}_{\tau_m+} -\widetilde{A}_{\tau_m} \end{equation} for each $m\in\mathbb{N}.$ \end{itemize} \end{lemma} \begin{proof} Combining Koml\'os' lemma with a diagonalisation procedure we obtain non-negative random variables $Y_m$ and convex combinations $\widetilde{M}^n \in\conv (\overline{M}^n,\overline{M}^{n+1},\ldots)$ such that \begin{align*} \widetilde{M}^n_{\tau_m} &\xrightarrow{\text{\text{$P$-a.s.}}} Y_m, \end{align*} for all $m\in\mathbb{N}$ and we can define the process $\widetilde{X}$ via \eqref{eq:lsup}. This process $\widetilde{X}$ is clearly optional. To show that $\widetilde{X}$ is indeed an optional strong supermartingale, we need to verify that \begin{equation}\label{lE6:1} \widetilde{X}_{\varrho_1} \geq E[\widetilde{X}_{\varrho_2} |\mathcal{F}_{\varrho_1}] \end{equation} for every pair of $[0,1]$-valued stopping times $\varrho_1$ and $\varrho_2$ such that $\varrho_1\leq\varrho_2$. For this, we observe that it is sufficient to consider \eqref{lE6:1} on the set $\{\varrho_1 <\varrho_2\}$. For $i=1,2$ denote by $(\varrho_{i,k})_{k=1}^\infty$ the $k$-th dyadic approximation of $\varrho_i$ as in \eqref{A1.1} above.
Then we have {\allowdisplaybreaks \begin{align} E[\widetilde{X}_{\varrho_2}|\mathcal{F}_{\varrho_1}]&=E\left[\lim\limits_{n\to\i} \sum_{m=1}^\infty\widetilde{M}^n_{\tau_m}\mathbbm{1}_{\{\tau_m=\varrho_2\}} +\lim_{k\to\infty}\big(\lim_{n\to\i} \overline{M}^n_{\varrho_{2,k}}\big)\mathbbm{1}_{\{\tau_m\ne\varrho_2,\ \forall m\}} \Bigg|\mathcal{F}_{\varrho_1} \right] \nonumber\\ &=E\left[\lim\limits_{n\to\i} \sum_{m=1}^\infty\widetilde{M}^n_{\tau_m}\mathbbm{1}_{\{\tau_m=\varrho_2\}} +\lim_{k\to\infty}\big(\lim_{n\to\i} \widetilde{M}^n_{\varrho_{2,k}}\big)\mathbbm{1}_{\{\tau_m\ne\varrho_2,\ \forall m\}} \Bigg|\mathcal{F}_{\varrho_1} \right] \nonumber\\ &\leq E\left[\lim\limits_{n\to\i} \sum_{m=1}^\infty\widetilde{M}^n_{\tau_m}\mathbbm{1}_{\{\tau_m=\varrho_2\}} +\lim_{k\to\infty}\big(\lim_{n\to\i} E[\widetilde{M}^n_{\varrho_{2,k}}|\mathcal{F}_{\varrho_{2}}]\big)\mathbbm{1}_{\{\tau_m\ne\varrho_2,\ \forall m\}} \Bigg|\mathcal{F}_{\varrho_1} \right] \label{line3}\\ &= E\Big[\lim_{n\to\i}\widetilde{M}^n_{\varrho_2}\Big|\mathcal{F}_{\varrho_1} \Big]\label{line4}\\ &\leq E\Big[\lim_{k\to\i}\lim_{n\to\i}E[\widetilde{M}^n_{\varrho_2}|\mathcal{F}_{\varrho_{1,k}}]\Big|\mathcal{F}_{\varrho_1} \Big]\label{line5}\\ &=E\Big[\lim_{k\to\i}\lim_{n\to\i}\widetilde{M}^n_{\varrho_{1,k}}\Big|\mathcal{F}_{\varrho_1} \Big]\label{line6}\\ &=E\left[\lim_{k\to\i}\lim\limits_{n\to\i} \sum_{m=1}^\infty\widetilde{M}^n_{\varrho_{1,k}}\mathbbm{1}_{\{\tau_m=\varrho_1\}} +\lim_{k\to\i}\lim_{n\to\i}\widetilde{M}^n_{\varrho_{1,k}}\mathbbm{1}_{\{\tau_m\ne\varrho_1,\ \forall m\}} \Bigg|\mathcal{F}_{\varrho_1} \right]\nonumber\\ &\le \lim_{k\to\i}\lim\limits_{n\to\i} \sum_{m=1}^\infty E[\widetilde{M}^n_{\varrho_{1,k}}|\mathcal{F}_{\varrho_1}]\mathbbm{1}_{\{\tau_m=\varrho_1\}} +E\left[\lim_{k\to\i}\lim_{n\to\i} \overline{M}^n_{\varrho_{1,k}} \Bigg|\mathcal{F}_{\varrho_1} \right]\mathbbm{1}_{\{\tau_m\ne\varrho_1,\ \forall m\}}\label{line8}\\ &=\lim\limits_{n\to\i}\sum_{m=1}^\infty \widetilde{M}^n_{\tau_m}\mathbbm{1}_{\{\tau_m=\varrho_1\}} +E\left[\lim_{k\to\i} Z_{\varrho_{1,k}} \Bigg|\mathcal{F}_{\varrho_1} \right]\mathbbm{1}_{\{\tau_m\ne\varrho_1,\ \forall m\}}\label{line9}\\ &=\sum_{m=1}^\infty \widetilde{X}_{\tau_m}\mathbbm{1}_{\{\tau_m=\varrho_1\}} +\overline{X}_{\varrho_1}\mathbbm{1}_{\{\tau_m\ne\varrho_1,\ \forall m\}}=\widetilde{X}_{\varrho_1}\label{line10} \end{align}} by using Fatou's lemma in \eqref{line3}, \eqref{line5} and \eqref{line8}, the martingale property of the $\widetilde{M}^n$ and the convergence in probability of the $M^n$ in \eqref{line4}, \eqref{line6} and \eqref{line9} and exploiting the backward supermartingale convergence of $(Z_{\varrho_{1,k}})_{k=1}^\i$ in \eqref{line10}. 1) We argue by contradiction and assume that $G:=\{\widetilde{X}_+ \not= \overline{X}\}$ has $P(\pi (G))>0$, where $\pi:\Omega \times [0,1] \to \Omega$ is given by $\pi\big((\omega,t)\big)=\omega$. As the set $G$ is optional, there exists by the optional cross-section theorem (Theorem IV.84 in \cite{DM82}) a $[0,1]\cup \{\infty\}$-valued stopping time $\sigma$ such that $\llbracket\sigma_{\{\sigma<\infty\}} \rrbracket \subseteq G$ and $P(\sigma < \infty) > 0$, which is equivalent to the assumption that the set $F:=\{\widetilde{X}_{\sigma_+}\ne\overline{X}_\sigma\}$ has strictly positive measure $P(F)>0$. Without loss of generality we can assume that there exists $\delta >0$ such that $F\subseteq\{\sigma +\delta <1\}$. 
Let $(h_i)_{i=1}^\infty$ be a sequence of real numbers decreasing to $0$ that are not atoms of the laws of $\tau_m-\sigma$ for any $m\in\mathbb{N}$. Then defining $\sigma_i:=(\sigma+h_i)_F\wedge 1$ for each $i\in\mathbb{N}$ gives a sequence of stopping times such that $\widetilde{X}_{\sigma_i} =\overline{X}_{\sigma_i}$ for each $i$ and $\sigma_i \searrow\sigma$ on $F$. But this implies that \begin{equation} \widetilde{X}_{\sigma +} =\lim\limits_{i\to\i} \widetilde{X}_{\sigma_i} =\lim\limits_{i\to\i} \overline{X}_{\sigma_i}=\overline{X}_\sigma \ \text{on} \ F, \end{equation} which contradicts $P(F)>0$ and hence also $P(\pi (G))> 0.$ 2) By property 1), modifying $\overline{X}$ at the countably many stopping times $(\tau_m)^\i_{m=1}$ to obtain $\widetilde{X}$ leaves the right limits of the l\`adl\`ag optional strong supermartingale $\widetilde{X}$ invariant, so that these remain \begin{equation} \widetilde{X}_{\tau_m +} = \overline{X}_{\tau_m+}=\overline{X}_{\tau_m} \qquad \text{on} \ \{\tau_m <1\} \quad \text{for each} \ m. \end{equation} Since $\widetilde{M}$ is c\`adl\`ag, this implies that \begin{equation} \widetilde{X}_{\tau_m} -\overline{X}_{\tau_m} = -\Delta_+ \widetilde{X}_{\tau_m} =\Delta_+ \widetilde{A}_{\tau_m} \end{equation} for each $m$, thus proving property 2). \end{proof} Continuing with the proof of Theorem \ref{c1}, the idea is to define the limiting supermartingale $X$ by \eqref{eq:lsup} and to use Lemma \ref{lE6} to enforce the convergence at a well-chosen \emph{countable number} of stopping times $(\tau_m)_{m=1}^\infty$ to obtain the convergence in \eqref{C2} for \emph{all} stopping times. It is rather intuitive that one has to take special care of the jumps of the limiting process $X$. As these can be exhausted by a sequence $(\tau_k)^{\infty}_{k=1}$ of stopping times, the previous lemma can take care of this issue. However, the subsequent example shows that there may also be a problem with the convergence in \eqref{M2} at a stopping time $\tau$ at which $\overline{X}$ is {\it continuous}. \begin{example} Let $\sigma:\Omega \longrightarrow [0,1]$ be a \emph{totally inaccessible} stopping time and $(A_t)_{0 < t \leq 1}$ its compensator, so that $(\mathbbm{1}_{\llbracket \sigma, 1 \rrbracket} (t) - A_t)_{0 \leq t \leq 1}$ is a martingale. Let $(Y_n)^\infty_{n=1}$ be a sequence of random variables independent of $\sigma$ such that $Y_n$ takes values in $\{0, n\}$ and $P[Y_n = n] = \frac{1}{n}$. Define the \emph{continuous} supermartingale $$X^1_t = 1 - A_t, \quad \quad 0 \leq t \leq 1,$$ \noindent and the optional strong supermartingale $$X^2_t = 1 - A_t + \mathbbm{1}_{\llbracket\sigma\rrbracket} (t), \quad \quad 0 \leq t \leq 1.$$ Define the sequences $(M^{1,n})^\infty_{n=1}$ and $(M^{2,n})^\infty_{n=1}$ of martingales by \begin{align*} M^{1,n}_t&=1 - A_t +Y_n \mathbbm{1}_{\llbracket\sigma,1\rrbracket} (t),\\ M^{2,n}_t&=1 - A_t + \mathbbm{1}_{\llbracket\sigma,1\rrbracket} (t) + (Y_n-1)\mathbbm{1}_{\llbracket\sigma+\frac{1}{n},1\rrbracket} (t) \end{align*} for $t \in[0,1]$ and $n \in \mathbb{N}$. Then we have that \begin{align}\begin{split} &M^{1,n}_\tau\overset{P}\longrightarrow X^1_\tau,\\ &M^{2,n}_\tau\overset{P}\longrightarrow X^2_\tau\label{eq:F1:1} \end{split}\end{align} for all $[0,1]$-valued stopping times $\tau$. The left and right limits of $X^1$ and $X^2$ coincide, i.e. $X^1_{-}=X^2_-$ and $X^1_{+}=X^2_{+}$, but $X^1\ne X^2$. As $X^1 = X^1_- = X^1_{+} = X^2_{+}$ coincides with the Fatou limits $\overline{X}^1$ (and $\overline{X}^2$ resp.)
of $(M^{1,n})^\infty_{n=1}$ (and $(M^{2,n})^\infty_{n=1}$ resp.) this example illustrates that we cannot deduce from the Fatou limits $\overline{X}^1$ and $\overline{X}^2$, where it is necessary to correct the convergence by using Lemma \ref{lE6}. Computing the Mertens decompositions $X^1=M^1-A^1$ and $X^2=M^2-A^2$ we obtain \begin{align*} M^1&=1,\\ A^1&=\varrho\wedge t,\\ M^2&=1-\varrho\wedge t+\mathbbm{1}_{\llbracket\sigma,1\rrbracket},\\ A^2&=\mathbbm{1}_{\rrbracket\sigma,1\rrbracket}. \end{align*} This shows that using $X^2$ instead of $\overline{X}^2=X^1$ changes the compensator of $M^2$ not only after the correction in the sense of Lemma \ref{lE6} on $\rrbracket\sigma,1\rrbracket$ but on all of $[0,1]$. \end{example} As the previous example shows, it might be difficult to identify the stopping times $(\tau_m)^\infty_{m=1}$, where one needs to enforce the convergence in probability by using Lemma~\ref{lE6}. Therefore we combine the previous lemmas with an exhaustion argument to prove Theorem \ref{c1}. \begin{proof}[Proof of Theorem \ref{c1}] Let $\mathbb{T}$ be the collection of all families $\top =(\tau_m)^{N(\top)}_{m=1}$ of finitely many $[0,1]\cup\{\infty\}$-valued stopping times $\tau_m$ with disjoint graphs. For each $\top\in\mathbb{T}$, we consider an optional strong supermartingale $X^\top$ that is obtained by taking convex combinations $\widetilde{X}^{n,\top}\in\conv(\overline{M}^n,\overline{M}^{n+1},\ldots)$ such that $\widetilde{X}^{n,\top}_{\tau_m} \stackrel{P}{\longrightarrow} Y^\top_m$ on $\{\tau_m < \infty\}$ for each $m=1,\ldots, N(\top)$ and then setting \begin{equation} X^\top_t(\omega) = \begin{cases} Y^\top_m(\omega) &:\text{$t=\tau_m(\omega)<\infty$ and $m=1,\ldots, N(\top)$},\\ \overline{X}_t(\omega) \quad &: \text{else}, \end{cases} \end{equation} as explained in Lemma \ref{lE6}. Then each $X^\top$ has a Mertens decomposition \begin{equation} X^\top = M^\top - A^\top \end{equation} and we have by part 2) of Lemma \ref{lE6} that \begin{align*} E\left[ \sum\limits^{N(\top)}_{m=1} (X^\top_{\tau_{m}\wedge 1} - \overline{X}_{\tau_{m}\wedge 1})\right]&= E\left[\sum\limits^{N(\top)}_{m=1} \Delta_+ A^\top_{\tau_{m}\wedge 1}\right] \le 1. \end{align*} Therefore \begin{equation} \widehat\vartheta:=\sup\limits_{\top\in\mathbb{T}} E\left[ \sum\limits^{N(\top)}_{m=1} (X^\top_{\tau_{m}\wedge 1} -\overline{X}_{\tau_{m}\wedge 1})\right] \le 1, \end{equation} and there exists a maximising sequence $(\top_k)_{k=1}^\infty$ such that \begin{equation} E\left[ \sum\limits^{N(\top_k)}_{m=1} (X^{\top_k}_{\tau_{m}\wedge 1} -\overline{X}_{\tau_{m}\wedge 1})\right] \nearrow \sup\limits_{\top\in\mathbb{T}} E\left[\sum\limits^{N(\top)}_{m=1} (X^{\top}_{\tau_{m}\wedge 1} -\overline{X}_{\tau_{m}\wedge 1})\right] =\widehat\vartheta. \end{equation} It is easy to see that we can assume that $(\top_k)^\infty_{k=1}$ can be chosen to be increasing, i.e. $\top_k\subseteq\top_{k+1}$ for each $k$. This means that $\top_{k+1}$ just adds some stopping times to those which appear in $\top_k$. 
Then $\widetilde{\top}:=\cup^\i_{k=1} \top_k$ is a countable collection of stopping times $(\tau_m)^\i_{m=1}$ with disjoint graphs and by Lemma \ref{lE6} there exists an optional strong supermartingale $X^{\widetilde{\top}}$ and convex combinations $X^{n,\widetilde{\top}}\in\conv (\overline{M}^n,\overline{M}^{n+1},\ldots)$ such that $X^{n,\widetilde{\top}}_{\widetilde{\tau}_m} \stackrel{P}{\longrightarrow} Y^{\widetilde{\top}}_m$ for all $m$ and \begin{equation} X^{\widetilde{\top}}_t (\omega):= \begin{cases} Y^{\widetilde{\top}}_m(\omega) \quad &: t=\tau_m(\omega)<\infty, \\ \overline{X}_t(\omega) &: \text{else}. \end{cases} \end{equation} As we can suppose without loss of generality that $X^{n,\top_{k+1}}\in\conv (X^{n,\top_k}, X^{n+1,\top_k},\ldots)$ and $X^{n,\widetilde{\top}}\in\conv (X^{n,\top_k}, X^{n+1,\top_{n+1}},\ldots),$ we have that $Y^{\top_k}_m= Y^{\top_{k+1}}_m = Y_m^{\widetilde{\top}}$ on $\{\tau_m <1\}$ for all $k\geq m$. Let $X^{\widetilde{\top}} = M^{\widetilde{\top}} -A^{\widetilde{\top}}$ be the Mertens decomposition of $X^{\widetilde{\top}}$. Then \begin{equation} \Delta_+ A^{\widetilde{\top}}_{\tau_m} = X^{\widetilde{\top}}_{\tau_m} -\overline{X}_{\tau_m} = X^{\top_k}_{\tau_m} -\overline{X}_{\tau_m} =\Delta_+ A^{\top_k}_{\tau_m} \end{equation} on $\{\tau_m <1\}$ for $m\le N(\top_k)$, since as explained in the proof of Lemma \ref{lE6} modifying $\overline{X}$ at countably many stopping times does not change the right limits and these remain \begin{equation} X^{\widetilde{\top}}_{\tau_m +} =\overline{X}_{\tau_m} = X^{\top_k}_{\tau_m+} \quad \text{on $\{\tau_m < 1\}$ for $m\le N(\top_k).$} \end{equation} This implies that \begin{equation} \sum\limits^{N(\top_k)}_{m=1} (X^{\top_k}_{\tau_{m}\wedge 1} - \overline{X}_{\tau_{m}\wedge 1}) =\sum\limits^{N(\top_k)}_{m=1} (X^{\widetilde{\top}}_{\tau_{m}\wedge 1} -\overline{X}_{\tau_{m}\wedge 1}) =\sum\limits^{N(\top_k)}_{m=1} \Delta_+ A^{\widetilde{\top}}_{\tau_{m}\wedge 1} \end{equation} and therefore \begin{equation} E\left[ \sum\limits^\i_{m=1} \Delta_+ A^{\widetilde{\top}}_{\tau_{m}\wedge 1}\right] = E\left[ \sum\limits^\i_{m=1} (X^{\widetilde{\top}}_{\tau_{m}\wedge 1} -\overline{X}_{\tau_{m}\wedge 1})\right] = \widehat\vartheta \end{equation} by the monotone convergence theorem. Now suppose that there exists a $[0,1]$-valued stopping time $\tau$ such that $X^{n,{\widetilde{\top}}}_\tau$ does not converge in probability to $X^{\widetilde{\top}}_\tau$. By Lemma \ref{A2} we can then pass once more to convex combinations $\widetilde{M}^n \in\conv (X^{n,{\widetilde{\top}}}, X^{n+1,{\widetilde{\top}}},\ldots)$ such that there exists a random variable $Y$ such that $\widetilde{M}^n_\tau \stackrel{P}{\longrightarrow} Y$, $\widetilde{M}^n_{\tau_m} \stackrel{P}{\longrightarrow} Y^{\widetilde{\top}}_m$ and an optional strong supermartingale $\widetilde{X}$ such that \begin{equation} \widetilde{X}_t(\omega) = \begin{cases} Y(\omega) &: t=\tau(\omega) \leq 1, \\ X^{\widetilde{\top}}_t(\omega) &: \text{else}. 
\end{cases} \end{equation} However, since $E[\widetilde{X}_\tau -\overline{X}_\tau]>0$ by Lemma \ref{A2}, setting ${\widetilde{\top}}_k:=\top_k\cup \{\top\}$ gives a sequence in $\mathbb{T}$ such that \begin{align*} \lim\limits_{k\to\i} E\left[\sum\limits^{N({\widetilde{\top}}_k)}_{m=1} (X^{\widetilde{\top}_k}_{\tau_{m}\wedge 1} -\overline{X}^{{\widetilde{\top}}_k}_{\tau_{m}\wedge 1})\right]& =\lim\limits_{k\to\i} E\left[\sum\limits^{N(\top_k)}_{m=1} (X^{\top_k}_{\tau_{m}\wedge 1} - \overline{X}_{\tau_{m}\wedge 1})\right] + E[\widetilde{X}_\tau -\overline{X}_\tau] \\ &= \widehat{\vartheta} + E[\widetilde{X}_\tau -\overline{X}_\tau] > \widehat{\vartheta} \end{align*} and therefore a contradiction to the definition of $\widehat{\vartheta}$ as supremum. Here we can take the convex combinations $\widetilde{M}^n \in\conv (X^{n,{\widetilde{\top}}}, X^{n+1,{\widetilde{\top}}},\ldots)$ for all ${\widetilde{\top}}_k$. \end{proof} Combining Theorem \ref{c1} with a similar convergence result for predictable finite variation processes by Campi and Schachermayer \cite{CS06} we now deduce Theorem 2.7 from Theorem 2.6. \begin{proof}[Proof of Theorem \ref{t1}] We consider the extension of Theorem \ref{c1} to local martingales first. For this, let $(X^n)^\infty_{n=1}$ be a sequence of non-negative local martingales $X^n=(X^n_t) _{0\leq t \leq 1}$ and $(\sigma^n_m)^{\infty}_{m=1}$ a localising sequence of $[0,1]$-valued stopping times for each $X^n$. Then, for each $n\in\mathbb{N}$, there exists $m(n)\in\mathbb{N}$ such that $P(\sigma^n_m <1)<2^{-(n+1)}$ for all $m\geq m(n)$. Define the martingales \begin{equation} M^n:=(X^n)^{\sigma^n_{m(n)}} \end{equation} that satisfy $M^k=X^k$ for all $k\geq n$ on $F_n:= \bigcap\limits_{k\geq n} \{\sigma^k_{m(k)} =1\}$ with $P(F_n)>1-2^{-n}$. By Theorem \ref{c1} there exist a sequence of convex combinations $\widetilde{M}^n \in \conv (M^n, M^{n+1}, \dots)$ and an optional strong supermartingale $X$ such that $$ \widetilde{M}^k_{\tau} \stackrel{P}{\longrightarrow} X_{\tau}\quad\text{on $F_n$} $$ for all $[0,1]$-valued stopping times $\tau$. Therefore taking $\widetilde{X}^n \in \conv (X^n, X^{n+1}, \dots)$ with the same weights as $\widetilde{M}^n \in \conv (M^n, M^{n+1}, \dots)$ gives $$ \widetilde{X}^k_{\tau} \stackrel{P}{\longrightarrow} X_{\tau}\quad\text{on $F_n$} $$ for all $[0,1]$-valued stopping times $\tau$ and for each $n$ and, since $\widetilde{X}^k = \widetilde{M}^k$ for all $k\geq n$. But, since $P(F_n^c)<2^{-n} \to 0$, as $n\to\infty$ this implies that $\widetilde{X}^k_{\tau} \stackrel{P}{\longrightarrow} X_{\tau}$ for all $[0,1]$-valued stopping times $\tau$. This finishes the proof in the case when the $X^n$ are local martingales. For the case of optional strong supermartingales, let $(X^n)^\infty_{n=1}$ be a sequence of non-negative optional strong supermartingales $X^n=(X^n_t)_{0 \leq t \leq 1}$ and $X^n = M^n - A^n$ their Mertens decompositions into a c\`adl\`ag local martingale $M^n$ and a predictable, non-decreasing, l\`adl\`ag process $A^n$. As the local martingales $M^n \geq X^n + A^n \geq X^n$ are non-negative, there exists by the first part of the proof a sequence of convex combinations $\widehat{M}^n\in \conv (M^n, M^{n+1}, \dots)$ and an optional strong supermartingale $\widehat{X}$ with Mertens decomposition $\widehat{X} = \widehat{M} - \widehat{A}$ such that \begin{equation}\label{chris2} \widehat{M}^n_{\tau} \stackrel{P}{\longrightarrow} \widehat{X}_{\tau} \end{equation} for all $[0,1]$-valued stopping times $\tau$. 
Now let $\widehat{A}^n \in \conv (A^n, A^{n+1},\dots)$ be the convex combinations that are obtained with the same weights as the $\widehat{M}^n$. Then there exists a sequence $(\widetilde{A}^n)^\infty_{n=1}$ of convex combinations $\widetilde{A}^n \in \conv (\widehat{A}^n, \widehat{A}^{n+1}, \dots)$ and a predictable, non-decreasing, l\`adl\`ag process $\widetilde{A}$ such that \begin{equation}\label{chris1} P\left[\lim\limits_{n\to\i} \widetilde{A}^n_t =\widetilde{A}_t,\ \forall t\in[0,1]\right] =1. \end{equation} Indeed, we only need to show that $(\widetilde{A}^n_1)_{n\in\mathbb{N}}$ is bounded in $L^0(P)$, then \eqref{chris1} follows from Proposition 3.4 of Campi and Schachermayer in \cite{CS06}. By monotone convergence we obtain \begin{align*} E[\widetilde{A}^n_1] = \lim_{m\to\infty} E[\widetilde{A}^n_{1\wedge\sigma^n_m}] = \lim_{m\to\infty} E[\widetilde{M}^n_{1\wedge\sigma^n_m} -\widetilde{X}^n_{1\wedge\sigma^n_m}]\le 1 \end{align*} for all $n\in \mathbb{N}$ and therefore the boundedness in $L^0(P)$. Here $\widetilde{M}^n \in \conv (\widehat{M}^n, \widehat{M}^{n+1}, \dots)$ and $\widetilde{X}^n \in \conv (\widehat{X}^n, \widehat{X}^{n+1}, \dots)$ denote convex combinations having the same weights as the $\widehat{A}^n$ and $(\sigma^n_m)^\infty_{m=1}$ is a localising sequence of stopping times for the local martingale~$\widetilde{M}^n$. Taking convex combinations does not change the convergence \eqref{chris2}, and so $\widetilde{X}^n \in \conv (X^n, X^{n+1}, \dots)$ is a sequence of convex combinations and $\widetilde{X}:=\widehat{X} - \widehat{A}$ an optional strong supermartingale such that \begin{equation} \widetilde{X}^n_{\tau} \stackrel{P}{\longrightarrow} \widetilde{X}_{\tau} \end{equation} for all $[0,1]$-valued stopping times $\tau$. \end{proof} \begin{remark} \noindent \begin{itemize} \item[\textbf{1)}] Observe that the proof of Theorem \ref{t1} actually shows that the limiting optional strong supermartingale $X$ is equal to $\overline{X}$ up to a set that is included in the graphs of countably many stopping times $(\tau_m)^\infty_{m=1}$. \item[\textbf{2)}] Replacing Koml\'os' lemma (Corollary \ref{kl}) by Koml\'os' subsequence theorem (Theorem \ref{kssthm}) in the proof of Theorems \ref{c1} and \ref{t1} we obtain by taking subsequences of subsequences rather than convex combinations of convex combinations the following stronger assertion: Given a sequence $(X^n)^\infty_{n=1}$ of non-negative optional strong supermartingales $X^n=(X^n_t)_{0 \leq t \leq 1}$ starting at $X^n_0=1$ there exists a subsequence $(X^{n_k})^\infty_{k=1}$ and an optional strong supermartingale $X=(X_t)_{0 \leq t \leq 1}$ such that the C\'esaro means $\frac{1}{J} \sum^J_{j=1} X^{n_{k_j}}$ of any subsequence $(X^{n_{k_j}})^\infty_{j=1}$ converge to $X$ in probability at all finite stopping times, as $J\to \infty.$ \end{itemize} \end{remark} \section{A counter-example}\label{sec:4} At a \emph{single} finite stopping time $\tau$ we may, of course, pass to a subsequence to obtain that $\widetilde{M}^n_\tau$ converges not only in probability but also $P$-almost surely to $\widetilde{X}_\tau$. The next proposition shows that we cannot strengthen Theorem \ref{c1} to obtain $P$-almost sure convergence for \emph{all} finite stopping times simultaneously. The obstacle is, of course, that the set of all stopping times is far from being countable. 
\begin{proposition}\label{Ex2} Let $(M^n)^\infty_{n=1}$ be a sequence of independent non-negative continuous martingales $M^n=(M^n_t)_{0 \leq t \leq 1}$ starting at $M^n_0=1$ such that \begin{equation} M^n_\tau \overset{P}{\longrightarrow} 1-\tau\label{Ex2.2} \end{equation} for all $[0,1]$-valued stopping times $\tau$. Then we have for all $\varepsilon>0$ and all sequences $(\widetilde{M}^n)^\infty_{n=1}$ of convex combinations $\widetilde{M}^n\in\conv(M^n, M^{n +1},\ldots)$ that there exists a stopping time $\tau$ such that $$P\left[\varlimsup_{n\to\infty} \widetilde{M}^n_\tau =+\infty\right] > 1- \varepsilon$$ \end{proposition} \begin{remark} If $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{0 \leq t \leq 1}, P)$ supports a sequence $(W^n)^\infty_{n=1}$ of independent Brownian motions $W^n=(W^n_t)_{0 \leq t \leq 1}$, the existence of a sequence $(M^n)_{n=1}^\infty$ verifying \eqref{Ex2.2} follows similarly as in the proof of Theorem \ref{t2} in Section \ref{sec:5} below. \end{remark} For the proof of Proposition $\ref{Ex2}$ we will need the following auxiliary lemma. \begin{lemma} \label {Ex2.1} In the setting of Proposition $\ref{Ex2}$, let $\tau$ and $\sigma$ be two $[0, 1]$-valued stopping times such that $\tau\leq \sigma$ and $\tau<\sigma$ on some $A\in\mathcal{F}_\tau$ with $P(A)>0.$ Then there exists, for all $c>1$, a constant $\gamma=\gamma(c,\tau, \sigma)>0$ and a number $N=N(\tau,\sigma) \in \mathbb{N}$ such that $$ P\left(\sup_{t\in[\tau,\sigma]} \widetilde{M}^n_t > c + 1\right) \geq \gamma$$ for all $n\geq N$. \end{lemma} \begin{proof} Let $\alpha=\frac{E[(\sigma-\tau)\mathbbm{1}_A]}{P(A)}$ and $\varepsilon \in(0,1)$ such that $\alpha > (c+4) \varepsilon $ and \begin{align*} P(B_n) &\geq (1-\varepsilon) P (A) \end{align*} for all $n\geq N$, where \begin{align*} &A_n:= \{| \widetilde{M}^n_\tau - (1-\tau) | < \varepsilon\} \cap A,\\ &B_n:= \{| \widetilde{M}^n_\sigma - (1-\sigma) | < \varepsilon\} \cap A_n. \end{align*} Then setting $\varrho_n:=\inf\{t \in [\tau, \sigma]~|~\widetilde{M}^n_t > c + 1\}$ we can estimate \begin{align*} E[\widetilde{M}^n_\tau \mathbbm{1}_{A_n}] &=E[\widetilde{M}^n_{\varrho_n\wedge 1} \mathbbm{1}_{A_n}]\\ &=E\left[\widetilde{M}^n_{\varrho_n\wedge 1} \Big(\mathbbm{1}_{A_n\cap\{\varrho_n \leq 1\}} + \mathbbm{1} _{\{\varrho_n>1\}\cap B_n}+\mathbbm{1}_{\{\varrho_n>1\}\cap B_n^c\cap A_n}\Big)\right]\\ &\leq (c + 1) P ( \varrho_n\leq 1,\ A_n) + E[(1-\sigma + \varepsilon) \mathbbm{1} _{B_n}]+ (c + 1) P (B_n^c\cap A_n) \end{align*} by the optional sampling theorem and the continuity of $\widetilde{M}^n$. Since $$E[\widetilde{M}^n_{\tau} \mathbbm{1}_{A_n}] \geq E[(1-\tau-\varepsilon) \mathbbm{1}_{A_n}] \geq E[(1-\tau-\varepsilon)\mathbbm{1}_{B_n}],$$ we obtain that \begin{align*} E\Big[\big((1-\tau-\varepsilon) - (1-\sigma+\varepsilon)\big)\mathbbm{1}_{B_n}\Big] - (c+1) \big(P(A)-P(B_n)\big)&\leq (c + 1) P(\varrho_n\leq 1,\ A_n)\\ &\leq (c+1) P(\varrho_n \leq 1) \end{align*} and therefore that \begin{align*} \gamma:&= \frac{\alpha -3\varepsilon - (c+1)\varepsilon}{c+1} P(A) \leq P(\varrho_n \leq 1)=P\left(\sup _{t \in[\tau, \sigma]} \widetilde{M}^n_{\tau} > c + 1\right) \end{align*} for all $n\geq N$, where $\gamma>0$ by our choice of $\varepsilon$, as $E[(\sigma-\tau)\mathbbm{1}_{B_n}]\geq (\alpha-\varepsilon)P(A)$. \end{proof} \begin{proof}[Proof of Proposition \ref{Ex2}] We shall define $\tau$ as an increasing limit of a sequence of stopping times $\tau_m$. 
For this, we set $n_0=0$, $\tau_0=0$ and $\sigma_0= \frac{1}{2}$ and then define for $m \in \mathbb{N}$ successively \begin{align*} n_m(\omega)&:=\inf \{n\in\mathbb{N}~|~n>n_{m-1}(\omega) \text{ and $\exists t\in[\tau_{m-1}(\omega), \sigma_{m-1} (\omega)]$ with $\widetilde{M}^n_t(\omega) \geq 2^m+1$}\},\\ \tau_m(\omega)&:=\inf\big\{t\in\big(\tau_{m-1}(\omega),\sigma_{m-1}(\omega)\big)~\big|~ \widetilde{M}_t^{n_{m}(\omega)} (\omega) \geq 2^m+1\big\}\wedge 1,\\ \sigma_m(\omega)&:=\inf\{t>\tau_m(\omega)~|~\widetilde{M}_t^{n_{m}(\omega)} (\omega) < 2^m \}\wedge \sigma_{m-1} (\omega). \end{align*} By construction and the continuity of $\widetilde{M}^n$ we then have, for all $k\geq m$, that $$\text{$\widetilde{M}^{n_m(\omega)}_t (\omega) \geq 2^m$ for all $t\in[\tau_k(\omega), \sigma_k(\omega)]$}$$ on $\{\tau _k < 1\}$. Therefore setting $\tau:=\lim_{m\to \infty} \tau_m$ gives that $$\text{$\widetilde{M}^{n_m(\omega)}_\tau(\omega) \geq 2^m$ for all $m$}$$ on $\{\tau<1\}$. So it only remains to show that \begin{equation} P(\tau<1)\geq 1-\varepsilon. \label{eq:EX2.1} \end{equation} We prove \eqref{eq:EX2.1} by induction. For this, assume that there exists for each $m\in\mathbb{N}_0,$ some $\alpha_m>0$ and $N_m\in\mathbb{N}_0$ such that $P(D_m)< 1 - \ve2^{-m}$ for \begin{equation} D_m:=\{\sigma_m>\tau_m+\alpha_m,\ n_m\in(N_{m-1}, N_m]\} \label{eq:Ex2.2} \end{equation} Indeed, for $m=0$, we can choose $\alpha_0=\frac{1}{2}, N_{-1}=0$ and $N_0=1$. Regarding the induction step we first show that $n_m<\infty$ $P$-a.s.~on $D_{m-1}$ . To that end, we can assume w.l.o.g.~that the $\big(\widetilde{M}^n\big)_{n=1}^\infty$ are also independent by choosing the blocks of which we take the convex combinations disjoint and passing to a subsequence. As we are only making an assertion about the limes superior, this will be sufficient. Moreover, we observe that $$F:=\{n_m<\infty\}\cap D_{m-1}=\cup_{n=N_{m-1}}^\infty F_n\cap D_{m-1}$$ with $F_n:=\big\{\exists t\in\big(\tau_{m-1} (\omega), \sigma_{m-1}(\omega)\big]~\big|~ \widetilde{M}^n_t(\omega)\geq 2^m+1\big\}$. Then using the estimate $1-x\leq \exp (-x)$ and the independence of the $F_n$ of each other and $D_{m-1}$ gives \begin{align*} P(D_{m-1}\cap F^c)&=\lim_{k\to\infty} P\Bigg(\bigcap\limits_{n=N_{m-1}}^k F_n^c\Bigg) P(D_{m-1})\\ &=\lim_{k\to\infty} \prod^k_{n=N_{m-1}} \big(1-P(F_n)\big) P(D_{m-1})\\ &\leq \lim_{k\to\infty} \exp \left(-\sum^k_{n=N_{m-1}} P(F_n)\right) P(D_{m-1}). \end{align*} Since $\sum^{\infty}_{n=N_{m-1}} P(F_n)=\infty$ by Lemma \ref{Ex2.1}, this implies that $P(D_{m-1} \cap F^c) = 0$ and hence that $n_m<\infty$ $P$-a.s.~on $D_{m-1}$. More precisely, by applying Lemma \ref{Ex2.1} for $c=2^m$ with $\tau=\tau_{m-1}$, $\sigma=\sigma_{m-1}$ and $A=D_{m-1}$ to $\widetilde{M}^n$ for $n\geq N_{m-1}$ we get that $P(F_n)\geq \gamma>0$ for all $n\geq N_{m-1}$. Therefore $\tau_m < 1$ $P$-a.s.~on $D_{m-1}$ as well. By the continuity of the $\widetilde{M}^n$ and, as $\tau_m<\frac{1}{2}$ on $D_{m-1}$, we obtain that $\frac{1}{2} \geq \sigma_m>\tau_m$ $P$-a.s. on $D_{m-1}$, which finishes the induction step. Now, since $\{\tau <1\}\supseteq \cap_{m=1}^\infty D_m=:D$ and $$P(D)\geq 1- \sum_{m=1}^\infty P(D_m^c)= 1- \sum_{m=1}^\infty \frac{\varepsilon}{2^m}=1-\varepsilon,$$ we have established \eqref{eq:Ex2.2}, which completes the proof of the proposition. \end{proof} \section{Proof of Theorem \ref{t2}}\label{sec:5} We now pass to the proof of Theorem \ref{t2}. The following lemma yields a building block. 
\begin{lemma} \label {l:t2} Let $W=(W_t) _{0 \leq t \leq 1}$ be a standard Brownian motion on $(\Omega, \mathcal{F}, \mathbb{F}, P)$ and $\varrho$ a $[0,1] \cup\{\infty\}$-valued stopping time. Then there exists a sequence $(\varphi^n)^{\infty}_{n=1}$ of predictable integrands of finite variation such that $M^n:= \varphi^n \stackrel{\mbox{\tiny$\bullet$}}{} W \geq-1$ is a bounded martingale for each $n\in \mathbb{N}$ and \begin{equation} M^n_\tau \xrightarrow{P\text{-a.s.}} - \mathbbm{1}_{\rrbracket \varrho, 1 \rrbracket} (\tau)=-\mathbbm{1}_{\{\tau > \varrho\}},\quad \text{as $n\to\infty$}, \end{equation} for all $[0, 1]$-valued stopping times $\tau$. \end{lemma} \begin{proof} We consider the case $\varrho \equiv 0$ first. There are many possible choices for the integrands $(\varphi^n)^{\infty}_{n=1}$. To come up with one, we use the deterministic functions $$\psi^n_t:= \frac{1}{2^{-n}-t}\mathbbm{1}_{(0,2^{-n})}(t).$$ Then the continuous martingales $N^n:= (\psi^n \stackrel{\mbox{\tiny$\bullet$}}{} W_t)_{0 \leq t <2^{-n}}$ are well-defined, for each $n \in \mathbb{N}.$ It follows from the Dambis--Dubins--Schwarz Theorem that the stopping times \begin{align*} \tau_n&:= \inf\{t \in(0,2^{-n}) | N^n_t= -1\},\\ \sigma_{n,k}&:= \inf\{t \in(0,2^{-n}) | N^n_t >k\} \end{align*} are $P$-a.s.~strictly smaller than $2^{-n}$ for all $n,k\in \mathbb{N}$, since $$\langle N^n\rangle_t= \frac{1}{2^{-n}-t} - \frac{1}{2^{-n}} \quad \mbox{for} \quad t\in [0,2^{-n})$$ and $\lim_{t\nearrow 2^{-n}}\langle N^n\rangle_t=\infty$. Therefore setting $\widetilde{\psi}^{n,k} = \psi^n \mathbbm{1}_{\llbracket 0,\tau_n\wedge \sigma_{n,k} \rrbracket}$ gives a sequence $$\widetilde{N}^{n,k} = \widetilde{\psi}^{n,k} \stackrel{\mbox{\tiny$\bullet$}}{} W = (\psi^n \stackrel{\mbox{\tiny$\bullet$}}{} W)^{\tau_n\wedge\sigma_{n,k}}$$ of bounded martingales such that, for all $[0, 1]$-valued stopping times $\tau$, $$\widetilde{N}^{n,k}_{\tau}\xrightarrow{P\text{-a.s.}} -1 \quad \mbox{on}~\{\tau\geq 2^{-n}\}, \quad \mbox{as} \quad k\to\infty,$$ since $\sigma_{n,k} \nearrow 2^{-n}$ $P$-a.s,~as $k\to \infty.$ Defining $\varphi^n:=\widetilde{\psi}^{n,k(n)}$ and $M^n=\widetilde{N}^{n,k(n)}$ as a suitable diagonal sequence such that $M^n_{2^{-n}}=\widetilde{N}^{n,k(n)}_{2^{-n}}\to -1$, as $n\to\infty$, then yields the assertion for $\varrho\equiv0$, as $M^n_0=0$ for all $n\in\mathbb{N}$ and $\mathbbm{1}_{\{\tau \geq 2 ^{-n}\}} \xrightarrow{P\text{-a.s.}} \mathbbm{1}_{\{\tau > 0\}}$, as $n\to\infty$. 
Next we observe that, if we consider for some $[0, 1] \cup \{\infty\}$-valued stopping time $\sigma$ the stopped Brownian notion $W^{\sigma}=(W_{\sigma \wedge t})_{0 \leq t \leq 1}$ then we obtain by the above argument that \begin{equation*} (M^n)^{\sigma}_{\tau} = M^n_{\sigma \wedge \tau} = \big(\varphi^n \stackrel{\mbox{\tiny$\bullet$}}{} (W^{\sigma})\big)_{\tau} \xrightarrow{P\text{-a.s.}} \mathbbm{1}_{(0,1)} (\sigma \wedge \tau) \end{equation*} for every $[0,1]$-valued stopping time $\tau.$ For the general case $\varrho \not\equiv 0$, consider the process $\overline{W}_t:=(W_{t+\varrho} - W_\varrho)_{0 \leq t \leq 1}$ which is a Brownian motion with respect to the filtration $\overline{\mathbb{F}}:=(\overline{\mathcal{F}}_t)_{0 \leq t \leq 1}:=(\mathcal{F}_{(t+\varrho)\wedge 1})_{0 \leq t \leq 1}$ that is independent of $\mathcal{F}_{\varrho}$ and stopped at the $\overline{\mathbb{F}}$-stopping time $\bar{\sigma}:=(1-\varrho).$ Then the general case $\varrho\not\equiv 0$ follows by applying the result for $\varrho \equiv 0$ for the stopped Brownian motion $\overline{W}$ and the stopping time $\bar{\tau} = (\tau - \varrho)_{\{\tau > \varrho\}}$ which is always smaller than $\bar{\sigma}$. Indeed, as the corresponding martingales $\overline{M}^n$ obtained for $\overline{W}$ with respect to $(\overline{\mathcal{F}}_t)_{0 \leq t \leq 1}$ start at $0$, the processes \begin{equation*} M^n_t(\omega) = \begin{cases} 0 &:t \leq \varrho (\omega) \wedge 1,\\ \overline{M}^n_{t+\varrho(\omega)}(\omega) &:\varrho(\omega) < t \leq 1 \end{cases} \end{equation*} are martingales with respect to the filtration $\mathbb{F}=(\mathcal{F}_t)_{0 \leq t \leq 1}$ that converge to $\mathbbm{1}_{\llbracket \varrho, 1 \rrbracket} (\tau)$ $P$-a.s. for every $[0,1]$-valued $\mathbb{F}$-stopping time $\tau$. \end{proof} \begin{proof}[Proof of Theorem \ref{t2}] Let $X=M - A$ be the Mertens decomposition of the optional strong supermartingale $X$. It is then sufficient to show the assertion for $M$ and $A$ separately. 1) We begin with the local martingale $M$. As any localising sequence $(\tau_m)^\infty_{m=1}$ of stopping times for $M$ gives a sequence $\widetilde{M}^m:= M^{\tau_m}$ of martingales that converges uniformly in probability, we obtain a sequence $\overline{M}^n$ of martingales that converges $P$-a.s.~uniformly to $M$ by passing to a subsequence $(\widetilde{M})^\infty_{n=1}$ such that $P(\tau_n <1) < 2^{-n}$. To see that we can choose the $M^n$ to be bounded, we observe that setting $$\overline{M}_t^{n,k}:= E [\overline{M}^n_1 \wedge k \vee - k | \mathcal{F}_t]$$ for $t\in [0,1]$ gives for every martingale $\overline{M}^n$ a sequence of bounded martingales $\overline{M}^{n,k} = (\overline{M}^{n,k}_t)_{0 \leq t \leq 1}$ such that $\overline{M}^{n,k}_1\xrightarrow{L^1(P)}\overline{M}^n_1,$ as $k\to \infty$, and therefore locally in $\mathcal{H}^1(P)$ by Theorem 4.2.1 in \cite{J79}. By the Burkholder-Davis-Gundy inequality (see for example Theorem IV.48 in \cite{P04}) this also implies uniform convergence in probability and hence $P$-a.s.~uniform convergence by passing to a subsequence, again indexed by $k$. 
Then taking a diagonal sequence $(\overline{M}^{n,k(n)})^\infty_{n=1}$ gives a sequence of martingales $(M^n)^\infty_{n=1} = (\overline{M}^{n,k(n)})^\infty_{n=1}$ that converges $P$-a.s.~uniformly to $M$ and therefore also satisfies \eqref{eq:t2} for every $[0,1]$-valued stopping time $\tau.$ 2) To prove the assertion for the predictable part $A$, we decompose $$A=A^{c}+\sum^\infty_{i=1} \Delta_{+} A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket}+\sum^\infty_{j=1} \Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}$$ into its continuous part $A^c$, its totally right-discontinuous part $A^{rd}:=\sum^\infty_{i=1} \Delta_{+}A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket}$ and totally left-discontinuous part $A^{ld}:=\sum^\infty_{j=1} \Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}$. By superposition it is sufficient to approximate $-A^c$, each single right jump process $-A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket}$ for $i\in\mathbb{N}$ and each single left jump process $-\Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}$ for $j\in\mathbb{N}$ separately. Indeed, let $(M^{c,n})_{n=1}^\infty$, $(M^{rd,i,n})_{n=1}^\infty$ for each $i\in\mathbb{N}$ and $(M^{ld,j,n})_{n=1}^\infty$ for each $j\in\mathbb{N}$ be sequences of bounded martingales such that \begin{align} M^{c,n}_{\tau}&\stackrel{P}\longrightarrow-A^{c}_\tau,\label{p:eq1}\\ M^{rd,i,n}_{\tau}&\stackrel{P}\longrightarrow-\Delta_{+} A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket}(\tau),\label{p:eq2}\\ M^{ld,j,n}_{\tau}&\stackrel{P}\longrightarrow-\Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}(\tau),\label{p:eq3} \end{align} as $n\to\infty$, for all $[0,1]$-valued stopping times $\tau$. Then setting $$M^n:=M^{c,n}+\sum_{i=1}^n M^{rd,i,n}+\sum_{j=1}^n M^{ld,j,n}$$ gives a sequence of bounded martingales such that $M^n_{\tau}\stackrel{P}\longrightarrow-A_\tau$, as $n\to\infty$, for all $[0,1]$-valued stopping times $\tau$. 2.a) We begin with showing the existence of $(M^{rd,i,n})_{n=1}^\infty$ for some fixed $i\in\mathbb{N}$. For this, we set $$\vartheta_t^{i,n}:= (\Delta_+ A_{\sigma_i} \wedge n)\mathbbm{1} _{\rrbracket \sigma_i, 1 \rrbracket} \varphi^n_{t} \in L^2(W),$$ where $(\varphi^n)^\infty_{n=1}$ is a sequence of integrands as obtained in Lemma \ref{l:t2} for the stopping time $\varrho=\sigma_i$. Then it follows immediately from Lemma \ref{l:t2} that $\vartheta^{i,n} \stackrel{\mbox{\tiny$\bullet$}}{} W_\tau \xrightarrow{P\text{-a.s.}} \Delta_+ A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket} (\tau)$, as $n\to\infty$, for every $[0,1]$-valued stopping time $\tau$ and therefore that \begin{equation*} M^{rd,i,n}:= \vartheta^{i,n} \stackrel{\mbox{\tiny$\bullet$}}{} W \end{equation*} gives a sequence of bounded martingales such that \eqref{p:eq2} holds. Note that by the construction of the integrands $\varphi^n$ in Lemma \ref{l:t2} the approximating martingales $M^{rd,i,n}$ are $0$ on $\llbracket 0,\sigma_i\rrbracket$, constant to either $-\Delta_+ A_{\sigma_i} \wedge n$ or $(\Delta_+ A_{\sigma_i} \wedge n) k(n)$ on $\llbracket \sigma_i+2^{-n},1\rrbracket$. Therefore they converge $P$-a.s.~uniformly to $-\Delta_+ A_{\sigma_i}$ on $\llbracket \sigma_i+2^{-m},1\rrbracket$ for each $m\in\mathbb{N}$. 
2.b) To obtain the approximating sequence $(M^{ld,i,n})_{n=1}^\infty$ for some fixed $j\in\mathbb{N}$, we observe that the stopping time $\varrho_j$ is predictable and let $(\varrho_{j,k})^\infty_{k=1}$ be an announcing sequence of stopping times, i.e.~a non-decreasing sequence of stopping times such that $\varrho_{j,k} < \varrho_j$ on $\{\varrho_j > 0\}$ and $\varrho_{j,k} \xrightarrow{P\text{-a.s.}} \varrho_j,$ as $k\to \infty$. Since $\Delta A_{\varrho_j}\in L^1(P)$ is $\mathcal{F}_{\varrho_j-}$-measurable by Theorem IV.67.b) in \cite{DM78} and $\mathcal{F}_{\varrho_j-}=\bigvee^\infty_{k=1} \mathcal{F}_{\varrho_{j,k}}$ by Theorem IV.56.d) in \cite{DM78}, we have that \begin{equation} E[\Delta A_{\varrho_j} | \mathcal{F}_{\varrho_{j,k}}] \xrightarrow{P\text{-a.s.}} \Delta A_{\varrho_j}, \quad \mbox{as} \quad k\to\infty, \end{equation} by martingale convergence. Therefore setting \begin{equation} \widetilde{A}^{ld,j,k}:= E[\Delta A_{\varrho_j} | \mathcal{F}_{\varrho_{j,k}} ] \mathbbm{1}_{\rrbracket \varrho_{j,k},1 \rrbracket} \end{equation} gives a sequence of single right jump processes that converges to $\Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}$ $P$-a.s.~at each $[0,1]$-valued stopping time $\tau$, since $\mathbbm{1}_{\rrbracket \varrho_{j,k,},1 \rrbracket} (\tau) \xrightarrow{P\text{-a.s.}} \mathbbm{1}_{\llbracket \varrho_j,1 \rrbracket} (\tau),$ as $k\to\infty$, for all $[0,1]$-valued stopping times $\tau$. By part 2.a) there exists for each $k\in\mathbb{N}$ a sequence $(\widetilde{M}^{j,k,n})_{n=1}^\infty$ of bounded martingales such that $\widetilde{M}^{j,k,n}_\tau \xrightarrow{P\text{-a.s.}}-\widetilde{A}^{ld,j,k}_\tau$, as $n\to\infty$, for all $[0,1]$-valued stopping times $\tau$. For the stopping time $\varrho_j$ we can therefore find a diagonal sequence $(\widetilde{M}^{j,k,n(k)})_{k=1}^\infty$ such that $\widetilde{M}^{j,k,n(k)}_{\varrho_j} \xrightarrow{P\text{-a.s.}}-\widetilde{A}^{ld,j,k}_{\varrho_j}$, as $k\to\infty$. By the proof of Lemma \ref{l:t2} and part 2.a) above we can choose the martingales $\widetilde{M}^{j,k,n(k)}$ such that $\widetilde{M}^{j,k,n(k)}\equiv0$ on $\llbracket 0,\varrho_{j,k}\rrbracket$ and $\widetilde{M}^{j,k,n(k)}\equiv -\big(E[\Delta A_{\varrho_j} | \mathcal{F}_{\varrho_{j,k}} ]\wedge n(k)\big)$ on $\llbracket (\varrho_{j,k}+2^{-n(k)})_{F_k},1\rrbracket$, where the set $$F_k:=\left\{\widetilde{M}^{j,k,n(k)}_{\varrho_j+2^{-n(k)}}= -\big(E[\Delta A_{\varrho_j} | \mathcal{F}_{\varrho_{j,k}} ]\wedge n(k)\big)\right\}$$ has probability $P(F_k)>1-2^{-k}$. This sequence $(\widetilde{M}^{j,k,n(k)})_{k=1}^\infty$ therefore already satisfies $\widetilde{M}^{j,k,n(k)}_\tau \xrightarrow{P\text{-a.s.}}-\Delta A_{\varrho_j} \mathbbm{1}_{\llbracket \varrho_j, 1 \rrbracket}(\tau)$ for all $[0,1]$-valued stopping times $\tau$ and we have \eqref{p:eq3}. 2.c) For the approximation of the continuous part $A^c$, we observe that by the left-continuity and adaptedness of $A^c$ there exists a sequence $(\widetilde{A}^n)^\infty_{n=1}$ of non-decreasing integrable simple predictable processes that converges uniformly in probability to $A^c$ and hence $P$-a.s.~uniform by passing to a fast convergent subsequence again indexed by $n$; see for example Theorem II.10 in \cite{P04}. 
Recall that a simple predictable process is a predictable process $\widetilde{A}$ of the form \begin{equation}\label{simple} \widetilde{A}=\sum^m_{i=1} \Delta_{+} A_{\sigma_i} \mathbbm{1}_{\rrbracket \sigma_i, 1 \rrbracket}, \end{equation} where $(\sigma_i)^m_{i=1}$ are $[0,1]\cup \{\infty\}$-valued stopping times such that $\sigma_i < \sigma_{i+1}$ for $i=1, \dots, m-1$ and $\Delta_+ A_{\sigma_i}$ is $\mathcal{F}_{\sigma_i}$-measurable. By part 2.a) there exists, for each $n\in\mathbb{N}$, a sequence $(\widetilde{M}^{n,k})_{k=1}^\infty$ of martingales such that $\widetilde{M}^{n,k}_\tau\xrightarrow{P\text{-a.s.}}-\widetilde{A}^n_\tau$, as $k\to\infty$, for all $[0,1]$-valued stopping times $\tau$. Therefore we can pass to a diagonal sequence $\widetilde{M}^{n,k(n)}$ such that \begin{equation} P\left[\lim_{n\to\infty}\widetilde{M}^{n,k(n)}_q=-A^c_q,\ \forall q\in\mathbb{Q}\cap[0,1]\right]=1.\label{p:eq4} \end{equation} By Theorem \ref{t1} there exists a sequence $(M^n)_{n=1}^\infty$ of convex combinations $$M^n\in\conv(\widetilde{M}^{n,k(n)},\widetilde{M}^{n+1,k(n+1)},\ldots)$$ and an optional strong supermartingale $X$ such that $M^n_\tau\stackrel{P}\longrightarrow X_\tau$ for all $[0,1]$-valued stopping times $\tau$. To complete the proof it therefore only remains to show that $X=-A^c$. For this, we argue by contradiction and assume that the optional set $G:=\{X\ne-A^c\}$ is not evanescent, i.e.~that $P\big(\pi(G)\big)>0$, where $\pi\big((\om,t)\big)=\omega$ denotes the projection on the first component. By the optional cross-section theorem (Theorem IV.84 in \cite{DM82}) there then exists a $[0,1]\cup\{\infty\}$-valued stopping time $\tau$ such that $X_\tau\ne-A^c_\tau$ on $F:=\{\tau<\infty\}$ with $P(F)>0$, which we can decompose into an accessible stopping time $\tau^A$ and a totally inaccessible stopping time $\tau^I$ such that $\tau=\tau^A\wedge\tau^I$ by Theorem IV.81.c) in \cite{DM78}. On $\{\tau^I<\infty\}$ we obtain that $M^n_{\tau^I-}=M^n_{\tau^I}\stackrel{P}\longrightarrow X_{\tau^I}$ and $A^c_{\tau^I-}=A^c_{\tau^I}$ from the continuity of $M^n$ and $A^c$. Therefore $X_{\tau^I}=-A^c_{\tau^I}$, as $M^n_{\tau^I-}\stackrel{P}\longrightarrow X_{\tau^I-}$ by Proposition \ref{prop:ti} and $X_{\tau^I-}=-A^c_{\tau^I-}$ by \eqref{p:eq4}. This implies that $P(\tau^I<\infty)=0$ and hence $P(\tau^A<\infty)=P(F)>0$. Since $\tau^A$ is accessible, there exists a predictable stopping time $\sigma$ such that $P(\tau^A=\sigma<\infty)>0$. By the strong supermartingale property of $X$ we have that $$\text{$X_{\sigma-}\geq E[X_{\sigma}|\mathcal{F}_{\sigma-}]\geq E[X_{\sigma+}|\mathcal{F}_{\sigma-}]$ on $\{\sigma<\infty\}$,}$$ as $\sigma$ is predictable. Since $X_-=-A^c_-$ and $X_+=-A^c_+$ by \eqref{p:eq4}, this implies that $X_{\sigma}=-A^c_{\sigma}$ by the continuity of $A^c$. However, this contradicts $P(F)>0$ and therefore shows \eqref{p:eq1}, which completes the proof. \end{proof} \section{Proof of Theorem \ref{t3}} \label{sec:6} We begin with the proof of Proposition \ref{prop:ti} for this, we will use the following variant of Doob's up-crossing inequality that holds uniformly over the set $\mathfrak{X}$ of non-negative optional strong supermartingales $X=(X_t)_{0 \leq t \leq 1}$ starting at $X_0=1$. 
\begin{lemma}\label{l:W1} For each $\varepsilon>0$ and $\delta>0$, there exists a constant $C=C(\varepsilon,\delta) \in \mathbb{N}$ such that \begin{align*} \sup_{X \in \mathfrak{X}} P [M_{\varepsilon} (X) > C]&< \delta, \end{align*} where the random variable $M_\varepsilon(X)$ is pathwise defined as the maximal amount of moves of the process $X$ of size bigger than $\varepsilon$, i.e. \begin{align*} M_\varepsilon (X) (\omega) := \sup \big\{m \in \mathbb{N}~\big|~\text{$|X_{t_i}(\omega) - X_{t_{i-1}}(\omega)| > \varepsilon,$ for $0\leq t_0 < t_1 < \dots < t_m \leq 1$}\big\}. \end{align*} \end{lemma} \begin{proof} Choose $n \in \mathbb{N}$ such that $\frac{1}{n} \leq \frac{\varepsilon}{2}$, fix some $X\in \mathfrak{X}$ and denote by $X=M-A$ its Mertens decomposition. Then $M=X+A$ is a non-negative c\`adl\`ag local martingale and hence a c\`adl\`ag supermartingale such that $$E[M_t]\leq 1$$ for all $t\in[0,1]$. Letting $C_1 \in \mathbb{N}$ with $C_1 \geq \frac{2}{\delta}$ we obtain from Doob's maximal inequality that $$P\left(M_1^*:=\sup_{0 \leq s \leq 1} M_s > C_1\right) \leq \frac{1}{C_1} \leq \frac{\delta}{2}$$ Then we divide the interval $[0, C_1]$ into $n C_1=:N$ subintervals $I_k:= [\frac{k}{N}, \frac{k+1}{N}]$ of equal length of at most $\frac{\varepsilon}{2}$ for $k=0, \dots, N-1$. The basic intuition behind this is that, whenever the non-negative (c\`adl\`ag) local martingale $M=(M_t)_{0 \leq t \leq 1}$ moves more than $\varepsilon$, while its supremum stays below $C_1$, it has at least to cross one of the subintervals $I_k$. For each interval $I_k$ we can estimate the number $U (M; I_k)$ of up-crossings of the interval $I_k$ by the process $M=(M_t)_{0 \leq t \leq 1}$ up to time $1$ by Doob's up-crossing inequality by $$P[U (M;I_k) > C_2] \leq \frac{N}{C_2} E[U (M;I_k)] \leq \frac{N}{C_2} \sup_{0 \leq t \leq 1} E[M_t] \leq \frac{N}{C_2}.$$ Choosing $\tilde{C}_2=\frac{2N^2}{\delta}$ we obtain that $$P[U(M;I_k) >\tilde{C}_2] \leq \frac{\delta}{2N}.$$ Then summing over all intervals gives for the number $U_{\varepsilon}(M)$ of up-moves of the process $M$ of size $\varepsilon$ that $$P[U_\varepsilon(M) >\tilde C_2N]\leq P[M^*_1 \leq C_1,~\text{$\exists k\in\{1, \dots, N\}$ with $U(M;I_k) > \tilde{C}_2$}] + P[M^*_1 >C_1] \leq \delta.$$ Since $X=M-A$ is non-negative starting at $X_0=1$ and $A$ is non-decreasing, the number $M_\varepsilon(X)$ of moves of $X$ of size $\varepsilon$ is smaller than $2(U_{\varepsilon}(X) + N)$. Therefore we can conclude that \begin{equation}\label{l:W1:eq1} P[M_{\varepsilon}(X) > C] \leq \delta \end{equation} for $C=2(\tilde{C}_2 +1) N$. To complete the proof, we observe that the constants $C_1$ and $C=2(\tilde{C}_2 + 1)N$ are independent of the choice of the optional strong supermartingale $X\in \mathfrak{X}$ and we can therefore take the supremum over all $X\in \mathfrak{X}$ in the equality. \end{proof} Let $X= (X_t)_{0 \leq t\leq 1}$ be a l\`ag (existence of left limits) process and $\tau$ be a $(0,1]$-valued stopping time. For $m\in \mathbb{N}$, let $\tau_m$ be the $m$-th dyadic approximation of the stopping time $\tau$ as defined in \eqref{A1.1}. Note that $\tau_m$ is $\{\frac{1}{2^m}, \dots,1\}$-valued, as $\tau>0$. As $(X_t)_{0 \leq t \leq 1}$ is assured to have l\`ag trajectories, we obtain \begin{equation}\label{P2} X_{\tau_m-2^{-m}} \xrightarrow{\text{$P$-a.s.}} X_{\tau-}, \quad\text{as}\quad m\to\infty, \end{equation} and therefore in probability. The next lemma gives a quantitative version of this rather obvious fact. 
\begin{lemma} \label{l:W2} Let $\tau$ be a totally inaccessible $(0,1]$-valued stopping time. Then the convergence in probability in \eqref{P2} above holds true uniformly over all non-negative optional strong supermartingales $X \in \mathfrak{X}$, i.e.~$X=(X_t)_{0 \leq t \leq 1}$, starting at $X_0=1$. More precisely, we have for each $\varepsilon>0$ that \begin{equation}\label{P3a} \lim_{m\to\infty} \sup_{X \in \mathfrak{X}} P[| X_{\tau_m-2^{-m}} -X_{\tau-} | >\varepsilon]=0. \end{equation} \end{lemma} \begin{proof} Denote by $A=(A_t)_{0 \leq t \leq 1}$ the compensator of $\tau$, which is the unique continuous increasing process such that $(\mathbbm{1}_{\llbracket \tau,1 \rrbracket} - A_t)_{0 \leq t \leq 1}$ is a martingale. For every predictable set $G\subseteq \Omega \times[0,1]$ we then have \begin{equation}\label{186} P \left[\tau \in G\right] = E \left[\mathbbm{1}_G \mathbbm{1}_{\llbracket\tau \rrbracket}\right] = E \left[\int^1_0 \mathbbm{1}_G(t)d \mathbbm{1}_{\llbracket\tau, 1 \rrbracket} (t) \right] = E\left [\int^1_0\mathbbm{1}_G(t)dA_t\right]. \end{equation} Here we used that the predictable $\sigma$-algebra on $\Omega \times [0,1]$ is generated by the left-open stochastic intervals, i.e. intervals of the form $\rrbracket \sigma_1, \sigma_2\rrbracket$ for stopping times $\sigma_1$ and $\sigma_2$ and a monotone class argument to deduce the second equality in \eqref{186}. The third equality is the definition of the compensator. Fix $X \in \mathfrak{X}$, $\varepsilon>0,$ $\delta>0$ and apply Lemma \ref{l:W1} and the integrability of $A_1$ to find $c=c(\varepsilon, \delta, \tau)$ such that the exceptional set \begin{equation}\label{187} F_1=\{M_{\varepsilon} (X) \geq c\} \end{equation} satisfies \begin{equation}\label{188} E[\mathbbm{1}_{F_1} A_1]<\delta. \end{equation} Find $m$ large enough such that \begin{equation}\label{189} E[\mathbbm{1}_{F_2} A_1]<\delta, \end{equation} where $F_2$ is the exceptional set \begin{equation}\label{190} F_2=\left\{~\text{$\exists k\in\{1,\ldots,2^m\}$ such that $A_{\frac{k}{2^m}} - A_{\frac{k-1}{2^m}} > \frac{\delta}{c}$}\right\}. \end{equation} Define $G$ to be the predictable set \begin{equation}\label{191} G=\bigcup^{2^m}_{k=1}\bigg\{(\omega, t) ~\bigg| \frac{k-1}{2^m} < t \leq \frac{k}{2^m} \quad \text{and} \quad \sup_{\frac{k-1}{2^m} \leq u \leq t} | X_{u-}(\omega) - X_{\frac{k-1}{2^m}} (\omega) | \leq \varepsilon\bigg\} \end{equation} We then have $P [\tau \notin G] < 3 \delta$. Indeed, applying \eqref{186} to the complement $G^c$ of $G$ we get $$P [\tau \notin G] = E\left [\Big(\mathbbm{1}_{F_1 \cup F_2} + \mathbbm{1}_{\Omega \setminus (F_1 \cup F_2)}\Big) \int^1_0 \mathbbm{1}_{G^c }dA_t\right],$$ where $F_1$ and $F_2$ denote the exceptional sets in \eqref{187} and \eqref{190}. By \eqref{188} and \eqref{189} \begin{equation} E\left[\mathbbm{1}_{F_1 \cup F_2} \int^1_0 \mathbbm{1}_{G^c} dA_t\right] \leq 2\delta. \end{equation} On the set $\Omega \setminus (F_1 \cup F_2)$ we deduce from \eqref{187}, and \eqref{190} and \eqref{191} that $$\int^1_0 \mathbbm{1}_{G^c}dA_t \leq c \frac{\delta}{c} = \delta$$ so that \begin{equation}\label{192} P [\tau \notin G] \leq 3 \delta. \end{equation} For $(\omega,t) \in G$ such that $\frac{k-1}{2^m} < t \leq \frac{k}{2^m}$ we have $$|X_{t-}(\omega) - X_{\frac{k-1}{2^m}} (\omega) | \leq \varepsilon$$ so that by \eqref{192} we get $$P \big[| X_{\tau-} - X_{\tau_m-2^{-m}}| > \varepsilon \big] < 3 \delta,$$ which shows \eqref{P3a}. 
\end{proof} \begin{proof}[Proof of Proposition \ref{prop:ti}] Fix $\varepsilon > 0$ and apply Lemma \ref{l:W2} to find $m\in \mathbb{N}$ such that \begin{equation}\label{193} P\big[| \widetilde{X}_{\tau_m-2^{-m}} - \widetilde{X}_{\tau-} | > \varepsilon\big] < \varepsilon, \end{equation} for each $\widetilde{X} \in \mathfrak{X}$. As $(X^n_q)^\infty_{n=1}$ converges to $X_q$ in probability, for every rational number $q \in \mathbb{Q} \cap [0,1]$ we have $$P\left[\max_{0 \leq k \leq 2^m} | X^n_{\frac{k}{2^m}} - X_{\frac{k}{2^m}} | > \varepsilon\right] < \varepsilon,$$ for all $n\geq N(\varepsilon)$. We then may apply \eqref{193} to $X^n$ and $X$ to conclude that $$P[| X^n_{\tau-} - X_{\tau-} | > 3 \varepsilon] < 3\varepsilon.$$ \end{proof} With Proposition \ref{prop:ti} we have now everything in place to prove Theorem \ref{t3}. \begin{proof}[Proof of Theorem \ref{t3}] The existence of the optional strong supermartingale $X^{(1)}$ is the assertion of Theorem \ref{t1}. To obtain the predictable strong supermartingale $X^{(0)}$, we observe that, since $\widetilde{X}^n$ and $X^{(1)}$ are l\`adl\`ag, the optional set $$F:=\cup^\infty_{n=1} \{\widetilde{X}^n \neq \widetilde{X}^n_-\} \cup \{X^{(1)} \neq X^{(1)}_-\}$$ has at most countably many sections and therefore there exists by Theorem 117 in Appendix IV of \cite{DM78} a countable number of $[0,1] \cup \{\infty\}$-valued stopping times $(\sigma_m)^{\infty}_{m=1}$ with disjoint graphs such that $F=\cup_{m=1}^\infty \llbracket \sigma_m\rrbracket$. By Theorem IV.81.c) in \cite{DM78} we can decompose each stopping time $\sigma_m$ into an accessible stopping time $\sigma^A_m$ and a totally inaccessible stopping time $\sigma^I_m$ such that $\sigma_m=\sigma^A_m \wedge \sigma^I_m$. Again combining Koml\'os' lemma with a diagonalisation procedure we obtain a sequence of convex combinations $\widetilde{X}^n\in\conv (X^n, X^{n+1}, \dots)$ such that $\widetilde{X}^n_\tau \stackrel{P}{\longrightarrow} X^{(1)}_\tau$ for all $[0,1]$-valued stopping times $\tau$ as well as $$ \widetilde{X}^n_{\tau_m-} \longrightarrow Y^{(0)}_m, \quad\text{$P$-a.s.,\quad as $n\to\infty$},$$ for all stopping times $\tau_m:=\sigma^A_m\wedge 1$ and suitable non-negative random variables $Y^{(0)}_m$ for $m\in \mathbb{N}$. Now we can define $X^{(0)}$ by $$X^{(0)}_t(\omega) = \begin{cases} Y^{(0)}_m(\omega) &:\text{$t = \sigma_m^A(\omega)$ and $m\in\mathbb{N}$},\\ X^{(1)}_{t-}(\omega)=X^{(1)}_t(\omega) &: \text{else}. \end{cases} $$ For all $[0,1]$-valued stopping times $\tau$, we then have the convergence \eqref{eq:t3:2}, i.e. \begin{align*} \widetilde{X}^n_{\tau-} (\omega) &= \widetilde{X}^n_{\tau} (\omega) \mathbbm{1}_F \big(\omega, \tau (\omega)\big) + \sum^{\infty}_{m=1} \widetilde{X}^n_{\tau_m^-} \mathbbm{1}_{\{\sigma^A_m=\tau\}} + \sum^{\infty}_{m=1} \widetilde{X}^n_{\sigma^I_m-} \mathbbm{1}_{\{\sigma^I_m=\tau\}}\\ &\stackrel{P}\longrightarrow X_{\tau}^{(0)}(\omega)\mathbbm{1} _F\big(\omega, \tau, (\omega)\big) + \sum^\infty_{m=1} Y^{(0)}_m \mathbbm{1}_{\{\sigma^A_m=\tau\}} + \sum^\infty_{m=1} X_{\sigma^I_m-}^{(1)} \mathbbm{1}_{\{\sigma_m^I=\tau\}}, \end{align*} since $\widetilde{X}^n=\widetilde{X}^n_-$ for all $n\in\mathbb{N}$ on $F$ and $\widetilde{X}^n_{\sigma-} \mathbbm{1}_{\{\sigma=\tau\}} \xrightarrow{P} X_{\sigma-} \mathbbm{1}_{\{\sigma=\tau\}}$ for all $[0,1]$-valued totally inaccessible stopping times $\tau$ by Proposition \ref{prop:ti}. 
As all stopping times $\sigma_m^A$ are accessible and each $Y_m^{(0)}$ is $\mathcal{F}_{\tau_{m}-}$-measurable, we have that $X^{(0)}$ is an accessible process such that $X^{(0)}_\tau \mathbbm{1}_{\{\tau < \infty\}}$ is $\mathcal{F}_{\tau-}$-measurable for every stopping time $\tau$. Therefore $X^{(0)}$ is by Theorem 3.20 in~\cite{D72} even predictable. By Remark 5.c) in Appendix I of \cite{DM82} the left limit process $\widetilde{X}^n_-$ of each optional strong supermartingale $\widetilde{X}^n$ is a predictable strong supermartingale satisfying $$\widetilde{X}^n_{\tau-} \geq E[\widetilde{X}^n_\tau | \mathcal{F}_{\tau-}]$$ for all $[0,1]$-valued predictable stopping times. Therefore the predictable strong supermartingale property (part 3) of Definition \ref{def:pred}) and $X^{(0)}_{\tau} \geq E[X^{(1)}_{\tau} | \mathcal{F}_{\tau-}]$ follow immediately from \eqref{eq:t3:1} and \eqref{eq:t3:2} by Fatou's lemma. To see $X^{(1)}_{\tau-}\geq X^{(0)}_{\tau}$, let $(\tau_m)_{m=1}^\infty$ be a foretelling sequence of stopping times for the predictable stopping time $\tau$. Then we have $$\widetilde{X}^{n}_{\tau_m} \geq E[\widetilde{X}^{n}_{\tau_{m+k}} | \mathcal{F}_{\tau_m}]$$ for all $n,m,k\in\mathbb{N}$. Applying Fatou's lemma we then obtain $$\widetilde{X}^{n}_{\tau_m} \geq E[\widetilde{X}^{n}_{\tau-} | \mathcal{F}_{\tau_m}]$$ by sending $k\to\infty$, $$X^{(1)}_{\tau_m} \geq E[X^{(0)}_{\tau-} | \mathcal{F}_{\tau_m}]$$ by sending also $n\to\infty$ and finally $X^{(1)}_{\tau-}\geq X^{(0)}_{\tau}$ by sending $m\to\infty$. \end{proof} \section{Proof of Proposition \ref{p:SI}}\label{sec:7} One application of Theorem \ref{t3} is a convergence result for stochastic integrals of predictable integrands of finite variation with respect to non-negative optional strong supermartingales. Fix a non-negative optional strong supermartingale $X \in \mathfrak{X}$ and let $\varphi = (\varphi_t)_{0 \leq t \leq 1}$ be a predictable process of finite variation, so that it has l\`adl\`ag paths. We then define \begin{align}\label{def:SI:1} \int^t_0 X_u(\omega) d\varphi_u(\omega):= \int^t_0 X_u(\omega) d\varphi^c_u(\omega) + \sum_{0 <u \leq t} X_{u-}(\omega) \Delta\varphi_u(\omega) + \sum_{0 \leq u < t} X_u(\omega) \Delta_+\varphi_u(\omega) \end{align} for all $t\in[0,1]$, which is $P$-a.s.~pathwise well-defined, as $X$ is l\'adl\'ag and $\varphi$ of finite variation. Here the integral $\int^t_0 X_u (\omega) d\varphi^c_u(\omega)$ with respect to the continuous part $\varphi^c$ (see \eqref{def:cont}) can be defined as a pathwise Riemann-Stieltjes integral or a pathwise Lebesgue-Stieltjes integral, as both integrals coincide. To ensure the integration integration by parts formula \begin{equation} \varphi_t(\omega)X_t(\omega) - \varphi_0(\omega) X_0 (\omega) = \int^t_0 \varphi_u(\omega) dX_u(\omega) + \int^t_0 X_u (\omega)d\varphi_u(\omega),\label{SI:PI} \end{equation} we define the stochastic integral $\varphi \stackrel{\mbox{\tiny$\bullet$}}{} X_t:=\int^t_0 \varphi_u dX_u$ by \begin{align}\label{def:SI:2} \int^t_0 \varphi_u (\omega) dX_u (\omega):={}& \int^t_0 \varphi^c_u (\omega) dX_u (\omega)+ \sum_{0 <u \leq t} \Delta\varphi_u(\omega) \big(X_t(\omega) - X_{u-}(\omega)\big)\notag\\ & + \sum_{0 \leq u < t} \Delta_+\varphi_u(\omega) \big(X_t(\omega) - X_{u}(\omega)\big) \end{align} for $t \in [0,1]$ that is again pathwise well-defined. The integral $\int^t_0 \varphi^c_u (\omega) dX_u (\omega)$ can again be defined as a pathwise Riemann-Stieltjes integral or a pathwise Lebesgue-Stieltjes integral. 
If $X=(X_t)_{0 \leq t \leq 1}$ is a semimartingale, the definition of $(\int^t_0 \varphi_u dX_u)_{0\leq t\leq 1}$ via \eqref{def:SI:2} coincides with the classical stochastic integral. We first derive an auxiliary result. \begin{lemma}\label{l:SI} Let $(X^n)^\infty_{n=1}$, $X^{(0)}$ and $X^{(1)}$ be l\`adl\`ag stochastic processes such that \begin{itemize} \item[\bf{i)}] $X^n_\tau \stackrel{P}{\longrightarrow} X^{(1)}_\tau$ and $X^n_{\tau-} \stackrel{P}{\longrightarrow} X^{(0)}_\tau$ for all $[0,1]$-valued stopping times $\tau$. \item[\bf{ii)}] For all $\varepsilon > 0$ and $\delta > 0$, there are constants $C_1(\delta) > 0$ and $C_2(\varepsilon, \delta) > 0$ such that \begin{align} \sup_{X\in \mathcal{X}^0} P[\sup_{0 \leq s \leq 1} | X_s | > C_1 (\delta)] \leq \delta,\label{l:SI:a}\\ \sup_{X\in \mathcal{X}^1} P[M_\varepsilon (X) > C_2 (\varepsilon, \delta)] \leq \delta,\label{l:SI:b} \end{align} where $\mathcal{X}^0=\{X^{(0)}, X^{(1)}, X^n, X^n_- \text{ for $n \in \mathbb{N}$} \}$, $\mathcal{X}^1=\{X^{(1)}, X^n\text{ for $n \in \mathbb{N}$} \}$ and $$M_\varepsilon(X):=\sup\big\{m \in \mathbb{N} ~\big|~ |X_{t_i} (\omega) - X_{t_{i-1}} (\omega) | > \varepsilon\text{ for $0 \leq t_0 < t_1 < t_m \leq 1$}\big\}$$ for $X \in \mathcal{X}^1.$ \end{itemize} Then we have, for all predictable processes $\varphi=(\varphi_t)_{0 \leq t \leq 1}$ of finite variation, that \begin{itemize} \item[\bf{1)}] $\int^\tau_0 X^n_u d\varphi_u \stackrel{P}{\longrightarrow} \int^\tau_0 X^{(1)}_u d\varphi^c_u + \sum_{0 < u \leq \tau} X_u^{(0)} \Delta \varphi_u + \sum_{0 \leq u < \tau} X_u^{(1)} \Delta _+\varphi_u$ \item[\bf{2)}] $\int^\tau_0 \varphi_u dX^n_u \stackrel{P}{\longrightarrow} \int^\tau_0 \varphi^c_u dX^{(1)}_u + \sum_{0 < u \leq \tau} \Delta \varphi_u (X^{(1)}_\tau - X^{(0)}_u) + \sum_{0 \leq u < \tau} \Delta_+ \varphi_u (X^{(1)}_\tau - X^{(1)}_u)$ \end{itemize} for all $[0,1]$-valued stopping times $\tau$. The convergence 1) is even uniformly in probability. \end{lemma} \begin{proof} 1) We first show that \begin{equation} \sup_{0\leq t\leq 1}\left|\sum_{0 < u \leq t} X^n_{u-} \Delta \varphi_u - \sum_{0 < u \leq t} X^{(0)}_{u-} \Delta \varphi_u\right|\stackrel{P}{\longrightarrow}0,\quad\text{as $n\to\infty$}, \label{eq:cl:1} \end{equation} i.e.~uniformly in probability. The proof of the convergence $$\sup_{0\leq t\leq 1}\left|\sum_{0 < u \leq t} X^n_u \Delta_+ \varphi_u - \sum_{0 < u \leq t} X^{(1)}_{u-} \Delta_+ \varphi_u\right|\stackrel{P}{\longrightarrow}0,\quad\text{as $n\to\infty$}, $$ is completely analog and therefore omitted. Since $\varphi$ is predictable and of finite variation and hence l\`adl\`ag, there exists a sequence $(\tau_m)_{m=1}^\infty$ of $[0,1]\cup\{\infty\}$-valued stopping times exhausting the jumps of $\varphi$. Using the stopping times $(\tau_m)_{m=1}^\infty$ we can write \begin{align*} \sum_{0 < u \leq t} X_u \Delta \varphi_u = \sum^{\infty}_{m=1} X_{\tau_m} \Delta \varphi_{\tau_m} \mathbbm{1}_{\{\tau_m \leq t\}} \end{align*} for all $X\in\mathcal{X}^0$ and estimate \begin{multline}\label{eq:SI:1} \sup_{0\leq t\leq 1}\left| \sum^\infty_{m=1} X^n_{\tau_m-} \Delta \varphi_{\tau_m} \mathbbm{1}_{\{\tau_m \leq t\}} - \sum^\infty_{m=1} X^{(0)}_{\tau_m} \Delta \varphi_{\tau_m} \mathbbm{1}_{\{\tau_m \leq t\}} \right|\\ \leq \sum^N_{m=1} | X^n_{\tau_m-} - X^{(0)}_{\tau_m} | | \Delta \varphi_{\tau_m} | + \sup_{m \in \mathbb{N}} | X^n_{\tau_m-} - X^{(0)}_{\tau_m} | \sum^\infty_{m=N+1} | \Delta \varphi_{\tau_m} |. 
\end{multline} Combining \eqref{eq:SI:1} with the fact that $\varphi$ is of finite variation we obtain \eqref{eq:cl:1}, as $$\sup_{m \in \mathbb{N}} | X^n_{\tau_m-} - X^{(0)}_{\tau_m} | \sum^\infty_{m=N+1} | \Delta \varphi_{\tau_m} |\stackrel{P}{\longrightarrow}0,\quad\text{as $N\to\infty$},$$ by \eqref{l:SI:a} and $\sum^N_{m=1} | X^n_{\tau_m-} - X^{(0)}_{\tau_m} | | \Delta \varphi_{\tau_m} |\stackrel{P}{\longrightarrow}0$, as $n\to\infty$, for each $N$ by assumption i). The key observation for the proof of the convergence \begin{equation}\label{eq:cl:3} \sup_{0\leq t\leq 1}\left|\int^t_0 X^n_u d\varphi^c_u - \int^t_0 X^{(1)}_u d \varphi^c_u\right|\stackrel{P}{\longrightarrow}0,\quad\text{as $n\to\infty$}, \end{equation} is that we can use assumption ii) to approximate the stochastic Riemann-Stieltjes integrals by Riemann sums in probability uniformly for all $X \in \mathcal{X}^1$, as either the integrator or the integrand moves very little. Indeed, for $\varepsilon>0$ and $c_1,c_2>0$ we have that \begin{multline*} \sup_{0\leq t\leq 1}\left|\int^t_0 X_ud\varphi^c_u - \sum^N_{m=1} X_{\sigma_{m-1}}\big(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\big) \right|\\ \leq \sum^N_{m=1} \sup_{u \in [\sigma_{m-1}, \sigma_m]} | X_u - X_{\sigma_{m-1}} |\big(|\varphi^c | _{\sigma_m} - | \varphi^c|_{\sigma_{m-1}}\big) \leq c_2 2c_1 \frac{\varepsilon}{4c_1c_2} + \frac{\varepsilon}{2c_1} c_1 = \varepsilon \end{multline*} on $\{|\varphi|_1\leq c_1\} \cap \{X^*_1 \leq c_1\} \cap \{M_{\frac{\varepsilon}{2c_1}} (X) \leq c_2\}$, where the stopping times $(\sigma_m)^\infty_{m=0}$ are given by $\sigma_0=0$ and $$\sigma_m:= \inf\Big\{t > \sigma_{m-1}~\Big|~|\varphi^c|_t - |\varphi^c|_{\sigma_{m-1}} > \frac{\varepsilon}{4c_1 c_2}\Big\} \wedge 1$$ and $N=\frac{4c_1 c_2}{\varepsilon}$. Choosing $c_1,c_2>0$ and hence $N$ sufficiently large we therefore obtain $$ \sup_{X \in \mathcal{X}^1}P\left(\sup_{0\leq t\leq 1}\left|\int^t_0 X_nd\varphi^c_u - \sum^N_{m=1} X_{\sigma_{m-1}}\big(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t}\big) \right| >\varepsilon\right)<\delta $$ for any $\delta>0$ by assumption ii). Combing this with the estimate \begin{align*} \sup_{0\leq t\leq 1}\left|\int^t_0 X^n_u d \varphi^c_u - \int^t_0 X^{(1)}_u d \varphi^c_u \right|&\leq\sup_{0\leq t\leq 1}\left|\int^t_0 X^n_u d \varphi^c_u - \sum^N_{m=1} X^n_{\sigma_{m-1}}\big(\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1} \wedge t}\big)\right|\\ &+\sum^N_{m=1} | X^n_{\sigma_{m-1}} - X^{(1)}_{\sigma_{m-1}} |\big(|\varphi^c |_{\sigma_m} - |\varphi^c |_{\sigma_{m-1}}\big)\\ &+\sup_{0\leq t\leq 1}\left|\int^t_0 X^{(1)}_u d \varphi^c_u - \sum^N_{m=1} X^{(1)}_{\sigma_{m-1}} (\varphi^c_{\sigma_m\wedge t} - \varphi^c_{\sigma_{m-1}\wedge t})\right| \end{align*} then implies \eqref{eq:cl:3}, as $$\max_{m=0, \dots, N-1} | X^n_{\sigma_m} - X^{(1)}_{\sigma_m}|\stackrel{P}{\longrightarrow}0,\quad\text{as $n\to\infty$,}$$ for each fixed $N$ by assumption i). 2) As $X^n_\tau \varphi_\tau \stackrel{P}{\longrightarrow} X^{(1)}_\tau \varphi_\tau$ for all $[0,1]$-valued stopping times, this assertion follows immediately from part 1) and the integration by parts formula \eqref{SI:PI}. \end{proof} Combining the previous lemma with Lemma \ref{l:W1} allows us now to conclude the proof of Proposition \ref{p:SI}. \begin{proof}[Proof of of Proposition \ref{p:SI}] Part 1) is Theorem \ref{t3} and part 2) follows from Lemma \ref{l:SI} as soon as we have shown that its assumptions are satisfied. 
Assumption i) is part 1) and for the set $\mathcal{X}^1$ assumption ii) can be derived from Lemma \ref{l:W1}. Therefore it only remains to show \eqref{l:SI:a} for $X^{(0)}$ and $X^n_-$ for $n \in \mathbb{N}$. For the left limits \eqref{l:SI:a} follows from the validity of the latter for the processes $X^n$ for $n \in \mathbb{N}$ and for the predictable strong supermartingale $X^{(0)}$ from (3.1) in Appendix I of \cite{DM82}. \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The principal task here is to initiate a theory of finite transformation monoids that is similar in spirit to the theory of finite permutation groups that can be found, for example, in~\cite{dixonbook,cameron}. I say similar in spirit because attempting to study transformation monoids by analogy with permutation groups is like trying to study finite dimensional algebras by analogy with semisimple algebras. In fact, the analogy between finite transformation monoids and finite dimensional algebras is quite apt, as the theory will show. In particular, an analogue of Green's theory~\cite[Chapter 6]{Greenpoly} of induction and restriction functors relating an algebra $A$ with algebras of the form $eAe$ with $e$ idempotent plays a key role in this paper, whereas there is no such theory in permutation groups as there is but one idempotent. There are many worthy books that touch upon --- or even focus on --- transformation monoids~\cite{CP,Higginsbook,howiebook,GM,Lipscomb}, as well as a vast number of research articles on the subject. But most papers in the literature focus on specific transformation monoids (such as the full transformation monoid, the symmetric inverse monoid, the monoid of order preserving transformations, the monoid of all partial transformations, etc.) and on combinatorial issues, e.g., generalizations of cycle notation, computation of the submonoid generated by the idempotents~\cite{Howie}, computation of generators and relations, computation of Green's relations, construction of maximal submonoids satisfying certain properties, etc. The only existing theory of finite transformation and partial transformation monoids as a general object is the Krohn-Rhodes wreath product decomposition theory~\cite{PDT,KRannals,Arbib}, whose foundations were laid out in the book of Eilenberg~\cite{Eilenberg}. See also~\cite{qtheor} for a modern presentation of the Krohn-Rhodes theory, but with a focus on abstract rather than transformation semigroups. The Krohn-Rhodes approach is very powerful, and in particular has been very successful in dealing with problems in automata theory, especially those involving classes of languages. However, the philosophy of Krohn-Rhodes is that the task of classifying monoids (or transformation monoids) up to isomorphism is hopeless and not worthwhile. Instead, one uses a varietal approach~\cite{Eilenberg} similar in spirit to the theory of varieties of groups~\cite{Neumann}. But there are some natural problems in automata theory where one really has to stick with a given transformation monoid and cannot perform the kind of decompositions underlying the Krohn-Rhodes theory. One such problem is the \v{C}ern\'y conjecture, which has a vast literature~\cite{Pincerny,pincernyconjecture,synchgroups,dubuc,cerny,volkovc1,rystsov1,rystsov2,AMSV,Trahtman,traht2,volkovc2,Kari,volkovc3,rystcom,rystrank,mycerny,Karicounter,VolkovLata,PerrinBeal,strongtrans,strongtrans2,mortality,beal,Salomcerny,averaging}. In the language of transformation monoids, it says that if $X$ is a set of maps on $n$ letters such that some product of elements of $X$ is a constant map, then there is a product of length at most $(n-1)^2$ that is a constant map. The best known upper bound is cubic~\cite{twocomb}, whereas it is known that one cannot do better than $(n-1)^2$~\cite{cerny}. Markov chains can often be fruitfully studied via random mappings: one has a transformation monoid $M$ on the state set $\Omega$ and a probability $P$ on $M$. 
One randomly chooses an element of $M$ according to $P$ and has it act on $\Omega$. A theory of transformation monoids, in particular of the associated matrix representation, can then be used to analyze the Markov chain. This approach has been adopted with great success by Bidigare, Hanlon and Rockmore~\cite{BHR}, Diaconis and Brown~\cite{DiaconisBrown1,Brown1,Brown2} and Bj\"orner~\cite{bjorner1,bjorner2}; see also my papers~\cite{mobius1,mobius2}. This is another situation to which the Krohn-Rhodes theory does not seem to apply. This paper began as an attempt to systematize and develop some of the ideas that have been used by various authors while working on the \v{C}ern\'y conjecture. The end result is the beginnings of a theory of transformation monoids. My hope is that the theory initiated here will lead toward some progress on the \v{C}ern\'y conjecture. However, it is also my intent to interest combinatorialists, group theorists and representation theorists in transformation monoids and convince them that there is quite a bit of structure there. For this reason I have done my best not to assume any background knowledge in semigroup theory and to avoid usage of certain semigroup theoretic notions and results, such as Green's relations~\cite{Green} and Rees's theorem~\cite{CP}, that are not known to the general public. In particular, many standard results in semigroup theory are proved here in a novel way, often using transformation monoid ideas and in particular an analogue of Schur's lemma. The first part of the paper is intended to systematize the foundations of the theory of transformation monoids. A certain amount of what is here should be considered folklore, although probably some bits are new. I have tried to indicate what I believe to be folklore or at least known to the \textit{cognoscenti}. In particular, some of Sections~\ref{sthree} and~\ref{sfour} can be viewed as a specialization of Sch\"utzenberger's theory of unambiguous matrix monoids~\cite{berstelperrinreutenauer}. The main new part here is the generalization of Green's theory~\cite{Greenpoly} from the context of modules to transformation monoids. A generalization of Green's results to semirings, with applications to the representation theory of finite semigroups over semirings, can be found in~\cite{ZurJohnBen}. The second part of the paper is a first step in the program of understanding primitive transformation monoids. In part, they can be understood in terms of primitive groups in much the same way that irreducible representations of monoids can be understood in terms of irreducible representations of groups via Green's theory~\cite{Greenpoly,myirreps} and the theory of Munn and Ponizovsky~\cite[Chapter 5]{CP}. The tools of orbitals and orbital digraphs are introduced, generalizing the classical theory from permutation groups~\cite{dixonbook,cameron}. The third part of the paper commences a detailed study of the modules associated to a transformation monoid. In particular, the projective cover of the transformation module is computed for the case of a transitive action by partial or total transformations. The paper ends with applications of the theory of transformation semigroups to Markov chains. \section{Actions of monoids on sets} Before turning to transformation monoids, i.e., monoids acting faithfully on sets, we must deal with some ``abstract nonsense'' type preliminaries concerning monoid actions on sets and formalize notation and terminology. \subsection{$M$-sets} Fix a monoid $M$.
A (right) \emph{action} of $M$ on a set $\Omega$ is, as usual, a map $\Omega\times M\to \Omega$, written $(\alpha,m)\mapsto \alpha m$, satisfying, for all $\alpha\in \Omega$, $m,n\in M$, \begin{enumerate} \item $\alpha 1=\alpha$; \item $(\alpha m)n = \alpha (mn)$. \end{enumerate} Equivalently, an action is a homomorphism $M\to T_{\Omega}$, where $T_{\Omega}$ is the monoid of all self-maps of $\Omega$ acting on the right. In this case, we say that $\Omega$ is an \emph{$M$-set}. The action is \emph{faithful} if the corresponding morphism is injective. Strictly speaking, there is a unique action of $M$ on the empty set, but in this paper we tacitly assume that we are dealing only with actions on non-empty sets. A \emph{morphism} $f\colon \Omega\to \Lambda$ of $M$-sets is a map such that $f(\alpha m)=f(\alpha)m$ for all $\alpha\in \Omega$ and $m\in M$. The set of morphisms from $\Omega$ to $\Lambda$ is denoted $\hom_M(\Omega,\Lambda)$. The category of right $M$-sets will be denoted $\pv{Set}^{M^{\mathrm {op}}}$ following category theoretic notation for presheaf categories~\cite{Mac-CWM}. The $M$-set obtained by considering the right action of $M$ on itself by right multiplication is called the \emph{regular} $M$-set. It is a special case of a free $M$-set. An $M$-set $\Omega$ is \emph{free} on a set $X$ if there is a map $\iota\colon X\to \Omega$ so that given a function $g\colon X\to \Lambda$ with $\Lambda$ an $M$-set, there is a unique morphism of $M$-sets $f\colon \Omega\to \Lambda$ such that \[\xymatrix{X\ar[r]^{\iota}\ar[rd]_g&\Omega\ar@{..>}[d]^f\\ & \Lambda}\] commutes. The free $M$-set on $X$ exists and can explicitly be realized as $X\times M$ where the action is given by $(x,m')m = (x,m'm)$ and the morphism $\iota$ is $x\mapsto (x,1)$. The functor $X\mapsto X\times M$ from $\pv {Set}$ to $\pv{Set}^{M^{\mathrm {op}}}$ is left adjoint to the forgetful functor. In concrete terms, an $M$-set $\Omega$ is free on a subset $X\subseteq \Omega$ if and only if, for all $\alpha\in \Omega$, there exists a unique $x\in X$ and $m\in M$ such that $\alpha=xm$. We call $X$ a \emph{basis} for the $M$-set $\Omega$. Note that if $M$ is a group, then $\Omega$ is free if and only if $M$ acts \emph{freely} on $\Omega$, i.e., $\alpha m=\alpha$, for some $\alpha\in \Omega$, implies $m=1$. In this case, any transversal to the $M$-orbits is a basis. Group actions are to undirected graphs as monoid actions are to directed graphs (digraphs). Just as a digraph has both weak components and strong components, the same is true for monoid actions. Let $\Omega$ be an $M$-set. A non-empty subset $\Delta$ is \emph{$M$-invariant} if $\Delta M\subseteq \Delta$; we do not consider the empty set as an $M$-invariant subset. An $M$-invariant subset of the form $\alpha M$ is called \emph{cyclic}. The cyclic sub-$M$-sets form a poset $\mathop{\mathrm{Pos}}\nolimits(\Omega)$ with respect to inclusion. The assignment $\Omega\mapsto \mathop{\mathrm{Pos}}\nolimits(\Omega)$ is a functor $\pv{Set}^{M^{\mathrm {op}}}\to \pv{Poset}$. A cyclic subset will be called \emph{minimal} if it is minimal with respect to inclusion. Associated to $\mathop{\mathrm{Pos}}\nolimits(\Omega)$ is a preorder on $\Omega$ given by $\alpha\leq_{\Omega} \beta$ if and only if $\alpha M\subseteq \beta M$. If $\Omega$ is clear from the context, we drop the subscript and simply write $\leq$. From this preorder arise two naturally defined equivalence relations: the symmetric-transitive closure $\simeq$ of $\leq$ and the intersection $\sim$ of $\leq$ and $\geq$.
More precisely, $\alpha\simeq \beta$ if and only if there is a sequence $\alpha=\omega_0,\omega_1,\ldots, \omega_n=\beta$ of elements of $\Omega$ such that, for each $0\leq i\leq n-1$, either $\omega_i\leq \omega_{i+1}$ or $\omega_{i+1}\leq \omega_i$. On the other hand, $\alpha\sim \beta$ if and only if $\alpha\leq \beta$ and $\beta\leq \alpha$, that is, $\alpha M=\beta M$. The equivalence classes of $\simeq$ shall be called \emph{weak orbits}, whereas the equivalence classes of $\sim$ shall be called \emph{strong orbits}. These correspond to the weak and strong components of a digraph. If $M$ is a group, then both notions coincide with the usual notion of an orbit. Notice that weak orbits are $M$-invariant, whereas a strong orbit is $M$-invariant if and only if it is a minimal cyclic subset $\alpha M$. The action of $M$ will be called \emph{weakly transitive} if it has a unique weak orbit and shall be called \emph{transitive}, or \emph{strongly transitive} for emphasis, if it has a unique strong orbit. Observe that $M$ is transitive on $\Omega$ if and only if there are no proper $M$-invariant subsets of $\Omega$. Thus transitive $M$-sets can be thought of as analogues of irreducible representations; on the other hand weakly transitive $M$-sets are the analogues of indecomposable representations since it is easy to see that the action of $M$ on $\Omega$ is weakly transitive if and only if $\Omega$ is not the coproduct (disjoint union) of two proper $M$-invariant subsets. The regular $M$-set is weakly transitive, but if $M$ is finite then it is transitive if and only if $M$ is a group. The weak orbit of an element $\alpha\in \Omega$ will be denoted $\mathcal O_w(\alpha)$ and the strong orbit $\mathcal O_s(\alpha)$. The set of weak orbits will be denoted $\pi_0(\Omega)$ (in analogy with connected components of graphs; and in any event this designation can be made precise in the topos theoretic sense) and the set of strong orbits shall be denoted $\Omega/M$. Note that $\Omega/M$ is naturally a poset isomorphic to $\mathop{\mathrm{Pos}}\nolimits(\Omega)$ via the bijection $\mathcal O_s(\alpha)\mapsto \alpha M$. Also note that $\pi_0(\Omega)$ is in bijection with $\pi_0(\mathop{\mathrm{Pos}}\nolimits(\Omega))$ where we recall that if $P$ is a poset, then the set $\pi_0(P)$ of connected components of $P$ is the set of equivalence classes of the symmetric-transitive closure of the partial order (i.e., the set of connected components of the Hasse diagram of $P$). We shall also have need to consider $M$-sets with zero. An element $\alpha\in \Omega$ is called a \emph{sink} if $\alpha M=\{\alpha\}$. An \emph{$M$-set with zero}, or \emph{pointed $M$-set}, is a pair $(\Omega,0)$ where $\Omega$ is an $M$-set and $0\in \Omega$ is a distinguished sink\footnote{This usage of the term ``pointed transformation monoid'' differs from that of~\cite{qtheor}.}. An $M$-set with zero $(\Omega,0)$ is called \emph{$0$-transitive} if $\alpha M=\Omega$ for all $\alpha\neq 0$. Notice that an $M$-set with zero is the same thing as an action of $M$ by partial transformations (just remove or adjoin the zero) and that $0$-transitive actions correspond to transitive actions by partial functions. Morphisms of $M$-sets with zero must preserve the zero and, in particular, in this context $M$-invariant subsets are assumed to contain the zero. The category of $M$-sets with zero will be denoted $\pv{Set}_*^{M^{\mathrm{op}}}$ as it is the category of all contravariant functors from $M$ to the category of pointed sets.
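Since the weak and strong orbits are precisely the weak and strong components of the digraph on $\Omega$ with an edge $\alpha\to \alpha m$ for each $m\in M$, they are easy to compute in small examples. The following Python fragment is offered purely as an illustration and is not part of the theory; it assumes conventions of my choosing: a transformation of $\Omega=\{0,\ldots,n-1\}$ is encoded as a tuple $t$ with $\alpha t = t[\alpha]$, and $M$ is given by a finite set of generating tuples.

\begin{verbatim}
from itertools import product

def monoid(gens, n):
    # Close a set of transformations of {0,...,n-1} under composition,
    # together with the identity; s followed by t sends a to t[s[a]].
    ident = tuple(range(n))
    M, frontier = {ident}, {ident}
    while frontier:
        new = {tuple(t[s[a]] for a in range(n))
               for s, t in product(frontier, gens)} - M
        M |= new
        frontier = new
    return M

def strong_orbits(M, n):
    # alpha ~ beta iff alpha M = beta M: group points by their
    # cyclic sub-M-set.
    classes = {}
    for a in range(n):
        classes.setdefault(frozenset(t[a] for t in M), []).append(a)
    return list(classes.values())

def weak_orbits(M, n):
    # Weak orbits are the components of the digraph alpha -> alpha t,
    # computed here by union-find over all of its edges.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for t in M:
        for a in range(n):
            parent[find(a)] = find(t[a])
    return [[a for a in range(n) if find(a) == find(r)]
            for r in range(n) if find(r) == r]
\end{verbatim}

In this encoding, $M$ is transitive on $\Omega$ exactly when \texttt{strong\_orbits} returns a single class and weakly transitive exactly when \texttt{weak\_orbits} does.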
\begin{Prop}\label{uniquesink} Suppose that $\Omega$ is a $0$-transitive $M$-set. Then $0$ is the unique sink of $\Omega$. \end{Prop} \begin{proof} Suppose that $\alpha\neq 0$. Then $0\in \Omega=\alpha M$ shows that $\alpha$ is not a sink. \end{proof} A strong orbit $\mathcal O$ of $M$ on $\Omega$ is called \emph{minimal} if it is minimal in the poset $\Omega/M$, or equivalently the cyclic sub-$M$-set $\omega M$ is minimal for $\omega\in \mathcal O$. The union of all minimal strong orbits of $M$ on $\Omega$ is $M$-invariant and is called the \emph{socle} of $\Omega$, denoted $\soc \Omega$. If $M$ is a group, then $\soc \Omega=\Omega$. The case that $\Omega=\soc \Omega$ is analogous to that of a completely reducible representation: one has that $\Omega$ is a coproduct of transitive $M$-sets. If $\Omega$ is an $M$-set with zero, then a minimal non-zero strong orbit is called \emph{$0$-minimal}. In this setting we define the socle to be the union of all the $0$-minimal strong orbits together with zero; again it is an $M$-invariant subset. A \emph{congruence} or \emph{system of imprimitivity} on an $M$-set $\Omega$ is an equivalence relation $\equiv$ such that $\alpha \equiv \beta$ implies $\alpha m\equiv \beta m$ for all $\alpha,\beta\in \Omega$ and $m\in M$. In this case, the quotient $\Omega/{\equiv}$ becomes an $M$-set in the natural way and the quotient map $\Omega\to \Omega/{\equiv}$ is a morphism. The standard isomorphism theorem holds in this context. If $\Delta\subseteq \Omega$ is $M$-invariant, then one can define a congruence $\equiv_{\Delta}$ by putting $\alpha\equiv_{\Delta} \beta$ if $\alpha=\beta$ or $\alpha,\beta\in \Delta$. In other words, the congruence $\equiv_{\Delta}$ crushes $\Delta$ to a point. The quotient $M$-set is denoted $\Omega/\Delta$. The class of $\Delta$, often denoted by $0$, is a sink and it is more natural to view $\Omega/\Delta$ as an $M$-set with zero. The reader should verify that if \begin{equation}\label{series} \Omega = \Omega_0\supset\Omega_1\supset \Omega_2\supset\cdots \supset \Omega_k \end{equation} is an unrefinable chain of $M$-invariant subsets, then the successive quotients $\Omega_i/\Omega_{i+1}$ are in bijection with the strong orbits of $M$ on $\Omega$. If we view $\Omega_i/\Omega_{i+1}$ as an $M$-set with zero, then it is a $0$-transitive $M$-set corresponding to the natural action of $M$ on the associated strong orbit by partial maps. Of course, $\Omega_k$ will be a minimal strong orbit and hence a minimal cyclic sub-$M$-set. For example, if $N$ is a submonoid of $M$, there are two natural congruences on the regular $M$-set associated to $N$: namely, the partition of $M$ into weak orbits of the left action of $N$ and the partition of $M$ into the strong orbits of the left action of $N$. To the best of the author's knowledge, only the latter has ever been used in the literature and most often when $M=N$. More generally, if $\Omega$ is an $M$-set, a relation $\rho$ on $\Omega$ is said to be \emph{stable} if $\alpha\mathrel{\rho} \beta$ implies $\alpha m\mathrel{\rho}\beta m$ for all $m\in M$. If $\Upsilon$ is any set, then we can make it into an $M$-set via the trivial action $\alpha m=\alpha$ for all $\alpha\in \Upsilon$ and $m\in M$; such $M$-sets are called \emph{trivial}. This gives rise to a functor $\Delta\colon \pv{Set}\to \pv{Set}^{M^{\mathrm{op}}}$. The functor $\pi_0\colon \pv{Set}^{M^{\mathrm {op}}}\to \pv{Set}$ provides the left adjoint.
More precisely, we have the following important proposition that will be used later when applying module theory. \begin{Prop}\label{connectedcomp} Let $\Omega$ be an $M$-set and $\Upsilon$ a trivial $M$-set. Then a function $f\colon \Omega\to \Upsilon$ belongs to $\hom_M(\Omega,\Upsilon)$ if and only if $f$ is constant on weak orbits. Hence $\hom_M(\Omega,\Upsilon)\cong \pv{Set}(\pi_0(\Omega),\Upsilon)$. \end{Prop} \begin{proof} As the weak orbits are $M$-invariant, if we view $\pi_0(\Omega)$ as a trivial $M$-set, then the projection map $\Omega\to \pi_0(\Omega)$ is an $M$-set morphism. Thus any map $f\colon \Omega\to \Upsilon$ that is constant on weak orbits is an $M$-set morphism. Conversely, suppose that $f\in \hom_M(\Omega,\Upsilon)$ and assume $\alpha \leq \beta\in \Omega$. Then $\alpha =\beta m$ for some $m\in M$ and so $f(\alpha)=f(\beta m)=f(\beta)m=f(\beta)$. Thus the relation $\leq$ is contained in $\ker f$. But $\simeq$ is the equivalence relation generated by $\leq$, whence $f$ is constant on weak orbits. This completes the proof. \end{proof} \begin{Rmk} The right adjoint of the functor $\Delta$ is the so-called ``global sections'' functor $\Gamma\colon \pv{Set}^{M^{\mathrm{op}}}\to \pv {Set}$ taking an $M$-set $\Omega$ to the set of $M$-invariants of $\Omega$, that is, the set of global fixed points of $M$ on $\Omega$. \end{Rmk} We shall also need some structure theory about automorphisms of $M$-sets. \begin{Prop}\label{schurforsets} Let $\Omega$ be a transitive $M$-set. Then every endomorphism of $\Omega$ is surjective. Moreover, the fixed point set of any non-trivial endomorphism of $\Omega$ is empty. In particular, the automorphism group of $\Omega$ acts freely on $\Omega$. \end{Prop} \begin{proof} If $f\colon \Omega\to \Omega$ is an endomorphism, then $f(\Omega)$ is $M$-invariant and hence coincides with $\Omega$. Suppose that $f$ has a fixed point. Then the fixed point set of $f$ is an $M$-invariant subset of $\Omega$ and thus coincides with $\Omega$. Therefore, $f$ is the identity. \end{proof} In particular, the endomorphism monoid of a finite transitive $M$-set is its automorphism group. \subsection{Green-Morita theory} An important role in the theory to be developed is the interplay between $M$ and its subsemigroups of the form $eMe$ with $e$ an idempotent of $M$. Notice that $eMe$ is a monoid with identity $e$. The group of units of $eMe$ is denoted $G_e$ and is called the \emph{maximal subgroup} of $M$ at $e$. The set of idempotents of $M$ shall be denoted $E(M)$; more generally, if $X\subseteq M$, then $E(X)=E(M)\cap X$. First we need to define the tensor product in the context of $M$-sets (cf.~\cite{actsbook,Mac-CWM}). Let $\Omega$ be a right $M$-set and $\Lambda$ a left $M$-set. A map $f\colon \Omega\times \Lambda\to \Phi$ of sets is \emph{$M$-bilinear} if $f(\omega m,\lambda) = f(\omega, m\lambda)$ for all $\omega\in \Omega$, $\lambda \in \Lambda$ and $m\in M$. The universal bilinear map is $\Omega\times \Lambda \to \Omega\otimes_M \Lambda$ given by $(\omega,\lambda)\mapsto \omega\otimes \lambda$. Concretely, $\Omega\otimes_M \Lambda$ is the quotient of $\Omega\times \Lambda$ by the equivalence relation generated by the relation $(\omega m,\lambda)\approx (\omega, m\lambda)$ for $\omega\in \Omega$, $\lambda\in \Lambda$ and $m\in M$. The class of $(\omega,\lambda)$ is denoted $\omega\otimes \lambda$.
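Since the defining relation merely generates the equivalence, the tensor product of finite $M$-sets can be computed by an evident union-find procedure. The sketch below is again only an illustration under assumed encodings: \texttt{Omega} and \texttt{Lambda} are finite iterables, \texttt{M} is a finite iterable of monoid elements, and \texttt{act\_r}, \texttt{act\_l} are callables (names of my choosing) implementing the right action on $\Omega$ and the left action on $\Lambda$.

\begin{verbatim}
def tensor(Omega, Lambda, M, act_r, act_l):
    # Omega (x)_M Lambda: the quotient of Omega x Lambda by the
    # equivalence generated by (w.m, l) ~ (w, m.l); each class
    # returned is one element w (x) l.
    pairs = [(w, l) for w in Omega for l in Lambda]
    idx = {p: i for i, p in enumerate(pairs)}
    parent = list(range(len(pairs)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for w in Omega:
        for l in Lambda:
            for m in M:
                a = find(idx[(act_r(w, m), l)])
                b = find(idx[(w, act_l(m, l))])
                parent[a] = b
    classes = {}
    for p, i in idx.items():
        classes.setdefault(find(i), []).append(p)
    return list(classes.values())
\end{verbatim}

For instance, when $\Lambda$ is a free left $M$-set on a basis $B$, the procedure returns $|\Omega\times B|$ classes, as Proposition~\ref{freesetbasisfortensor} below predicts.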
Suppose that $N$ is a monoid and that $\Lambda$ is also a right $N$-set. Moreover, assume that the left action of $M$ commutes with the right action of $N$; in this case we call $\Lambda$ a \emph{bi-$M$-$N$-set}. Then $\Omega\otimes_M \Lambda$ is a right $N$-set via the action $(\omega\otimes \lambda)n = \omega\otimes (\lambda n)$. That this is well defined follows easily from the fact that the relation $\approx$ is stable for the right $N$-set structure because the actions of $M$ and $N$ commute. For example, if $N$ is a submonoid of $M$ and $\{\ast\}$ is the trivial $N$-set, then $\{\ast\}\otimes_N M$ is easily verified to be isomorphic as an $M$-set to the quotient of the regular $M$-set by the weak orbits of the left action of $N$ on $M$. If $\Upsilon$ is a right $N$-set and $\Lambda$ a bi-$M$-$N$-set, then $\hom_N(\Lambda,\Upsilon)$ is a right $M$-set via the action $(fm)(\lambda) = f(m\lambda)$. The usual adjunction between tensor product and hom holds in this setting. We just sketch the proof idea. \begin{Prop}\label{adjunction} Let $\Omega$ be a right $M$-set, $\Lambda$ a bi-$M$-$N$-set and $\Upsilon$ a right $N$-set. Then there is a natural bijection \[\hom_N(\Omega\otimes_M \Lambda,\Upsilon)\cong \hom_M(\Omega,\hom_N(\Lambda,\Upsilon))\] of sets. \end{Prop} \begin{proof} Both sides are in bijection with $M$-bilinear maps $f\colon \Omega\times \Lambda\to \Upsilon$ satisfying $f(\omega,\lambda n) = f(\omega,\lambda)n$ for $\omega\in \Omega$, $\lambda\in \Lambda$ and $n\in N$. \end{proof} Something we shall need later is the description of $\Omega\otimes_M \Lambda$ when $\Lambda$ is a free left $M$-set. \begin{Prop}\label{freesetbasisfortensor} Let $\Omega$ be a right $M$-set and let $\Lambda$ be a free left $M$-set with basis $B$. Then $\Omega\otimes_M\Lambda$ is in bijection with $\Omega\times B$. More precisely, if $\lambda\in \Lambda$, then one can uniquely write $\lambda = m_{\lambda}b_{\lambda}$ with $m_{\lambda}\in M$ and $b_{\lambda}\in B$. The isomorphism takes $\omega\otimes \lambda$ to $(\omega m_{\lambda},b_{\lambda})$. \end{Prop} \begin{proof} It suffices to show that the map $f\colon \Omega\times \Lambda\to \Omega\times B$ given by $(\omega,\lambda)\mapsto (\omega m_{\lambda},b_{\lambda})$ is the universal $M$-bilinear map. It is bilinear because freeness implies that if $n\in M$, then since $n\lambda= nm_{\lambda}b_{\lambda}$, one has $m_{n\lambda} = nm_{\lambda}$ and $b_{n\lambda}=b_{\lambda}$. Thus \[f(\omega,n\lambda)=(\omega nm_{\lambda},b_{\lambda}) = f(\omega n,\lambda)\] and so $f$ is $M$-bilinear. Suppose now that $g\colon \Omega\times \Lambda\to \Upsilon$ is $M$-bilinear. Then define $h\colon \Omega\times B\to \Upsilon$ by $h(\omega,b) = g(\omega,b)$. Then \[h(f(\omega,\lambda))=h(\omega m_{\lambda},b_{\lambda}) = g(\omega m_{\lambda},b_{\lambda}) = g(\omega,\lambda)\] where the last equality uses $M$-bilinearity of $g$ and that $m_{\lambda}b_{\lambda}=\lambda$. This completes the proof. \end{proof} We are now in a position to present the analogue of the Morita-Green theory~\cite[Chapter 6]{Greenpoly} in the context of $M$-sets. This will be crucial for analyzing transformation monoids, in particular, primitive ones. The following result is proved in an identical manner to its ring theoretic counterpart. \begin{Prop}\label{restrictionfunctor} Let $e\in E(M)$ and let $\Omega$ be an $M$-set. Then there is a natural isomorphism $\hom_M(eM,\Omega)\cong \Omega e$. \end{Prop} \begin{proof} Define $\varphi\colon \hom_M(eM,\Omega)\to \Omega e$ by $\varphi(f)=f(e)$. This is well defined because $f(e)=f(ee)=f(e)e\in \Omega e$.
Conversely, if $\alpha\in \Omega e$, then one can define a morphism $F_{\alpha}\colon eM\to \Omega$ by $F_{\alpha}(m) = \alpha m$. Observe that $F_\alpha(e)=\alpha e=\alpha$ and so $\varphi(F_\alpha)=\alpha$. Thus to prove these constructions are inverses it suffices to observe that if $f\in \hom_M(eM,\Omega)$ and $m\in eM$, then $f(m)=f(em)=f(e)m=F_{\varphi(f)}(m)$ for all $m\in eM$. \end{proof} We shall need a stronger form of this proposition for the case of principal right ideals generated by idempotents. Associate to $M$ the category $M_E$ (known as the \emph{idempotent splitting} of $M$) whose object set is $E(M)$ and whose hom sets are given by $M_E(e,f) = fMe$. Composition \[M_E(f,g)\times M_E(e,f)\to M_E(e,g),\] for $e,f,g\in E(M)$, is given by $(m,n)\mapsto mn$. This is well defined since $gMf\cdot fMe\subseteq gMe$. One easily verifies that $e\in M_E(e,e)$ is the identity at $e$. The endomorphism monoid $M_E(e,e)$ of $e$ is $eMe$. The idempotent splitting plays a crucial role in semigroup theory~\cite{Tilson,qtheor}. The following result is well known to category theorists. \begin{Prop}\label{categoryofprojectiveMsets} The full subcategory $\pv C$ of $\pv{Set}^{M^{\mathrm {op}}}$ with objects the right $M$-sets $eM$ with $e\in E(M)$ is equivalent to the idempotent splitting $M_E$. Consequently, the endomorphism monoid of the $M$-set $eM$ is $eMe$ (with its natural left action on $eM$). \end{Prop} \begin{proof} Define $\psi\colon M_E\to \pv C$ on objects by $\psi(e)=eM$; this map is evidently surjective. We already know (by Proposition~\ref{restrictionfunctor}) that, for each pair of idempotents $e,f$ of $M$, there is a bijection $\psi_{e,f}\colon fMe\to \hom_M(eM,fM)$ given by $\psi_{e,f}(n) = F_n$ where $F_n(m)=nm$. So to verify that the family $\{\psi_{e,f}\}$, together with the object map $\psi$, provides an equivalence of categories, we just need to verify functoriality, that is, if $n_1\in fMe$ and $n_2\in gMf$, then $F_{n_2}\circ F_{n_1}=F_{n_2n_1}$ and $F_e=1_{eM}$. For the latter, clearly $F_e(m)=em=m$ for any $m\in eM$. As to the former, $F_{n_2}(F_{n_1}(m)) = F_{n_2}(n_1m) = n_2(n_1m)=F_{n_2n_1}(m)$. For the final statement, because $M_E(e,e)=eMe$ it suffices just to check that the actions coincide. But if $m\in eM$ and $n\in eMe$, then the corresponding endomorphism $F_n\colon eM\to eM$ takes $m$ to $nm$. \end{proof} As a consequence, we see that if $e,f\in E(M)$, then $eM\cong fM$ if and only if there exists $m\in eMf$ and $m'\in fMe$ such that $mm'=e$ and $m'm=f$. In semigroup theoretic lingo, this is the same thing as saying that $e$ and $f$ are $\mathscr D$-equivalent~\cite{CP,qtheor,Higginsbook,Green}. If $e,f\in E(M)$ are $\mathscr D$-equivalent, then because $eMe$ is the endomorphism monoid of $eM$ and $fMf$ is the endomorphism monoid of $fM$, it follows that $eMe\cong fMf$ (and hence $G_e\cong G_f$) as $eM\cong fM$. The reader familiar with Green's relations~\cite{Green,CP} should verify that the elements of $fMe$ representing isomorphisms $eM\to fM$ are exactly those $m\in M$ with $f\R m\eL e$. It is a special case of more general results from category theory that if $M$ and $N$ are monoids, then $\pv{Set}^{M^{\mathrm {op}}}$ is equivalent to $\pv{Set}^{N^{\mathrm{op}}}$ if and only if $M_E$ is equivalent to $N_E$, if and only if there exists $f\in E(N)$ such that $N= NfN$ and $M\cong fNf$; see also~\cite{Talwar3}.
In particular, for finite monoids $M$ and $N$ it follows that $\pv{Set}^{M^{\mathrm {op}}}$ and $\pv{Set}^{N^{\mathrm{op}}}$ are equivalent if and only if $M\cong N$ since the ideal generated by a non-identity idempotent of a finite monoid is proper. The proof goes something like this. The category $M_E$ is equivalent to the full subcategory on the projective indecomposable objects of $\pv{Set}^{M^{\mathrm {op}}}$ and hence is taken to $N_E$ under any equivalence $\pv{Set}^{M^{\mathrm {op}}}\to \pv{Set}^{N^{\mathrm{op}}}$. If the object $1$ of $M_E$ is sent to $f\in E(N)$, then $M\cong fNf$ and $N=NfN$. Conversely, if $f\in E(N)$ with $fNf\cong M$ and $NfN=N$, then $fN$ is naturally a bi-$M$-$N$-set using that $M\cong fNf$. The equivalence $\pv{Set}^{M^{\mathrm {op}}}\to \pv{Set}^{N^{\mathrm{op}}}$ then sends an $M$-set $\Omega$ to $\Omega\otimes _M fN$. Fix now an idempotent $e\in E(M)$. Then $eM$ is a left $eMe$-set and so $\hom_M(eM,\Omega)\cong \Omega e$ is a right $eMe$-set. The action on $\Omega e$ is given simply by restricting the action of $M$ to $eMe$. Thus there results a restriction functor $\mathop{\mathrm{res}}\nolimits_e\colon \pv{Set}^{M^{\mathrm {op}}}\to \pv{Set}^{eMe^{\mathrm {op}}}$ given by \[\mathop{\mathrm{res}}\nolimits_e(\Omega)=\Omega e.\] It is easy to check that this functor is exact in the sense that it preserves injectivity and surjectivity. It follows immediately from the isomorphism $\mathop{\mathrm{res}}\nolimits_e(-)\cong \hom_M(eM,(-))$ that $\mathop{\mathrm{res}}\nolimits_e$ has a left adjoint, called \emph{induction}, $\mathop{\mathrm{ind}}\nolimits_e\colon \pv{Set}^{eMe^{\mathrm {op}}}\to \pv{Set}^{M^{\mathrm {op}}}$ given by \[\mathop{\mathrm{ind}}\nolimits_e(\Omega) = \Omega\otimes_{eMe} eM.\] Observe that $\Omega\cong \mathop{\mathrm{ind}}\nolimits_e(\Omega)e$ as $eMe$-sets via the map $\alpha\mapsto \alpha\otimes e$ (which is the unit of the adjunction). As this map is natural, the functor $\mathop{\mathrm{res}}\nolimits_e\mathop{\mathrm{ind}}\nolimits_e$ is naturally isomorphic to the identity functor on $\pv{Set}^{eMe^{\mathrm {op}}}$. Let us note that if $\Omega$ is a right $M$-set, then each element of $\Omega\otimes_M Me$ can be uniquely written in the form $\alpha\otimes e$ with $\alpha\in \Omega e$. Thus the natural map $\Omega\otimes_M Me\to \Omega e$ sending $\alpha\otimes e$ to $\alpha e$ is an isomorphism. Hence Proposition~\ref{restrictionfunctor} shows that $\mathop{\mathrm{res}}\nolimits_e$ also has a right adjoint $\mathop{\mathrm{coind}}\nolimits_e\colon \pv{Set}^{eMe^{\mathrm {op}}}\to \pv{Set}^{M^{\mathrm {op}}}$, termed \emph{coinduction}, defined by putting \[\mathop{\mathrm{coind}}\nolimits_e(\Omega) = \hom_{eMe}(Me,\Omega).\] Note that $\mathop{\mathrm{coind}}\nolimits_e(\Omega) e\cong \Omega$ as $eMe$-sets via the map sending $f$ to $f(e)$ (which is the counit of the adjunction) and so $\mathop{\mathrm{res}}\nolimits_e\mathop{\mathrm{coind}}\nolimits_e$ is also naturally isomorphic to the identity functor on $\pv{Set}^{eMe^{\mathrm {op}}}$. The module theoretic analogues of these constructions are essential to much of representation theory, especially monoid representation theory~\cite{Greenpoly,myirreps,rrbg}. \begin{Prop}\label{inductionprop} Let $\Omega$ be an $eMe$-set. Then $\mathop{\mathrm{ind}}\nolimits_e(\Omega)eM=\mathop{\mathrm{ind}}\nolimits_e(\Omega)$. \end{Prop} \begin{proof} Indeed, $\alpha\otimes m = (\alpha\otimes e)m\in \mathop{\mathrm{ind}}\nolimits_e(\Omega)eM$ for $m\in eM$.
\end{proof} Let us now investigate these constructions in more detail. First we consider how the strong and weak orbits of $\Omega$ and $\Omega e$ interact. \begin{Prop}\label{relatedorbits} Let $\alpha,\beta \in \Omega e$. Then $\alpha\leq_\Omega \beta$ if and only if $\alpha\leq_{\Omega e}\beta$. In other words, there is an order embedding $f\colon \mathop{\mathrm{Pos}}\nolimits(\Omega e)\to \mathop{\mathrm{Pos}}\nolimits (\Omega)$ taking $\alpha eMe$ to $\alpha M$. \end{Prop} \begin{proof} Trivially, $\alpha\in \beta eMe$ implies $\alpha M\subseteq \beta M$. Conversely, suppose that $\alpha M\subseteq \beta M$. Then $\alpha eMe = \alpha Me\subseteq \beta Me=\beta eMe$. \end{proof} As an immediate consequence, we have: \begin{Cor}\label{restrictorbit} The strong orbits of $\Omega e$ are the sets of the form $\mathcal O_s(\alpha)\cap \Omega e$ with $\alpha\in \Omega e$. Consequently, if $\Omega$ is a transitive $M$-set, then $\Omega e$ is a transitive $eMe$-set. \end{Cor} The relationship between weak orbits of $\Omega$ and $\Omega e$ is a bit more tenuous. \begin{Prop}\label{weakorbitrestriction} There is a surjective map $\varphi\colon \pi_0(\Omega e)\to \pi_0(\Omega)$. Hence if $\Omega e$ is weakly transitive, then $\Omega$ is weakly transitive. \end{Prop} \begin{proof} The order embedding $\mathop{\mathrm{Pos}}\nolimits(\Omega e)\to \mathop{\mathrm{Pos}}\nolimits(\Omega)$ from Proposition~\ref{relatedorbits} induces a map $\varphi\colon \pi_0(\Omega e)\to \pi_0(\Omega)$ that sends the weak orbit of $\alpha\in \Omega e$ under $eMe$ to its weak orbit $\mathcal O_w(\alpha)$ under $M$. This map is onto, because $\mathcal O_w(\omega)=\mathcal O_w(\omega e)$ for any $\omega\in \Omega$. \end{proof} In general, the map $\varphi$ in Proposition~\ref{weakorbitrestriction} is not injective. For example, let $\Omega = \{1,2,3\}$ and let $M$ consist of the identity map on $\Omega$ together with the maps \[e=\begin{pmatrix} 1 & 2& 3\\ 2 &2 &3\end{pmatrix}, \quad f=\begin{pmatrix} 1 & 2& 3\\ 3 & 2& 3\end{pmatrix}.\] Then $M$ is weakly transitive on $\Omega$, but $eMe = \{e\}$, $\Omega e=\{2,3\}$ and $eMe$ is not weakly transitive on $\Omega e$. Next we relate the substructures and the quotient structures of $\Omega$ and $\Omega e$ via Galois connections. The former is the easier one to deal with. If $\Omega$ is an $M$-set, then $\mathop{\mathrm{Sub}}\nolimits_M(\Omega)$ will denote the poset of $M$-invariant subsets. \begin{Prop}\label{substructureGalois} There is a surjective map of posets \[\psi\colon \mathop{\mathrm{Sub}}\nolimits_M(\Omega)\to \mathop{\mathrm{Sub}}\nolimits_{eMe}(\Omega e)\] given by $\Lambda\mapsto \Lambda e$. Moreover, $\psi$ admits an injective left adjoint given by $\Delta\mapsto \Delta M$. More concretely, this means that $\Delta M$ is the least $M$-invariant subset $\Lambda$ such that $\Lambda e=\Delta$. \end{Prop} \begin{proof} If $\Lambda$ is $M$-invariant, then $\Lambda e\cdot eMe\subseteq \Lambda e$ and hence $\Lambda e\in \mathop{\mathrm{Sub}}\nolimits_{eMe}(\Omega e)$. Clearly, $\psi$ is an order preserving map. If $\Delta\subseteq \Omega e$ is $eMe$-invariant, then $\Delta M$ is $M$-invariant and $\Delta = \Delta e\subseteq \Delta Me=\Delta eMe\subseteq \Delta$. Thus $\psi$ is surjective. Moreover, if $\Lambda\in \mathop{\mathrm{Sub}}\nolimits_M(\Omega)$ satisfies $\Lambda e=\Delta$, then $\Delta M\subseteq \Lambda eM\subseteq \Lambda$. This completes the proof. \end{proof} We now show that induction preserves transitivity.
\begin{Prop}\label{preservetransitive} Let $\Omega$ be a transitive $eMe$-set. Then $\mathop{\mathrm{ind}}\nolimits_e(\Omega)$ is a transitive $M$-set. \end{Prop} \begin{proof} Since $\mathop{\mathrm{ind}}\nolimits_e(\Omega)e\cong \Omega$ is transitive, if $\Lambda\subseteq \mathop{\mathrm{ind}}\nolimits_e(\Omega)$ is $M$-invariant, then we have $\Lambda e=\mathop{\mathrm{ind}}\nolimits_e(\Omega)e$. Thus Propositions~\ref{inductionprop} and~\ref{substructureGalois} yield $\mathop{\mathrm{ind}}\nolimits_e(\Omega)=\mathop{\mathrm{ind}}\nolimits_e(\Omega)eM\subseteq \Lambda$ establishing the desired transitivity. \end{proof} It is perhaps more surprising that similar results also hold for the congruence lattice. If $\Omega$ is an $M$-set, denote by $\mathop{\mathrm{Cong}}\nolimits_M(\Omega)$ the lattice of congruences on $\Omega$. If $\equiv$ is a congruence on $\Omega e$, then we define a congruence $\equiv'$ on $\Omega$ by $\alpha\equiv' \beta$ if and only if $\alpha me\equiv \beta me$ for all $m\in M$. \begin{Prop}\label{enlargecongruence} Let $\equiv$ be a congruence on $\Omega e$. Then: \begin{enumerate} \item $\equiv'$ is a congruence on $\Omega$; \item $\equiv'$ restricts to $\equiv$ on $\Omega e$; \item $\equiv'$ is the largest congruence on $\Omega$ satisfying (2). \end{enumerate} \end{Prop} \begin{proof} Trivially, $\equiv'$ is an equivalence relation. To see that it is a congruence, suppose $\alpha\equiv' \beta$ and $n\in M$. Then, for any $m\in M$, we have $\alpha nme\equiv \beta nme$ by definition of $\equiv'$. Thus $\alpha n\equiv' \beta n$ and so $\equiv'$ is a congruence. To prove (2), suppose that $\alpha,\beta\in \Omega e$. If $\alpha\equiv' \beta$, then $\alpha=\alpha e\equiv \beta e=\beta$ by definition of $\equiv'$. Conversely, if $\alpha\equiv \beta$ and $m\in M$, then $\alpha me=\alpha eme\equiv \beta eme=\beta me$. Thus $\alpha\equiv' \beta$. Finally, suppose that $\approx$ is a congruence on $\Omega$ that restricts to $\equiv$ on $\Omega e$ and assume $\alpha\approx \beta$. Then for any $m\in M$, we have $\alpha me,\beta me\in \Omega e$ and $\alpha me\approx \beta me$. Thus $\alpha me\equiv \beta me$ by hypothesis and so $\alpha\equiv' \beta$. This completes the proof. \end{proof} Let us reformulate this result from a categorical viewpoint. \begin{Prop}\label{quotientGalois} The map $\varrho\colon \mathop{\mathrm{Cong}}\nolimits_M(\Omega)\to \mathop{\mathrm{Cong}}\nolimits_{eMe}(\Omega e)$ induced by restriction is a surjective morphism of posets. Moreover, it admits an injective right adjoint given by ${\equiv}\mapsto {\equiv'}$. \end{Prop} \section{Transformation monoids}\label{sthree} A \emph{transformation monoid} is a pair $(\Omega, M)$ where $\Omega$ is a set and $M$ is a submonoid of $T_{\Omega}$. Notice that if $e\in E(M)$, then $(\Omega e,eMe)$ is also a transformation monoid. Indeed, if $m,m'\in eMe$ restrict to the same function on $\Omega e$, then for any $\alpha\in \Omega$, we have $\alpha m=\alpha em=\alpha em'=\alpha m'$ and hence $m=m'$. A transformation monoid $(\Omega, M)$ is said to be \emph{finite} if $\Omega$ is finite. Of course, in this case $M$ is finite, too. In this paper, we are primarily interested in the theory of finite transformation monoids. If $|\Omega|=n$, then we say that $(\Omega,M)$ has \emph{degree} $n$. \subsection{The minimal ideal} For the moment assume that $(\Omega, M)$ is a finite transformation monoid. Following standard semigroup theory notation going back to Sch\"utzenberger, if $m\in M$, then $m^{\omega}$ denotes the unique idempotent that is a positive power of $m$. Such a power exists because finiteness implies $m^k=m^{k+n}$ for some $k>0$ and $n>0$. Then $m^{a+n}=m^a$ for any $a\geq k$ and so if $r$ is the unique natural number $k\leq r\leq k+n-1$ that is divisible by $n$, then $(m^{r})^2=m^{2r}=m^r$. Uniqueness follows because $\{m^a\mid a\geq k\}$ is easily verified to be a cyclic group with identity $m^r$. For the basic structure theory of finite semigroups, the reader is referred to~\cite{Arbib} or~\cite[Appendix A]{qtheor}.
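The argument just sketched is entirely effective. For illustration only (under the same tuple encoding as before, with \texttt{compose} an assumed product function of my choosing), the idempotent power $m^{\omega}$ can be computed by locating the index and period of the sequence $m, m^2, m^3,\ldots$:

\begin{verbatim}
def omega_power(m, compose):
    # Walk m, m^2, m^3, ... until the first repetition m^k = m^{k0}.
    seen, power, k = {}, m, 1
    while power not in seen:
        seen[power] = k
        power, k = compose(power, m), k + 1
    k0 = seen[power]             # index of the first repeated power
    n = k - k0                   # period: m^{a+n} = m^a for a >= k0
    r = ((k0 + n - 1) // n) * n  # the multiple of n with k0 <= r <= k0+n-1
    result = m                   # now compute m^r; (m^r)^2 = m^{2r} = m^r
    for _ in range(r - 1):
        result = compose(result, m)
    return result

# For transformations encoded as tuples (s followed by t):
compose = lambda s, t: tuple(t[s[a]] for a in range(len(s)))
\end{verbatim}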
If $M$ is a monoid, then a \emph{right ideal} $R$ of $M$ is a non-empty subset $R$ so that $RM\subseteq R$; in other words, right ideals are $M$-invariant subsets of the (right) regular $M$-set. Left ideals are defined dually. The strong orbits of the regular $M$-set are called \emph{$\R$-classes} in the semigroup theory literature. An \emph{ideal} is a subset of $M$ that is both a left and right ideal. If $M$ is a monoid, then $M^{\mathrm {op}}$ denotes the monoid obtained by reversing the multiplication. Notice that $M^{\mathrm {op}}\times M$ acts on $M$ by putting $x(m,m') = mxm'$. The ideals are then the $M^{\mathrm {op}}\times M$-invariant subsets; note that this action is weakly transitive. The strong orbits of this action are called \emph{$\J$-classes} in the semigroup literature. If $\Lambda$ is an $M$-set and $R$ is a right ideal of $M$, then observe that $\Lambda R$ is an $M$-invariant subset of $\Lambda$. A key property of finite monoids that we shall use repeatedly is stability. A monoid $M$ is \emph{stable} if, for any $m,n\in M$, one has that: \begin{align*} MmnM=MmM &\iff mnM=mM;\\ MnmM=MmM&\iff Mnm=Mm. \end{align*} A proof can be found, for instance, in~\cite[Appendix A]{qtheor}. We offer a different (and easier) proof here for completeness. \begin{Prop} Finite monoids are stable. \end{Prop} \begin{proof} We handle only the first of the two conditions. Trivially, $mnM=mM$ implies $MmnM=MmM$. For the converse, assume $MmnM=MmM$. Clearly, $mnM\subseteq mM$. Suppose that $u,v\in M$ with $umnv=m$. Then $mM\subseteq umnM$ and hence $|mM|\leq |umnM|\leq |mnM|\leq |mM|$. It follows that $mM=mnM$. \end{proof} An important consequence is the following. Let $G$ be the group of units of a finite monoid $M$. By stability, it follows that every right/left unit of $M$ is a unit and consequently $M\setminus G$ is an ideal. Indeed, suppose $m$ has a right inverse $n$, i.e., $mn=1$. Then $MmM=M=M1M$ and so by stability $Mm=M$. Thus $m$ has a left inverse and hence an inverse. The following result is usually proved via stability, but we use instead the techniques of this paper. \begin{Prop}\label{D=J} Let $M$ be a finite monoid and suppose that $e,f\in E(M)$. Then $eM\cong fM$ if and only if $MeM=MfM$. Consequently, if $e,f\in E(M)$ with $MeM=MfM$, then $eMe\cong fMf$ and hence $G_e\cong G_f$. \end{Prop} \begin{proof} If $eM\cong fM$, then by Proposition~\ref{categoryofprojectiveMsets} there exist $m\in fMe$ and $m'\in eMf$ with $m'm=e$ and $mm'=f$. Thus $MeM=MfM$. Conversely, if $MeM=MfM$, choose $u,v\in M$ with $uev=f$ and put $m=fue$, $m'=evf$. Then $m\in fMe$, $m'\in eMf$ and $mm'=fueevf=f$. Thus the morphism $F_m\colon eM\to fM$ corresponding to $m$ (as per Proposition~\ref{categoryofprojectiveMsets}) is surjective and in particular $|fM|\leq |eM|$. By symmetry, $|eM|\leq |fM|$ and so $F_m$ is an isomorphism by finiteness.
The last statement follows since $eM\cong fM$ implies that $eMe\cong fMf$ by Proposition~\ref{categoryofprojectiveMsets} and hence $G_e\cong G_f$. \end{proof} A finite monoid $M$ has a unique minimal ideal $I(M)$. Indeed, if $I_1,I_2$ are ideals, then $I_1I_2\subseteq I_1\cap I_2$ and hence the set of ideals of $M$ is downward directed and so has a unique minimum by finiteness. Trivially, $I(M)= MmM=I(M)mI(M)$ for any $m\in I(M)$ and hence $I(M)$ is a simple semigroup (meaning it has no proper ideals). Such semigroups are determined up to isomorphism by Rees's theorem~\cite{CP,qtheor,Rees} as Rees matrix semigroups over groups. However, we shall not need the details of this construction in this paper. If $m\in I(M)$, then $m^{\omega}\in I(M)$ and so $I(M)$ contains idempotents. Let $e\in E(I(M))$. The following proposition is a straightforward consequence of the structure theory of finite semigroups. We include a somewhat non-standard proof using transformation monoids. \begin{Prop}\label{schutzrep} Let $M$ be a finite monoid and $e\in E(I(M))$. Then \begin{enumerate} \item $eM$ is a transitive $M$-set; \item $eMe=G_e$; \item $G_e$ is the automorphism group of $eM$. In particular, $eM$ is a free left $G_e$-set; \item If $f\in E(I(M))$, then $fM\cong eM$ and hence $G_e\cong G_f$. \end{enumerate} \end{Prop} \begin{proof} If $m\in eM$, then $m=em$ and hence, as $MemM=I(M)=MeM$, stability yields $eM=emM=mM$. Thus $eM$ is a transitive $M$-set. Since $eM$ is finite, Proposition~\ref{schurforsets} shows that the endomorphism monoid of $eM$ coincides with its automorphism group, which moreover acts freely on $eM$. But the endomorphism monoid is $eMe$ by Proposition~\ref{categoryofprojectiveMsets}. Thus $eMe=G_e$ and $eM$ is a free left $G_e$-set. For the final statement, observe that $MeM=I(M)=MfM$ and apply Proposition~\ref{D=J}. \end{proof} It is useful to know the following classical characterization of the orbits of $G_e$ on $eM$. \begin{Prop}\label{Lclasses} Let $e\in E(I(M))$ and $m,m'\in eM$. Then $G_em=G_em'$ if and only if $Mm=Mm'$. \end{Prop} \begin{proof} This is immediate from the dual of Proposition~\ref{relatedorbits} and the fact that $eMe=G_e$. \end{proof} An element $s$ of a semigroup $S$ is called (von Neumann) \emph{regular} if $s=sts$ for some $t\in S$. For example, every element of $T_{\Omega}$ is regular~\cite{CP}. It is well known that, for a finite monoid $M$, every element of $I(M)$ is regular in the semigroup $I(M)$. In fact, we have the following classical result. \begin{Prop}\label{unionofgroups} Let $M$ be a finite monoid. Then the disjoint union \[I(M)=\biguplus_{e\in E(I(M))} G_e\] is valid. Consequently, each element of $I(M)$ is regular in $I(M)$. \end{Prop} \begin{proof} Clearly maximal subgroups are disjoint. Suppose $m\in I(M)$ and choose $k>0$ so that $e=m^k$ is idempotent. Then because \[MeM=Mmm^{k-1}M=I(M)=MmM,\] we have by stability that $eM=mM$. Thus $em=m$ and similarly $me=m$. Hence $m\in eMe=G_e$. This establishes the disjoint union. Clearly, if $g$ is in the group $G_e$, then $gg^{-1} g=g$ and so $g$ is regular. \end{proof} The next result is standard. Again we include a proof for completeness. \begin{Prop}\label{LRdontchange} Let $N$ be a submonoid of $M$ and suppose that $n,n'\in N$ are regular in $N$. Then $nN=n'N$ if and only if $nM=n'M$ and dually $Nn=Nn'$ if and only if $Mn=Mn'$. \end{Prop} \begin{proof} We handle only the case of right ideals. Trivially, $nN=n'N$ implies $nM=n'M$. For the converse, suppose $nM=n'M$.
Write $n'=n'bn'$ with $b\in N$. Assume that $n=n'm$ with $m\in M$. Then $n'bn=n'bn'm=n'm=n$ and so $nN\subseteq n'N$. A symmetric argument establishes $n'N\subseteq nN$. \end{proof} In the case $M\leq T_{\Omega}$, the minimal ideal has a (well-known) natural description. Let $\Omega$ be a finite set and let $f\in T_{\Omega}$. Define the \emph{rank} of $f$ \[\mathop{\mathrm{rk}}\nolimits(f)=|f(\Omega)|\] by analogy with linear algebra. It is well known and easy to prove that $T_{\Omega}fT_{\Omega}=T_{\Omega}gT_{\Omega}$ if and only if $\mathop{\mathrm{rk}}\nolimits(f)=\mathop{\mathrm{rk}}\nolimits(g)$~\cite{CP,Higginsbook}. By stability it follows that $f\in G_{f^{\omega}}$ if and only if $\mathop{\mathrm{rk}}\nolimits(f)=\mathop{\mathrm{rk}}\nolimits(f^2)$. The next theorem should be considered folklore. \begin{Thm}\label{minimalideal} Let $(\Omega,M)$ be a transformation monoid with $\Omega$ finite. Let $r$ be the minimum rank of an element of $M$. Then \[I(M)=\{m\in M\mid \mathop{\mathrm{rk}}\nolimits(m)=r\}.\] \end{Thm} \begin{proof} Let $J=\{m\in M\mid \mathop{\mathrm{rk}}\nolimits(m)=r\}$; it is clearly an ideal and so $I(M)\subseteq J$. Suppose $m\in J$. Then $m^2\in J$ and so $\mathop{\mathrm{rk}}\nolimits(m^2)=r=\mathop{\mathrm{rk}}\nolimits (m)$. Thus $m$ belongs to the maximal subgroup of $T_{\Omega}$ at $m^{\omega}$ and so $m^k=m$ for some $k>1$. It follows that $m$ is regular in $M$. Suppose now that $e\in E(I(M))$. Then we can find $u,v\in M$ with $umv=e$. Then $eumv=e$ and so $eumM=eM$. Because $\mathop{\mathrm{rk}}\nolimits(eum)=r=\mathop{\mathrm{rk}}\nolimits(m)$, it follows that $T_{\Omega}eum=T_{\Omega}m$ by stability. But $eum$ and $m$ are regular in $M$ (the former by Proposition~\ref{unionofgroups}) and thus $Meum=Mm$ by Proposition~\ref{LRdontchange}. Thus $m\in I(M)$ completing the proof that $J=I(M)$. \end{proof} We call the number $r$ from the theorem the \emph{min-rank} of the transformation monoid $(\Omega,M)$. Some authors call this the rank of $M$, but this conflicts with the well-established usage of the term ``rank'' in permutation group theory. In $T_{\Omega}$ one has $fT_{\Omega}=gT_{\Omega}$ if and only if $\ker f=\ker g$ and $T_{\Omega}f=T_{\Omega}g$ if and only if $\Omega f=\Omega g$~\cite{CP,Higginsbook}. Therefore, Proposition~\ref{LRdontchange} immediately yields: \begin{Prop} Let $(\Omega,M)$ be a finite transformation monoid and suppose $m,m'\in I(M)$. Then $mM=m'M$ if and only if $\ker m=\ker m'$ and $Mm=Mm'$ if and only if $\Omega m=\Omega m'$. \end{Prop} The action of $M$ on $\Omega$ induces an action of $M$ on the power set $P(\Omega)$. Define \[\min_M\nolimits(\Omega) = \{\Omega m\mid m\in I(M)\}\] to be the set of images of elements of $M$ of minimal rank. \begin{Prop}\label{minsetinvariant} The set $\min_M(\Omega)$ is an $M$-invariant subset of $P(\Omega)$. \end{Prop} \begin{proof} Observe that $\min_M(\Omega) = \{\Omega\}I(M)$ and the latter set is trivially $M$-invariant. \end{proof} Let $s\in I(M)$ and suppose that $\ker s = \{P_1,\ldots,P_r\}$. Then if $X\in \min_M(\Omega)$, the fact that $r=|Xs|=|X|$ implies that $|X\cap P_i|\leq 1$ for $i=1,\ldots, r$. But since $\ker s$ is a partition into $r=|X|$ blocks, we conclude that $|X\cap P_i|=1$ for all $i=1,\ldots, r$. We state this as a proposition. \begin{Prop}\label{kernelpartition} Let $X\in \min_M(\Omega)$ and $s\in I(M)$. Suppose that $P$ is a block of $\ker s$. Then $|X\cap P|=1$. In particular, right multiplication by $s$ induces a bijection $X\to Xs$.
\end{Prop} We now restate some of our previous results specialized to the case of minimal idempotents. See also~\cite{berstelperrinreutenauer}. \begin{Prop}\label{minimalfacts} Let $(\Omega,M)$ be a finite transformation monoid and let $e\in E(I(M))$. Then: \begin{enumerate} \item $(\Omega e,G_e)$ is a permutation group of degree the min-rank of $M$; \item $|\Omega e/G_e|\geq |\pi_0(\Omega)|$; \item If $M$ is transitive on $\Omega$, then $(\Omega e,G_e)$ is a transitive permutation group. \end{enumerate} \end{Prop} Another useful and well-known fact is that if $(\Omega,M)$ is a finite transitive transformation monoid, then $I(M)$ is transitive on $\Omega$. \begin{Prop}\label{minidealistrans} Let $(\Omega,M)$ be a finite transitive transformation monoid. Then the semigroup $I(M)$ is transitive on $\Omega$ (i.e., there are no proper $I(M)$-invariant subsets). \end{Prop} \begin{proof} If $\alpha\in \Omega$, then $\alpha I(M)$ is $M$-invariant and so $\alpha I(M)=\Omega$. \end{proof} In the case that the maximal subgroup $G_e$ of the minimal ideal is trivial and the action of $M$ on $\Omega$ is transitive, one has that each element of $I(M)$ acts as a constant map and $\Omega\cong eM$. This fact should be considered folklore. \begin{Prop}\label{constantmapcase} Let $(\Omega,M)$ be a finite transitive transformation monoid and let $e\in E(I(M))$. Suppose that $G_e$ is trivial. Then $I(M)=eM$, $\Omega\cong eM$ and $I(M)$ is the set of constant maps on $\Omega$. \end{Prop} \begin{proof} If $f\in E(I(M))$, then $G_f\cong G_e$ implies $G_f$ is trivial. Proposition~\ref{unionofgroups} then implies that $I(M)$ consists only of idempotents. By Proposition~\ref{minimalfacts}, the action of $G_f$ on $\Omega f$ is transitive and hence $|\Omega f|=1$; say $\Omega f=\{\omega_f\}$. Thus each element of $I(M)$ is a constant map. In particular, $ef=f$ for all $f\in I(M)$ and hence $eM=I(M)$. By transitivity of $I(M)$ on $\Omega$ (Proposition~\ref{minidealistrans}), we have that each element of $\Omega$ is the image of a constant map from $I(M)$. Consequently, we have a bijection $eM\to \Omega$ given by $f\mapsto \omega_f$ (injectivity follows from faithfulness of the action on $\Omega$). The map is a morphism of $M$-sets because if $m\in M$, then $fm\in I(M)$ and $\Omega fm= \{\omega_fm\}$ and so $\omega_{fm}=\omega_fm$ by definition. This shows that $\Omega\cong eM$. \end{proof} Let us relate $I(M)$ to the socle of $\Omega$. \begin{Prop}\label{socle} Let $(\Omega,M)$ be a finite transformation monoid. Then $\Omega I(M)=\soc \Omega$. Hence the min-ranks of $\Omega$ and $\soc \Omega$ coincide. \end{Prop} \begin{proof} Let $\alpha\in \soc \Omega$. Then $\alpha M$ is a minimal cyclic sub-$M$-set and hence a transitive $M$-set. Therefore, $\alpha M=\alpha I(M)$ by transitivity of $M$ on $\alpha M$ and so $\alpha \in \Omega I(M)$. Conversely, suppose that $\alpha\in \Omega I(M)$, say $\alpha =\omega m$ with $\omega\in \Omega$ and $m\in I(M)$. Let $\beta\in \alpha M$. We show that $\beta M=\alpha M$, which will establish the minimality of $\alpha M$. Suppose that $\beta = \alpha n$ with $n\in M$. Then $\beta = \omega mn$ and $mn\in I(M)$. Stability now yields $mM=mnM$ and so we can find $n'\in M$ with $mnn'=m$. Thus $\beta n'=\omega mnn'=\omega m=\alpha$. It now follows that $\alpha M$ is minimal and hence $\alpha\in \soc \Omega$. \end{proof}
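In computational terms, Theorem~\ref{minimalideal} and Proposition~\ref{socle} say that for a finite transformation monoid the minimal ideal, the family $\min_M(\Omega)$ and the socle can all be read off from ranks and images. The following Python fragment (a sketch only, in the tuple encoding used earlier) does exactly that.

\begin{verbatim}
def rank(t):
    # rk(t) = size of the image of t, by analogy with linear algebra.
    return len(set(t))

def minimal_ideal(M):
    # I(M) is the set of elements of minimal rank (the theorem above).
    r = min(rank(t) for t in M)
    return {t for t in M if rank(t) == r}

def min_images(M):
    # min_M(Omega): the images of the elements of minimal rank.
    return {frozenset(t) for t in minimal_ideal(M)}

def socle(M):
    # soc(Omega) = Omega I(M): the union of the minimal images.
    return set().union(*min_images(M))
\end{verbatim}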
\subsection{Wreath products} We shall mostly be interested in transitive (and later $0$-transitive) transformation semigroups. In this section we relate transitive transformation monoids to induced transformation monoids and give an alternative description of certain tensor products in terms of wreath products. This latter approach underlies the Sch\"utzenberger representation of a monoid~\cite{Schutzrep,CP,qtheor}. Throughout this section, $M$ is a finite monoid. Not all finite monoids have a faithful transitive representation. A monoid $M$ is called \emph{right mapping} with respect to its minimal ideal if it acts faithfully on the right of $I(M)$~\cite{Arbib,qtheor}. Regularity implies that if $e_1,\ldots,e_k$ are idempotents forming a transversal to the $\R$-classes of $I(M)$, then $I(M) = \biguplus_{i=1}^k e_iM$. (Indeed, if $mnm=m$, then $mn$ is idempotent and $mM=mnM$.) But all these right $M$-sets are isomorphic (Proposition~\ref{schutzrep}). Thus $M$ is right mapping with respect to $I(M)$ if and only if $M$ acts faithfully on $eM$ for some (equivalently, any) idempotent of $I(M)$ and so in particular $M$ has a faithful transitive representation. The converse is true as well. \begin{Prop}\label{inducedquotient} Let $(\Omega, M)$ be a transformation monoid and let $e\in E(M)$. Suppose that $\Omega = \Omega eM$, e.g., if $M$ is transitive. Then $M$ acts faithfully on $eM$ and there is a surjective morphism $f\colon \mathop{\mathrm{ind}}\nolimits_e(\Omega e)\to \Omega$ of $M$-sets. \end{Prop} \begin{proof} The counit of the adjunction yields a morphism $f\colon \mathop{\mathrm{ind}}\nolimits_e(\Omega e)\to \Omega$, which is surjective because \[f(\mathop{\mathrm{ind}}\nolimits_e(\Omega e)) = f(\mathop{\mathrm{ind}}\nolimits_e(\Omega e)eM) = \Omega eM=\Omega\] where we have used Proposition~\ref{inductionprop} and that $f$ takes $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)e$ bijectively to $\Omega e$. Trivially, if $m,m'\in M$ act the same on $eM$, then they act the same on $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)=\Omega e\otimes_{eMe} eM$. It follows from the surjectivity of $f$ that $m,m'$ also act the same on $\Omega$ and so $m=m'$. \end{proof} As a consequence we see that a finite monoid $M$ has a faithful transitive representation if and only if it is right mapping with respect to its minimal ideal. Suppose that $(\Omega,M)$ and $(\Lambda,N)$ are transformation monoids. Then $N$ acts on the left of the monoid $M^{\Lambda}$ by endomorphisms by putting $(nf)(\lambda) = f(\lambda n)$. The corresponding semidirect product $M^{\Lambda}\rtimes N$ acts faithfully on $\Omega\times \Lambda$ via the action \[(\omega,\lambda)(f,n) = (\omega f(\lambda),\lambda n).\] The resulting transformation monoid $(\Omega\times \Lambda, M^{\Lambda}\rtimes N)$ is called the \emph{transformation wreath product} and is denoted $(\Omega, M)\wr (\Lambda, N)$. The semidirect product $M^{\Lambda}\rtimes N$ is denoted $M\wr (\Lambda,N)$. The wreath product is well known to be associative on the level of transformation monoids~\cite{Eilenberg}. Suppose now that $M$ is finite and $e\in E(I(M))$. Notice that since $G_e$ acts on the left of $eM$ by automorphisms, the quotient set $G_e\backslash eM$ has the structure of a right $M$-set given by $G_en\cdot m = G_enm$. The resulting transformation monoid is denoted $(G_e\backslash eM,\mathop{\mathsf{RLM}}\nolimits(M))$ in the literature~\cite{qtheor,Arbib}. The monoid $\mathop{\mathsf{RLM}}\nolimits(M)$ is called the \emph{right letter mapping} of $M$.
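The wreath product action and multiplication are direct transcriptions of the formulas above. The Python fragment below is a sketch under assumed encodings of my choosing (\texttt{f}, \texttt{g} are dicts from $\Lambda$ to $M$; the multiplications and actions are callables):

\begin{verbatim}
def wreath_act(point, elt, act_M, act_N):
    # (w, l)(f, n) = (w f(l), l n): the action of the semidirect
    # product M^Lambda x| N on Omega x Lambda.
    (w, l), (f, n) = point, elt
    return (act_M(w, f[l]), act_N(l, n))

def wreath_mult(x, y, mult_M, mult_N, act_N, Lambda):
    # (f, n)(g, p) = (l -> f(l) g(l n), n p), where (n g)(l) = g(l n)
    # is the left action of N on M^Lambda by endomorphisms.
    (f, n), (g, p) = x, y
    h = {l: mult_M(f[l], g[act_N(l, n)]) for l in Lambda}
    return (h, mult_N(n, p))
\end{verbatim}

One checks that acting by $(f,n)$ and then by $(g,p)$ agrees with acting by their product, which is exactly the compatibility built into the semidirect product.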
Let us consider the following slightly more general situation. Suppose that $G$ is a group and $M$ is a monoid. Let $\Lambda$ be a right $M$-set and suppose that $G$ acts freely on the left of $\Lambda$ by automorphisms of the $M$-action. Then $M$ acts naturally on the right of $G\backslash \Lambda$. Let $B$ be a transversal to $G\backslash \Lambda$; then $\Lambda$ is a free $G$-set on $B$. Suppose that $\Omega$ is a right $G$-set. Then Proposition~\ref{freesetbasisfortensor} shows that $\Omega\otimes_G \Lambda$ is in bijection with $\Omega\times B$ and hence in bijection with $\Omega\times G\backslash \Lambda$. If we write $\ov{G\lambda}$ for the representative from $B$ of the orbit $G\lambda$ and define $g_{\lambda}\in G$ by $\lambda =g_{\lambda}\ov{G\lambda}$, then the bijection is $\omega\otimes \lambda\mapsto (\omega g_{\lambda},\ov{G\lambda})\mapsto (\omega g_{\lambda},G\lambda)$. The action of $M$ is then given by $(\omega,G\lambda)m = (\omega g_{\ov{G\lambda} m},G\lambda m)$. This can be rephrased in terms of the wreath product, an idea going back to Frobenius for groups and Sch\"utzenberger for monoids~\cite{CP,CP2}; see also~\cite{selfsimilar} for a recent exposition in the group theoretic context. \begin{Prop}\label{generalizedschutz} Let $(\Lambda,M)$ be a transformation monoid and suppose that $G$ is a group of automorphisms of the $M$-set $\Lambda$ acting freely on the left. Let $\Omega$ be a right $G$-set. Then: \begin{enumerate} \item If $\Omega$ is a transitive $G$-set and $\Lambda$ is a transitive $M$-set, then $\Omega\otimes_G \Lambda\cong \Omega\times G\backslash \Lambda$ is a transitive $M$-set. \item If $\Omega$ is a faithful $G$-set, then the action of $M$ on $\Omega\otimes _G \Lambda\cong \Omega\times G\backslash \Lambda$ is faithful and is contained in the wreath product \[(\Omega,G)\wr (G\backslash \Lambda,\ov M)\] where $\ov M$ is the quotient of $M$ by the kernel of its action on $G\backslash \Lambda$. \end{enumerate} \end{Prop} \begin{proof} We retain the notation from just before the proof. We begin with (1). Let $(\alpha_0,G\lambda_0)$ and $(\alpha_1,G\lambda_1)$ be elements of $\Omega\times G\backslash \Lambda$. Without loss of generality, we may assume $\lambda_0,\lambda_1\in B$. By transitivity we can choose $m\in M$ with $\lambda_0m=\lambda_1$. Then $(\alpha_0,G\lambda_0)m = (\alpha_0,G\lambda_1)$. Next, by transitivity of $G$, we can find $g\in G$ with $\alpha_0g=\alpha_1$. By transitivity of $M$, there exists $m'\in M$ such that $g\lambda_1=\lambda_1m'$. Then $\ov{G\lambda_1m'}=\lambda_1$ and $g_{\lambda_1 m'}=g$. Therefore, \[(\alpha_0,G\lambda_1)m' = (\alpha_0 g_{\lambda_1m'},G\lambda_1) = (\alpha_0g,G\lambda_1)=(\alpha_1,G\lambda_1).\] This establishes the transitivity of $M$ on $\Omega\otimes_G \Lambda$. To prove (2), first suppose that $m\neq m'$ are elements of $M$. Then we can find $\lambda\in \Lambda$ such that $\lambda m\neq \lambda m'$. Then $g\lambda m\neq g\lambda m'$ for all $g\in G$ and so we may assume that $\lambda\in B$. If $G\lambda m\neq G\lambda m'$, we are done. Otherwise, $\lambda m = g_{\lambda m}\ov{G\lambda m}$ and $\lambda m' = g_{\lambda m'}\ov{G\lambda m}$ and hence $g_{\lambda m}\neq g_{\lambda m'}$. Thus by faithfulness of the action of $G$, we have $\alpha\in \Omega$ such that $\alpha g_{\lambda m}\neq \alpha g_{\lambda m'}$. Therefore, we obtain \[(\alpha,G\lambda)m =(\alpha g_{\lambda m},G\lambda m)\neq (\alpha g_{\lambda m'},G\lambda m) = (\alpha,G\lambda)m'\] establishing the faithfulness of $M$ on $\Omega\otimes_G \Lambda$. Finally, we turn to the wreath product embedding.
Write $\ov m$ for the class of $m\in M$ in the monoid $\ov M$. For $m\in M$, we define $f_m\colon G\backslash\Lambda\to G$ by $f_m(G\lambda) = g_{\ov{G\lambda}m}$. Then $(f_m,\ov m)$ is an element of the semidirect product $G^{G\backslash\Lambda}\rtimes \ov M$ and if $\alpha\in \Omega$ and $\lambda\in \Lambda$, then \[(\alpha,G\lambda)(f_m,\ov m) = (\alpha f_m(G\lambda),G\lambda m)=(\alpha g_{\ov{G\lambda}m},G\lambda m) = (\alpha,G\lambda)m\] as required. Since the action of $M$ on $\Omega\times G\backslash \Lambda$ is faithful, this embeds $M$ into the wreath product. \end{proof} A particularly important case of this result is when $(\Omega,M)$ is a transitive transformation monoid and $G$ is a group of $M$-set automorphisms of $\Omega$; the action of $G$ is free by Proposition~\ref{schurforsets}. Observing that $\Omega=G\otimes_G \Omega$, we have the following corollary. \begin{Cor} Let $(\Omega,M)$ be a transitive transformation monoid and $G$ a group of automorphisms of $(\Omega,M)$. Then $\Omega$ is in bijection with $G\times G\backslash \Omega$ and the action of $M$ on $\Omega$ is contained in the wreath product $(G,G)\wr (G\backslash \Omega,\ov M)$ where $\ov M$ is the quotient of $M$ by the kernel of its action on $G\backslash \Omega$. \end{Cor} Another special case is the following slight generalization of the classical Sch\"utzenberger representation~\cite{CP,Arbib,qtheor}, which pertains to the case $\Omega=G_e$ (as $\mathop{\mathrm{ind}}\nolimits_e(G_e)\cong eM$); cf.~\cite{CP2}. \begin{Cor} Suppose that $M$ is a finite right mapping monoid (with respect to $I(M))$ and let $e\in E(I(M))$. If $\Omega$ is a transitive $G_e$-set, then $\mathop{\mathrm{ind}}\nolimits_e(\Omega)$ is a transitive $M$-set. Moreover, if $\Omega$ is faithful, then $\mathop{\mathrm{ind}}\nolimits_e(\Omega)$ is a faithful $M$-set and $(\mathop{\mathrm{ind}}\nolimits_e(\Omega),M)$ is contained in the wreath product $(\Omega,G_e)\wr (G_e\backslash eM,\mathop{\mathsf{RLM}}\nolimits(M))$. \end{Cor} Thus faithful transitive representations of a right mapping monoid $M$ are, up to division~\cite{Arbib,Eilenberg,qtheor}, the same things as wreath products of the right letter mapping representation with transitive faithful permutation representations of the maximal subgroup of $I(M)$. \section{Finite $0$-transitive transformation monoids}\label{sfour} In this section we begin to develop the corresponding theory for finite $0$-transitive transformation monoids. Much of the theory works as in the transitive case once the correct adjustments are made. For this reason, we will not tire the reader by repeating analogues of all the previous results in this context. What we call a $0$-transitive transformation monoid is called by many authors a \emph{transitive partial transformation monoid}. Assume now that $(\Omega, M)$ is a finite $0$-transitive transformation monoid. The zero map, which sends all elements of $\Omega$ to $0$, is denoted $0$. \begin{Prop}\label{haszero} Let $(\Omega, M)$ be a finite $0$-transitive transformation monoid. Then the zero map belongs to $M$ and $I(M)=\{0\}$. \end{Prop} \begin{proof} Let $e\in E(I(M))$. First note that $0\in \Omega e$. Next observe that if $0\neq \alpha\in \Omega e$, then $\alpha eMe= \alpha Me=\Omega e$ and hence $G_e=eMe$ is transitive on $\Omega e$. But $0$ is a fixed point of $G_e$ and so we conclude that $\Omega e=\{0\}$ and hence $e=0$. Then trivially $I(M)=MeM=\{0\}$.
\end{proof} An ideal $I$ of a monoid $M$ with zero is called \emph{$0$-minimal} if $I\neq 0$ and the only ideal of $M$ properly contained in $I$ is $\{0\}$. It is easy to see that $I$ is $0$-minimal if and only if $MaM=I$ for all $a\in I\setminus \{0\}$, or equivalently, the action of $M^{\mathrm {op}}\times M$ on $I$ is $0$-transitive. In a finite monoid $M$ with zero, a $0$-minimal ideal $I$ is regular (meaning all its elements are regular in $M$) if and only if $I^2=I$~\cite{CP,qtheor}. We include a proof for completeness. \begin{Prop}\label{regularideal} Suppose that $I$ is a $0$-minimal ideal of a finite monoid $M$. Then $I$ is regular if and only if $I^2=I$. Moreover, if $I\neq I^2$, then $I^2=0$. \end{Prop} \begin{proof} If $I$ is regular and $0\neq m\in I$, then we can write $m=mnm$ with $n\in M$ and so $m=m(nm)\in I^2$. It follows that $I^2=I$. Conversely, if $I^2=I$ and $m\in I\setminus \{0\}$, then we can write $m=ab$ with $a,b\in I\setminus \{0\}$. Then $MmM=MabM=MaM=MbM$ and so stability yields $mM=aM$ and $Mm=Mb$. Therefore, we can write $a=mx$ and $b=ym$ and hence $m=mxym$ is regular. For the final statement, suppose $I\neq I^2$. Then $I^2$ is an ideal strictly contained in $I$ and so $I^2=0$. \end{proof} Of course if $I$ is regular, then it contains non-zero idempotents. Using this one can easily show~\cite{CP,qtheor} that each element of $I$ is regular in the semigroup $I$. In fact, $I$ is a $0$-simple semigroup and hence its structure is determined up to isomorphism by Rees's theorem~\cite{CP,qtheor,Rees}. If $\Omega$ is an $M$-set and $\Lambda$ is an $M$-set with $0$, then the map sending each element of $\Omega$ to $0$ is an $M$-set map, which we again call the zero map and denote by $0$. \begin{Prop}\label{Schuragain} Let $\Omega$ be an $M$-set and $\Lambda$ a $0$-transitive $M$-set. Then every non-zero morphism $f\colon \Omega\to \Lambda$ of $M$-sets is surjective. \end{Prop} \begin{proof} If $f\colon \Omega\to \Lambda$ is a non-zero morphism, then $0\neq f(\Omega)$ is $M$-invariant and hence equals $\Lambda$ by $0$-transitivity. \end{proof} As a corollary we obtain an analogue of Schur's lemma. \begin{Cor}\label{freeactionagain} Let $\Omega$ be a finite $0$-transitive $M$-set. Then every non-zero endomorphism of $\Omega$ is an automorphism. Moreover, $\mathrm{Aut}_M(\Omega)$ acts freely on $\Omega\setminus \{0\}$. \end{Cor} \begin{proof} By Proposition~\ref{Schuragain}, any non-zero endomorphism of $\Omega$ is surjective and hence is an automorphism. Since any automorphism of $\Omega$ fixes $0$ (as it is the unique sink by Proposition~\ref{uniquesink}), it follows that $\Omega\setminus \{0\}$ is invariant under $\mathrm{Aut}_M(\Omega)$. If $f\in \mathrm{Aut}_M(\Omega)$, then its fixed point set is $M$-invariant and hence is either $\{0\}$ or all of $\Omega$. This shows that the action of $\mathrm{Aut}_M(\Omega)$ on $\Omega \setminus \{0\}$ is free. \end{proof} We can now prove an analogue of Proposition~\ref{schutzrep} for $0$-minimal ideals. Again this proposition is a well-known consequence of the classical theory of finite semigroups. See~\cite{berstelperrinreutenauer} for the corresponding result in the more general situation of unambiguous representations of monoids. \begin{Prop}\label{schutzrep2} Let $M$ be a finite monoid with zero, let $I$ be a regular $0$-minimal ideal and let $e\in E(I)\setminus \{0\}$.
Then: \begin{enumerate} \item $eM$ is a $0$-transitive $M$-set; \item $eMe=G_e\cup \{0\}$; \item $G_e$ is the automorphism group of the $M$-set $eM$ and so in particular, $eM\setminus \{0\}$ is a free left $G_e$-set; \item If $f\in E(I)\setminus \{0\}$, then $fM\cong eM$ and hence $G_e\cong G_f$; moreover, $fMe\setminus \{0\}$ and $eMf\setminus \{0\}$ are in bijection with $G_e$. \end{enumerate} \end{Prop} \begin{proof} Trivially $0\in eM$. Suppose that $0\neq m\in eM$. Then $m=em$ and hence, as $MmM=MemM=MeM$, stability yields $mM=eM$. Thus $eM$ is a $0$-transitive $M$-set. Since $eM$ is finite, Corollary~\ref{freeactionagain} shows that the endomorphism monoid of $eM$ consists of the zero morphism and its group of units, which acts freely on $eM\setminus \{0\}$. But the endomorphism monoid is $eMe$ by Proposition~\ref{categoryofprojectiveMsets}. Thus $eMe=G_e\cup \{0\}$ and $eM\setminus \{0\}$ is a free left $G_e$-set. Now we turn to the last item. Since $MeM=I=MfM$, we have that $eM\cong fM$ by Proposition~\ref{D=J}. Clearly the automorphism group $G_e$ of $eM$ is in bijection with the set of isomorphisms $eM\to fM$; but this latter set is none other than $fMe\setminus \{0\}$. The argument for $eMf\setminus \{0\}$ is symmetric. \end{proof} Of course, the reason for developing all this structure is the folklore fact that a finite $0$-transitive transformation monoid has a unique $0$-minimal ideal, which moreover is regular. Any element of this ideal will have minimal non-zero rank. \begin{Thm}\label{unique0min} Let $(\Omega,M)$ be a finite $0$-transitive transformation monoid. Then $M$ has a unique $0$-minimal ideal $I$; moreover, $I$ is regular and acts $0$-transitively (as a semigroup) on $\Omega$. \end{Thm} \begin{proof} We already know that $0\in M$ by Proposition~\ref{haszero}. Let $I$ be a $0$-minimal ideal of $M$ (it has one by finiteness). Then $\Omega I$ is $M$-invariant. It is also non-zero since $I$ contains a non-zero element of $M$. Thus $\Omega I=\Omega$. Therefore, $\Omega I^2=\Omega I=\Omega$ and so $I^2\neq 0$. We conclude by Proposition~\ref{regularideal} that $I$ is regular. This also implies the $0$-transitivity of $I$ because if $0\neq \alpha\in \Omega$, then $\alpha I\supseteq \alpha MI=\Omega I=\Omega$. Finally, suppose that $I'$ is any non-zero ideal of $M$. Then $\Omega I'\neq 0$ and is $M$-invariant. Thus $\Omega=\Omega I' = \Omega II'$ and so $0\neq II'\subseteq I\cap I'$. By $0$-minimality, we conclude $I=I\cap I'\subseteq I'$ and hence $I$ is the unique $0$-minimal ideal of $M$. \end{proof} We also have the following analogue of Proposition~\ref{minimalfacts}(3). \begin{Prop}\label{transitivityofgroup} Let $(\Omega, M)$ be a finite $0$-transitive transformation monoid with $0$-minimal ideal $I$ and let $0\neq e\in E(I)$. Then $(\Omega e\setminus \{0\},G_e)$ is a transitive permutation group. \end{Prop} \begin{proof} If $0\neq \alpha\in \Omega e$, then $\alpha eMe=\alpha Me=\Omega e$. But $eMe=G_e\cup \{0\}$ and hence $\alpha G_e=\Omega e\setminus \{0\}$ (as $0$ is a fixed point for $G_e$). \end{proof} Again, in the case that $G_e$ is trivial, one can say more, although not as much as in the transitive case. \begin{Prop}\label{aperiodicbottom} Let $(\Omega, M)$ be a finite $0$-transitive transformation monoid with $0$-minimal ideal $I$ and let $0\neq e\in E(I)$. Suppose that $G_e$ is trivial. Then each element of $I\setminus\{0\}$ has rank $2$ and $\Omega\cong eM$.
\end{Prop} \begin{proof} First observe that since $G_e$ is trivial, Proposition~\ref{transitivityofgroup} implies that $\Omega e$ contains exactly one non-zero element. Thus, for each $m\in I\setminus \{0\}$, there is a unique non-zero element $\omega_m\in \Omega$ so that $\Omega m= \{0,\omega_m\}$, as all non-zero elements of $I$ have the same rank and have $0$ in their image. We claim that $0\mapsto 0$ and $m\mapsto \omega_m$ gives an isomorphism between $eM$ and $\Omega$. First we verify injectivity. Since $m\in eM\setminus \{0\}$ implies $eM=mM$, all elements of $eM\setminus \{0\}$ have the same kernel. This kernel is a partition $\{P_1,P_2\}$ of $\Omega$ with $0\in P_1$. Then all elements of $eM$ send $P_1$ to $0$ and hence each element of $eM$ is determined by where it sends $P_2$. Thus $m\mapsto \omega_m$ is injective on $eM$. Clearly it is a morphism of $M$-sets because if $m\in eM\setminus \{0\}$ and $n\in M$, then either $mn=0$ and hence $\omega_mn\in \Omega mn=\{0\}$ or $\{0,\omega_{mn}\}=\Omega mn=\{0,\omega_mn\}$. Finally, to see that the map is surjective observe that $\omega_ee=\omega_e$ and so $\{0\}\neq \omega_eeM$. The $0$-transitivity of $M$ then yields $\omega_eeM=\Omega$. But then if $0\neq \alpha\in \Omega$, we can find $m\in eM\setminus \{0\}$ so that $\alpha=\omega_em=\omega_{em}=\omega_m$. This completes the proof. \end{proof} One can develop a theory of induced and coinduced $M$-sets with zero and wreath products in this context and prove analogous results, but we avoid doing so for the sake of brevity. We do need one result on congruences. \begin{Prop}\label{enlargecongruence2} Let $(\Omega,M)$ be a finite $0$-transitive transformation monoid with $0$-minimal ideal $I$ and let $0\neq e\in E(I)$. Suppose that $\equiv$ is a congruence on $(\Omega e\setminus \{0\},G_e)$. Then there is a unique largest congruence $\equiv'$ on $\Omega$ whose restriction to $\Omega e\setminus \{0\}$ is $\equiv$. \end{Prop} \begin{proof} First extend $\equiv$ to $\Omega e$ by setting $0\equiv 0$. Then $\equiv$ is a congruence for $eMe=G_e\cup \{0\}$ and any congruence $\sim$ whose restriction to $\Omega e\setminus \{0\}$ equals $\equiv$ satisfies $0\sim 0$. The result now follows from Proposition~\ref{enlargecongruence}. \end{proof} A monoid $M$ that acts faithfully on the right of a $0$-minimal ideal $I$ is said to be \emph{right mapping} with respect to $I$~\cite{Arbib,qtheor}. In this case $I$ is the unique $0$-minimal ideal of $M$, it is regular and $M$ acts faithfully and $0$-transitively on $eM$ for any non-zero idempotent $e\in E(I)$. Conversely, if $(\Omega, M)$ is finite $0$-transitive, then one can verify (similarly to the transitive case) that if $0\neq e\in E(I)$, where $I$ is the unique $0$-minimal ideal of $M$, then $M$ acts faithfully and $0$-transitively on $eM$ and hence is right mapping with respect to $I$. Indeed, if $0\neq \omega\in \Omega e$, then $\omega eM$ is non-zero and $M$-invariant, whence $\Omega =\omega eM$. Thus if $m,m'\in M$ act the same on $eM$, then they also act the same on $\Omega$. Alternatively, one can use induced modules in the category of $M$-sets with zero to prove this. \section{Primitive transformation monoids} A transformation monoid $(\Omega,M)$ is \emph{primitive} if it admits no non-trivial proper congruences. In this section, we assume throughout that $|\Omega|$ is finite. 
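The definition lends itself to machine checking: $(\Omega,M)$ is primitive if and only if, for every pair $\alpha\neq\beta$, the congruence generated by identifying $\alpha$ with $\beta$ is the universal one, and an equivalence relation closed under a generating set of $M$ is automatically closed under all of $M$. The following Python sketch (ours, purely illustrative, with hypothetical function names) implements this test; transformations are encoded as tuples acting on the right of $\{0,\ldots,n-1\}$.

\begin{verbatim}
from itertools import combinations

def congruence_classes(pair, gens, n):
    # Number of classes of the smallest congruence on {0,...,n-1}
    # identifying the given pair: propagate x ~ y => x.m ~ y.m over
    # the generators, using union-find.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    work = [pair]
    while work:
        x, y = work.pop()
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
            for m in gens:              # m is a tuple: i |-> m[i]
                work.append((m[x], m[y]))
    return len({find(x) for x in range(n)})

def is_primitive(gens, n):
    # Primitive iff identifying any two points collapses everything.
    return all(congruence_classes((a, b), gens, n) == 1
               for a, b in combinations(range(n), 2))

# Example: the full transformation monoid on 3 points is primitive.
print(is_primitive([(1, 2, 0), (1, 0, 2), (0, 0, 2)], 3))  # True
\end{verbatim}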
Trivially, if $|\Omega|\leq 2$ then $(\Omega,M)$ is primitive, so we shall also tacitly assume that $|\Omega|\geq 3$. \begin{Prop}\label{primitive} Suppose that $(\Omega, M)$ is a primitive transformation monoid with $2<|\Omega|$. Then $M$ is either transitive or $0$-transitive. In particular, $M$ is weakly transitive. \end{Prop} \begin{proof} If $\Delta$ is an $M$-invariant subset, then consideration of $\Omega/\Delta$ shows that either $\Delta=\Omega$ or $\Delta$ consists of a single point. Singleton invariant subsets are exactly sinks. However, if $\alpha,\beta$ are sinks, then $\{\alpha,\beta\}$ is an $M$-invariant subset. Because $|\Omega|>2$, we conclude that $\Omega$ has at most one sink. First suppose that $\Omega$ has no sinks. Then if $\alpha\in \Omega$, one has that $\alpha M\neq \{\alpha\}$ and hence by primitivity $\alpha M=\Omega$. As $\alpha$ was arbitrary, we conclude that $M$ is transitive. Next suppose that $\Omega$ has a sink $0$. We already know it is unique. Hence if $0\neq \alpha\in \Omega$, then $\alpha M\neq \{\alpha\}$ and so $\alpha M=\Omega$. Thus $M$ is $0$-transitive. The final statement follows because any transitive or $0$-transitive action is trivially weakly transitive. \end{proof} The following results constitute a transformation monoid analogue of Green's results relating simple modules over an algebra $A$ with simple modules over $eAe$ for an idempotent $e$, cf.~\cite[Chapter 6]{Greenpoly}. \begin{Prop}\label{restricttoidempotents} Let $(\Omega, M)$ be a primitive transformation monoid and $e\in E(M)$. Then $(\Omega e, eMe)$ is a primitive transformation monoid. Moreover, if $|\Omega e|>1$, then $\Omega\cong \mathop{\mathrm{ind}}\nolimits_e(\Omega e)/{='}$ where $='$ is the congruence on $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)$ associated to the trivial congruence $=$ on $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)e\cong \Omega e$ as per Proposition~\ref{enlargecongruence}. \end{Prop} \begin{proof} Suppose first that $(\Omega e,eMe)$ admits a non-trivial proper congruence $\equiv$. Then Proposition~\ref{enlargecongruence} shows that $\equiv'$ is a non-trivial proper congruence on $\Omega$. This contradiction shows that $(\Omega e,eMe)$ is primitive. Next assume $|\Omega e|>1$. The counit of the adjunction provides a morphism \[f\colon \mathop{\mathrm{ind}}\nolimits_e(\Omega e)\to \Omega.\] As the image is $M$-invariant and contains $\Omega e$, which is not a singleton, primitivity yields that $f$ is surjective. Now $\ker f$ must be a maximal congruence by primitivity of $\Omega$. However, the restriction of $f$ to $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)e\cong \Omega e$ is injective. Proposition~\ref{enlargecongruence} shows that $='$ is the largest congruence on $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)$ whose restriction to $\mathop{\mathrm{ind}}\nolimits_e(\Omega e)e$ is trivial. Thus $\ker f$ is $='$, as required. \end{proof} Of course, the case of interest is when $e$ belongs to the minimal ideal. \begin{Cor}\label{primitivegroup} Suppose that $(\Omega,M)$ is a primitive transitive transformation monoid and that $e\in E(I(M))$. Then $(\Omega e,G_e)$ is a primitive permutation group. If $G_e$ is non-trivial, then $\Omega = \mathop{\mathrm{ind}}\nolimits_e(\Omega e)/{='}$. \end{Cor} This result is analogous to the construction of the irreducible representations of $M$~\cite{myirreps}. In the transitive case, if $G_e$ is trivial, then we already know that $\Omega\cong eM=\mathop{\mathrm{ind}}\nolimits_e(\Omega e)$ (since $|\Omega e|=1$) and that $I(M)$ consists of the constant maps on $\Omega$ (Proposition~\ref{constantmapcase}).
In this case, things can be quite difficult to analyze. For instance, let $(\Omega,G)$ be a permutation group and let $(\Omega,\ov G)$ consist of $G$ along with the constant maps on $\Omega$. Then it is easy to see that $(\Omega,G)$ is primitive if and only if $(\Omega,\ov G)$ is primitive. The point here is that any equivalence relation is stable for the ideal of constant maps and so things reduce to $G$. Sometimes it is more convenient to work with the coinduced action. The following is dual to Proposition~\ref{restricttoidempotents}. \begin{Prop}\label{primitivecoinduced} Let $(\Omega, M)$ be a primitive transformation monoid and let $e\in E(M)$ with $|\Omega e|>1$. Then there is an embedding $g\colon \Omega\rightarrow \mathop{\mathrm{coind}}\nolimits_e(\Omega e)$ of $M$-sets. The image of $g$ is $\mathop{\mathrm{coind}}\nolimits_e(\Omega e)eM$, which is the least $M$-invariant subset containing $\mathop{\mathrm{coind}}\nolimits_e(\Omega e)e\cong \Omega e$. \end{Prop} \begin{proof} The unit of the adjunction provides the map $g$ and moreover, $g$ is injective on $\Omega e$. Because $|\Omega e|>1$, it follows that $g$ is injective by primitivity. For the last statement, observe that $\Omega eM=\Omega$ by primitivity because $|\Omega e|>1$. Thus $g(\Omega) = g(\Omega e)eM = \mathop{\mathrm{coind}}\nolimits_e(\Omega e)eM$. \end{proof} We hope that the theory of primitive permutation groups can be used to understand transitive primitive transformation monoids in the case that the maximal subgroups of $I(M)$ are non-trivial. Next we focus on the case of a $0$-transitive transformation monoid. \begin{Prop}\label{primitivegroup2} Let $(\Omega, M)$ be a $0$-transitive primitive transformation monoid with $0$-minimal ideal $I$ and suppose $0\neq e\in E(I)$. Then $(\Omega e\setminus \{0\},G_e)$ is a primitive permutation group. \end{Prop} \begin{proof} If $(\Omega e\setminus \{0\},G_e)$ admits a non-trivial proper congruence, then so does $\Omega$ by Proposition~\ref{enlargecongruence2}. \end{proof} Again one can prove that $(\Omega,M)$ is a quotient of an induced $M$-set with zero and embeds in a coinduced $M$-set with zero when $|\Omega e\setminus \{0\}|>1$. In the case that $G_e$ is trivial, we know from Proposition~\ref{aperiodicbottom} that $\Omega\cong eM$ and each element of the $0$-minimal ideal $I$ acts on $\Omega$ by rank $2$ transformations (or equivalently by rank $1$ partial transformations on $\Omega\setminus \{0\}$). Recall that a monoid $M$ is an \emph{inverse monoid} if, for each $m\in M$, there exists a unique $m^*\in M$ with $mm^*m=m$ and $m^*mm^*=m^*$. Inverse monoids abstract monoids of partial injective maps, e.g., Lie pseudogroups~\cite{Lawson}. It is a fact that the idempotents of an inverse monoid commute~\cite{Lawson,CP}. We shall use freely that in an inverse monoid one has $eM=mM$ with $e\in E(M)$ if and only if $mm^*=e$ and dually $Me=Mm$ if and only if $m^*m=e$. We also use that $(mn)^*=n^*m^*$~\cite{Lawson}. The next result describes all finite $0$-transitive transformation inverse monoids (transitive inverse monoids are necessarily groups). This should be considered folklore, although the language of tensor products is new in this context; more usual is the language of wreath products. The corresponding results for the matrix representation associated to a transformation inverse monoid can be found in~\cite{mobius2}. \begin{Thm}\label{inversecase} Let $(\Omega,M)$ be a finite transformation monoid with $M$ an inverse monoid.
\begin{enumerate} \item If $M$ is transitive on $\Omega$, then $M$ is a group. \item If $\Omega$ is a $0$-transitive $M$-set, then $M$ acts on $\Omega\setminus \{0\}$ by partial injective maps and $\Omega\cong (\Omega e\setminus \{0\})\otimes_{G_e}eM$ where $e$ is a non-zero idempotent of the unique $0$-minimal ideal $I$ of $M$. \end{enumerate} \end{Thm} \begin{proof} Suppose first that $M$ is transitive on $\Omega$. It is well known that the minimal ideal $I(M)$ of a finite inverse monoid is a group~\cite{CP,Arbib,qtheor}. Let $e$ be the identity of this group. Then since $I(M)$ is transitive on $\Omega$, we have $\Omega=\Omega e$. Thus $e$ is the identity of $M$ and so $M=I(M)$ is a group. Next suppose that $M$ is $0$-transitive on $\Omega$. Let $I$ be the $0$-minimal ideal of $M$ and let $e\in E(I)\setminus \{0\}$. We claim that $\alpha e\neq 0$ implies $\alpha\in \Omega e$. Indeed, if $\alpha e\neq 0$, then $\alpha eI=\Omega$ and so we can write $\alpha=\alpha em$ with $m\in I$. Then $\alpha eme=\alpha e\neq 0$. Thus $eme$ is a non-zero element of $eMe=G_e\cup \{0\}$. Therefore, $e=(eme)^*eme=em^*eme$ and hence $m^*me=m^*mem^*eme=em^*eme=e$. But $em^*m=m^*me=e$ and thus $e\in m^*mMm^*m=G_{m^*m}\cup \{0\}$. We conclude $e=m^*m$. Thus $\alpha =\alpha em=\alpha emm^*m=\alpha eme$ and so $\alpha \in \Omega e$. Of course, this is true for any idempotent of $E(I)\setminus \{0\}$, not just for $e$. Now let $f\in E(M)\setminus \{0\}$ and suppose that $\omega f\neq 0$. We claim $\omega f=\omega$. Indeed, choose $\alpha\in \Omega f\setminus \{0\}$. Then $\alpha I=\Omega$ by $0$-transitivity and so we can write $\omega=\alpha m$ with $m\in I$. Then $\omega f=\alpha mf\neq 0$. Because $\alpha mf= \alpha mf(mf)^*(mf)$, it follows that $\alpha mf(mf)^*\neq 0$. The previous paragraph applied to $mf(mf)^*\in E(I)\setminus \{0\}$ yields $\alpha=\alpha mf(mf)^*=\alpha mfm^*$. Therefore, $\omega =\alpha m=\alpha mfm^*m=\alpha mf=\omega f$. Suppose next that $\omega_1,\omega_2\in \Omega\setminus \{0\}$ and $m\in M$ with $\omega_1m=\omega_2m\neq 0$. Then $\omega_1mm^*=\omega_2mm^*\neq 0$ and so by the previous paragraph $\omega_1=\omega_1mm^*=\omega_2mm^*=\omega_2$. We conclude that the action of $M$ on $\Omega\setminus \{0\}$ by partial maps is by partial injective maps. Let $e\in E(I)\setminus \{0\}$ and put $\Lambda =\Omega e\setminus \{0\}$. Then $(\Lambda,G_e)$ is a transitive permutation group by Proposition~\ref{transitivityofgroup}. Consider $\Lambda\otimes_{G_e}eM$. Observe that if $\alpha,\beta\in \Lambda$ and $\alpha g=\beta$ with $g\in G_e$, then $\beta\otimes 0=\alpha g\otimes 0=\alpha \otimes g0=\alpha\otimes 0$. Thus $\Lambda\times \{0\}$ forms an equivalence class of $\Lambda\otimes_{G_e}eM$ that we denote by $0$. It is a sink for the right action of $M$ on $\Lambda\otimes_{G_e}eM$ and hence we can view the latter set as a right $M$-set with zero. Define $F\colon \Lambda\otimes_{G_e}eM\to \Omega$ by $\alpha\otimes m\mapsto \alpha m$. This is well defined because the map $\Lambda\times eM\to \Omega$ given by $(\alpha,m)\mapsto \alpha m$ is $G_e$-bilinear. The map $F$ is a morphism of $M$-sets with zero because $F(\alpha\otimes m)m' = \alpha mm' = F(\alpha\otimes mm')$ and $0$ is sent to $0$. Observe that $F$ is onto. Indeed, fix $\alpha\in\Lambda$. Since $\alpha eM=\alpha M=\Omega$ by $0$-transitivity, given $\omega\in \Omega\setminus \{0\}$, we can find $m\in eM$ with $\omega =\alpha m$; thus $\omega=F(\alpha\otimes m)$.
To show injectivity, first observe that if $F(\alpha\otimes m)=0$, then $m=0$. Indeed, assume $m\neq 0$. Then $m\in eM\setminus \{0\}$ implies that $eM=mM$ and hence $mm^*=e$. Thus $0=\alpha mm^*=\alpha e=\alpha$. This contradiction shows that $m=0$ and hence only $0$ maps to $0$. Next suppose that $F(\alpha\otimes m)=F(\beta\otimes n)$ with $m,n\in eM\setminus \{0\}$. Then $\alpha m=\beta n$. From $mm^*=e$, we obtain $0\neq \alpha =\alpha e=\alpha mm^* =\beta nm^*$ and $nm^*\in eMe\setminus \{0\}=G_e$. Then $e=nm^*mn^*$ and so $nm^*m=nm^*mn^*n=en=n$. Therefore, $\alpha\otimes m=\beta nm^*\otimes m=\beta \otimes nm^*m=\beta\otimes n$, completing the proof that $F$ is injective. \end{proof} This theorem shows that the study of ($0$-)transitive representations of finite inverse monoids reduces to the case of groups. It also reduces the classification of primitive inverse transformation monoids to the case of permutation groups. \begin{Cor} Let $(\Omega,M)$ be a primitive finite transformation monoid with $M$ an inverse monoid. Then either $(\Omega,M)$ is a primitive permutation group, or it is $0$-transitive and $G_e=\{e\}$ for any non-zero idempotent $e$ of the unique $0$-minimal ideal of $M$. In the latter case, $(\Omega,M)\cong (eM,M)$. \end{Cor} \begin{proof} A primitive transformation monoid is either transitive or $0$-transitive (Proposition~\ref{primitive}). By Theorem~\ref{inversecase}, if $(\Omega,M)$ is transitive, then it is a primitive permutation group. Otherwise, the theorem provides an isomorphism $(\Omega,M)\cong ((\Omega e\setminus \{0\})\otimes_{G_e}eM,M)$ where $e$ is a non-zero idempotent in the $0$-minimal ideal of $M$. Suppose that $|G_e|>1$. Since $\Omega e\setminus \{0\}$ is a faithful $G_e$-set, we conclude $|\Omega e\setminus \{0\}|>1$. Functoriality of the tensor product yields a non-injective, surjective $M$-set morphism \[(\Omega,M)\to (\{\ast\}\otimes_{G_e}eM,M)\cong (G_e\backslash eM,M).\] As $0$ and $e$ lie in different orbits of $G_e$ on $eM$, this morphism is non-trivial. This contradiction establishes that $G_e$ is trivial. We conclude that $(\Omega,M)\cong (eM,M)$ by Proposition~\ref{aperiodicbottom}. \end{proof} \begin{Rmk} A finite primitive transformation monoid $(\Omega,M)$ can only have a non-trivial automorphism group $G$ if $M$ is a group. Indeed, consideration of $G\backslash \Omega$ shows that either $G$ is trivial or transitive. But if $G$ is transitive, then $M$ is a monoid of endomorphisms of a finite transitive $G$-set and hence is a permutation group. \end{Rmk} \section{Orbitals} Let us recall that if $(\Omega,G)$ is a transitive permutation group, then the orbits of $G$ on $\Omega^2=\Omega\times \Omega$ are called \emph{orbitals}. The diagonal orbital $\Delta$ is called the \emph{trivial orbital}. The \emph{rank} of $G$ is the number of orbitals. For instance, $G$ has rank $2$ if and only if $G$ is $2$-transitive. Associated to each non-trivial orbital $\mathcal O$ is an orbital digraph $\Gamma(\mathcal O)$ with vertex set $\Omega$ and edge set $\mathcal O$. Moreover, there is a vertex transitive action of $G$ on $\Gamma(\mathcal O)$. A classical result of D.~Higman is that the weak and strong components of an orbital digraph coincide and that $G$ is primitive if and only if each orbital digraph is connected~\cite{dixonbook,cameron}. The goal of this section is to obtain the analogous results for transformation monoids. The inspiration for how to do this comes out of Trahtman's paper~\cite{Trahtman} on the \v{C}ern\'y conjecture for aperiodic automata.
He considers there certain strong orbits of $M$ on $\Omega^2$ and it turns out that these have the right properties to play the role of orbitals. After coming up with the definition of orbital presented below, I did an extensive search of the literature with Google and found the paper of Scozzafava~\cite{orbitoids}. In this paper, if $(\Omega,M)$ is a finite transformation monoid, then a minimal strong orbit is termed an \emph{orbitoid}. Scozzafava then views the orbitoids of $M$ on $\Omega^2$ as the analogue of orbitals. He provides two pieces of evidence to indicate that his notion of orbital is ``correct''. The first is that the number of orbitoids of $M$ on $\Omega^2$ is equal to the number of orbitoids of a point stabilizer on $\Omega$, generalizing the case of permutation groups. The second is that from an orbitoid of $\Omega^2$, one obtains an action of $M$ on a digraph by graph endomorphisms. However, this approach does not lead to a generalization of Higman's theorem characterizing primitivity of permutation groups in terms of connectedness of non-trivial orbital digraphs. Suppose for instance that $G$ is a transitive permutation group on $\Omega$ and $M$ consists of $G$ together with the constant maps on $\Omega$. Then the unique orbitoid of $M$ on $\Omega^2$ is the diagonal $\Delta$ and so one has no non-trivial orbitals in the sense of~\cite{orbitoids}. On the other hand, it is easy to see that $M$ is primitive if and only if $G$ is primitive. In fact, it is clear that if $M$ contains constant maps, then there is no non-trivial digraph on $\Omega$ preserved by $M$ if we use the standard notion of digraph morphism. Our first step is to define the appropriate category of digraphs in which to work. \subsection{Digraphs and cellular morphisms} A (simple) \emph{digraph} $\Gamma$ consists of a set of vertices $V$ and an anti-reflexive relation $E$ on $V$. If $v,w\in V$, then there is an edge from $v$ to $w$, denoted $(v,w)$, if $(v,w)\in E$. A \emph{walk} $p$ of length $m$ in a digraph is a sequence of vertices $v_0,v_1,\ldots, v_m$ such that, for each $0\leq i\leq m-1$, either $(v_i,v_{i+1})$ is an edge or $v_i=v_{i+1}$. In particular, for each vertex $v$, there is an \emph{empty walk} of length $0$ consisting of only the vertex $v$. A walk is called \emph{simple} if it never visits a vertex twice. The walk $p$ is \emph{closed} if $v_0=v_m$. A closed non-empty walk is called a \emph{cycle} if the only repetition occurs at the final vertex. If $v_0,v_1,\ldots, v_m$ is a walk, then a \emph{deletion} is a removal of a subwalk $v_i,v_{i+1}$ with $v_i=v_{i+1}$. A walk that admits no deletions is called \emph{non-degenerate}; we consider empty walks as non-degenerate. Deletion is confluent and so from any walk $v_0,\ldots,v_m$, we can obtain a unique \emph{non-degenerate} walk $(v_0,\ldots,v_m)^{\wedge}$ by successive deletions (the resulting walk may be empty). Define a preorder on the vertices of $\Gamma$ by putting $v\leq w$ if there is a walk from $w$ to $v$. Then the symmetric-transitive closure $\simeq$ of $\leq$ is an equivalence relation on the vertices. If this relation has a single equivalence class, then the digraph $\Gamma$ is said to be \emph{weakly connected} or just \emph{connected} for short. In general, the \emph{weak components} of $\Gamma$ are the maximal weakly connected subgraphs of $\Gamma$. They are disjoint from each other and have vertex sets the $\simeq$-equivalence classes (with the induced edge sets).
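The two operations just introduced are straightforward to implement; the following Python sketch (ours, with hypothetical names) reduces a walk to its non-degenerate form by successive deletions and computes the weak components of a digraph as the connected components of the underlying undirected graph.

\begin{verbatim}
def nondegenerate(walk):
    # Successive deletions of repetitions v_i = v_{i+1}; deletion is
    # confluent, so collapsing consecutive duplicates computes the
    # unique non-degenerate walk (a single vertex is an empty walk).
    out = []
    for v in walk:
        if not out or out[-1] != v:
            out.append(v)
    return out

def weak_components(vertices, edges):
    # Weak components: connected components of the underlying
    # undirected graph (the symmetric-transitive closure of <=).
    adj = {v: set() for v in vertices}
    for v, w in edges:
        adj[v].add(w)
        adj[w].add(v)
    components, seen = [], set()
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components
\end{verbatim}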
The digraph $\Gamma$ is \emph{strongly connected} if $v\leq w$ and $w\leq v$ hold for all vertices $v,w$. In general, the \emph{strong components} are the maximal strongly connected subgraphs. A strong component is said to be \emph{trivial} if it contains no edges; otherwise it is \emph{non-trivial}. A digraph is said to be \emph{acyclic} if all its strong components are trivial. In this case, the preorder $\leq$ is in fact a partial order on the vertex set. It is easy to see that if a strong component is non-trivial, then each of its edges belongs to a cycle. Conversely, in a digraph in which each edge belongs to a cycle, each weak component is strongly connected. In particular, a digraph is acyclic if and only if it contains no cycles, whence the name. Usually morphisms of digraphs are required to send edges to edges, but we need to consider here a less stringent notion of morphism. Namely, we allow maps with degeneracies, i.e., that map edges to vertices. More precisely, if $\Gamma=(V,E)$ and $\Gamma'=(V',E')$ are digraphs, then a \emph{cellular morphism} is a map $f\colon V\to V'$ such that if $(v,w)\in E$, then either $f(v)=f(w)$ or $(f(v),f(w))\in E'$. The reason for the term ``cellular'' is that if we view graphs as $1$-dimensional CW-complexes, then it is perfectly legal to map a cell to a lower dimensional cell. If $p=v_0,\ldots, v_m$ is a walk in $\Gamma$, then $f(p)=f(v_0),\ldots, f(v_m)$ is a walk in $\Gamma'$; however, non-degenerate walks can be mapped to degenerate walks. It is trivial to see that if $f\colon \Gamma\to \Gamma'$ is a cellular morphism, then $f$ takes weak components of $\Gamma$ into weak components of $\Gamma'$ and strong components of $\Gamma$ into strong components of $\Gamma'$. A cycle $C$ in a digraph $\Gamma$ is \emph{minimal} if it has minimal length amongst all cycles of $\Gamma$. Minimal cycles exist in non-acyclic digraphs and have length at least $2$ because we do not allow loop edges. \begin{Prop}\label{minimaltominimal} Let $f\colon \Gamma\to \Gamma$ be a cellular endomorphism of $\Gamma$ and let $C$ be a minimal cycle of $\Gamma$. Then either $f(C)$ is a minimal cycle or $f(C)^{\wedge}$ is empty. \end{Prop} \begin{proof} Let $m$ be the length of $C$ and suppose $f(C)^{\wedge}$ is non-empty. Then $f(C)^{\wedge}$ is a closed walk of length at most $m$. If it is not a cycle, then it contains a proper subwalk that is a cycle of length smaller than the length of $C$, a contradiction. Thus $f(C)^{\wedge}$ is a cycle. But then minimality of $C$ implies that $f(C)^{\wedge}$ has length $m$. Thus $f(C)=f(C)^{\wedge}$ is a minimal cycle. \end{proof} By an action of a monoid $M$ on a digraph $\Gamma=(V,E)$, we mean an action by cellular morphisms. In other words, $M$ acts on $V$ in such a way that the reflexive closure of $E$ is stable for the action of $M$. We say the action is \emph{vertex transitive} if $M$ is transitive on $V$; we say that it is \emph{edge transitive} if either $M$ acts transitively on $E$ or $M$ acts $0$-transitively on $(E\cup \Delta)/\Delta$ where $\Delta = \{(v,v)\mid v\in V\}$ is the diagonal. Equivalently, for each pair of edges $e,f\in E$, there is an element $m\in M$ with $em=f$ (in the setting of monoid actions on digraphs, we use the notation of right actions). \begin{Lemma}\label{edgetransitive} Suppose that $\Gamma$ is a non-acyclic digraph admitting an edge transitive monoid $M$ of cellular endomorphisms. Then every edge of $\Gamma$ belongs to a minimal cycle.
\end{Lemma} \begin{proof} Let $C$ be a minimal cycle of $\Gamma$ and fix an edge $e$ of $C$. Suppose now that $f$ is an arbitrary edge of $\Gamma$. By edge transitivity, there exists $m\in M$ with $em=f$. Since $(Cm)^{\wedge}$ is non-empty (it contains the edge $f$), it follows that $Cm$ is a minimal cycle by Proposition~\ref{minimaltominimal}. This completes the proof. \end{proof} An immediate corollary of the lemma is the following result. \begin{Cor}\label{acyclicorstrong} Suppose that $\Gamma$ is a digraph admitting an edge transitive monoid of cellular endomorphisms. Then either $\Gamma$ is acyclic or each weak component of $\Gamma$ is strongly connected. \end{Cor} \begin{proof} If $\Gamma$ is not acyclic, then Lemma~\ref{edgetransitive} shows that each edge of $\Gamma$ belongs to a cycle. It is then immediate that each weak component is strongly connected (since the relation $\leq$ is symmetric in this case). \end{proof} \subsection{Orbital digraphs} Suppose now that $(\Omega,M)$ is a transitive transformation monoid. Then $M$ acts on $\Omega^2=\Omega\times \Omega$ by $(\alpha,\beta)m = (\alpha m,\beta m)$. Notice that $\Delta=\{(\alpha,\alpha)\mid \alpha\in \Omega\}$ is a (minimal) strong orbit. We call $\Delta$ the \emph{trivial orbital} of $M$. A strong orbit $\mathcal O\neq \Delta$ is an \emph{orbital} if it is minimal in the poset $(\Omega^2/M)\setminus \{\Delta\}$, or equivalently if it is $0$-minimal in $\Omega^2/\Delta$. Such orbitals are called \emph{non-trivial}. This coincides with the usual group theoretic notion when $M$ is a group~\cite{dixonbook,cameron}. Non-trivial orbitals were first studied by Trahtman~\cite{Trahtman} under a different name in the context of the \v{C}ern\'y conjecture. The number of orbitals of $M$ is called the \emph{rank} of $M$ because this is the well-established terminology in group theory. From now on, we assume in this section that $\Omega$ is finite. For permutation groups, it is well known~\cite{dixonbook,cameron} that the number of orbitals is equal to the number of suborbits (recall that a suborbit is an orbit of the point stabilizer). This is not the case for transformation monoids. For example, if $\Omega$ is a finite set of size $n$ and $M$ consists of the identity map and the constant maps, then there are $n^2-n+1$ orbitals, which is larger than the number of points of $\Omega$. If $\mathcal O$ is a non-trivial orbital, then the corresponding \emph{orbital digraph} $\Gamma(\mathcal O)$ has vertex set $\Omega$ and edge set $\mathcal O$. Since $\mathcal O$ is a strong orbit, it follows that $M$ acts edge transitively on $\Gamma(\mathcal O)$ by cellular morphisms. Hence we have the following immediate consequence of Corollary~\ref{acyclicorstrong}. \begin{Thm}\label{posetorstrongconnected} Let $(\Omega,M)$ be a transformation monoid and let $\mathcal O$ be a non-trivial orbital. Then the orbital digraph $\Gamma(\mathcal O)$ is either acyclic or each weak component of $\Gamma(\mathcal O)$ is strongly connected. \end{Thm} It was shown by Trahtman~\cite{Trahtman} that if $M$ is aperiodic, then $\Gamma(\mathcal O)$ is always acyclic (using different terminology: he speaks neither of digraphs nor orbitals). Here we recall that a finite monoid $M$ is \emph{aperiodic} if each of its maximal subgroups $G_e$ with $e\in E(M)$ is trivial, or equivalently, if $M$ satisfies an identity of the form $x^n=x^{n+1}$. On the other hand, if $M$ is a non-trivial group, then each weak component of $\Gamma(\mathcal O)$ is strongly connected~\cite{dixonbook,cameron}.
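For small examples, non-trivial orbitals can be computed directly from a generating set of $M$: strong orbits on $\Omega^2$ are mutual-reachability classes under the generator action, and a strong orbit $\mathcal O\neq\Delta$ is an orbital precisely when every one-step image of $\mathcal O$ stays in $\mathcal O$ or falls into the ($M$-invariant) diagonal. Here is a naive quadratic Python sketch (ours; the names are hypothetical).

\begin{verbatim}
def forward_orbit(pair, gens):
    # The set pair.M, computed over a generating set of M
    # (generators are tuples acting on the right).
    seen, stack = {pair}, [pair]
    while stack:
        a, b = stack.pop()
        for g in gens:
            q = (g[a], g[b])
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def nontrivial_orbitals(gens, n):
    # Strong orbits O != Delta on Omega^2 whose one-step images all
    # lie in O or in the diagonal; each is the edge set of Gamma(O).
    pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
    reach = {p: forward_orbit(p, gens) for p in pairs}
    orbitals, seen = [], set()
    for p in pairs:
        if p in seen:
            continue
        orbit = {q for q in reach[p]
                 if q[0] != q[1] and p in reach[q]}
        seen |= orbit
        if all((g[a], g[b]) in orbit or g[a] == g[b]
               for (a, b) in orbit for g in gens):
            orbitals.append(orbit)
    return orbitals
\end{verbatim}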
Let $\mathcal O$ be a non-trivial orbital. Then $M$ either acts transitively on $\mathcal O$ (if it is a minimal strong orbit) or $0$-transitively on the set $\til{\mathcal O}=(\mathcal O\cup \Delta)/\Delta$. In either case, the action need not be faithful. For example, if $I(M)$ consists of the constant maps on $\Omega$, then all of $I(M)$ acts as the zero map on $\til{\mathcal O}$. Let $M(\mathcal O)$ be the faithful quotient. If $M$ is aperiodic, then so is $M(\mathcal O)$. \begin{Thm}\label{superTraht} Let $(\Omega,M)$ be a finite transformation monoid and suppose that $\mathcal O$ is a non-trivial orbital. \begin{enumerate} \item If $M$ acts transitively on $\mathcal O$, then $\Gamma(\mathcal O)$ is acyclic if $G_e$ is trivial for $e\in E(I(M(\mathcal O)))$. \item If $M$ acts $0$-transitively on $\til{\mathcal O}$ and $e\in E(I)\setminus \{0\}$ where $I$ is the $0$-minimal ideal of $M(\mathcal O)$, then $\Gamma(\mathcal O)$ is acyclic if $G_e$ is trivial. \end{enumerate} \end{Thm} \begin{proof} We handle only (2), as (1) is similar but simpler. Suppose that $G_e$ is trivial, but that $\Gamma(\mathcal O)$ is not acyclic. Since $G_e$ is transitive on $\mathcal O e\setminus \Delta$, we have $|\mathcal O e\setminus \Delta|=1$. Let $(\alpha,\beta)\in \mathcal O e\setminus \Delta$. By Lemma~\ref{edgetransitive}, $(\alpha,\beta)$ belongs to some minimal cycle $C$. Let $m\in M$ with $m$ mapping to $e$ in $M(\mathcal O)$. Then $(\alpha,\beta)m=(\alpha,\beta)$ and so $Cm$ is a minimal cycle by Proposition~\ref{minimaltominimal}. If $X$ is the set of edges of $C$, this yields that $Xe$ is a subset of $\mathcal O e\setminus \Delta$ of size greater than $1$. This contradiction shows that $\Gamma(\mathcal O)$ is acyclic. \end{proof} Theorem~\ref{superTraht} admits the following corollary, due to Trahtman with a different formulation. \begin{Cor}[Trahtman~\cite{Trahtman}] Let $(\Omega, M)$ be a transitive finite transformation monoid with $M$ aperiodic. Then each non-trivial orbital digraph $\Gamma(\mathcal O)$ is acyclic and hence defines a non-trivial partial order on $\Omega$ that is stable for the action of $M$. \end{Cor} If $(\Omega, G)$ is a finite transitive permutation group, then a classical result of D.~Higman says that $G$ is primitive if and only if each non-trivial orbital digraph is strongly connected (which coincides with weakly connected in this context)~\cite{dixonbook,cameron}. We now prove the transformation monoid analogue. It is this result that justifies our choice of the notion of an orbital. \begin{Thm}\label{orbitalthm} A finite transitive transformation monoid $(\Omega,M)$ is primitive if and only if each of its non-trivial orbital digraphs is weakly connected. \end{Thm} \begin{proof} Suppose first that $(\Omega,M)$ is primitive and let $\mathcal O$ be a non-trivial orbital. Then the partition of $\Omega$ into the weak components of $\Gamma(\mathcal O)$ is a non-trivial congruence. Indeed, as $M$ acts by cellular morphisms, it preserves the weak components; moreover, $\Gamma(\mathcal O)$ has at least one edge so not all weak components are trivial. It follows by primitivity that there is just one weak component, i.e., $\Gamma(\mathcal O)$ is weakly connected. Conversely, assume that each non-trivial orbital digraph is weakly connected and let $\equiv$ be a non-trivial congruence on $\Omega$. Then $\equiv$ is an $M$-invariant subset of $\Omega^2$ strictly containing the diagonal $\Delta$.
By finiteness, we conclude that $\equiv$ contains a strong orbit of $\Omega^2$ that is minimal among those different from $\Delta$, that is, there is a non-trivial orbital $\mathcal O$ with $\mathcal O\subseteq {\equiv}$. The weak components of $\Gamma(\mathcal O)$ are the equivalence classes of the equivalence relation generated by $\mathcal O$ and hence each weak component of $\Gamma(\mathcal O)$ is contained in a single $\equiv$-class. But $\Gamma(\mathcal O)$ is weakly connected, so $\Omega$ is contained in a single $\equiv$-class, that is, $\equiv$ is not a proper congruence. This completes the proof that $(\Omega,M)$ is primitive. \end{proof} As a corollary, we obtain the following. \begin{Cor} Let $(\Omega,M)$ be a primitive finite transitive transformation monoid with $M$ aperiodic. Then $\Omega$ admits a stable connected partial order. \end{Cor} Later on, it will be convenient to have a name for the set of weak orbits of $M$ on $\Omega^2$. We shall call them \emph{weak orbitals}. \section{Transformation modules} Our goal now is to study the representations associated to a transformation monoid. The theory developed here has a different flavor from the group case because there is an interesting duality that arises. Fix for this section a finite transformation monoid $(\Omega,M)$ and a field $K$ of characteristic $0$. Let $KM$ be the corresponding monoid algebra. Associated to the $M$-set $\Omega$ are a right $KM$-module and a left $KM$-module together with a dual pairing. This pairing has already been implicitly exploited in a number of papers in the \v{C}ern\'y conjecture literature, e.g.,~\cite{dubuc,Kari,mycerny,averaging}. The \emph{transformation module} associated to $(\Omega,M)$ is the right $KM$-module $K\Omega$. That is, we take a $K$-vector space with basis $\Omega$ and extend the action of $M$ on $\Omega$ linearly: formally, for $m\in M$, define \[\left(\sum_{\omega\in \Omega} c_{\omega}\omega\right) m = \sum_{\omega\in\Omega}c_{\omega}\omega m.\] The \emph{dual transformation module} is the space $K^{\Omega}$ of $K$-valued functions on $\Omega$ with the left $KM$-module structure given by $mf(\omega) = f(\omega m)$ for $m\in M$ and $f\colon \Omega\to K$. When $M$ is a group, these two representations are the same under the natural correspondence between left modules and right modules, but for monoids these modules are simply dual to each other. There is a non-degenerate pairing $\langle\ ,\ \rangle\colon K\Omega\times K^{\Omega}\to K$ given by \begin{equation}\label{pairing} \langle \alpha,f\rangle = f(\alpha) \end{equation} for $\alpha\in \Omega$. The pairing on general linear combinations is given by \[\left\langle \sum_{\alpha\in \Omega}c_{\alpha}\alpha,f\right\rangle = \sum_{\alpha\in \Omega}c_{\alpha}f(\alpha).\] Observe that $K^{\Omega}$ has basis the Dirac functions $\delta_{\omega}$ with $\omega\in \Omega$. If $m\in M$, then one verifies that \[m\delta_{\omega} = \sum_{\alpha\in \omega m^{-1}} \delta_{\alpha}\] and more generally if $S\subseteq \Omega$ and $I_S$ denotes the indicator (or characteristic) function of $S$, then \[mI_S=I_{Sm^{-1}}.\] Intuitively, the action of $M$ on $K^{\Omega}$ is by inverse images and this is why $K\Omega$ and $K^\Omega$ contain the same information in the case of groups. The following adjointness holds. \begin{Prop}\label{adjoint} The left and right actions of $m\in M$ on $K\Omega$ and $K^{\Omega}$ are adjoint.
That is, for $v\in K\Omega$, $f\in K^{\Omega}$ and $m\in M$, one has \[\langle vm,f\rangle = \langle v,mf\rangle.\] \end{Prop} \begin{proof} It suffices by linearity to handle the case $v=\alpha\in \Omega$. Then \[\langle \alpha m,f\rangle = f(\alpha m) =mf(\alpha) = \langle \alpha,mf\rangle,\] as required. \end{proof} As a consequence, we see that $K^{\Omega}$ is dual to $K\Omega$, that is, \[K^{\Omega}\cong \hom_K(K\Omega,K)\] as left $KM$-modules. We remark that the bases $\Omega$ and $\{\delta_{\omega}\mid \omega\in \Omega\}$ are dual with respect to the pairing \eqref{pairing}. If $|\Omega|=n$ and we fix an ordering $\Omega =\{\omega_1,\ldots, \omega_n\}$, then it is convenient to identify elements of $K\Omega$ with row vectors in $K^n$ and elements of $K^{\Omega}$ with column vectors (by associating $f$ with the column vector $(f(\omega_1),\ldots,f(\omega_n))^T$). The dual pairing then turns into the usual product of a row vector with a column vector. If $\rho\colon M\to M_n(K)$ is the matrix representation afforded by the right $KM$-module $K\Omega$, then the action on column vectors is the matrix representation afforded by the left $KM$-module $K^{\Omega}$. We mention the following trivial observation. \begin{Prop} Let $(\Omega,M)$ be a finite transformation monoid and suppose that $\mathcal O_1,\ldots,\mathcal O_s$ are the weak orbits of $M$. Then $K\Omega\cong \bigoplus_{i=1}^sK\mathcal O_i$ and $K^\Omega\cong \bigoplus_{i=1}^sK^{\mathcal O_i}$ where we identify $K^{\mathcal O_i}$ with those functions $\Omega\to K$ supported on $\mathcal O_i$, for $1\leq i\leq s$. \end{Prop} Thus for most purposes, it suffices to restrict our attention to the weakly transitive case. \subsection{The subspace of $M$-invariants} Let $V$ be a left/right $KM$-module. Then $V^M$ denotes the subspace of \emph{$M$-invariants}, that is, of all vectors fixed by $M$. If $K$ is the trivial left/right $KM$-module, then $V^M\cong \hom_{KM}(K,V)$. Unlike the case of groups, it is not in general true that $\hom_{KM}(K,V)\cong \hom_{KM}(V,K)$. In fact, we shall see in a moment that in most cases $K\Omega^M=\{0\}$, whereas the $K$-dimension of $\hom_{KM}(K\Omega,K)$ is the number of weak orbits of $M$. It is also the case that the module $K\Omega$ is almost never semisimple and quite often the multiplicity of the trivial module as a composition factor of $K\Omega$ is strictly greater than the number of weak orbits of $M$. The following result generalizes a standard result from permutation group theory. \begin{Prop}\label{fixedset} Consider $\hom_{KM}(K\Omega,K)$ where $K$ is given the structure of a trivial $KM$-module and $\hom_M(\Omega,K)$ where $K$ is given the structure of a trivial $M$-set. Then there are $K$-vector space isomorphisms \begin{equation}\label{eq:fixedset} \hom_{KM}(K\Omega,K)\cong \hom_M(\Omega,K)=(K^{\Omega})^M\cong K^{\pi_0(\Omega)}. \end{equation} More precisely, $f\in (K^{\Omega})^M$ if and only if it is constant on weak orbits of $\Omega$. Consequently, $\dim_K \hom_{KM}(K\Omega,K)=\dim_K (K^{\Omega})^M$ is the number of weak orbits of $M$ on $\Omega$. \end{Prop} \begin{proof} A $K$-linear map $T\colon K\Omega\to K$ is the same thing as a map $\Omega\to K$ because $\Omega$ is a basis for $K\Omega$. Clearly, $T$ is a $KM$-module morphism if and only if the associated mapping $\Omega\to K$ is an $M$-set morphism. This provides the first isomorphism of \eqref{eq:fixedset}.
Proposition~\ref{connectedcomp} shows that $f\colon \Omega\to K$ is an $M$-set morphism if and only if it is constant on weak orbits, yielding the isomorphism of the second and fourth terms of \eqref{eq:fixedset}. Finally, observe that $f\colon \Omega\to K$ is an $M$-set map if and only if $f(\omega m)=f(\omega)$ for all $\omega\in \Omega$, $m\in M$. But this is equivalent to asking $mf=f$ for all $m\in M$, i.e., $f\in (K^{\Omega})^M$. This completes the proof. \end{proof} The situation for $K\Omega^M$ is quite different. It is well known that a finite monoid $M$ admits a surjective maximal group image homomorphism $\sigma\colon M\to G(M)$ where $G(M)$ is a finite group. This map is characterized by the universal property that if $\varphi\colon M\to H$ is a homomorphism from $M$ into a group $H$, then there is a unique homomorphism $\psi\colon G(M)\to H$ so that \[\xymatrix{M\ar[r]^\sigma\ar[rd]_\varphi & G(M)\ar@{..>}[d]^\psi \\ & H}\] commutes. Using the fact that a finite monoid is a group if and only if it has a unique idempotent, one can describe $G(M)$ as the quotient of $M$ by the least congruence for which all idempotents are equivalent and $\sigma$ as the quotient map. Alternatively, it is the quotient by the intersection of all congruences on $M$ whose corresponding quotient is a group. \begin{Prop}\label{invariantsonright0} If $(\Omega,M)$ is a transformation monoid, then $K\Omega^M\neq 0$ if and only if there is an $M$-invariant subset $\Lambda$ fixed by all idempotents of $M$. \end{Prop} \begin{proof} Suppose first that $\Lambda$ is an $M$-invariant subset fixed by all idempotents of $M$. Then $\Lambda$ is naturally a $G(M)$-set and $K\Lambda^M=K\Lambda^{G(M)}$. Group representation theory then yields that $\dim_K K\Lambda^{G(M)}$ is the number of orbits of $G(M)$ on $\Lambda$, and so is non-zero. Thus $K\Omega^M\neq 0$. Next suppose that $v\in K\Omega^M$. Then $ve=v$ for all idempotents $e\in E(M)$, so $v\in \bigcap_{e\in E(M)}K\Omega e$. Suppose that $v=\sum_{\lambda\in \Lambda}c_{\lambda}\lambda$ with $c_{\lambda}\neq 0$ for all $\lambda\in \Lambda$. Then $\Lambda\subseteq \Omega e$ for all $e\in E(M)$. Also, if $m\in M$, then $vm=v$ implies that $\Lambda m=\Lambda$ and so $\Lambda$ is $M$-invariant. \end{proof} As corollaries, we obtain the following results. \begin{Cor}\label{invariantsonright} Suppose that all elements of $I(M)$ have the same image $\Lambda$. Then $K\Omega^M\neq 0$. \end{Cor} \begin{proof} Let $e$ be an idempotent of $M$ and choose $m\in I(M)$. Then $me\in I(M)$ and so $\Lambda = \Omega me\subseteq \Omega e$. Thus all idempotents of $M$ fix $\Lambda$. Since $\min_M(\Omega) = \{\Lambda\}$, it follows that $\Lambda$ is $M$-invariant. The result now follows from Proposition~\ref{invariantsonright0}. \end{proof} The next corollary shows that in the transitive setting only groups admit non-trivial invariants. \begin{Cor} Suppose that $(\Omega, M)$ is transitive. Then $K\Omega^M\neq 0$ if and only if $M$ is a group. In particular, if $(\Omega,M)$ is transitive, then the module $K\Omega$ is semisimple if and only if $M$ is a group. \end{Cor} \begin{proof} If $M$ is a group, then $\dim_K K\Omega^M$ is the number of orbits of $M$ on $\Omega$ and hence is non-zero. For the converse, suppose $K\Omega^M\neq 0$. Then Proposition~\ref{invariantsonright0} implies that there is an $M$-invariant subset $\Lambda\subseteq \Omega$ such that every idempotent of $M$ fixes $\Lambda$. But transitivity implies $\Lambda=\Omega$. Thus the unique idempotent of $M$ is the identity.
We conclude that $M$ is a group. The final statement follows because if $K\Omega$ is semisimple, then the fact that $\hom_{KM}(K\Omega,K)\neq 0$ (by Proposition~\ref{fixedset}) implies that the trivial representation is a subrepresentation of $K\Omega$. But this means that $K\Omega^M\neq 0$ and so $M$ is a group. Conversely, if $M$ is a group, then $K\Omega$ is semisimple by Maschke's theorem. \end{proof} Let us now interpret some of these results for associated actions of $M$. A $K$-bilinear form $B\colon K\Omega\times K\Omega\to K$ is said to be \emph{$M$-invariant} if \[B(vm,wm)=B(v,w)\] for all $v,w\in K\Omega$ and $m\in M$. Let $\mathop{\mathrm{bil}}\nolimits_K(K\Omega)$ be the space of $K$-bilinear forms on $K\Omega$. There is a natural left $KM$-module structure on $\mathop{\mathrm{bil}}\nolimits_K(K\Omega)$ given by putting $(mB)(v,w) = B(vm,wm)$. Then $\mathop{\mathrm{bil}}\nolimits_K(K\Omega)^M$ is the space of $M$-invariant $K$-bilinear forms. As $K$-bilinear forms are determined by their values on a basis, it is easy to see that $\mathop{\mathrm{bil}}\nolimits_K(K\Omega)\cong K^{\Omega\times \Omega}$ as $KM$-modules. Moreover, $\mathop{\mathrm{bil}}\nolimits_K(K\Omega)^M\cong (K^{\Omega\times \Omega})^M$. Thus we have proved the following. \begin{Prop} The dimension of the space of $M$-invariant $K$-bilinear forms on $K\Omega$ is the number of weak orbitals of $(\Omega,M)$. \end{Prop} Let $\Omega^{\{2\}}$ denote the subset of $P(\Omega)$ consisting of all $1$- and $2$-element subsets. Then $\Omega^{\{2\}}$ is $M$-invariant and can be identified with the quotient of $\Omega^2$ by the equivalence relation putting $(\alpha,\omega)\equiv (\omega,\alpha)$ for all $\alpha,\omega$. It is then easy to see that $K^{\Omega^{\{2\}}}$ is isomorphic as a $KM$-module to the space of symmetric $K$-bilinear forms on $K\Omega$ and hence $(K^{\Omega^{\{2\}}})^M$ is the space of $M$-invariant symmetric $K$-bilinear forms on $K\Omega$. Thus we have proved: \begin{Prop} The dimension of the space of $M$-invariant symmetric $K$-bilinear forms on $K\Omega$ is the number of weak orbits of $M$ on $\Omega^{\{2\}}$. \end{Prop} \subsection{The augmentation submodule} If $(\Omega,M)$ is a finite transformation monoid, one always has the augmentation map $\varepsilon\colon K\Omega\to K$ given by \[\varepsilon(v)=\langle v,I_{\Omega}\rangle.\] If $v=\sum_{\omega\in\Omega}c_{\omega}\omega$, then $\varepsilon(v)=\sum_{\omega\in\Omega}c_{\omega}$. Clearly $I_{\Omega}$ is constant on weak orbits and so $\varepsilon$ is a $KM$-module homomorphism (where $K$ is given the trivial $KM$-module structure). Thus $\ker \varepsilon$ is a $KM$-submodule, called the \emph{augmentation submodule}, and is denoted $\mathrm{Aug}(K\Omega)$. A key fact, which plays a role in the \v{C}ern\'y conjecture literature, is that $m\in M$ is a constant map if and only if $m$ annihilates the augmentation submodule. Indeed, $\mathrm{Aug}(K\Omega)$ consists of those vectors $v=\sum_{\omega\in\Omega}c_{\omega}\omega$ such that $\sum_{\omega\in\Omega}c_{\omega}=0$. If we fix $\omega_0\in \Omega$, then the set of differences $\omega-\omega_0$ where $\omega$ runs over $\Omega\setminus \{\omega_0\}$ is a basis for $\mathrm{Aug}(K\Omega)$. Thus $m$ annihilates $\mathrm{Aug}(K\Omega)$ if and only if $\omega m=\omega_0 m$ for all $\omega\in \Omega$, i.e., $m$ is a constant map. This has a generalization, due to the author and Almeida~\cite{mortality} (inspired by Rystsov~\cite{rystrank}), that we reproduce here for the reader's convenience.
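Before stating that generalization, we note that the key fact above is easy to verify in coordinates: in the row-vector picture fixed earlier, $m$ acts on $K\Omega$ by the $0$--$1$ matrix whose $i$th row has its unique $1$ in column $\omega_im$, and $m$ kills the difference basis of $\mathrm{Aug}(K\Omega)$ exactly when all these rows coincide. A minimal Python sketch (ours, with hypothetical names):

\begin{verbatim}
def rho(m, n):
    # 0-1 matrix of the right action of m on K.Omega (row convention).
    return [[1 if m[i] == j else 0 for j in range(n)]
            for i in range(n)]

def annihilates_augmentation(m, n):
    # (omega_i - omega_0).m = 0 for all i  iff  all rows of rho(m)
    # coincide  iff  m is a constant map.
    P = rho(m, n)
    return all(P[i] == P[0] for i in range(1, n))

print(annihilates_augmentation((2, 2, 2), 3))  # True: a constant map
print(annihilates_augmentation((1, 0, 2), 3))  # False: a permutation
\end{verbatim}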
First we need some notation. If $X\subseteq \Omega$, let $[X] =\sum_{\omega\in X}\omega$. \begin{Prop}\label{minrankspace} Let $(\Omega,M)$ be a transformation monoid of degree $n$ and min-rank $r$. Let $K\Omega_r$ be the subspace of $\mathrm{Aug}(K\Omega)$ spanned by the differences $[X]-[Y]$ with $X,Y\in \min\nolimits_M(\Omega)$. Then $K\Omega_r$ is a $KM$-submodule with $\dim_K K\Omega_r\leq n-r$. Moreover, if $M$ is transitive, then $m\in M$ annihilates $K\Omega_r$ if and only if $m\in I(M)$. \end{Prop} \begin{proof} First observe that $\varepsilon([X]-[Y]) = r-r=0$ for $X,Y\in \min\nolimits_M(\Omega)$ and so $K\Omega_r\subseteq \mathrm{Aug}(K\Omega)$. The $M$-invariance of $K\Omega_r$ follows from the fact that $\min_M(\Omega)$ is $M$-invariant and Proposition~\ref{kernelpartition}. Fix $s\in I(M)$ and let $\ker s=\{P_1,\ldots,P_r\}$. Proposition~\ref{kernelpartition} shows that if $X\in \min_M(\Omega)$, then $|X\cap P_i|=1$ for $i=1,\ldots, r$. But $|X\cap P_i| = \langle [X],I_{P_i}\rangle$ and so $K\Omega_r\subseteq \mathrm{Span}\{I_{P_i}\mid 1\leq i\leq r\}^{\perp}$. Since $\{P_1,\ldots,P_r\}$ is a partition, the indicator functions $I_{P_1},\ldots,I_{P_r}$ trivially form a linearly independent subset of $K^\Omega$. As our pairing is non-degenerate, we may conclude that $\dim_K K\Omega_r\leq n-r$. Suppose now that $(\Omega,M)$ is transitive. Then if $m\in I(M)$, trivially $Xm=\Omega m$ for any $X\in \min_M(\Omega)$. Thus $m$ annihilates $K\Omega_r$. Suppose that $m\notin I(M)$. Then $m$ has rank at least $r+1$. Choose $X\in \min_M(\Omega)$. Then $|Xm|=r$ and hence $Xm$ is a proper subset of $\Omega m$. Let $\alpha\in \Omega m\setminus Xm$ and suppose that $\alpha =\beta m$. By transitivity of $I(M)$ on $\Omega$, we can find $n\in I(M)$ with $\beta\in \Omega n=Y$. Then $([Y]-[X])m = [Y]m-[X]m\neq 0$ as the coefficient of $\alpha$ in $[Y]m$ is non-zero, whereas the coefficient of $\alpha$ in $[X]m$ is zero. Thus $m$ does not annihilate $K\Omega_r$, completing the proof. \end{proof} Of course, when the min-rank is $1$ and $(\Omega,M)$ is transitive, then $I(M)$ consists of the constant maps and $K\Omega_{1}=\mathrm{Aug}(K\Omega)$. On the other extreme, if the min-rank of $(\Omega,M)$ is $n$, that is, $(\Omega,M)$ is a permutation group, then $K\Omega_n=\{0\}$. Our next result generalizes a result from~\cite{synchgroups} for permutation groups. Let us continue to assume that $K$ is a field of characteristic $0$. Then a transformation monoid $(\Omega,M)$ is said to be a \emph{$KI$-monoid} if $\mathrm{Aug}(K\Omega)$ is a simple $KM$-module. It is well known that for permutation groups being a $\mathbb CI$-group is equivalent to $2$-transitivity and being an $\mathbb RI$-group is equivalent to $2$-homogeneity~\cite{synchcoop}. The case of $\mathbb QI$-groups has been studied in~\cite{synchgroups,dixonQI,synchcoop}. The results of~\cite{synchgroups} imply that a $KI$-group $(\Omega,G)$ is primitive and if $f$ is any non-invertible map on $\Omega$, then $\langle G,f\rangle$ contains a constant map. Here is the general case. \begin{Thm}\label{KImonoid} Let $(\Omega,M)$ be a $KI$-monoid. Then: \begin{enumerate} \item $(\Omega,M)$ is primitive; \item If in addition $(\Omega,M)$ is transitive, then either it is a permutation group or $M$ contains a constant map. \end{enumerate} \end{Thm} \begin{proof} Suppose that $\equiv$ is a non-trivial proper congruence on $\Omega$.
Functoriality of the transformation module construction and the observation that the trivial module $K$ is the transformation module associated to the trivial action of $M$ on a one-point set yield the commutative diagram \[\xymatrix{K\Omega\ar[rr]^{\psi}\ar[rd]_{\varepsilon}&&K[\Omega/{\equiv}]\ar[ld]^{\varepsilon'}\\ &K&}\] with $\psi$ induced by the quotient map and with $\varepsilon,\varepsilon'$ the augmentations. As $\equiv$ is proper and non-trivial, it follows that $\ker \psi$ is a non-zero proper $KM$-submodule of $\ker \varepsilon=\mathrm{Aug}(K\Omega)$, contradicting that $(\Omega,M)$ is a $KI$-monoid. Thus $(\Omega,M)$ is primitive. To prove the second item, assume by way of contradiction that the min-rank $r$ of $(\Omega,M)$ satisfies $1<r<n$. Since $r<n$, $\min\nolimits_M(\Omega)$ has at least two elements and so $K\Omega_r\neq 0$. On the other hand, Proposition~\ref{minrankspace} shows that $K\Omega_r$ is a $KM$-submodule of $\mathrm{Aug}(K\Omega)$ of dimension at most $n-r<n-1=\dim \mathrm{Aug}(K\Omega)$. This contradicts that $(\Omega,M)$ is a $KI$-monoid. \end{proof} \subsection{Partial transformation modules} Next we consider the case of $M$-sets with zero. Even if we start with a transformation monoid $(\Omega,M)$, consideration of the quotient $\Omega/\Lambda$ by an $M$-invariant subset $\Lambda$ will lead us to this case. So suppose that $(\Omega,M)$ is a finite transformation monoid with $\Omega$ an $M$-set with zero. For the moment we shall denote the zero of $\Omega$ by $\zeta$ to distinguish it from the zero element of $K\Omega$. Define the \emph{partial transformation module} (or \emph{contracted transformation module}) \[K_0\Omega = K\Omega/K\zeta.\] This is indeed a $KM$-module because $K\zeta$ is a $KM$-submodule. As a $K$-vector space, $K_0\Omega$ has basis the cosets $\alpha+K\zeta$ with $\zeta\neq\alpha\in \Omega$. Thus from now on we identify $\zeta$ with the zero of $K_0\Omega$ and return to using $0$ for the distinguished sink of $\Omega$. We identify $\alpha$ with the coset $\alpha+K\zeta$ for $0\neq \alpha\in \Omega$. Said differently, we can view $K_0\Omega$ as a $K$-vector space with basis $\Omega\setminus\{0\}$. The action of $m\in M$ on $\Omega$ is extended linearly, but where we now identify the zero of $\Omega$ with the zero of $K_0\Omega$. An alternate viewpoint is the following (retaining the above notation). The augmentation \[\varepsilon\colon K\Omega\to K\] splits via the map $K\to K\Omega$ given by $c\mapsto c\zeta$. Thus \[K\Omega=\mathrm{Aug}(K\Omega)\oplus K\zeta\cong \mathrm{Aug}(K\Omega)\oplus K\] as a $KM$-module and so $K_0\Omega=K\Omega/K\zeta\cong \mathrm{Aug}(K\Omega)$. The natural basis to take for $\mathrm{Aug}(K\Omega)$ consists of all differences $\omega-\zeta$ with $\omega\in \Omega\setminus \{\zeta\}$. Then the action of $m\in M$ is given by $(\omega-\zeta)m = \omega m-\zeta$ and so this provides another model of $K_0\Omega$. If $M$ contains the zero map $z$, then $z$ acts on $K_0\Omega$ as a zero and so $K_0\Omega$ is naturally a module for the \emph{contracted monoid algebra} $K_0M=KM/Kz$. Therefore, we will continue to use $0$ to denote the zero map on $\Omega$ and we shall identify the zero of $M$ with the zero of $K_0M$ and view $K_0\Omega$ as a $K_0M$-module. The representations of $K_0M$ are exactly the representations of $M$ that send $0$ to the zero matrix. In particular, the trivial $KM$-module $K$ is not a $K_0M$-module and hence $K_0\Omega$ does not contain the trivial representation as a constituent.
We record the observation about the trivial module as a proposition. \begin{Prop} Let $(\Omega,M)$ be a transformation monoid where $\Omega$ is an $M$-set with zero and suppose that $M$ contains the zero map. Then the trivial module is not a constituent of $K_0\Omega$ and in particular $K_0\Omega^M=0$. \end{Prop} Notice that if $\Omega$ is an $M$-set and $\Lambda$ is an $M$-invariant subset, then there is an isomorphism $K\Omega/K\Lambda\cong K_0[\Omega/\Lambda]$. We shall use both notations as convenient. Returning to the case of a transformation monoid $(\Omega,M)$ where $\Omega$ is an $M$-set with zero, we would like the analogue of the dual pairing \eqref{pairing}. Let us again momentarily use the notation $\zeta$ for the zero of $\Omega$. Let $K^{\Omega}_0$ be the subspace of all functions $f\colon \Omega\to K$ such that $f(\zeta)=0$. This is a $KM$-submodule because $f(\zeta)=0$ implies $mf(\zeta)=f(\zeta m)=f(\zeta)=0$ for all $m\in M$. As $K^{\Omega}_0$ is the annihilator of $K\zeta$ with respect to the pairing \eqref{pairing}, it follows that the pairing descends to a non-degenerate dual pairing $K_0\Omega\times K^{\Omega}_0\rightarrow K$ given by \[\langle \alpha,f\rangle =f(\alpha)\] for $\alpha\in \Omega\setminus \{0\}$, which is compatible with the $KM$-module structure. Alternatively, if we identify $K_0\Omega$ with $\mathrm{Aug}(K\Omega)$, then we can just restrict the original pairing \eqref{pairing}. We now return to writing $0$ for $\zeta$ and identify $K^{\Omega}_0$ with $K^{\Omega\setminus \{0\}}$. The left action of $m\in M$ on $f\in K^{\Omega\setminus \{0\}}$ is then given by \[mf(\alpha) = \begin{cases} f(\alpha m) & \alpha m\neq 0\\ 0 & \text{else.}\end{cases}\] The dual basis to $\Omega\setminus \{0\}$ consists of the functions $\delta_{\alpha}$ with $\alpha\in \Omega\setminus \{0\}$. If $M$ contains the zero map $z$, then $z$ annihilates $K^{\Omega}_0$ (viewed as a subspace of $K^{\Omega}$) and hence $K^{\Omega}_0$ is a left $K_0M$-module. Let us return to the case of a finite transformation monoid $(\Omega,M)$ (with or without zero). Consider a strong orbit $\mathcal O_s(\omega)$ of $M$ on $\Omega$. Let $\Upsilon(\omega) = \omega M\setminus \mathcal O_s(\omega)$. Then $\omega M$ is an $M$-invariant subset of $\Omega$ and $\Upsilon(\omega)$ is an $M$-invariant subset of $\omega M$. Thus we can form the quotient $0$-transitive $M$-set $\omega M/\Upsilon (\omega)$ and hence the partial transformation module $K_0[\omega M/\Upsilon (\omega)]\cong K\omega M/K\Upsilon (\omega)$ (where if $\Upsilon(\omega)=\emptyset$, we interpret $\omega M/\Upsilon (\omega)=\omega M$ and $K\Upsilon(\omega)=0$). This module has a basis in bijection with $\mathcal O_s(\omega)$. Thus we can put a right $KM$-module structure on $K\mathcal O_s(\omega)$ by putting, for $\alpha\in \mathcal O_s(\omega)$, \[\alpha m = \begin{cases} \alpha\cdot m & \alpha\cdot m\in \mathcal O_s(\omega)\\ 0 & \alpha\cdot m\notin \mathcal O_s(\omega)\end{cases}\] where for the moment we use $\cdot$ to indicate the action in $\Omega$. With this module structure, we have a $KM$-isomorphism $K\mathcal O_s(\omega)\cong K_0[\omega M/\Upsilon (\omega)]$. If one considers an unrefinable series of $M$-invariant subsets of $\Omega$ as per \eqref{series}, then one obtains a series \[K\Omega = K\Omega_0\supset K\Omega_1\supset K\Omega_2\supset\cdots \supset K\Omega_k\supset \{0\}\] with successive quotients the modules of the form $K\mathcal O_s(\omega)$ with $\omega\in \Omega$.
In particular, every irreducible constituent of $K\Omega$ is a constituent of some $K\mathcal O_s(\omega)$ with $\omega\in \Omega$. \section{A brief review of monoid representation theory} In this section we briefly review the theory of irreducible representations of finite monoids. This theory was first developed by Munn, Ponizovsky and Clifford~\cite[Chapter 5]{CP}. It was further refined and elaborated on by Rhodes and Zalcstein~\cite{RhodesZalc}, Lallement and Petrich~\cite{LallePet} and McAlister~\cite{McAlisterCharacter}. In~\cite{myirreps} a modern functorial approach was adopted based on Green's theory~\cite[Chapter 6]{Greenpoly}; more in-depth information can be found in~\cite{rrbg}. See also~\cite{ZurJohnBen} for the analogue over semirings. The advantage of this approach is that it avoids reliance on technical semigroup theory and at the same time clarifies the situation by highlighting functoriality and adjunctions. Fix a finite monoid $M$. If $e\in E(M)$, define $I_e = \{m\in M\mid e\notin MmM\}$ and observe that $I_e$ is an ideal of $M$. We follow the obvious conventions when $I_e=\emptyset$, that is, $e\in E(I(M))$. Define $A_e= KM/KI_e\cong K_0[M/I_e]$. Stability immediately yields that $I_e\cap eMe=eMe\setminus G_e$. Thus $eA_ee\cong KG_e$. Hence by Green's theory~\cite{Greenpoly,myirreps} there are induction, restriction and coinduction functors between $KG_e$-modules and $A_e$-modules. Viewing the category of $A_e$-modules as a full subcategory of the category of $KM$-modules, we have the following functors: \begin{gather*} \mathop{\mathrm{Ind}}\nolimits_e\colon \mathrm{mod}\text{-} KG_e\to \mathrm{mod}\text{-} KM\\ \mathop{\mathrm{Res}}\nolimits_e\colon \mathrm{mod}\text{-} KM\to \mathrm{mod}\text{-} KG_e\\ \mathop{\mathrm{Coind}}\nolimits_e\colon \mathrm{mod}\text{-} KG_e\to \mathrm{mod}\text{-} KM \end{gather*} defined by \begin{align*} \mathop{\mathrm{Ind}}\nolimits_e(V) &= V\otimes_{KG_e}e(KM/KI_e) = V\otimes_{KG_e}K_0[eM/eI_e]\\ \mathop{\mathrm{Res}}\nolimits_e(V) &= Ve\\ \mathop{\mathrm{Coind}}\nolimits_e(V) &= \hom_{KG_e}((KM/KI_e)e,V) = \hom_{G_e}(Me\setminus {I_ee},V). \end{align*} Moreover, we have the following results~\cite{myirreps,rrbg}. \begin{Prop}\label{repfacts} Let $e\in E(M)$. Let $K$ be any field (not necessarily of characteristic zero). \begin{enumerate} \item If $V$ is a $KM$-module annihilated by $I_e$ and $W$ is a $KG_e$-module, then there are natural isomorphisms: \begin{align*} \hom_{KM}(\mathop{\mathrm{Ind}}\nolimits_e(W),V)&\cong \hom_{KG_e}(W,\mathop{\mathrm{Res}}\nolimits_e (V))\\ \hom_{KM}(V,\mathop{\mathrm{Coind}}\nolimits_e(W))&\cong \hom_{KG_e}(\mathop{\mathrm{Res}}\nolimits_e (V),W). \end{align*} \item The functors $\mathop{\mathrm{Res}}\nolimits_e\mathop{\mathrm{Ind}}\nolimits_e$ and $\mathop{\mathrm{Res}}\nolimits_e\mathop{\mathrm{Coind}}\nolimits_e$ are naturally isomorphic to the identity functor on $\mathrm{mod}\text{-} KG_e$. \item The functors $\mathop{\mathrm{Ind}}\nolimits_e$, $\mathop{\mathrm{Res}}\nolimits_e$ and $\mathop{\mathrm{Coind}}\nolimits_e$ are exact and preserve direct sum decompositions. Moreover, $\mathop{\mathrm{Ind}}\nolimits_e$ and $\mathop{\mathrm{Coind}}\nolimits_e$ preserve indecomposability. \end{enumerate} \end{Prop} \begin{proof} We just sketch the proof. See~\cite{myirreps,rrbg,Greenpoly} for details. The first part follows from the classical adjunction between tensor products and hom functors once one observes that $\mathop{\mathrm{Res}}\nolimits_e(V)\cong\hom_{A_e}(eA_e,V)\cong V\otimes_{A_e} A_ee$.
The second part is direct from Green-Morita theory~\cite[Chapter 6]{Greenpoly}; see also~\cite{myirreps}. Let us turn to the last part. The point here is that $eM\setminus eI_e$ is a free left $G_e$-set and $Me\setminus {I_ee}$ is a free right $G_e$-set~\cite{qtheor,CP}. Thus $e(KM/KI_e)$ and $(KM/KI_e)e$ are free $KG_e$-modules and so $\mathop{\mathrm{Ind}}\nolimits_e$ and $\mathop{\mathrm{Coind}}\nolimits_e$ are exact. As any additive functor preserves direct sum decompositions it remains to consider indecomposability. To see that these functors preserve indecomposability, let $V$ be a $KG_e$-module and observe that (1) and (2) yield \[\hom_{KM}(\mathop{\mathrm{Ind}}\nolimits_e(V),\mathop{\mathrm{Ind}}\nolimits_e(V))\cong \hom_{KG_e}(V,\mathop{\mathrm{Res}}\nolimits_e\mathop{\mathrm{Ind}}\nolimits_e(V))\cong \hom_{KG_e}(V,V)\] and in fact this isomorphism is a ring isomorphism. But a module is indecomposable if and only if the only idempotents in its endomorphism algebra are $0$ and $1$. Thus $V$ is indecomposable if and only if $\mathop{\mathrm{Ind}}\nolimits_e(V)$ is indecomposable. The argument for $\mathop{\mathrm{Coind}}\nolimits_e(V)$ is identical. \end{proof} From the theory of Green~\cite{Greenpoly,myirreps}, if $V$ is a simple $KG_e$-module, then $\mathop{\mathrm{Ind}}\nolimits_e(V)$ has a unique maximal submodule $\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))$ that can be described as the largest submodule annihilated by $e$, or alternatively \[\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))=\{v\in \mathop{\mathrm{Ind}}\nolimits_e(V)\mid vme=0, \forall m\in M\}.\] The quotient $\til V=\mathop{\mathrm{Ind}}\nolimits_e(V)/\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))$ is then a simple $KM$-module and $\til Ve\cong V$; in fact, the image of the projection $\mathop{\mathrm{Ind}}\nolimits_e(V)\to \til V$ under the restriction functor $\mathop{\mathrm{Res}}\nolimits_e$ is the identity as $e$ annihilates $\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))$. It turns out that all simple $KM$-modules are constructed in this way~\cite{myirreps}. \begin{Thm} Let $K$ be a field and $M$ a finite monoid. Choose a transversal of idempotents $e_1,\ldots,e_m$ to the set of principal ideals generated by idempotents. Let $\mathop{\mathrm{Irr}}\nolimits(KG_{e_i})$ contain one simple $KG_{e_i}$-module from each isomorphism class. Then the modules of the form $\til V=\mathop{\mathrm{Ind}}\nolimits_{e_i}(V)/\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_{e_i}(V))$ where $V\in \mathop{\mathrm{Irr}}\nolimits(KG_{e_i})$ and $1\leq i\leq m$ form a complete set of representatives of the isomorphism classes of simple $KM$-modules. \end{Thm} Recall that if $V$ is a $KM$-module, then $\mathop{\mathrm{rad}}(V)$ is the intersection of all the maximal submodules of $V$. The quotient $V/\mathop{\mathrm{rad}}(V)$ is a semisimple module called the \emph{top} of $V$, denoted $\mathrm{top}(V)$. The description of the radical of $\mathop{\mathrm{Ind}}\nolimits_{e_i}(V)$ for $V$ a simple $KG_{e_i}$-module generalizes. \begin{Prop}\label{computeradical} Let $M$ be a finite monoid and $e\in E(M)$. Suppose that $K$ is a field of characteristic zero and $V$ is a $KG_e$-module. Then \begin{equation}\label{radicaleq} \mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V)) = \{w\in \mathop{\mathrm{Ind}}\nolimits_e(V)\mid wme=0, \forall m\in M\} \end{equation} is the largest submodule of $\mathop{\mathrm{Ind}}\nolimits_e(V)$ annihilated by $e$. 
\end{Prop} \begin{proof} Denote by $U$ the right hand side of \eqref{radicaleq}; it is clearly the largest $KM$-submodule of $\mathop{\mathrm{Ind}}\nolimits_e(V)$ annihilated by $e$. Let $V=\bigoplus_{i=1}^sm_iV_i$ be the decomposition of $V$ into simple $KG_e$-modules. Then as \[\mathop{\mathrm{Ind}}\nolimits_e(V)\cong \bigoplus_{i=1}^sm_i\mathop{\mathrm{Ind}}\nolimits_e(V_i),\] and $\til V_i=\mathop{\mathrm{Ind}}\nolimits_e(V_i)/\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V_i))$, we have an exact sequence of $KM$-modules \[0\longrightarrow \mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))\longrightarrow \mathop{\mathrm{Ind}}\nolimits_e(V)\longrightarrow \bigoplus_{i=1}^sm_i\til V_i\longrightarrow 0.\] Using the exactness of the restriction functor $\mathop{\mathrm{Res}}\nolimits_e$ and the fact that it maps the projection $\mathop{\mathrm{Ind}}\nolimits_e(V_i)\to \til V_i$ to the identity map $V_i\to V_i$, we see that $0=\mathop{\mathrm{Res}}\nolimits_e (\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V)))=\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))e$. This shows that $\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))\subseteq U$. For the converse, let $\varphi\colon \mathop{\mathrm{Ind}}\nolimits_e(V)\to W$ be an epimorphism of $KM$-modules with $W$ a simple $KM$-module. Then $I_e$ annihilates $W$ and so, by the adjunction, we have a non-zero morphism $V\to We$ and so $We\neq 0$. Now $\varphi(U)$ is a submodule of $W$. If it is non-zero, then $\varphi(U)=W$. But then $We=\varphi(U)e=\varphi(Ue)=\varphi(0)=0$, a contradiction. Thus $U\subseteq \ker \varphi$. As $\varphi$ was arbitrary, we conclude that $U\subseteq \mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(V))$. \end{proof} A fact we shall use later is that \[\mathop{\mathrm{Ind}}\nolimits_e(V)eKM=V\otimes_{KG_e}e(KM/KI_e)eKM=\mathop{\mathrm{Ind}}\nolimits_e(V)\] because $e(KM/KI_e)e=KG_e$ and $VKG_e=V$. \section{The projective cover of a transformation module} From now on we assume that the characteristic of our field $K$ is zero and we fix a finite monoid $M$. An important special case of the above theory is when $e\in E(I(M))$. In this case $I_e=\emptyset$ and so $\mathop{\mathrm{Ind}}\nolimits_e(V) = V\otimes_{KG_e}eKM$ and $\mathop{\mathrm{Coind}}\nolimits_e(V) = \hom_{G_e}(Me,V)$. Moreover, the adjunctions of Proposition~\ref{repfacts} hold for all $KM$-modules $V$. Observe that $\mathop{\mathrm{Ind}}\nolimits_e(KG_e) = KG_e\otimes_{KG_e}eKM=eKM$ is a projective $KM$-module (as $KM = eKM\oplus (1-e)KM$). Let \[KG_e = \bigoplus_{i=1}^s d_iV_i\] be the decomposition of $KG_e$ into simple modules. Then the decomposition \[eKM = \mathop{\mathrm{Ind}}\nolimits_e(KG_e) = \bigoplus_{i=1}^s d_i\mathop{\mathrm{Ind}}\nolimits_e(V_i)\] establishes that the $\mathop{\mathrm{Ind}}\nolimits_e(V_i)$ are projective modules. Furthermore, $\mathop{\mathrm{Ind}}\nolimits_e(V_i)$ is indecomposable by Proposition~\ref{repfacts}. Thus $\mathop{\mathrm{Ind}}\nolimits_e(V_i)\to \til V_i$ is the projective cover of the simple module $\til V_i$. We recall here that if $V$ is a module over a finite-dimensional algebra $A$, then a projective cover of $V$ is a projective module $P$ together with an epimorphism $\pi\colon P\to V$ such that $\pi$ induces an isomorphism $\mathrm{top}(P)\to \mathrm{top}(V)$~\cite{assem}. Equivalently, it is an epimorphism $\pi\colon P\to V$ with $\ker\pi\subseteq \mathop{\mathrm{rad}}(P)$. The projective cover of a module is unique up to isomorphism~\cite{assem}.
The projective covers of the simple modules are the projective indecomposables. We have thus proved: \begin{Prop}\label{projcover1} Let $K$ be a field of characteristic zero and $M$ a finite monoid. Let $e\in E(I(M))$ and assume that $V_i$ is a simple $KG_e$-module. Then the projection $\mathop{\mathrm{Ind}}\nolimits_e(V_i)\to \til V_i$ is the projective cover of the simple $KM$-module $\til V_i$. \end{Prop} Note that if $\Lambda$ is a right $M$-set and $\Omega$ is a bi-$M$-$N$-set, then $K[\Lambda\otimes_M\Omega]\cong K\Lambda\otimes_{KM} K\Omega$ as a right $KN$-module, as is immediate from the universal property of tensor products. In particular, if $e\in E(I(M))$ and $\Omega$ is a $G_e$-set, then one has $K[\mathop{\mathrm{ind}}\nolimits_e(\Omega)]\cong \mathop{\mathrm{Ind}}\nolimits_e(K\Omega)$. Taking $\Omega$ to be the trivial $G_e$-set $\{\ast\}$, we then have $\mathop{\mathrm{Ind}}\nolimits_e(K)\cong K[\mathop{\mathrm{ind}}\nolimits_e(\{\ast\})] = K(\{\ast\}\otimes_{G_e} eM) = K(G_e\backslash eM)$. Thus Proposition~\ref{projcover1} has the following consequence. \begin{Cor}\label{trivialcover} The projective cover of the trivial representation of $KM$ is the augmentation map $\varepsilon\colon K[G_e\backslash eM]\to K$ where $e\in E(I(M))$. In particular, if $(\Omega,M)$ is a transitive transformation monoid with the maximal subgroup of $I(M)$ trivial, then $K\Omega$ is a projective indecomposable representation with simple top the trivial $KM$-module and radical $\mathrm{Aug}(K\Omega)$. \end{Cor} \begin{proof} It just remains to verify the final statement. But Proposition~\ref{constantmapcase} shows that in this case $\Omega\cong eM=G_e\backslash eM$. \end{proof} Let $A$ be any finite dimensional $K$-algebra and $P$ a projective indecomposable with corresponding simple module $S=P/\mathop{\mathrm{rad}}(P)$. Then it is well known that, for any $A$-module $V$, the $K$-dimension of $\hom_A(P,V)$ is the multiplicity of $S$ as an irreducible constituent of $V$~\cite{assem}. Hence we have the reciprocity result: \begin{Prop}\label{constituentmultiplicity} Suppose $e\in E(I(M))$ and $V_i$ is a simple $KG_e$-module. Let $W$ be a $KM$-module. Then the multiplicity of $\til V_i$ as a constituent of $W$ is the same as the multiplicity of $V_i$ as a constituent of $\mathop{\mathrm{Res}}\nolimits_e (W)=We$. \end{Prop} \begin{proof} Since $\mathop{\mathrm{Ind}}\nolimits_e(V_i)$ is the projective cover of $\til V_i$, we have that the multiplicity of $\til V_i$ in $W$ is \[\dim_K \hom_{KM}(\mathop{\mathrm{Ind}}\nolimits_e(V_i),W)=\dim_K\hom_{KG_e}(V_i,\mathop{\mathrm{Res}}\nolimits_e(W))\] and this latter dimension is the multiplicity of $V_i$ in $We$. \end{proof} The advantage of this proposition is that one can then apply the orthogonality relations of group representation theory~\cite{curtis} in order to compute the multiplicity. Applying this to the special case of the trivial representation of $KG_e$ yields: \begin{Cor} Let $(\Omega,M)$ be a transformation monoid. The multiplicity of the trivial $KM$-module as an irreducible constituent of $K\Omega$ is the number of orbits of $G_e$ on $\Omega e$ where $e\in E(I(M))$. This can be strictly larger than \[\dim_K\hom_{KM}(K\Omega,K)=|\pi_0(\Omega)|.\] \end{Cor} \begin{proof} By standard group representation theory, the multiplicity of the trivial representation of $G_e$ in $K\Omega e$ is the number of orbits of $G_e$ on $\Omega e$~\cite{curtis,cameron,dixonbook}. 
The final statement follows from Proposition~\ref{fixedset} and the example just after Proposition~\ref{weakorbitrestriction}. \end{proof} Next we want to establish the analogues of Propositions~\ref{projcover1} and~\ref{constituentmultiplicity} for the case of monoids with zero. \begin{Prop}\label{projcover2} Let $M$ be a finite monoid with zero containing a unique $0$-minimal ideal $I$ and let $K$ be a field of characteristic zero. Let $0\neq e\in E(I)$ and suppose that $V$ is a simple $KG_e$-module. Then $\mathop{\mathrm{Ind}}\nolimits_e(V)$ is a projective indecomposable $KM$-module and the projection $\mathop{\mathrm{Ind}}\nolimits_e(V)\to \til V$ is the projective cover. Moreover, if $W$ is a $K_0M$-module, then the multiplicity of $\til V$ as a constituent in $W$ is the same as the multiplicity of $V$ as a constituent in $We$. \end{Prop} \begin{proof} First observe that if $z$ is the zero of $M$, then $z$ and $1-z$ are central idempotents of $KM$ and so we have an isomorphism of $K$-algebras \[KM= (1-z)KM\oplus Kz\cong K_0M\oplus K.\] Thus $K_0M$ is a projective $KM$-module. But $K_0M=eK_0M\oplus (1-e)K_0M$ and so $eK_0M$ is a projective $KM$-module. Suppose that $KG_e=\bigoplus_{i=1}^s d_iV_i$ is the decomposition into simple $KG_e$-modules. Then \[eK_0M = KG_e\otimes_{KG_e} eK_0M = \mathop{\mathrm{Ind}}\nolimits_e(KG_e) = \bigoplus_{i=1}^s d_i\mathop{\mathrm{Ind}}\nolimits_e(V_i)\] and thus each $\mathop{\mathrm{Ind}}\nolimits_e(V_i)$ is a projective module. Proposition~\ref{repfacts} then yields that $\mathop{\mathrm{Ind}}\nolimits_e(V_i)$ is a projective indecomposable and hence the canonical projection $\mathop{\mathrm{Ind}}\nolimits_e(V_i)\to \til V_i$ is the projective cover. For the final statement, Proposition~\ref{repfacts} provides the isomorphism \[\hom_{KM}(\mathop{\mathrm{Ind}}\nolimits_e(V),W)\cong \hom_{KG_e}(V,We).\] The dimension of the left hand side is the multiplicity of $\til V$ as a constituent of $W$, whereas the dimension of the right hand side is the multiplicity of $V$ as a constituent in $We$. \end{proof} An immediate corollary is the following. \begin{Cor} Let $(\Omega,M)$ be a $0$-transitive finite transformation monoid such that $G_e$ is trivial for $0\neq e\in E(I)$ where $I$ is the $0$-minimal ideal of $M$. Then $K_0\Omega$ is a projective indecomposable $KM$-module. \end{Cor} \begin{proof} We know that $\Omega\cong eM$ from Proposition~\ref{aperiodicbottom} and so $K_0\Omega\cong eK_0M=\mathop{\mathrm{Ind}}\nolimits_e(K)$ and hence is a projective indecomposable by Proposition~\ref{projcover2}. \end{proof} In~\cite{mobius2} it is proved that if $(\Omega,M)$ is a $0$-transitive transformation inverse monoid, then the module $K_0\Omega$ is semisimple and decomposes as follows. Let $e$ be a non-zero idempotent of the $0$-minimal ideal of $M$ and let $\bigoplus_{i=1}^sm_iV_i$ be the decomposition of $K_0\Omega e$ into simple $KG_e$-modules. Then \[K_0\Omega\cong \bigoplus_{i=1}^sm_i\til V_i.\] For more general transformation monoids, we lose semisimplicity. But we show here that the analogous result holds at the level of the projective cover. Of course, in characteristic zero, inverse monoid algebras are semisimple~\cite{CP} and so the simple modules are the projective indecomposables. \subsection{The transitive case} We describe here the projective cover of $K\Omega$ when $(\Omega,M)$ is transitive (and in slightly more generality). \begin{Thm}\label{mainprojcover} Let $(\Omega,M)$ be a finite transformation monoid and $K$ a field of characteristic zero.
Let $e\in E(I(M))$ and suppose that $\Omega eM=\Omega$; this happens, for instance, if $(\Omega,M)$ is transitive. Then the natural map \[\varphi\colon \mathop{\mathrm{Ind}}\nolimits_e(K\Omega e)\to K\Omega\] induced by the identity map on $K\Omega e$ is the projective cover. \end{Thm} \begin{proof} First we observe that $\varphi$ is an epimorphism because \[\varphi(\mathop{\mathrm{Ind}}\nolimits_e(K\Omega e)) = \varphi(\mathop{\mathrm{Ind}}\nolimits_e(K\Omega e)eKM) = K\Omega eM=K\Omega.\] It remains to show that $\ker \varphi\subseteq \mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(K\Omega e))$. By Proposition~\ref{computeradical} this occurs if and only if $e$ annihilates $\ker \varphi$. But we have an exact sequence \[0\longrightarrow \ker \varphi\longrightarrow \mathop{\mathrm{Ind}}\nolimits_e(K\Omega e)\xrightarrow{\,\,\varphi\,\,} K\Omega\longrightarrow 0\] and hence application of $\mathop{\mathrm{Res}}\nolimits_e$, which is exact, and the fact that $\mathop{\mathrm{Res}}\nolimits_e(\varphi)=1_{K\Omega e}$ yield an exact sequence \[0\longrightarrow (\ker \varphi)e\longrightarrow K\Omega e\xrightarrow{1_{K\Omega e}} K\Omega e\longrightarrow 0.\] Thus $(\ker \varphi)e=0$, as required. \end{proof} As a corollary, we have the following description of $\mathrm{top}(K\Omega)$. \begin{Cor}\label{cortomainproj} Under the hypotheses of Theorem~\ref{mainprojcover} one has \[\mathrm{top}(K\Omega)\cong \bigoplus_{i=1}^sm_i\til{V_i}\] where $K\Omega e=\bigoplus_{i=1}^sm_iV_i$ is the decomposition into simple $KG_e$-modules. In particular, if $(\Omega,M)$ is transitive (and hence $(\Omega e,G_e)$ is transitive), then $\sum_{i=1}^sm_i^2$ is the rank of the permutation group $(\Omega e,G_e)$. \end{Cor} \begin{proof} The first part is clear from Theorem~\ref{mainprojcover}; the second part follows from a well-known result in permutation group theory~\cite{dixonbook,cameron}. \end{proof} \subsection{The $0$-transitive case} Our next result is the analogous theorem for the $0$-transitive case. Observe that if $(\Omega,M)$ is a $0$-transitive finite transformation monoid and $e$ is a non-zero idempotent of the $0$-minimal ideal $I$, then $K_0\Omega e$ is the permutation module associated to the permutation group $(\Omega e\setminus\{0\},G_e)$. \begin{Thm}\label{mainproj2} Let $(\Omega,M)$ be a finite $0$-transitive transformation monoid and $K$ be a field of characteristic $0$. Let $e\neq 0$ be an idempotent of the $0$-minimal ideal $I$ of $M$. Then the natural homomorphism \[\varphi\colon \mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e)\to K_0\Omega\] induced by the identity map on $K_0\Omega e$ is the projective cover. In particular, if $KM$ is semisimple, then $\mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e)\cong K_0\Omega$. \end{Thm} \begin{proof} The homomorphism $\varphi$ is surjective by the computation \[\varphi(\mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e)) = \varphi(\mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e)eKM) = K_0\Omega eM = K_0\Omega\] where the last equality uses $0$-transitivity. To show that $\varphi$ is the projective cover, we must show that $\ker \varphi$ is contained in $\mathop{\mathrm{rad}}(\mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e))$, or equivalently by Proposition~\ref{computeradical}, that $e$ annihilates $\ker \varphi$. This is proved exactly as in Theorem~\ref{mainprojcover}. 
Applying the exact functor $\mathop{\mathrm{Res}}\nolimits_e$ to the exact sequence \[0\longrightarrow \ker \varphi\longrightarrow \mathop{\mathrm{Ind}}\nolimits_e(K_0\Omega e)\xrightarrow{\,\,\varphi\,\,} K_0\Omega\longrightarrow 0\] and using that $\mathop{\mathrm{Res}}\nolimits_e(\varphi)=1_{K_0\Omega e}$ we obtain the exact sequence \[0\longrightarrow (\ker \varphi)e\longrightarrow K_0\Omega e\xrightarrow{1_{K_0\Omega e}} K_0\Omega e\longrightarrow 0.\] It follows that $(\ker \varphi)e=0$, completing the proof. \end{proof} In particular, Theorem~\ref{mainproj2} has as a special case the result in~\cite{mobius2} decomposing the partial transformation module associated to a $0$-transitive transformation inverse monoid. Of course, we have the following analogue of Corollary~\ref{cortomainproj}. \begin{Cor}\label{cortomainproj2} Under the same assumptions as Theorem~\ref{mainproj2} one has \[\mathrm{top}(K_0\Omega)\cong \bigoplus_{i=1}^sm_i\til{V_i}\] where $K_0\Omega e=\bigoplus_{i=1}^sm_iV_i$ is the decomposition into simple $KG_e$-modules. Moreover, $\sum_{i=1}^sm_i^2$ is the rank of the permutation group $(\Omega e\setminus \{0\},G_e)$. \end{Cor} \section{Probabilities, Markov chains and Neumann's lemma} A partition $\{P_1,\ldots, P_r\}$ of a finite set $\Omega$ is said to be \emph{uniform} if all the blocks have the same size, i.e., $|P_1|=\cdots=|P_r|$. Let us consider a probabilistic generalization. Recall that a \emph{probability distribution} on $\Omega$ is a function $\mu\colon \Omega\to [0,1]$ such that $\sum_{\omega\in \Omega}\mu(\omega)=1$. The \emph{support} $\mathrm{supp}(\mu)$ is the set of elements $\omega\in \Omega$ with $\mu(\omega)\neq 0$. One can then view $\mu$ as a probability measure on $\Omega$ by putting \[\mu(A) = \sum_{\omega\in A}\mu(\omega)\] for a subset $A\subseteq \Omega$. The uniform distribution $U$ on $\Omega$ is defined by $U(\omega)=1/|\Omega|$ for all $\omega\in \Omega$. Of course $U(A)=|A|/|\Omega|$. Thus a partition is uniform if and only if its blocks are equiprobable with respect to the uniform distribution. More generally, if $\mu$ is a probability distribution on $\Omega$, we shall say that the partition $\{P_1,\ldots, P_r\}$ of $\Omega$ is \emph{$\mu$-uniform} if $\mu(P_1)=\cdots=\mu(P_r)$. P.~Neumann in his work on synchronizing groups~\cite{pneumann} showed that if $(\Omega,M)$ is a finite transformation monoid with transitive group of units $G$, then the kernel of each element of $I(M)$ is a uniform partition. In this section we consider a generalization of his result. Our results can also be viewed as a generalization of a result of Friedman from~\cite{Friedman}. We shall need to introduce a few more notions from probability theory. If $f\colon \Omega\to \mathbb R$ is a \emph{random variable} on $\Omega$, that is, a real-valued function, then the \emph{expected value} of $f$ with respect to the probability distribution $\mu$ is \[E_{\mu}(f) = \sum_{\omega\in \Omega}f(\omega)\mu(\omega).\] A \emph{Markov chain} with state set $\Omega$ is given by a stochastic matrix \[P\colon \Omega\times \Omega\to [0,1]\] called the \emph{transition matrix} of the chain.
The adjective ``stochastic'' means that each row is a probability distribution on $\Omega$, i.e., for any fixed $\alpha\in \Omega$, one has \[\sum_{\omega\in\Omega} P(\alpha,\omega)=1.\] Viewing probability distributions on $\Omega$ as row vectors, it follows that if $\mu$ is a probability distribution, then so is $\mu P$ where \[\mu P(\alpha) = \sum_{\omega\in \Omega}\mu(\omega)P(\omega,\alpha).\] In particular, if $\mu$ is an initial distribution on $\Omega$, then $\mu P^k$ is the distribution at the $k^{th}$ step of the Markov chain. A distribution $\pi$ is said to be \emph{stationary} if $\pi P=\pi$. To a Markov chain with state set $\Omega$ and transition matrix $P$ one associates a digraph (possibly with loop edges) by declaring $(\alpha,\beta)$ to be an edge if $P(\alpha,\beta)>0$. The Markov chain is said to be \emph{irreducible} if the associated digraph is strongly connected. The following is a classical theorem in Markov chain theory. \begin{Thm}\label{Markovchaintheorem} Let $P$ be the transition matrix of an irreducible Markov chain with state set $\Omega$. Then $P$ has a unique stationary distribution $\pi$, which moreover has support $\Omega$. Furthermore, \[\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}P^i = \Pi\] where $\Pi$ is the $\Omega\times \Omega$ matrix whose rows are all equal to $\pi$. \end{Thm} Let $(\Omega,M)$ be a finite transformation monoid and suppose that $\mu$ is a probability distribution on $M$. Then we can define a Markov chain with state space $\Omega$ by putting \begin{equation}\label{defineMarkovop} P(\alpha,\beta) = \sum_{\alpha m=\beta}\mu(m); \end{equation} so $P(\alpha,\beta)$ is the probability that an element $m\in M$ chosen randomly according to $\mu$ takes $\alpha$ to $\beta$. To see that $P$ is stochastic, notice that \[\sum_{\beta\in \Omega}P(\alpha,\beta) = \sum_{\beta\in\Omega}\sum_{\alpha m=\beta}\mu(m) = \sum_{m\in M}\sum_{\beta=\alpha m}\mu(m)=\sum_{m\in M}\mu(m)=1.\] If $N=\langle\mathop{\mathrm{supp}}(\mu)\rangle$ is transitive on $\Omega$, then $P$ is the transition matrix of an irreducible Markov chain. Indeed, the digraph associated to $P$ is the underlying digraph of the automaton with state set $\Omega$ and input alphabet $\mathop{\mathrm{supp}}(\mu)$. Observe that if $\nu$ is a probability distribution on $\Omega$, we can identify it with the element \[\sum_{\omega\in \Omega}\nu(\omega)\omega\in \mathbb R\Omega.\] Similarly, we can identify $\mu$ with the element \[\sum_{m\in M}\mu(m)m\in \mathbb RM.\] Then one easily verifies that \[\nu \mu = \sum_{\omega\in \Omega,m\in M}\nu(\omega)\mu(m)\omega m,\] whereas the coefficient of $\beta$ in $\nu P$ is \[\sum_{\omega\in \Omega}\nu(\omega)P(\omega,\beta) = \sum_{\omega\in \Omega,\omega m=\beta}\nu(\omega)\mu(m).\] Thus under our identifications, we see that $\nu \mu=\nu P$ and hence $\nu P^k=\nu \mu^k$.
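To make the construction \eqref{defineMarkovop} concrete, here is a small numerical sketch (hypothetical data: two maps on $\Omega=\{0,1,2,3\}$ encoded as tuples with $\alpha m=m[\alpha]$). It assembles $P$, checks that it is stochastic, and estimates the stationary distribution via the Ces\`aro averages of Theorem~\ref{Markovchaintheorem}:
\begin{verbatim}
# Sketch: P(a, b) = sum of mu(m) over m with (a)m = b, plus a Cesaro
# estimate of the stationary distribution pi.
import numpy as np

n = 4
mu = {(1, 2, 3, 0): 0.5,     # a 4-cycle (so <supp(mu)> is transitive)
      (0, 0, 2, 2): 0.5}     # a rank-2 idempotent

P = np.zeros((n, n))
for m, p in mu.items():
    for a in range(n):
        P[a, m[a]] += p
assert np.allclose(P.sum(axis=1), 1)        # each row is a distribution

nu = np.full(n, 1.0 / n)                    # any initial distribution
avg, step, k = np.zeros(n), nu.copy(), 10000
for _ in range(k):                          # pi ~ (1/k) sum nu P^i
    avg += step
    step = step @ P
pi = avg / k
assert np.allclose(pi @ P, pi, atol=1e-3)   # approximate stationarity
print(pi)
\end{verbatim}
Note that repeatedly multiplying by $P$ here is the same as convolving $\mu$ with itself on $M$, which is exactly the identification $\nu P^k=\nu\mu^k$ made above.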
Our next result is an ergodic theorem in this context. \begin{Thm}[Ergodic theorem]\label{ergodic} Let $(\Omega,M)$ be a finite transformation monoid and let $\nu$ be a probability distribution on $\Omega$. Suppose that $\mu$ is a probability distribution on $M$ such that $N=\langle\mathop{\mathrm{supp}}(\mu)\rangle$ is transitive on $\Omega$ and let $P$ be the transition matrix of the irreducible Markov chain defined in \eqref{defineMarkovop}. Denote by $\pi$ the stationary distribution of $P$. If $f\colon \Omega\to \mathbb R$ is a random variable such that \[E_{\nu}(mf)=E_{\nu}(f)\] for all $m\in N$, then the equality \[E_{\pi}(f)=E_{\nu}(f)\] holds. \end{Thm} \begin{proof} We use here the dual pairing of $\mathbb R\Omega$ and $\mathbb R^{\Omega}$. Notice that if $\theta$ is any probability distribution on $\Omega$, then viewing $\theta\in \mathbb R\Omega$, we have \[E_{\theta}(f) = \sum_{\omega\in \Omega}f(\omega)\theta(\omega) = \langle \theta,f\rangle.\] Also observe that if $\lambda$ is any probability distribution with support contained in $N$, then $E_{\nu}(\lambda f)=E_{\nu}(f)$ where we view $\lambda\in \mathbb RM$. Indeed, linearity of expectation implies that \[E_{\nu}(\lambda f) = \sum_{m\in N}\lambda(m)E_{\nu}(mf) = \sum_{m\in N}\lambda(m)E_{\nu}(f)=E_{\nu}(f).\] A simple calculation reveals that $\nu\Pi=\pi$ and so applying Theorem~\ref{Markovchaintheorem} and the above observations (with $\lambda=\mu^i$) yields \begin{align*} E_{\pi}(f) &= \langle \pi,f\rangle = \langle \nu\Pi,f\rangle = \left\langle \nu\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}P^i,f\right\rangle\\ &=\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}\langle \nu \mu^i,f\rangle =\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}\langle \nu,\mu^if\rangle \\ &=\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}E_{\nu}(\mu^if)=\lim_{k\to \infty}\frac{1}{k}\sum_{i=0}^{k-1}E_{\nu}(f)\\ &=E_{\nu}(f) \end{align*} as required. \end{proof} As a consequence, we obtain the following result. \begin{Lemma}\label{generalizedneumann} Let $(\Omega,M)$ be a finite transformation monoid and let $\mu$ be a probability distribution on $M$ such that $N=\langle \mathop{\mathrm{supp}}(\mu)\rangle$ is transitive. Let $P$ be the stochastic matrix \eqref{defineMarkovop} and let $\pi$ be the stationary distribution of the irreducible Markov chain with transition matrix $P$. Suppose that $B$ and $S$ are subsets of $\Omega$ such that $|S\cap Bm^{-1}|=1$ for all $m\in N$. Then $|S|\cdot \pi(B)=1$. \end{Lemma} \begin{proof} Observe that taking $m=1$, we have $|S\cap B|=1$. Let $\nu$ be the probability distribution on $\Omega$ given by $I_S/|S|$. Then, for $m\in N$, we have \[E_{\nu}(mI_B) = E_{\nu}(I_{Bm^{-1}}) = \nu(Bm^{-1}) = |S\cap Bm^{-1}|/|S|=1/|S|=E_{\nu}(I_B).\] Thus the ergodic theorem yields \[1/|S|=E_{\nu}(I_B)=E_{\pi}(I_B)=\pi(B)\] and so $1=|S|\cdot \pi(B)$ as required. \end{proof} A particular example is the case that $(\Omega,G)$ is a transitive permutation group and $\mu$ is the uniform distribution on $G$. One easily verifies that $\pi$ is the uniform distribution on $\Omega$ (since each element of $G$ fixes the uniform distribution on $\Omega$ as an element of $\mathbb R\Omega$). Thus the lemma says in this setting that if $S,B$ are subsets of $\Omega$ with $|S\cap Bg|=1$ for all $g\in G$, then $|S|\cdot |B|=|\Omega|$. This is a result of P.~Neumann. \begin{Thm}\label{genneumann} Let $(\Omega,M)$ be a finite transformation monoid and let $\mu$ be a probability distribution on $M$ such that $\langle \mathop{\mathrm{supp}}(\mu)\rangle$ is transitive on $\Omega$. Let $P$ be the transition matrix of the irreducible Markov chain defined in \eqref{defineMarkovop} and let $\pi$ be the stationary distribution of $P$. Let $s\in I(M)$ and suppose that $\ker s = \{B_1,\ldots, B_r\}$. Then $|\Omega s|\cdot \pi(B_i)=1$ for $i=1,\ldots,r$. In particular, $\ker s$ is $\pi$-uniform. \end{Thm} \begin{proof} Observe that if $m\in M$, then $\Omega ms=\Omega s$ as all elements of $I(M)$ have the same rank.
Hence if $\omega_i = B_is$ (and so $B_i=\omega_is^{-1}$), for $i=1,\ldots,r$, then \[\ker ms = \{\omega_1(ms)^{-1},\ldots, \omega_r(ms)^{-1}\} = \{B_1m^{-1},\ldots, B_rm^{-1}\}.\] Proposition~\ref{kernelpartition} now implies that $|\Omega s\cap B_im^{-1}|=1$ for all $1\leq i\leq r$. As $m$ was arbitrary, Lemma~\ref{generalizedneumann} yields $|\Omega s|\cdot \pi(B_i)=1$ for $i=1,\ldots, r$. \end{proof} As a consequence, we obtain Neumann's lemma~\cite{pneumann}. \begin{Cor}[Neumann's lemma]\label{neumann} Let $(\Omega,M)$ be a finite transformation monoid with a transitive group of units. Then $\ker m$ is a uniform partition for all $m\in I(M)$. \end{Cor} \begin{proof} Let $G$ be the group of units of $M$ and let $\mu$ be the uniform distribution on $G$. Then, as observed earlier, $\pi$ is the uniform distribution on $\Omega$. The result is now immediate from Theorem~\ref{genneumann}. \end{proof} We can now present Neumann's proof~\cite{Neumann} of a result of Pin~\cite{Pincerny}; it can also be deduced from Theorem~\ref{KImonoid} since a transitive permutation group of prime degree is a $\mathbb QI$-group, cf.~\cite{synchgroups}. \begin{Prop}\label{pinresult} Suppose that $(\Omega,M)$ is a transformation monoid with transitive group of units and $|\Omega|$ is prime. Then either $M$ is a group or $M$ contains a rank $1$ transformation (i.e., a constant map). \end{Prop} \begin{proof} The kernel of each element of $I(M)$ is a uniform partition. Since $|\Omega|$ is prime, it follows that either each element of $I(M)$ is a permutation or each element of $I(M)$ is a constant map. In the former case, $I(M)=M$ is a group; in the latter case $M$ contains a rank $1$ map. \end{proof} Neumann's lemma can be generalized to transformation monoids containing an Eulerian subset. Let $(\Omega,M)$ be a finite transformation monoid. Let us say that a subset $A\subseteq M$ is \emph{Eulerian} if $\langle A\rangle$ is transitive and, for each $\omega\in \Omega$, the equality \begin{equation}\label{Eulerian} |A|=\sum_{a\in A} |\omega a^{-1}| \end{equation} holds. The reason for this terminology is that if one considers the automaton with input alphabet $A$, state set $\Omega$ and transition function $\Omega\times A\to \Omega$ given by $(\omega,a)\mapsto \omega a$, then the underlying digraph of the automaton (with multiple edges allowed) contains a directed Eulerian path precisely under the assumption that $A$ is Eulerian. Eulerian automata were considered by Kari in the context of the Road Coloring Problem and the \v{C}ern\'y conjecture~\cite{Kari}. Notice that if $A$ consists of permutations and $\langle A\rangle$ is transitive on $\Omega$, then it is trivially Eulerian because each $|\omega a^{-1}|=1$. Thus the following theorem is a generalization of Neumann's lemma. \begin{Thm}\label{eulerneumann} Suppose that $(\Omega,M)$ is a finite transformation monoid containing an Eulerian subset $A$. Then $\ker m$ is a uniform partition for all $m\in I(M)$. In particular, if $|\Omega|$ is prime, then either $M$ is a group or $M$ contains a rank $1$ map. \end{Thm} \begin{proof} Suppose that $|A|=k$. Define a probability distribution $\mu$ on $M$ by putting $\mu=(1/k)I_A$. Let $P$ be the stochastic matrix \eqref{defineMarkovop}. The corresponding Markov chain is irreducible; let $\pi$ be its stationary distribution. We claim that $\pi$ is the uniform distribution on $\Omega$. Lemma~\ref{generalizedneumann} will then imply that $\ker m$ is uniform for each $m\in I(M)$.
It is well known and easy to see that the uniform distribution is stationary for a Markov chain if and only if the transition matrix $P$ is doubly stochastic, meaning that the columns of $P$ also sum to $1$. In our case, the sum of the entries of the column of $P$ corresponding to $\omega\in \Omega$ is \[\sum_{\alpha\in\Omega}P(\alpha,\omega)=\sum_{\alpha\in \Omega}\sum_{\alpha m=\omega}\mu(m) = \sum_{m\in M}\mu(m)\cdot |\omega m^{-1}| = \frac{1}{k}\sum_{a\in A}|\omega a^{-1}| = 1\] where we have used \eqref{Eulerian}. The final statement is proved exactly as in Proposition~\ref{pinresult}. \end{proof} Theorem~\ref{eulerneumann} holds more generally for any finite transformation monoid $(\Omega,M)$ such that there is a probability distribution $\mu$ on $M$ with $\langle\mathop{\mathrm{supp}}(\mu)\rangle$ transitive and the matrix $P$ from \eqref{defineMarkovop} doubly stochastic. It is not hard to construct transformation monoids for which this occurs that do not contain Eulerian subsets. The corresponding class of automata was termed \emph{pseudo-Eulerian} by the author in~\cite{averaging}. \subsection{A Burnside-type lemma} The classical Burnside lemma (which in fact was known to Cauchy and Frobenius) says that the number of orbits of a permutation group equals the average number of fixed points. The best we can say for transformation monoids is the following, where $\mathrm{Fix}(m)$ is the fixed-point set of $m\in M$ and $\mathrm{Stab}(\omega)$ is the stabilizer of $\omega\in \Omega$. \begin{Lemma} Let $(\Omega,M)$ be a finite transformation monoid. Suppose that $\mu$ is a probability distribution on $M$ and $\pi$ is a probability distribution on $\Omega$. Let $F$ be the random variable defined on $M$ by $F(m)=\pi(\mathrm{Fix}(m))$ and let $S$ be the random variable defined on $\Omega$ by $S(\omega)=\mu(\mathrm{Stab}(\omega))$. Then $E_{\mu}(F)=E_{\pi}(S)$. \end{Lemma} \begin{proof} This is a trivial computation: \begin{align*} E_{\mu}(F) &= \sum_{m\in M} \pi(\mathrm{Fix}(m))\mu(m)= \sum_{\omega m=\omega}\pi(\omega)\mu(m)= \sum_{\omega\in\Omega} \mu(\mathrm{Stab}(\omega))\pi(\omega)\\ &= E_{\pi}(S) \end{align*} as required. \end{proof} The classical Burnside lemma is obtained by taking $M$ to be a group $G$, $\mu$ to be the uniform distribution on $G$ and $\pi$ to be the uniform distribution on $\Omega$: one simply observes that $\mu(\mathrm{Stab}(\omega))=|\mathrm{Stab}(\omega)|/|G|=1/|\omega\cdot G|$. Suppose that $|\Omega|=n$, $M=T_{\Omega}$ and one takes $\mu$ and $\pi$ to be uniform. Clearly $|\mathrm{Stab}(\omega)| = n^{n-1}$. Thus we have the well-known result \[\frac{1}{|T_{\Omega}|}\sum_{f\in T_{\Omega}}|\mathrm{Fix}(f)| = \frac{1}{n^n}\sum_{\omega\in \Omega}|\mathrm{Stab}(\omega)|=1\] just as in the case of the symmetric group $S_\Omega$.
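As a brute-force check of the lemma and of the computation above (a sketch under the stated uniformity assumptions, for a small $n$):
\begin{verbatim}
# Verify E_mu(F) = E_pi(S) for M = T_Omega with mu, pi uniform, and
# that the average number of fixed points of a random map is 1.
from itertools import product

n = 3
maps = list(product(range(n), repeat=n))    # all n^n maps in T_Omega
E_F = sum(sum(m[a] == a for a in range(n)) / n      # pi(Fix(m))
          for m in maps) / len(maps)
E_S = sum(sum(m[w] == w for m in maps) / len(maps)  # mu(Stab(w))
          for w in range(n)) / n
assert abs(E_F - E_S) < 1e-12
avg_fix = sum(sum(m[a] == a for a in range(n)) for m in maps) / len(maps)
assert avg_fix == 1.0
\end{verbatim}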
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec-Introduction} The {\em generalised Sudoku problem} is an NP-complete problem which, effectively, requests a Latin square that satisfies some additional constraints. In addition to the standard requirement that each row and column of the Latin square contains each symbol precisely once, Sudoku also demands {\em block constraints}. If there are $N$ symbols, the Latin square is of size $N \times N$. If $N$ is a perfect square, then the Latin square can be divided into $N$ regions of size $\sqrt{N} \times \sqrt{N}$, called {\em blocks}. Then the block constraints demand that each of these blocks also contains each of the symbols precisely once. Typically, the symbols in a Sudoku puzzle are simply taken as the natural numbers $1$ to $N$. In addition, Sudoku puzzles typically have fixed values in some of the cells, which dramatically limits the number of valid solutions. If the fixed values are such that a unique solution remains, the Sudoku puzzle is said to be {\em well-formed}. The standard version where $N = 9$ has, in recent years, become a common form of puzzle found in newspapers and magazines the world over. Although variants of the problem have existed for over a century, Sudoku in its current format is a fairly recent problem, first published in 1979 under the name Number Place. The name Sudoku only came into existence in the 1980s. In 2003, the generalised Sudoku problem was shown to be ASP-complete \cite{asp}, which in turn implies that it is NP-complete. Hence, it is theoretically as difficult as any problem in the set $\mathcal{NP}$ of decision problems for which a positive solution can be certified in polynomial time. Note that although there are more general variants of Sudoku (such as rectangular versions), the square variant described above where $N$ is a perfect square suffices for NP-completeness. Hence, for the remainder of this manuscript, it will be assumed that we are restricted to considering the square variant. Since being shown to be NP-complete, Sudoku has been converted to various other NP-complete problems, most notably constraint satisfaction \cite{csp}, boolean satisfiability \cite{sat} and integer programming \cite{ip}. Another famous NP-complete problem is the {\em Hamiltonian cycle problem} (HCP), which is defined as follows. For a simple graph (that is, one containing no self-loops or multi-edges) with vertex set $V$ and edge set $E \subseteq V \times V$, determine whether any simple cycles containing all vertices in $V$ exist in the graph. Such cycles are called {\em Hamiltonian cycles}, and a graph containing at least one Hamiltonian cycle is called {\em Hamiltonian}. Although HCP is also defined for directed graphs, in practice most heuristics that actually solve HCP are written for undirected graphs. Since both Sudoku and HCP are NP-complete, it should be possible to reduce Sudoku to HCP. In this manuscript, a constructive algorithm that constitutes such a reduction is given. The resultant instance of HCP is a sparse graph of order $O(N^3)$. If many values are fixed, it is likely that the resultant graph can be made smaller by clever graph reduction heuristics; to this end, we apply a basic graph reduction heuristic to two example Sudoku instances to investigate the improvement offered. It should be noted that the reduction of NP-complete problems to HCP is an interesting but still largely unexplored field of research.
Being one of the classical NP-complete problems (indeed, one of the initial 21 NP-complete problems described by Karp \cite{karp}), HCP is widely studied and several very efficient algorithms for solving HCP exist. HCP is also an attractive target problem in many cases because the resultant size of the instance is relatively small by comparison to other potential target problems. Indeed, the study of which NP-complete problems provide the best target frameworks for reductions is an ongoing field of research. For more on this topic, as well as examples of other reductions to HCP, the interested reader is referred to \cite{dewdney,creignou,setsplitting,hcp23hcp}. \section{Conversion to HCP}\label{sec-conversion} At its core, a Sudoku problem with $N$ symbols (which we will consider to be the natural numbers from 1 to $N$) has three sets of constraints to be simultaneously satisfied. \begin{enumerate}\item Each of the $N$ blocks must contain each number from 1 to $N$ precisely once. \item Each of the $N$ rows must contain each number from 1 to $N$ precisely once. \item Each of the $N$ columns must contain each number from 1 to $N$ precisely once.\end{enumerate} The variables of the problem are the $N^2$ cells, which can each be assigned any of the $N$ possible values, although some of the cells may have fixed values depending on the instance. In order to cast an instance of Sudoku as an instance of the Hamiltonian cycle problem, we need to first encode every possible variable choice as a subgraph. The idea will be that traversing the various subgraphs in certain ways will correspond to particular choices for each of the variables. Then, we will link the various subgraphs together in such a way that they can only be consecutively traversed if none of the constraints are violated by the variable choices. In the final instance of HCP that is produced, the vertex set $V$ will comprise the following, where $a$, $i$, $j$ and $k$ all take values from $1$ to $N$: \begin{itemize}\item A single starting vertex $s$ and finishing vertex $f$ \item Block vertices: $N^2$ vertices $b_{ak}$, corresponding to number $k$ in block $a$ \item Row vertices: $N^2$ vertices $r_{ik}$, corresponding to number $k$ in row $i$ \item End Row vertices: $N$ vertices $t_i$ corresponding to row $i$ \item Column vertices: $N^2$ vertices $c_{jk}$ corresponding to number $k$ in column $j$ \item End Column vertices: $N$ vertices $d_j$ corresponding to column $j$ \item Puzzle vertices: $3N^3$ vertices $x_{ijkl}$ corresponding to number $k$ in position $(i,j)$, for $l = 1, 2, 3$ \item End Puzzle vertices: $N^2$ vertices $v_{ij}$ corresponding to position $(i,j)$ \item Duplicate Puzzle vertices: $3N^3$ vertices $y_{ijkl}$ corresponding to number $k$ in position $(i,j)$, for $l = 1, 2, 3$ \item End Duplicate Puzzle vertices: $N^2$ vertices $w_{ij}$ corresponding to position $(i,j)$\end{itemize}
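As a quick sanity check on the above list, the following sketch (vertices encoded as hypothetical labelled tuples) enumerates the vertex set and confirms the closed-form count $6N^3 + 5N^2 + 2N + 2$ computed in Section~\ref{sec-blank}:
\begin{verbatim}
# Enumerate the vertex set of the construction and check its size.
import itertools

def vertices(N):
    R = range(1, N + 1)
    V = {('s',), ('f',)}
    V |= {('b', a, k) for a in R for k in R}          # block vertices
    V |= {('r', i, k) for i in R for k in R}          # row vertices
    V |= {('t', i) for i in R}                        # end row
    V |= {('c', j, k) for j in R for k in R}          # column vertices
    V |= {('d', j) for j in R}                        # end column
    for (i, j, k, l) in itertools.product(R, R, R, range(1, 4)):
        V.add(('x', i, j, k, l))                      # puzzle vertices
        V.add(('y', i, j, k, l))                      # duplicates
    V |= {('v', i, j) for i in R for j in R}          # end puzzle
    V |= {('w', i, j) for i in R for j in R}          # end duplicate
    return V

for N in (4, 9):
    assert len(vertices(N)) == 6*N**3 + 5*N**2 + 2*N + 2
\end{verbatim}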
The graph will be linked together in such a way that any valid solution to the Sudoku puzzle will correspond to a Hamiltonian cycle in the following manner. \begin{enumerate}\item The starting vertex $s$ is visited first. \item For each $a$ and $k$, suppose number $k$ is placed in position $(i,j)$ in block $a$. Then, vertex $b_{ak}$ is visited, followed by all $x_{ijml}$ for $m \neq k$, followed by all $y_{ijml}$ for $m \neq k$. This process will ensure constraint 1 is satisfied. \item For each $i$ and $k$, suppose number $k$ is placed in position $(i,j)$ in row $i$. Then, vertex $r_{ik}$ is visited, followed by $x_{ijk3}$, $x_{ijk2}$, $x_{ijk1}$ and then $v_{ij}$. If $k = N$ (i.e.\ if $i$ is about to be incremented or we are finished with step 3) then this is followed by $t_i$. This process will ensure constraint 2 is satisfied. \item For each $j$ and $k$, suppose number $k$ is placed in position $(i,j)$ in column $j$. Then, vertex $c_{jk}$ is visited, followed by $y_{ijk3}$, $y_{ijk2}$, $y_{ijk1}$ and then $w_{ij}$. If $k = N$ (i.e.\ if $j$ is about to be incremented or we are finished with step 4) then this is followed by $d_j$. This process will ensure constraint 3 is satisfied. \item The finishing vertex $f$ is visited last and the Hamiltonian cycle returns to $s$.\end{enumerate} What follows is a short description of how steps 1--5 are intended to work. A more detailed description follows in the next section. The idea of the above is that we effectively create two identical copies of the Sudoku puzzle. In step 2, we place numbers in the puzzles, which are linked together in such a way as to ensure the numbers are placed identically in both copies. Placing a number $k$ into position $(i,j)$, contained in block $a$, is achieved by first visiting $b_{ak}$, and then proceeding to visit every puzzle vertex $x_{ijml}$ {\bf except} for when $m = k$, effectively leaving the assigned number \lq\lq open'', or unvisited. Immediately after visiting the appropriate puzzle vertices, the exact same duplicate puzzle vertices $y_{ijml}$ are visited as well, leaving the assigned number unvisited in the second copy as well. Since each block vertex $b_{ak}$ is only visited once, each number is placed precisely once in each block, satisfying constraint 1. The hope is, after satisfying constraint 1, that the row and column constraints have also been satisfied. If not, it will prove impossible to complete steps 3 and 4 without needing to revisit a vertex that was visited in step 2. In step 3, we traverse the row vertices one at a time. If number $k$ was placed in position $(i,j)$, then row vertex $r_{ik}$ is followed by the unvisited vertices $x_{ijk3}$, $x_{ijk2}$, $x_{ijk1}$, and then by the end puzzle vertex $v_{ij}$. Once all $r_{ik}$ vertices have been traversed for a given $i$, we visit the end row vertex $t_i$. Note that the three $x$ vertices visited for each $i$ and $k$ in step 3 are the three that were skipped in step 2. Therefore, every puzzle vertex is visited by the time we finish traversing all the row vertices. However, if row $i$ is missing the number $k$, then there will be no available unvisited puzzle vertices to visit after $r_{ik}$, so this part of the graph can only be traversed if all the row constraints are satisfied by the choices in step 2. Step 4 evolves analogously to step 3, except with $c_{jk}$ instead of $r_{ik}$, $y_{ijkl}$ instead of $x_{ijkl}$, $w_{ij}$ instead of $v_{ij}$ and $d_j$ instead of $t_i$. Hence, this part of the graph can only be traversed if all the column constraints are also satisfied by the choices in step 2. Assuming the graph must be traversed as described above, it is clear that all Hamiltonian cycles in the resultant instance of HCP correspond to valid Sudoku solutions. In order to show this is the case, we first describe the set of directed edges $E$ in the graph. Note that in each of the following, if $k+1$ or $k+2$ are bigger than $N$, they should be wrapped back around to a number between $1$ and $N$ by subtracting $N$. For example, if $k+2 = N+1$ then it should be taken as 1 instead.
\begin{itemize}\item $(s\;,\;b_{11}), (d_N\;,\; f)$ and $(f\;,\;s)$ \item $(b_{ak}\;,\; x_{i,j,(k+1),1})$ for all $a, k$, and $(i,j)$ contained in block $a$ \item $(x_{ijk1}\;,\;x_{ijk2}), (x_{ijk2}\;,\;x_{ijk1}), (x_{ijk2}\;,\;x_{ijk3})$ and $(x_{ijk3}\;,\;x_{ijk2})$ for all $i, j, k$ \item $(x_{ijk3}\;,\;x_{i,j,(k+1),1})$ for all $i, j, k$ \item $(y_{ijk1}\;,\;y_{ijk2}), (y_{ijk2}\;,\;y_{ijk1}), (y_{ijk2}\;,\;y_{ijk3})$ and $(y_{ijk3}\;,\;y_{ijk2})$ for all $i, j, k$ \item $(y_{ijk3}\;,\;y_{i,j,(k+1),1})$ for all $i, j, k$ \item $(x_{ijk3}\;,\;y_{i,j,(k+2),1})$ for all $i, j, k$ \item $(y_{ijk3}\;,\;b_{a,k+2})$ for all $i, j$, and for $k \neq N-1$, where $a$ is the block containing position $(i,j)$ \item $(y_{i,j,N-1,3}\;,\;b_{a+1,1})$ for all $i, j$ except for the case where both $i = N$ and $j = N$, where $a$ is the block containing position $(i,j)$ \item $(y_{N,N,N-1,3}\;,\;r_{11})$ \item $(r_{ik}\;,\;x_{ijk3})$ for all $i, j, k$ \item $(x_{ijk1}\;,\;v_{ij})$ for all $i, j, k$ \item $(v_{ij}\;,\;r_{ik})$ for all $i, j, k$ \item $(v_{ij}\;,\;t_i)$ for all $i, j$ \item $(t_i\;,\;r_{i+1,1})$ for all $i < N$ \item $(t_N\;,\;c_{11})$ \item $(c_{jk}\;,\;y_{ijk3})$ for all $i, j, k$ \item $(y_{ijk1}\;,\;w_{ij})$ for all $i, j, k$ \item $(w_{ij}\;,\;c_{jk})$ for all $i, j, k$ \item $(w_{ij}\;,\;d_j)$ for all $i, j$ \item $(d_j\;,\;c_{j+1,1})$ for all $j < N$\end{itemize} \section{Detailed explanation}\label{sec-explanation} We need to show that every Hamiltonian cycle corresponds to a valid Sudoku solution. Note that at this stage, we have not handled any fixed cells, so any valid Sudoku solution will suffice. Fixed cells will be taken care of in Section \ref{sec-fixed}. \begin{theorem}Every Hamiltonian cycle in the graph constructed in the previous section corresponds to a valid Sudoku solution, and every valid Sudoku solution has corresponding Hamiltonian cycles.\end{theorem} \begin{proof}First of all, note that vertices $x_{ijk2}$ are degree 2 vertices, and so they ensure that if vertex $x_{ijk1}$ is visited before $x_{ijk3}$, it must be followed by $x_{ijk2}$ and then $x_{ijk3}$. Likewise, if vertex $x_{ijk3}$ is visited before $x_{ijk1}$, it must be followed by $x_{ijk2}$ and $x_{ijk1}$. The same argument holds for vertices $y_{ijk2}$. This will ensure that the path any Hamiltonian cycle must take through the $x$ and $y$ vertices is tightly controlled. Each of the block vertices $b_{ak}$ links to $x_{i,j,(k+1),1}$ for all $(i,j)$ contained in block $a$. One of these edges must be chosen. Suppose number $k$ is to be placed in position $(i,j)$, contained in block $a$. Then the edge $(b_{ak},x_{i,j,(k+1),1})$ is traversed. From here, the cycle must continue through vertices $x_{i,j,(k+1),2}$ and $x_{i,j,(k+1),3}$. It is then able to either exit to one of the $y$ vertices, or continue visiting $x$ vertices. However, as will be seen later, if it exits to the $y$ vertices at this stage, it will be impossible to complete the Hamiltonian cycle. So instead it continues on to $x_{i,j,(k+2),1}$, and so on. Only once all of the $x_{ijml}$ vertices for $m \neq k$ have been visited (noting that $i$ and $j$ are fixed here) can it safely exit to the $y$ vertices; refer to this as Assumption 1 (we will investigate later what happens if Assumption 1 is violated for any $i,j,k$). The exit to $y$ vertices will occur immediately after visiting vertex $x_{i,j,(k-1),3}$, which is linked to vertex $y_{i,j,(k+1),1}$. Note that by Assumption 1, vertices $x_{ijkl}$ are unvisited for $l = 1, 2, 3$.
Then, from the $y$ vertices, the same argument as above applies again, and eventually vertex $y_{i,j,(k-1),3}$ is departed, linking to vertex $b_{a,k+1}$ if $k < N$, or to vertex $b_{a+1,1}$ if $k = N$. Refer to the equivalent assumption on visiting the $y$ vertices as Assumption 2. This continues until all the block vertices have been traversed, at which time vertex $y_{N,N,N-1,3}$ links to $r_{11}$. Note that, other than by violating Assumptions 1 or 2, it is not possible to have deviated from the above path. By the time we arrive at $r_{11}$, all the block vertices $b_{ak}$ have been visited. Also, every puzzle vertex $x_{ijkl}$ and duplicate puzzle vertex $y_{ijkl}$ has been visited other than those corresponding to placing number $k$ in position $(i,j)$. Next, each of the row vertices $r_{ik}$ links to $x_{ijk3}$ for all $i, j, k$. For each $i$ and $k$, one of these edges must be chosen. However, by Assumption 1, all vertices $x_{ijk3}$ have already been visited except for those corresponding to the number $k$ being placed in position $(i,j)$. If the choices in the previous step violate the row constraints, then there will be a row $i$ that does not contain a number $k$, and subsequently there will be no valid edge emanating from vertex $r_{ik}$. Hence, if the choices made in step 2 violate the row constraints, and Assumption 1 is correct, it is impossible to complete a Hamiltonian cycle. If the choices in the previous step satisfy the row constraints, then there should always be precisely one valid edge to choose here. Once vertex $x_{ijk3}$ is visited, vertices $x_{ijk2}$ and $x_{ijk1}$ must follow, at which point the only remaining valid choice is to proceed to vertex $v_{ij}$. From here, any row vertex $r_{im}$ that has not yet been visited can be visited. If all have been visited, then $t_i$ can be visited instead. Note that once $t_i$ is visited, it is impossible to return to any $r_{ik}$ vertices, so they must all be visited before $t_i$ is visited. An analogous argument to the above can be made for the column vertices $c_{jk}$. Note that if Assumptions 1 and 2 are correct, then vertex $y_{ijkl}$ will be unvisited at the start of step 4 if and only if $x_{ijkl}$ was unvisited at the start of step 3. Therefore, we see that if Assumptions 1 and 2 are correct, then it is only possible to complete the Hamiltonian cycle if the choices made in step 2 correspond to a valid Sudoku solution. Now consider the situation where Assumption 1 is violated, that is, after step 2 there exist unvisited vertices $x_{ijkl}$ and $x_{ijml}$ for some $i, j$, and $k \neq m$. Then during step 3, without loss of generality, suppose vertex $r_{ik}$ is visited before $r_{im}$. As argued above, this will be followed by vertices $x_{ijk3}$, $x_{ijk2}$, $x_{ijk1}$, at which point visiting vertex $v_{ij}$ is the only available choice. Then later, $r_{im}$ is visited. It must visit $x_{ijm3}$, $x_{ijm2}$, $x_{ijm1}$ and is then, again, forced to proceed to vertex $v_{ij}$. However, since vertex $v_{ij}$ has already been visited, this is impossible and the Hamiltonian cycle cannot be completed. If Assumption 2 is violated, and it is vertices $y_{ijkl}$ and $y_{ijml}$ that are unvisited after step 2, an analogous argument can be made involving step 4. Hence, every Hamiltonian cycle in the graph must satisfy Assumptions 1 and 2. This completes the proof.\end{proof} Since any valid Sudoku solution has corresponding Hamiltonian cycles, the resulting instance of HCP is equivalent to a blank Sudoku puzzle.
In a later section, the method for removing edges based on fixed numbers for a given Sudoku instance is described. Since the instance of HCP can be constructed, and the relevant edges removed, in polynomial time as a function of $N$, the algorithm above constitutes a reduction of Sudoku to the Hamiltonian cycle problem. \section{Size of \lq\lq blank" instance}\label{sec-blank} The instance of HCP that emerges from the above conversion consists of $6N^3 + 5N^2 + 2N + 2$ vertices, and $19N^3 + 2N^2 + 2N + 2$ directed edges. For the standard Sudoku puzzle where $N = 9$, this corresponds to a directed graph with $4799$ vertices and $14033$ directed edges. All of the best HCP heuristics currently available assume that the instance is undirected. There is a well-known conversion of directed HCP to undirected HCP which can be performed as follows. First, produce a new graph which has three times as many vertices as the directed graph. Then add edges to this new graph by the following scheme, where $n$ is the number of vertices in the directed graph: \begin{enumerate}\item Add edges $(3i-1,3i-2)$ and $(3i-1,3i)$ for all $i = 1, \hdots, n$. \item For each directed edge $(i,j)$ in the original graph, add edge $(3i,3j-2)$.\end{enumerate} A short code sketch of this conversion is given at the end of this section. In the present case, this results in an undirected instance of HCP consisting of $18N^3 + 15N^2 + 6N + 6$ vertices and $31N^3 + 12N^2 + 6N + 6$ edges. This implies that the average degree in the graph grows monotonically with $N$, but towards a limit of $\frac{31}{9}$, so the resultant graph instance is sparse. For $N = 4$, the average degree is just slightly above $3.1$, and for $N = 9$ the average degree is just under $3.3$. A trick can be employed to reduce the number of vertices in the undirected graph. Consider the vertices in the undirected graph corresponding to the $x$ and $y$ vertices. In particular, consider the set of 9 vertices corresponding to $x_{ijk1}$, $x_{ijk2}$ and $x_{ijk3}$. The nine vertices form an induced subgraph such as that displayed at the top of Figure \ref{fig-subgraph}. There are incoming edges incident on the first and seventh vertices, and outgoing edges incident on the third and ninth vertices. If the induced subgraph is entered via the first vertex, it must be departed via the ninth vertex, or else a Hamiltonian cycle cannot be completed. Likewise, if the induced subgraph is entered via the seventh vertex, it must be departed via the third vertex. It can be seen by inspecting all cases that if the fifth vertex is removed, and a new edge is introduced between the fourth and sixth vertices, the induced subgraph retains these same properties. This alternative choice is displayed at the bottom of Figure \ref{fig-subgraph}. Such a replacement can be made for each triplet $x_{ijkl}$ or $y_{ijkl}$. Hence, we can remove $2N^3$ vertices and $2N^3$ edges from the undirected graph for a final total of $16N^3 + 15N^2 + 6N + 6$ vertices and $29N^3 + 12N^2 + 6N + 6$ edges, although at the cost of raising the average degree by a small amount (roughly between 0.1 and 0.15, depending on $N$). \begin{figure}[h!]\begin{center}\includegraphics[scale=0.35]{subgraphs.png}\caption{The induced subgraph created after the conversion to an undirected graph, corresponding to vertices $x_{ijk1}$, $x_{ijk2}$ and $x_{ijk3}$, and an alternative subgraph with one vertex removed.\label{fig-subgraph}}\end{center}\end{figure}
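The conversion referred to above admits an equally short sketch, assuming the directed vertices have already been relabelled $1, \hdots, n$:

\begin{verbatim}
def directed_to_undirected(n, directed_edges):
    # Each directed vertex i becomes undirected vertices 3i-2, 3i-1, 3i;
    # each directed edge (i, j) becomes the undirected edge (3i, 3j-2).
    undirected = []
    for i in range(1, n + 1):
        undirected += [(3*i - 1, 3*i - 2), (3*i - 1, 3*i)]
    for (i, j) in directed_edges:
        undirected.append((3*i, 3*j - 2))
    return undirected
\end{verbatim}

For $N = 9$ this yields the $14397$ vertices and $23631$ edges implied by the counts above.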
\section{Handling fixed numbers}\label{sec-fixed} In reality, all meaningful instances of Sudoku have fixed values in some of the $N^2$ cells. Although this could potentially be handled by removing vertices, it would then be necessary to redirect edges appropriately. Instead, it is simpler to remove edges that cannot be used while choosing these fixed values. Once this is performed, a graph simplifying heuristic could then be employed to remove unnecessary vertices if desired. For each fixed value, $12N - 12$ edges can be identified as redundant and removed. However, when there are multiple fixed values, some edges may be identified as redundant multiple times, so $12N - 12$ is only an upper bound on the number of edges that can be removed per fixed value. For example, suppose one cell has a fixed value of 1, and another cell within the same block has a fixed value of 2. From the first fixed value, we know that all other entries in the block must not be 1. From the second fixed value, we know that the second cell must have a value of 2, and hence not 1. Then the edge corresponding to placing a value of 1 in the second cell would be identified as redundant twice. The exact number of redundant edges identified depends on the precise arrangement of the fixed values. For each fixed value $k$ in position $(i,j)$, and block $a$ containing position $(i,j)$, the following sets of edges are redundant and may be removed (an explanation for each set follows the list): \begin{itemize}\item[(1)] $(b_{ak}\;,\;x_{m,n,(k+1),1})$ for all choices of $m$ and $n$ such that block $a$ contains $(m,n)$, and also $(m,n) \neq (i,j)$ \item[(2)] $(b_{am}\;,\;x_{i,j,(m+1),1})$ for $m \neq k$ \item[(3)] $(x_{m,n,(k-1),3}\;,\;y_{m,n,(k+1),1})$ for all choices of $m$ and $n$ such that block $a$ contains $(m,n)$, and also $(m,n) \neq (i,j)$ \item[(4)] $(x_{i,j,(m-1),3}\;,\;y_{i,j,(m+1),1})$ for $m \neq k$ \item[(5a)] If $k < N$ : $(y_{m,n,(k-1),3}\;,\;b_{a,k+1})$ for all choices of $m$ and $n$ such that block $a$ contains $(m,n)$, and also $(m,n) \neq (i,j)$ \item[(5b)] If $k = N$ and $a < N$ : $(y_{m,n,(k-1),3}\;,\;b_{a+1,1})$ for all choices of $m$ and $n$ such that block $a$ contains $(m,n)$, and also $(m,n) \neq (i,j)$ \item[(5c)] If $k = N$ and $a = N$ : $(y_{m,n,(k-1),3}\;,\;r_{11})$ for all choices of $m$ and $n$ such that block $a$ contains $(m,n)$, and also $(m,n) \neq (i,j)$ \item[(6a)] $(y_{i,j,(m-1),3}\;,\;b_{a,m+1})$ for $m \neq k$ and $m \neq N$ \item[(6b)] If $k < N$ and $a < N$ : $(y_{i,j,(N-1),3}\;,\;b_{a+1,1})$ \item[(6c)] If $k < N$ and $a = N$ : $(y_{i,j,(N-1),3}\;,\;r_{11})$ \item[(7)] $(r_{ik}\;,\;x_{imk3})$ for all $m \neq j$ \item[(8)] $(x_{imk1}\;,\;v_{im})$ for all $m \neq j$ \item[(9)] $(r_{im}\;,\;x_{ijm3})$ for all $m \neq k$ \item[(10)] $(c_{jk}\;,\;y_{mjk3})$ for all $m \neq i$ \item[(11)] $(y_{mjk1}\;,\;w_{mj})$ for all $m \neq i$ \item[(12)] $(c_{jm}\;,\;y_{ijm3})$ for all $m \neq k$\end{itemize} The edges in set (1) correspond to the option of placing a value of $k$ elsewhere in block $a$. The edges in set (2) correspond to the option of picking a value other than $k$ in position $(i,j)$. Those two sets of incorrect choices would lead to the edges from sets (3) and (4) respectively being used to transfer from the $x$ vertices to the $y$ vertices, and so those edges are also redundant. The edges in (5a)--(5c) correspond to the edges that return from the $y$ vertices to the next block vertex after an incorrect choice is made (corresponding to the set (1)). If $k = N$ then the next block vertex is actually for the following block, rather than for the next number in the same block.
If $k = N$ and $a = N$ then all block vertices have been visited and the next vertex is actually the first row vertex. Likewise, the edges in (6a)--(6c) correspond to the edges that return from the $y$ vertices after an incorrect choice is made (corresponding to the set (2)). Note that if $k = N$, there are $N-1$ redundant edges in (6a). If $k < N$ there are $N-2$ redundant edges in (6a) and then one additional redundant edge from either (6b) or (6c). The edges in set (7) correspond to the option of finding a value of $k$ in row $i$ at a position other than $(i,j)$, which is impossible. The edges in set (8) correspond to visiting the end puzzle vertex after making an incorrect choice from (7). The edges in set (9) correspond to the option of finding a value other than $k$ in row $i$ and position $(i,j)$, which is also impossible. Analogous arguments can be made for the edges in sets (10)--(12), except for columns instead of rows. Sets (1)--(4) and (7)--(12) each identify $N-1$ redundant edges. As argued above, the relevant sets from (5a)--(5c) will contribute $N-1$ more redundant edges, as well as the relevant sets from (6a)--(6c). Hence, the maximum number of edges that can be removed is $12N - 12$ per fixed value. \section{Recovering the Sudoku solution from a Hamiltonian cycle}\label{sec-solution} The constructive algorithm above produces an HCP instance for which each solution corresponds to a valid Sudoku solution. Once such a solution is obtained, the following algorithm reconstructs the corresponding Sudoku solution: Denote by $h$ the Hamiltonian cycle obtained. For each $i = 1, \hdots, N$ and $j = 1, \hdots, N$, find vertex $v_{ij}$ in $h$. Precisely one of its adjacent vertices in $h$ will be of the form $x_{ijk1}$ for some value of $k$. Then, number $k$ can be placed in the cell in the $i$th row and $j$th column in the Sudoku solution. Suppose that the vertices are labelled in the order given in Section \ref{sec-conversion}. That is, $s$ is labelled as 1, $f$ is labelled as 2, the $b_{ak}$ vertices are labelled $3, 4, \hdots, N^2+2$, and so on. Then, for each $i$ and $j$, vertex $v_{ij}$ will be labelled $3N^3 + 3N^2 + (i+1)N + (j+2)$, and vertex $x_{ijk1}$ will be labelled $3iN^2 + (3j-1)N + 3k$. Of course, if the graph has been converted to an undirected instance, or if it has been reduced in size by a graph reduction heuristic, these labels will need to be adjusted appropriately. \section{Reducing the size of the HCP instances}\label{sec-reducing} After constructing the HCP instances using the above method, graph reduction techniques can be applied. Most meaningful instances of Sudoku will have many fixed values, which in turn leads to an abundance of degree 2 vertices. In order to test the effectiveness of such techniques, a very simple reduction algorithm was used. The algorithm iteratively checks the following two conditions until no applicable reductions remain: \begin{enumerate}\item If two adjacent vertices are both degree 2, they can be contracted to a single vertex. \item If a vertex has two degree 2 neighbours, all of its incident edges going to other vertices can be removed.\end{enumerate} Note that the second condition above leads to three adjacent degree 2 vertices, which will in turn be contracted to a single vertex. The removal of edges when the second condition is satisfied often leads to additional degree 2 vertices being formed, which allows the algorithm to continue reducing.
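A minimal sketch of this reduction loop on an adjacency-set representation is given below. It is illustrative only: it ignores degenerate cases (such as the two merged vertices sharing a common neighbour) that a production implementation would need to handle.

\begin{verbatim}
def reduce_graph(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    changed = True
    while changed:
        changed = False
        # Condition 1: contract two adjacent degree-2 vertices into one.
        for u in list(adj):
            if u not in adj or len(adj[u]) != 2:
                continue
            v = next((x for x in adj[u] if len(adj[x]) == 2), None)
            if v is None:
                continue
            outside = (adj[u] | adj[v]) - {u, v}
            for x in (u, v):
                for y in adj.pop(x):
                    if y in adj:
                        adj[y].discard(x)
            adj[u] = set(outside)      # reuse label u for the merged vertex
            for y in outside:
                adj[y].add(u)
            changed = True
        # Condition 2: a vertex with two degree-2 neighbours keeps only
        # the edges to those two neighbours.
        for u in list(adj):
            deg2 = [x for x in adj[u] if len(adj[x]) == 2]
            if len(deg2) >= 2 and len(adj[u]) > 2:
                keep = set(deg2[:2])
                for y in adj[u] - keep:
                    adj[y].discard(u)
                adj[u] = keep
                changed = True
    return adj
\end{verbatim}

Each pass either removes a vertex or removes edges, so the loop terminates.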
Note also that this simple graph reduction heuristic is actually hampered by the graph reduction method described in Section \ref{sec-blank}, since that method eliminates many degree 2 vertices. It is likely that a more sophisticated graph reduction heuristic could be developed that incorporates both methods. The above heuristic was applied to both a well-formed (that is, uniquely solvable) Sudoku instance with 35 fixed values and one of the Sudoku instances from the repository of roughly 50000 instances maintained by Royle \cite{royle}. The instances in that repository all contain precisely 17 fixed numbers, and are all well-formed; it was recently proved via a clever exhaustive computer search that 17 is the minimal number of fixed values for a well-formed Sudoku problem with 9 symbols \cite{min17}. The two instances tested are displayed in Figure \ref{fig-instances}. \begin{figure}[h!]\begin{center}\includegraphics[scale=0.6]{puzzles.png}\caption{Two well-formed Sudoku instances with 35 fixed values and 17 fixed values respectively.\label{fig-instances}}\end{center}\end{figure} After the simple reduction heuristic above was applied to the first Sudoku instance, it had been reduced from an undirected instance with 14397 vertices and 22217 edges, to an equivalent instance with 8901 vertices and 14175 edges. Applying the above reduction algorithm to the second Sudoku instance from Royle's repository reduced it from an undirected instance with 14397 vertices and 22873 edges, to an equivalent instance with 12036 vertices and 19301 edges. In both cases the reduction is significant, although obviously there are greater opportunities for reduction when there are more fixed values. Both instances were solved by Concorde \cite{concorde}, which is arguably the best algorithm for solving HCP instances containing a large amount of structure, as its branch-and-cut method is very effective at identifying sets of arcs that must be fixed all at once, or not at all, particularly in sparse graphs. Technically, Concorde actually converts the HCP instance to an equivalent TSP instance but does so in an efficient way. The first instance was solved during Concorde's presolve phase, while the second instance required 20 iterations of Concorde's branch-and-cut algorithm\footnote{It should be noted that Concorde does use a small amount of randomness in its execution. The random seed used in this experiment was 1453347272.} to discover a solution. This would seem to indicate that the first Sudoku instance can be solved without requiring any amount of guessing. The two solutions were then interpreted via the algorithm in Section \ref{sec-solution} to provide solutions to the initial Sudoku instances; those solutions are displayed in Figure \ref{fig-solutions}. \begin{figure}[h!]\begin{center}\includegraphics[scale=0.6]{solutions.png}\caption{The solutions to the Sudoku instances in Figure \ref{fig-instances}, as interpreted from the Hamiltonian cycles of the converted HCP instances.\label{fig-solutions}}\end{center}\end{figure}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} The young (1--5~kyr) energetic pulsar PSR~J0537\textminus6910\xspace \citep{wangetal98,chenetal06} resides in the Large Magellanic Cloud at a distance of 49.6~kpc \citep{pietrzynskietal19}. Its pulsations are only detectable at X-ray energies, and the pulsar was first observed by \citet{1998ApJ...499L.179M} using the {\it Rossi X-ray Timing Explorer} ({\it RXTE}\xspace) during searches for pulsations from the remnant of SN1987A. Further observations with {\it RXTE}\xspace, prior to its decommissioning in early 2012, revealed that PSR~J0537\textminus6910\xspace often undergoes sudden changes in rotation frequency, i.e., glitches, at a rate of more than three per year, and exhibits interesting inter-glitch behavior \citep{2004ApJ...603..682M, 2006ApJ...652.1531M,anderssonetal18, antonopoulouetal18,2018ApJ...852..123F}. Observations of the pulsar resumed from 2017 to 2020 using the {\it Neutron star Interior Composition Explorer} ({\it NICER}\xspace) on board the International Space Station \citep{2012SPIE.8443E..13G}, which revealed more glitches and a continuation of the timing behavior seen with {\it RXTE}\xspace \citep{hoetal20}. PSR~J0537\textminus6910\xspace is a particularly intriguing potential gravitational-wave\xspace source. It is the fastest-spinning known young pulsar (with rotation frequency $f_{\rm rot}=62\mbox{ Hz}$), which places its gravitational-wave\xspace frequency $f$ (e.g., at twice $f_{\rm rot}$; see Section~\ref{sec:model}) in the most sensitive band of ground-based gravitational-wave\xspace detectors. PSR~J0537\textminus6910\xspace also has the highest spin-down luminosity ($\dot{E}=4.9\times 10^{38}\mbox{ erg s$^{-1}$}$) among the $\sim$2900 known pulsars in the ATNF Pulsar Catalogue \citep{2005AJ....129.1993M}. Its spin-down behavior appears to be driven by a process other than pure electromagnetic dipole radiation loss (at constant stellar magnetic field and moment of inertia). Specifically, its (long-term) braking index $n\equiv f_{\rm rot}\ddot{f}_{\rm rot}/\dot{f}_{\rm rot}^2=-1.25\pm0.01$, as measured over more than 20~yr \citep{hoetal20}, indicates an accelerating spin-down rate and significantly deviates from the value $n=3$ expected for pure magnetic dipole radiation \citep{shapiroteukolsky83}. More importantly, observations of PSR~J0537\textminus6910\xspace show the pulsar's (short-term) interglitch braking index $n_{\rm ig}$, as measured during intervals between $\sim 50$ glitches, has values typically $> 10$, and approaches an asymptotic value of $\lesssim 7$ at long times after a glitch, i.e., when the effects of a preceding glitch are diminished (see Figure~\ref{fig:tnig}; see also \citealt{anderssonetal18}). It is this behavior that provides tantalizing suggestions that PSR~J0537\textminus6910\xspace could be losing some of its rotational energy to gravitational-wave\xspace emission. In particular, a slightly deformed pulsar can emit gravitational waves\xspace resulting in $n=5$, and an r-mode fluid oscillation in a pulsar can emit gravitational waves\xspace resulting in $n=7$ (see, e.g., \citealt{riles17,anderssonetal18,2018ASSL..457..673G,gao20}). In this work, we search for mass quadrupolar gravitational-wave\xspace emission from PSR~J0537\textminus6910\xspace that follows the same phase as that of the pulsar's rotation.
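As a simple numerical aside, the interglitch braking indices discussed above can be checked directly from the published timing solutions; for example, using the rounded segment-10 values from Table~\ref{tab:model} below (an illustrative back-of-the-envelope computation, not part of the analysis):

\begin{verbatim}
# n_ig = f * fddot / fdot**2, evaluated for segment 10 of Table 1
f, fdot, fddot = 61.906349948, -1.99762e-10, 3.6e-20
print(round(f * fddot / fdot**2))   # -> 56, matching the tabulated n_ig
\end{verbatim}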
Previously, data from initial LIGO's fifth and sixth science runs (S5 and S6) and Virgo's second and fourth science runs (VSR2 and VSR4), in conjunction with {\it RXTE}\xspace timing measurements, were used to set limits on gravitational-wave\xspace emission by PSR~J0537\textminus6910\xspace that closely approached the spin-down limit \citep{2010ApJ...713..671A,2014ApJ...785..119A}. Here, we analyze data from the second and third observing runs (O2 and O3) of LIGO and Virgo, tracking the rotation phase with the contemporaneous {\it NICER}\xspace timing ephemeris. In doing so, we also provide an updated ephemeris that includes the latest six months of {\it NICER}\xspace observations of PSR~J0537\textminus6910\xspace. Investigations of r-mode gravitational-wave\xspace emission ($n=7$) are not presented here; such searches are more technically challenging and require different methods that search over a range of frequencies (see, e.g., \citealt{mytidis2015constraining,mytidis2019sensitivity,O2Narrowband,fesikpapa20,fesikpapa20b}) due to uncertainty in gravitational-wave\xspace frequency for a given rotation frequency \citep{anderssonetal14,idrisyetal15,carideetal19}. Nevertheless, we are able to reach below the spin-down limit of PSR~J0537\textminus6910\xspace for the first time, which means that the minimum amplitude we could detect in our analysis is lower than the one obtained by assuming all of the pulsar's rotational energy loss is converted to gravitational waves\xspace (see Section~\ref{sec:model}). In other words, we can now obtain physically meaningful constraints. \begin{figure}[htb] \begin{center} \includegraphics[width=\columnwidth]{figures/f1.eps} \caption{ Interglitch braking index $n_{\rm ig}$ calculated from the spin parameters of each segment between glitches as a function of time since the last glitch. Large and small circles denote {\it NICER}\xspace\ and {\it RXTE}\xspace\ values, respectively, with the former from Tables~\ref{tab:model} and \ref{tab:glitch} and from \citet{hoetal20} and the latter from \citet{antonopoulouetal18}. Errors in $n_{\rm ig}$ are 1$\sigma$ uncertainty. Orange horizontal dotted lines indicate braking index $n=5$ and 7, which are expected for pulsar spin-down by gravitational-wave\xspace emission due to an ellipticity and r-mode oscillation, respectively. Green dot-dashed and dashed lines indicate exponential decay to $n=5$ with best-fit time-scale of 24~d and to $n=7$ with best-fit time-scale of 21~d, respectively. } \label{fig:tnig} \end{center} \end{figure} \section{Search method} \subsection{Model of gravitational-wave\xspace emission} \label{sec:model} The first model considered here allows for gravitational-wave\xspace emission at once and twice the spin frequency simultaneously, which has been searched for previously \citep{2015MNRAS.453.4399P,2017ApJ...839...12A, 2019ApJ...879...10A,abbott2020gravitational}, and can result from a triaxial star spinning about an axis that is not its principal axis \citep{2010MNRAS.402.2503J,2015MNRAS.453...53J}.
The amplitudes of each harmonic at once and twice the spin frequency of the star, denoted $h_{21}(t)$ and $h_{22}(t)$, respectively, can be written as \begin{eqnarray}\label{eq:model} \nonumber h_{21} &=& - \frac{C_{21}}{2} \Big\{ F^D_+(\alpha, \delta, \psi; t) \sin\iota \cos\iota \cos{\left[\Phi(t)+ \Phi_{21}^C\right]}\\ && \qquad +F^D_\times(\alpha, \delta, \psi; t)\sin\iota \sin{\left[\Phi(t)+ \Phi_{21}^C\right]} \Big\}, \label{eq:model1} \\ \nonumber h_{22} &=& - C_{22} \Big\{ F^D_+(\alpha, \delta, \psi; t) (1 + \cos^2\iota) \cos{\left[2\Phi(t)+ \Phi_{22}^C\right]} \\ && \qquad +2 F^D_\times(\alpha, \delta, \psi; t) \cos\iota \sin{\left[2\Phi(t)+ \Phi_{22}^C\right]} \Big\}. \label{eq:model2} \end{eqnarray} Here, $C_{21}$ and $C_{22}$ are dimensionless constant component amplitudes, and $\Phi_{21}^C$ and $\Phi_{22}^C$ are phase angles. $F^D_+$ and $F^D_\times$ are antenna or beam functions and describe how the two polarization components of the signal are projected onto the detector \citep[see, e.g.,][]{1998PhRvD..58f3001J}. The angles $(\alpha, \delta)$ are the right ascension and declination of the source, while the angles $(\iota, \psi)$ specify the orientation of the star's spin axis relative to the observer. $\Phi(t)$ is the rotational phase of the source. The second model is a special case of the first model and is used for gravitational-wave\xspace emission at only twice the rotational frequency ($C_{21}=0$), implying a triaxial star that is spinning about a principal axis, such as its z-axis. In this case, it is simpler to write the gravitational-wave\xspace amplitude in terms of the dimensionless value $h_0$, which requires substituting $C_{22} = -h_0 / 2$ in equation~(\ref{eq:model2}) \citep{2019ApJ...879...10A}. The sign change simply maintains consistency with the model from \citet{1998PhRvD..58f3001J}. The cause of such gravitational-wave\xspace emission is a deviation from axial symmetry, which can be written in terms of a dimensionless equatorial ellipticity $\varepsilon$, defined in terms of the star's principal moments of inertia $(I_{xx}, I_{yy}, I_{zz})$: \begin{equation}\label{eq:ellipticity} \varepsilon \equiv \frac{|I_{xx} - I_{yy}|}{I_{zz}}. \end{equation} The gravitational-wave\xspace amplitude is directly proportional to the ellipticity: \begin{equation}\label{eq:h0} h_0 = \frac{16\pi^2 G}{c^4} \frac{I_{zz} \varepsilon f_{\rm rot}^2 }{d}, \end{equation} where $d$ is the star's distance from the Earth. When setting upper limits, we use a fiducial value for the z-component of the moment of inertia, \emph{i.e.}, $I_{zz}^{\rm fid} = 10^{38}$\,kg\,m$^2$. The combination of the ellipticity and fiducial moment of inertia can be cast in terms of the mass quadrupole moment of the $l=m=2$ mode of the star via $Q_{22} = \sqrt{15/8\pi}I_{zz} \varepsilon$ \citep{2005PhRvL..95u1101O}. The gravitational-wave\xspace amplitude $h_0$ can be compared to the spin-down limit amplitude $h_0^{\rm sd}$, which is the gravitational-wave\xspace amplitude produced assuming that all of the rotational energy lost by the pulsar is converted into gravitational waves\xspace: \begin{equation}\label{eq:h0sd} h_{0}^{\rm sd}=\frac{1}{d}\left(\frac{5GI_{zz}}{2c^3}\frac{|\dot{f}_{\rm rot}|}{f_{\rm rot}}\right)^{1/2}. \end{equation} Our results for the single harmonic case are quoted in terms of $h_{0}^{\rm sd}$. {\it NICER}\xspace observations of PSR~J0537\textminus6910\xspace allow for the ephemeris of the pulsar to be determined, which means we know the expected signal frequency and its evolution. 
With this information, we can perform a targeted search for gravitational waves\xspace from this pulsar based on the two signal models discussed, with the phase tracking that of the pulsar rotation. \subsection{{\it NICER}\xspace data} \label{sec:nicer} In \citet{hoetal20}, timing analysis is performed on {\it NICER}\xspace data of PSR~J0537\textminus6910\xspace from 2017 August 17 to 2020 April 25, with eight glitches detected during this timespan and the last three glitches during O3. Here we present an update and results on timing analysis since the work of \citet{hoetal20}. In particular, data from 2020 May 12 to October 29 are analyzed using the methodology described in \citet{hoetal20}. Our analysis reveals continuing accelerated spin-down (see Table~\ref{tab:model}) and three subsequent glitches (see Table~\ref{tab:glitch} and Figure~\ref{fig:glitch}), including the smallest glitch of PSR~J0537\textminus6910\xspace yet detected using {\it NICER}\xspace. Note that the timing model of segment 8 uses three additional subsequent times-of-arrival (TOAs) beyond those in Table~1 of \cite{hoetal20} and, as a result, the epoch and other parameters of the model differ; e.g., {segment 8 is associated with the data point at 63~d and $n_{\rm ig}=16$ in Figure~\ref{fig:tnig} compared to 50~d and $n_{\rm ig}=22$ in} Figure~6 of \citet{hoetal20}. Meanwhile, the relatively short timespan of segment 9 means the timing model for this segment is not able to constrain $\ddot{f}_{\rm rot}$. The magnitude of the most recent glitch 11 is large ($\Delta f_{\rm rot}=33.9\mbox{ $\mu$Hz}$), which suggests the time to the next glitch will be long ($\sim 200\pm 20\mbox{ d}$; \citealt{hoetal20}). If the interglitch period is indeed long, then {\it NICER}\xspace measurements could eventually yield $n_{\rm ig}\lesssim 7$ for segment 11, which would lend further support for gravitational-wave\xspace emission (see Section~\ref{sec:intro} and Figure~\ref{fig:tnig}). \begin{deluxetable*}{ccccccccccc}[htb] \tablecaption{Timing model parameters for segments between epochs of new glitches of PSR~J0537\textminus6910\xspace. Columns from left to right are segment number, timing model epoch, segment start and end dates, number of times-of-arrival, rotation frequency and its first two time derivatives, interglitch braking index, and timing model residual and goodness-of-fit measure. Number in parentheses is 1$\sigma$ uncertainty in last digit. Segments 1--7 are presented in \cite{hoetal20}.
\label{tab:model}} \tablewidth{0pt} \tablehead{ \colhead{Segment} & \colhead{Epoch} & \colhead{Start} & \colhead{End} & \colhead{TOAs} & \colhead{$f_{\rm rot}$} & \colhead{$\dot{f}_{\rm rot}$} & \colhead{$\ddot{f}_{\rm rot}$} & \colhead{$n_{\rm ig}$} & \colhead{Residual RMS} & \colhead{$\chi^2/\mbox{dof}$} \\ & \colhead{(MJD)} & \colhead{(MJD)} & \colhead{(MJD)} & & \colhead{(Hz)} & \colhead{($10^{-10}$ Hz s$^{-1}$)} & \colhead{($10^{-20}$ Hz s$^{-2}$)} & & \colhead{($\mu$s)} & } \startdata 8 & 58931 & 58871.5 & 58991.2 & 17 & 61.908808739(3) & $-1.997535(7)$ & 1.06(8) & 16(1) & 173.7 & 9.9 \\ 9 & 59020 & 58995.6 & 59046.3 & 11 & 61.907273376(2) & $-1.99699(4)$ & [1]\tablenotemark{a} & --- & 147.8 & 6.7 \\ 10 & 59074 & 59050.4 & 59098.7 & 10 & 61.906349948(5) & $-1.99762(2)$ & 3.6(8) & 56(13) & 60.9 & 1.5 \\ 11 & 59129 & 59108.7 & 59150.7 & 11 & 61.905434556(6) & $-1.99809(3)$ & 2.2(13) & 34(20) & 72.3 & 2.1 \\ \enddata \tablenotetext{a}{ $\ddot{f}_{\rm rot}$ is fixed at $10^{-20}\mbox{ Hz s$^{-2}$}$.} \end{deluxetable*} \begin{deluxetable*}{cccccc}[htb] \tablecaption{Parameters of new glitches of PSR~J0537\textminus6910\xspace. Columns from left to right are glitch number and epoch, change in rotation phase and changes in rotation frequency and its first two time derivatives at each glitch. Number in parentheses is 1$\sigma$ uncertainty in last digit. Glitches 1--7 are presented in \cite{hoetal20}. \label{tab:glitch}} \tablewidth{0pt} \tablehead{ \colhead{Glitch} & \colhead{Glitch epoch} & \colhead{$\Delta\phi$} & \colhead{$\Delta f_{\rm rot}$} & \colhead{$\Delta\dot{f}_{\rm rot}$} & \colhead{$\Delta\ddot{f}_{\rm rot}$} \\ & \colhead{(MJD)} & \colhead{(cycle)} & \colhead{($\mu$Hz)} & \colhead{($10^{-13}$ Hz s$^{-1}$)} & \colhead{($10^{-20}$ Hz s$^{-2}$)} } \startdata 8 & 58868(5) & $ 0.08(12)$ & 24.0(1) & $-2.3(6)$ & $-5(1)$ \\ 9 & 58993(3) & $ 0.06(12)$ & 0.4(1) & $-0.3(8)$ & --- \\ 10 & 59049(3) & $-0.22(2)$ & 8.46(3) & $-1.3(5)$ & --- \\ 11 & 59103(5) & $0.42(2)$ & 33.958(7) & $-2.0(3)$ & --- \\ \enddata \end{deluxetable*} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{figures/f2.eps} \caption{ Glitch $\Delta f_{\rm rot}$ (top) and $\Delta\dot{f}_{\rm rot}$ (bottom) as functions of time. Glitch numbers and values from Table~\ref{tab:glitch} and \citet{hoetal20}. Errors in $\Delta\dot{f}_{\rm rot}$ are 1$\sigma$ uncertainty, while errors in $\Delta f_{\rm rot}$ are not shown because they are generally smaller than the symbols. Shaded regions denote second observing run (O2) and third observing run (O3) of LIGO/Virgo. Vertical long and short-dashed lines indicate two possible start dates of O2 data used in the present work (see Section~\ref{sec:gwdata}). } \label{fig:glitch} \end{center} \end{figure} The gravitational-wave\xspace search performed here uses the timing model of \citet{hoetal20}. The differences between the model of \citet{hoetal20} and the model presented here are well within the former's uncertainties, and thus use of the latter would not yield significantly different results. \subsection{LIGO and Virgo data}\label{sec:gwdata} We use a combination of data from the second and third observing runs of the Advanced LIGO \citep{2015CQGra..32g4001L} and Virgo \citep{2015CQGra..32b4001A} gravitational wave detectors.
During O2, LIGO Livingston (L1) and LIGO Hanford (H1) took data from 2016 November 30 to 2017 August 25 and had duty factors of $\sim 57\%$ and $\sim 59\%$, respectively (including commissioning breaks), while Virgo took data from 2017 August 1 to 2017 August 25 with a duty factor of $\sim 85\%$. As noted in Section~\ref{sec:nicer}, {\it NICER}\xspace data start on 2017 August 17, and thus one set of searches we undertake uses only about six days of O2 data overlapping with the NICER data in addition to the O3 data {(explicitly 5.3, 5.5 and 6.0 d of data for H1, L1 and V1, respectively)}. Alternatively, we can consider a more optimistic and much longer time-series of O2 data by taking advantage of the correlation between glitch size and time-to-next-glitch seen for PSR~J0537\textminus6910\xspace \citep{2006ApJ...652.1531M,antonopoulouetal18,2018ApJ...852..123F,hoetal20}. Assuming an (unobserved) glitch occurred on 2017 March 22 with the same size as the largest {\it NICER}\xspace glitch (i.e., glitch 2 with $\Delta f_{\rm rot}=36\mbox{ $\mu$Hz}$), we would expect a subsequent glitch 224 d later (at 68\% confidence) on 2017 November 1, which is the earliest estimated date at which glitch 1 occurred (see Figure~\ref{fig:glitch} and \citealt{hoetal20}). Thus 2017 March 22 to November 1 is the longest period over which we would expect PSR~J0537\textminus6910\xspace to not have undergone a glitch and the {\it NICER}\xspace ephemeris to be valid. O3 lasted from 2019 April 1 to 2020 March 27, with a one-month pause in data collection in October 2019. The three detectors' datasets H1, L1, and V1 had duty factors of $\sim {72}\%$, $\sim {69}\%$, and $\sim {69}\%$, {or 259, 248 and 248 d of data,} respectively during O3. In the case of a detection, calibration uncertainties limit our ability to provide robust estimates of the amplitude of the gravitational-wave\xspace signal and corresponding ellipticity \citep{Abbott:2017pqa}. Even without a detection, these uncertainties affect the estimated instrument sensitivity and inferred upper limits. The uncertainties vary over the course of a run but do not change by large values, so we do not explicitly consider time-dependent calibration uncertainties in our analysis. For further information on O2 calibration techniques, see discussions in \citet{2019ApJ...879...10A}. The full raw strain data from the O2 run are publicly available from the Gravitational Wave Open Science Center\footnote{\url{https://www.gw-openscience.org/data}} \citep{2015JPhCS.610a2021V,2019arXiv191211716T}. For the LIGO O3 data set, the analysis uses the ``C01'' calibration. The C01 calibration has estimated maximum amplitude and phase uncertainties of $\sim 7\%$ and $\sim 4 \deg$, respectively \citep{Sun_2020}, which we use as conservative estimates of the true calibration uncertainty near the frequencies analyzed here. For the Virgo O3 data set, we use the ``V0'' calibration with estimated maximum amplitude and phase uncertainties of 5\% and 2~$\deg$, respectively. \subsection{Search pipeline} The time-domain Bayesian method performs a coherent analysis of the interferometers' data, meaning that we analyze the entire data set with an effective single Fourier Transform, thereby preserving the phase information. First, the raw strain data are heterodyned \citep{Dupuis:2005} using the expected signal phase evolution, known precisely from the electromagnetic timing ephemeris.
Then a low-pass filter with a knee frequency of 0.25 Hz is applied, and the data are downsampled so that the sampling time is one minute, compared to 60 microseconds originally. This heterodyning is performed for an expected signal whose frequency is at once or twice the rotational frequency of the pulsar. The heterodyned data are the input to a nested sampling algorithm that is a part of the {\sc LALInference} package \citep{2010PhRvD..81f2003V, PhysRevD.91.042003}, which infers the unknown signal parameters depending on the model of gravitational-wave\xspace emission. PSR~J0537\textminus6910\xspace glitched three times over the course of the gravitational-wave\xspace observations (see Figure~\ref{fig:glitch}). For each glitch, we assume an unknown phase offset between the electromagnetic and gravitational-wave\xspace phase. The individual phase offsets of multiple glitches that occurred between O2 and O3 cannot be disentangled, so only one phase offset is included for these glitches. This means that we introduce four additional phase parameters when performing parameter estimation. We also make use of restricted and unrestricted priors when performing the analysis. In the first case, we use estimates of the orientation of the pulsar relative to the Earth based on a model fit of the observed pulsar wind nebula torus \citep{Ng:2008}, which imply narrow priors in our analysis on the polarization and inclination angles. Therefore, we use a Gaussian prior on $\psi$ of $2.2864\pm0.0384$\,rad and a bimodal Gaussian prior on $\iota$ with modes at $1.522\pm0.016$ and $1.620\pm0.016$\,rad \citep[see][for reasons behind the bimodality]{2015MNRAS.453...53J}. This range of $\iota$ would suggest the pulsar's rotation axis is almost perpendicular to the line-of-sight, which would in turn lead to a linearly polarized gravitational-wave\xspace signal dominated by the `+' polarization component. The second case assumes a uniform isotropic prior on the axis direction, which therefore does not rely on {assumptions about the pulsar's orientation matching that of the wind nebula or uncertainties in the above modeling of the not-well-resolved X-ray observations}. The initial signal phase and glitch phase offsets all use uniform priors over their full ranges. For the single harmonic search, we parameterize the signals using the mass quadrupole $Q_{22}$ and distance. As a conservative approach, we use an unphysical flat prior on $Q_{22}$ with a lower bound at zero and an upper bound of $\scinum{5}{37}$~kg\,m$^2$, which is well above the largest upper limits found in \citet{2019ApJ...879...10A}. For the distance, we use a Gaussian prior with mean of 49.59\,kpc and standard deviation of 0.55\,kpc based on the value given in \citet{pietrzynskietal19}, combining the statistical and systematic errors in quadrature. For the dual harmonic search, which uses the amplitudes $C_{21}$ and $C_{22}$ rather than the physical parameters of $Q_{22}$ and $d$, we use flat priors that are bounded between zero and $\scinum{1}{-22}$, which is again well above the limit implied in \citet{2019ApJ...879...10A}. To analyze multiple detectors' data sets simultaneously, we combine the product of the {Student's $t$-likelihoods} calculated for each detector \citep{Dupuis:2005}.
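To make the preprocessing chain concrete, a minimal sketch is given below. It is our own illustration, not the {\sc LALSuite} implementation; in particular, the filter order and the simple binned average used for downsampling are assumptions.

\begin{verbatim}
import numpy as np
from scipy.signal import butter, sosfiltfilt

def heterodyne(strain, phase_cycles, fs, out_dt=60.0):
    # phase_cycles: expected signal phase from the timing ephemeris, in
    # cycles (e.g., twice the rotational phase for the l = m = 2 mode).
    y = strain * np.exp(-2j * np.pi * phase_cycles)  # shift signal to ~0 Hz
    sos = butter(8, 0.25, btype="low", fs=fs, output="sos")  # 0.25 Hz knee
    y = sosfiltfilt(sos, y.real) + 1j * sosfiltfilt(sos, y.imag)
    step = int(round(out_dt * fs))                   # one-minute samples
    n = (len(y) // step) * step
    return y[:n].reshape(-1, step).mean(axis=1)
\end{verbatim}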
The outputs of the analysis are posterior distributions of the parameters of interest, which are $h_0/Q_{22}/\varepsilon$ for the single harmonic search, $C_{21}$ and $C_{22}$ for the dual harmonic search, and the angles $\cos\iota$ and $\psi$ for both choices of priors. In Section~\ref{sec:results}, we present results on the amplitude parameters marginalized over the rest of the parameter space. {We also provide odds ratios between two hypotheses: the data contain a coherent signal in the detectors, or incoherent signals or noise in the different detectors. These values are used to assess the presence of a signal in the data and, for a given prior choice, can be thought of as a ``detection statistic''.} \section{Results}\label{sec:results} Results from our searches do not show evidence for gravitational-wave\xspace emission from PSR~J0537\textminus6910\xspace via the two models that we assume. {For the single harmonic model, the Bayesian odds of the data containing a coherent signal between detectors versus incoherent signals or noise in the different detectors \citep[see equation A6 of][]{2017ApJ...839...12A} favor the latter case by $\sim 20\,000$ and $\sim 31\,000$ for the unrestricted and restricted priors, respectively. For the dual harmonic model, the case of an incoherent signal or noise in the detectors is favored by $\lesssim 2\!\times\!10^{8}$ for both prior choices.} An amplitude spectral density obtained after the heterodyne correction is displayed in Figure~\ref{fig:amph} for each of the three detectors. If a loud continuous gravitational-wave\xspace signal were present, we would expect to see a narrow line feature in the spectrum. The amplitude spectral densities also give an estimate of the sensitivity of the search. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{figures/O3_spectra.pdf} \caption{Two-sided amplitude spectral density (ASD) after heterodyning, low pass filtering, and downsampling the raw strain data for the $l=m=2$ gravitational-wave\xspace mode. Different color lines indicate the Hanford (H1), Livingston (L1), and Virgo (V1) detectors.} \label{fig:amph} \end{figure} Given the lack of evidence for a signal from either the single or dual harmonic models, we expect the odds between these models to favor the simpler single harmonic model. {Indeed, we find that the single harmonic model is strongly favored by factors of $\sim 5700$ and $\sim 9200$ for the restricted and unrestricted orientation cases, respectively. However, it is worth noting that the odds between models will depend on our choice of the uniform prior range on the amplitude parameters.} Though no gravitational waves are detected, we can still determine upper limits on possible gravitational-wave\xspace emission from PSR~J0537\textminus6910\xspace. Here, we use 95\% credible upper bounds on the amplitude parameters based on their marginalized probability distributions.\footnote{{Simulations on independent and identically distributed noise realizations show that the different noise instantiations can produce upper limits that vary by $\sim 20\%$ at a $1\sigma$ confidence level. However, the Bayesian credible limits we present are valid for our particular dataset.}} The dimensionless gravitational-wave\xspace amplitude $h_0$ and coefficients $C_{21}$ and $C_{22}$ are constrained for the single and dual harmonic searches, respectively. For the single harmonic search, $h_0$ can be mapped to a limit on the maximum ellipticity $\varepsilon$ using equation \eqref{eq:h0}.
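For orientation, the sketch below evaluates this mapping together with the spin-down limit of equation~\eqref{eq:h0sd}, using rounded values of the pulsar parameters (the inputs are our own, for illustration only):

\begin{verbatim}
import math

G, c, kpc = 6.674e-11, 2.998e8, 3.086e19   # SI units
Izz, d = 1e38, 49.59 * kpc                 # fiducial I_zz and distance
frot, fdot = 61.9, 2.0e-10                 # f_rot (Hz), |df/dt| (Hz/s)

def ellipticity(h0):  # invert the h0(eps) relation above
    return h0 * c**4 * d / (16 * math.pi**2 * G * Izz * frot**2)

h0_sd = math.sqrt(5 * G * Izz * fdot / (2 * c**3 * frot)) / d
print(f"h0_sd ~ {h0_sd:.1e}")                       # ~ 2.9e-26
print(f"eps(h0=1e-26) ~ {ellipticity(1e-26):.0e}")  # ~ 3e-05
\end{verbatim}

An amplitude at the level of the search sensitivity thus corresponds to an ellipticity of a few times $10^{-5}$.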
In Table~\ref{table:1}, we show the different constraints for both searches using all O3 data and the last $\sim 6$ days of O2 data (see Section~\ref{sec:gwdata}). In addition to the detector calibration uncertainties discussed in Section~\ref{sec:gwdata}, we estimate that the statistical uncertainty on the upper limits due to the use of a finite number of posterior samples is on the order of $1\%$. \input{table_results.tex} Figure~\ref{fig:corner} shows the marginalized posterior probability distributions on the pulsar ellipticity and $h_0$ for the single harmonic search with unrestricted and restricted source orientation priors. The posteriors show significant support at ellipticities of zero, indicating no evidence of a signal at current sensitivities. We therefore show 95\% credible upper limits on the ellipticity for both prior choices along with the fiducial spin-down limit. \begin{figure}[hb] \centering \includegraphics[width=\columnwidth]{figures/ellpost.pdf} \caption{Posterior probability distribution for ellipticity and $h_0$ for the analyses with unrestricted and restricted priors on the pulsar orientation. The 95\% credible upper limits are shown as vertical colored lines, while the spin-down limit is given by the vertical dashed black line.} \label{fig:corner} \end{figure} Figure \ref{fig:c21c22} shows a similar posterior distribution on the dimensionless amplitudes $C_{21}$ and $C_{22}$ for the dual harmonic model. For this model, no evidence of gravitational waves\xspace is found, so upper limits at the 95\% level are indicated in both panels of this figure. The model given by equation~(\ref{eq:model}) implies that the value of $C_{21}$ becomes completely unconstrained when $\sin{\iota} = 0$. For the unrestricted orientation prior result shown in the left panel of Figure~\ref{fig:c21c22}, this leads to a long high amplitude tail in the $C_{21}$ posterior distribution. In Figures~\ref{fig:corner} and \ref{fig:c21c22}, we see that the amplitude posteriors can peak away from zero. This behavior is unsurprising and can occur even for pure Gaussian noise. Even with these peaks, the posteriors are still entirely consistent with zero ellipticity. For example, for the unrestricted posterior distribution shown in Figure~\ref{fig:corner}, a value of zero ellipticity is within the minimum 66\% credible interval around the mode. In contrast to emission in the single harmonic case, an energy-based limit on gravitational-wave\xspace emission is rather complex in the dual harmonic case. The relevant constraint is that the observed spin-down energy is equal to the sum of the luminosities at the two harmonics. These two emissions have different beam patterns: the emission at the rotation frequency is strongest along the rotational equator ($\iota = \pi/2$ direction), where the polarization is linear, while emission at twice the rotation frequency is strongest along the axis of rotation ($\iota = 0$), where the polarization is circular. Therefore, the spin-down limit on the maximum amplitudes of the two harmonics depends on both the relative size of the intrinsic strength of the two components and the orientation of the spin axis relative to the observer.
To provide some insight, if we compare the sky-averaged emission strength at only the rotation frequency to emission at only twice the rotation frequency, the spin-down limit would allow the amplitude of the radiation at the rotation frequency to be approximately twice as strong as that at twice the rotation frequency \citep[see Section~3 of][for more details]{2010MNRAS.402.2503J}. \begin{figure*}[htb] \centering \includegraphics[width=1.0\textwidth]{figures/cpost.pdf} \caption{Posterior probability distributions for the amplitudes $C_{21}$ and $C_{22}$ with unrestricted and restricted priors on the pulsar orientation. The 95\% credible upper limits are shown as vertical colored lines.} \label{fig:c21c22} \end{figure*} The results presented above use all O3 data in combination with about six days of O2 data, when {\it NICER}\xspace was operating and monitoring PSR~J0537\textminus6910\xspace. We also conducted searches using only O3 data or using O3 data plus O2 data from 2017 March 22 to the end of O2. The latter analysis assumes no glitches occurred during the additional time and represents the estimated maximum time that can be safely included without a contemporaneous timing model (see Section~\ref{sec:gwdata}). For only O3 data, we obtain $h_0$ and $\varepsilon$ limits that are worse by $\sim 7\%$ for the unrestricted prior and unchanged for the restricted prior, compared to those shown in Table~\ref{table:1}. For O3 data plus the extra O2 data, we obtain amplitude limits that are improved by $\lesssim 20\%$ compared to those shown in Table~\ref{table:1}. \section{Conclusions} Using data from LIGO/Virgo's second and third observing runs, we searched for mass quadrupolar-sourced gravitational waves\xspace from the young, dynamic PSR~J0537\textminus6910\xspace at once or twice the pulsar's rotational frequency of 62\,Hz. For the first time, we reached below the gravitational-wave spin-down limit for PSR~J0537\textminus6910\xspace and showed that gravitational-wave\xspace emission for a pure $l=m=2$ mode accounts for less than 14\% of the pulsar's spin-down energy budget. We placed the third most stringent constraint on the ellipticity ($\varepsilon<3\times 10^{-5}$) of any young pulsar (behind only the Crab pulsar and B1951+32/J1952+3252; \citealt{2019ApJ...879...10A,abbott2020gravitational}). While this limit is much higher than those of old recycled millisecond pulsars (for which $\varepsilon<10^{-8}$; \citealt{abbott2020gravitational}), young pulsars such as PSR~J0537\textminus6910\xspace and the Crab pulsar are important because they have much stronger magnetic fields (and are hotter) and thus might have greater ellipticities. The ellipticity constraint of PSR~J0537\textminus6910\xspace is also {above or} near estimates of the maximum ellipticity that can be sustained by an elastically deformed neutron star crust \citep{2013PhRvD..88d4004J,2018PhRvL.121m2701C,gittins2021}. PSR~J0537\textminus6910\xspace is a frequently glitching pulsar and potential source of continuous gravitational waves\xspace. The X-ray data from {\it NICER}\xspace give us the necessary tools to account for the phase evolution of a gravitational-wave\xspace signal over time, which allows us to perform a fully coherent and sensitive search for such a signal.
While our multi-messenger analysis focuses on gravitational waves\xspace from a time-varying mass quadrupole ($n=5$), another search could be performed for gravitational waves\xspace from an r-mode fluid oscillation ($n=7$) using wider-band techniques (e.g., \citealt{fesikpapa20,fesikpapa20b}, using O2 data). The strain sensitivity achieved in our analysis ($1\times 10^{-26}$) is also comparable to the $(2-3)\times10^{-26}$ estimated in \cite{anderssonetal18} for r-mode emission from PSR~J0537\textminus6910\xspace. Finally, from the observed correlation between glitch size and time-to-next-glitch for PSR~J0537\textminus6910\xspace \citep{2006ApJ...652.1531M,antonopoulouetal18,2018ApJ...852..123F,hoetal20}, we can hope, in the future, to measure low braking indices (7 or even lower) after the largest glitches. As noted above, braking indices of 5 and 7 are predicted by gravitational wave\xspace-emitting mechanisms. The observed evolution of $n_{\rm ig}$ to lower values than those shown in Figure~\ref{fig:tnig}, which may occur after the effects of glitches on the pulsar's spin-down behavior have decayed, may indicate that gravitational waves\xspace are continuously emitted between glitches. On the other hand, glitches may trigger detectable transient gravitational waves\xspace \citep{prixetal11,hoetal20b,yimjones20}, and gravitational-wave\xspace searches at glitch epochs of other pulsars have been conducted \citep{keiteletal19}. It is therefore vital to continue to monitor the spin evolution of PSR~J0537\textminus6910\xspace, not only to obtain the timing ephemeris and measure braking indices, but also to know when this pulsar undergoes a glitch. Since the spin period of PSR~J0537\textminus6910\xspace is only detectable at X-ray energies, {\it NICER}\xspace is the only effective means to perform the necessary observations. Fortunately, {\it NICER}\xspace is anticipated to operate until at least late 2022, overlapping with the fourth observing run of LIGO/Virgo and KAGRA \citep{KAGRA}, which is likely to begin in 2022 and continue into 2023. \section*{Acknowledgments} \input{acknowledgements} D.An.\ acknowledges support from an EPSRC fellowship (EP/T017325/1). C.M.E.\ acknowledges support from FONDECYT/Regular 1171421 and USA1899-Vridei 041931SSSA-PAP (Universidad de Santiago de Chile, USACH). W.C.G.H.\ acknowledges support through grants 80NSSC19K1444 and 80NSSC21K0091 from NASA. This work is supported by NASA through the {\it NICER}\xspace mission and the Astrophysics Explorers Program and uses data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. \facility{{\it NICER}\xspace} \bibliographystyle{aasjournal}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In power system planning and operation against contingencies (e.g., generator loss, transmission line tripping, unexpected power demands), to keep the system from running underfrequency or to help the network recover from it, load shedding and curtailment are commonly employed to balance supply and demand. However, due to inertia, it takes some time for the energy resources to re-enter a safe frequency region as the power network converges to steady state. Hence, during transients, generators are still in danger of reaching their frequency limits and being tripped, which may in turn cause blackouts. This phenomenon tends to happen more frequently in modern power networks due to low inertia and highly-dynamic units. Therefore, there is a need to analyze the transient behavior of power networks and design controllers that ensure the safe evolution of the system. \emph{Literature review.} Transient stability refers to the ability of power networks to maintain synchronism after being subjected to a disturbance, see e.g.,~\citep{PK-JP:04}. Many works, see e.g.,~\citep{HDC:11,FD-MC-FB:13,PJM-JH-JK-HJS:14}, provide conditions to ensure synchronicity and investigate their relationship with the topology of the power network. However, even if network synchronism holds, the system's transient trajectory may enter unsafe regions, e.g., the transient frequency may violate individual generators' frequency limits, causing generator failure and leading to blackouts~\citep{PK:94}. Hence, various techniques have been proposed to improve transient behavior. These include resource re-dispatch with transient stability constraints~\citep{AA-EBM:06,TTN-VLN-AK:11}; thyristor-controlled series capacitor compensation to optimize transmission impedance and keep power transfer constant~\citep{TG-JP:01}; the use of power system stabilizers to damp out low frequency inter-machine oscillations~\citep{MAM-HRP-MA-MJH:14}, and placing virtual inertia in power networks to mitigate transient effects~\citep{TSB-TL-DJH:15,BKP-SB-FD:17}. While these approaches have a qualitative effect on transient behavior, they do not offer strict guarantees as to whether the transient frequency stays within a specific region. Furthermore, the approach by~\cite{TSB-TL-DJH:15} requires a priori knowledge of the time evolution of the disturbance trajectories and an estimation of the transient overshoot. Alternative approaches rely on the idea of identifying the disturbances that may cause undesirable transient behaviors using forward and backward reachability analysis, see e.g.,~\citep{MA:14,YCC-ADD:12,HC-PJS-SVD:16} and our previous work~\citep{YZ-JC:17-acc}. The lack of works that provide tools for transient frequency control motivates us here to design feedback controllers for the generators that guarantee simultaneously the stability of the power network and the desired transient frequency behavior. Our design is inspired by the controller-design approach to safety-constrained systems taken by~\cite{ADA-XX-JWG-PT:16}, where the safety region is encoded as the zero-sublevel set of a barrier function and safety is ensured by constraining the evolution of the function along the system trajectories.
\emph{Statement of contributions.} The main result of the paper is the synthesis of a Lipschitz continuous, distributed controller, available at specific individual generator nodes, that satisfies the following requirements: (i) renders the closed-loop power network asymptotically stable; (ii) for each controlled generator node, if its initial frequency belongs to a desired safe frequency region, then its frequency trajectory stays in it for all subsequent time; and (iii) if, instead, its initial frequency does not belong to the safe region, then the frequency trajectory enters it in finite time, and once there, never leaves. Our technical approach to achieve this combines Lyapunov stability and set invariance theory. We first show that requirement (iii) automatically holds if (i) and (ii) hold true, and we therefore focus our attention on the latter. For each one of these requirements, we provide equivalent mathematical formulations that are amenable to control design. Regarding (i), we consider an energy function for the power system and formalize it as identifying a controller that guarantees that the time evolution of this energy function along every trajectory of the dynamics is non-increasing. Regarding (ii), we show that this condition is equivalent to having the controller make the safe frequency interval forward invariant. To avoid discontinuities in the controller design on the boundary of the invariant set, we resort to the idea of barrier functions to have the control effort gradually kick in as the state trajectory approaches the boundary. Our final step is to use the identified constraints to synthesize a specific controller that satisfies both and is distributed. The latter is a consequence of the fact that, for each bus, the constraints only involve the state of the bus and those of its neighbors. We analyze its robustness properties against measurement error and parameter uncertainty, quantify its magnitude when the initial state is uncertain, and provide an estimate of the frequency convergence rate from the unsafe to the safe region for each controlled generator. Finally, we illustrate the performance and design trade-offs of the proposed controller on the IEEE 39-bus power network. \section{Preliminaries}\label{section:prelimiaries} In this section we introduce basic notation and notions from set invariance and graph theory. \vspace*{-1.5ex} \paragraph*{Notation.} Let ${\mathbb{N}}$, ${\mathbb{R}}$, ${\mathbb{R}}_{>}$, and ${\mathbb{R}}_{\geqslant}$ denote the set of natural, real, strictly positive, and nonnegative real numbers, respectively. Variables are assumed to belong to the Euclidean space unless specified otherwise. For $a,b\in{\mathbb{N}}$, denote $[a,b]_{{\mathbb{N}}}\triangleq\{x\in{\mathbb{N}}\ |\ a\leqslant x\leqslant b\}$. Given $\mathcal{C} \subset {\mathbb{R}}^{n}$, $\partial\mathcal{C}$ denotes its boundary. We let $\|\cdot\|_{2}$ denote the 2-norm on ${\mathbb{R}}^{n}$. For a point $x\in{\mathbb{R}}^{n}$ and $r\in{\mathbb{R}}_{>}$, denote $B_{r}(x)\triangleq\setdef{x'\in{\mathbb{R}}^{n}}{\|x'-x\|_{2}\leqslant r}$. Denote $\bold{1}_n$ and $\bold{0}_n$ in ${\mathbb{R}}^n$ as the vectors of all ones and all zeros, respectively. For $A\in\mathbb{R}^{m\times n}$, let $[A]_i$ and $[A]_{ij}$ denote its $i$th row and $(i,j)$th element. We denote by $A^{\dagger}$ its unique Moore-Penrose pseudoinverse and by $\operatorname{range}(A)$ its column space.
A continuous function $\alpha:{\mathbb{R}}\rightarrow {\mathbb{R}}$ is of class-$\mathcal{K}$ if it is strictly increasing and $\alpha(0)=0$. Given a differentiable function $l:{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}$, we let $\nabla l$ denote its gradient. A function $f:{\mathbb{R}}_{\geqslant }\times{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}^{n},\ (t,x)\rightarrow f(t,x)$ is Lipschitz in $x$ (uniformly in $t$) if for every $x_{0}\in{\mathbb{R}}^{n}$, there exist $L,r>0$ such that $\|f(t,x)-f(t,y)\|_{2}\leqslant L\|x-y\|_{2}$ for any $x,y\in B_{r}(x_{0})$ and any $t\geqslant 0$. \vspace*{-1.5ex} \paragraph*{Set invariance.} We introduce here notions of forward invariance~\cite{HKK:02}. Consider the non-autonomous system on~${\mathbb{R}}^{n}$, \begin{align}\label{eqn:nonlinear} \dot x=f(t,x), \quad x(0)=x_{0}, \end{align} where $f:{\mathbb{R}}_{\geqslant}\times{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}^{n}$. We assume $f$ is piecewise continuous in $t$ and Lipschitz in $x$, so that the solution of~(\ref{eqn:nonlinear}) exists and is unique. A set $\mathcal{C}\subset{\mathbb{R}}^{n}$ is \textit{(forward) invariant} for system~\eqref{eqn:nonlinear} if for every initial condition $x_{0}\in \mathcal{C}$, the solution starting from $x_0$ satisfies $x(t)\in \mathcal{C}$ for all $t\geqslant 0$. The following result states a sufficient and necessary condition for a set to be forward invariant for~\eqref{eqn:nonlinear}. \begin{lemma}\longthmtitle{Nagumo's Theorem~\cite{FB-SM:08}}\label{lemma:Nagumo} Let $l:{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}$ be continuously differentiable and let $ \mathcal{C}\triangleq\setdef{x}{l(x)\leqslant 0}$. Suppose that for all $x\in\mathcal{C}$, there exists $s\in{\mathbb{R}}^{n}$ such that $l(x)+\nabla l(x)^{T}s<0$. Furthermore, suppose there exists a Lipschitz function $\phi:{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}^{n}$ such that $\nabla l(x)^{T}\phi(x)<0$ for all $x\in\partial\mathcal{C}$. Then $\mathcal{C}$ is forward invariant if and only if $\nabla l(x)^{T}f(t,x)\leqslant 0$ for all $x\in\partial\mathcal{C}$. \end{lemma} The assumptions in Nagumo's Theorem ensure that the set $\mathcal{C}$ is regular enough to have a well-defined interior and boundary. \vspace*{-1.5ex} \paragraph*{Graph theory.} We present basic notions in algebraic graph theory from~\cite{FB-JC-SM:08cor,NB:94}. An undirected graph is a pair $\mathcal{G} = (\mathcal{I},\mathcal{E})$, where $\mathcal{I} = \{1,\dots,n\}$ is the vertex set and $\mathcal{E}=\{e_{1},\dots, e_{m}\} \subseteq \mathcal{I} \times \mathcal{I}$ is the edge set. A path is an ordered sequence of vertices such that any pair of consecutive vertices in the sequence is an edge of the graph. A graph is connected if there exists a path between any two vertices. Two nodes are neighbors if there exists an edge linking them. Denote by $\mathcal{N}(i)$ the set of neighbors of node~$i$. For each edge $e_{k} \in \mathcal{E}$ with vertices $i,j$, the orientation procedure consists of choosing either $i$ or $j$ to be the positive end of $e_{k}$ and the other vertex to be the negative end. The incidence matrix $D=(d_{k i}) \in \mathbb{R}^{m \times n}$ associated with $\mathcal{G}$ is then defined as \begin{align*} d_{k i} = \begin{cases} 1 & \text{if $i$ is the positive end of $e_{k}$}, \\ - 1 & \text{if $i$ is the negative end of $e_{k}$}, \\ 0 & \text{otherwise}.
\end{cases} \end{align*} \section{Problem statement}\label{section:problem-statement} In this section we introduce the dynamical model for the power network and state our control objective. \subsection{Power network model} The power network is encoded by a connected undirected graph $\mathcal{G} = (\mathcal{I},\mathcal{E})$, where $\mathcal{I} = \{1,2,\cdots,n\}$ is the collection of buses and $\mathcal{E} = \{e_{1},\cdots,e_{m}\}\subseteq\mathcal{I}\times\mathcal{I}$ is the collection of transmission lines. For each node $i\in\mathcal{I}$, let $\theta_{i}\in{\mathbb{R}}$, $\omega_{i}\in{\mathbb{R}}$ and $p_{i}\in{\mathbb{R}}$ denote its voltage angle, shifted voltage frequency relative to the nominal frequency, and constant active power injection, respectively. We partition buses into $\mathfrak{C}$ and $\mathcal{I} \backslash \mathfrak{C}$, where every bus $i\in\mathfrak{C}$ requires an individual transient frequency regulation realized via an exogenous control command~$u_{i}$. The dynamics is described by the swing equations for voltage angles and frequencies, \begin{align}\label{eqn:swing-equations-dynamics} \dot\theta_{i}(t) &\hspace{-0.05cm} =\omega_{i}(t), \ \forall i\in\mathcal{I}, \\ M_{i}\dot\omega_{i}(t) & \hspace{-0.05cm}=\hspace{-0.05cm} -E_{i}\omega_{i}(t) - \hspace*{-2ex} \sum_{j\in\mathcal{N}(i)} \hspace*{-1.5ex} b_{ij} \sin(\theta_{i}(t)-\theta_{j}(t)) \hspace{-0.05cm}+u_{i}(t)\hspace{-0.05cm}+ p_{i},\ \hspace{-0.05cm} \forall i\in\mathfrak{C},\notag \\ M_{i}\dot\omega_{i}(t) & \hspace{-0.05cm}=\hspace{-0.05cm} -E_{i}\omega_{i}(t) - \hspace*{-2ex} \sum_{j\in\mathcal{N}(i)} \hspace*{-1.5ex} b_{ij} \sin(\theta_{i}(t)-\theta_{j}(t)) \hspace{-0.05cm}+ p_{i},\ \hspace{-0.05cm}\forall i\in\mathcal{I}\backslash\mathfrak{C}, \notag \end{align} where $b_{ij}\in{\mathbb{R}}_{>}$ is the susceptance of the line connecting bus $i$ and~$j$, and $M_{i} \in {\mathbb{R}}_{\geqslant}$ and $E_{i} \in {\mathbb{R}}_{\geqslant}$ are the inertia and damping coefficients of bus $i \in \mathcal{I}$. For simplicity, we assume that they are all strictly positive. For our purposes, it is convenient to rewrite the dynamics~\eqref{eqn:swing-equations-dynamics} in a more compact way. Let $\theta \triangleq [\theta_{1}, \cdots, \theta_{n}]^{T} \in {\mathbb{R}}^{n}$, $\omega \triangleq [\omega_{1}, \cdots, \theta_{n}]^{T} \in {\mathbb{R}}^{n}$ and $p\triangleq [p_{1},\cdots,p_{n}]^{T}\in{\mathbb{R}}^{n}$ be the collection of voltage angles, frequencies, and power injections. Let $D\in{\mathbb{R}}^{m\times n}$ be the incidence matrix corresponding to an arbitrary graph orientation, and define the voltage angle difference vector \begin{align}\label{eqn:state-transformation} \lambda \triangleq D\theta \in{\mathbb{R}}^{m} . \end{align} Denote by $Y_{b}\in{\mathbb{R}}^{m\times m}$ the diagonal matrix whose $k$th diagonal item represents the susceptance of the transmission line $e_{k}$ connecting bus $i$ and $j$, i.e., $[Y_{b}]_{k,k}=b_{ij},$ for $k=1,2,\cdots, m$. 
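To fix ideas with the notation just introduced, the following minimal Python sketch builds the incidence matrix $D$ and the diagonal matrix $Y_{b}$ from an oriented edge list, and evaluates the aggregate flow term $D^{T}Y_{b}\sin(D\theta)$ that appears in the compact form of the dynamics presented next. The three-bus ring and all numerical values are illustrative assumptions, not data from any test system. \begin{verbatim}
import numpy as np

# Assumed example data: a 3-bus ring with an arbitrary orientation.
edges = [(0, 1), (1, 2), (2, 0)]   # (positive end, negative end) of each line
b = np.array([5.0, 4.0, 6.0])      # susceptances b_ij of the three lines
n, m = 3, len(edges)

D = np.zeros((m, n))               # incidence matrix, one row per line
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = 1.0, -1.0

Yb = np.diag(b)                    # diagonal susceptance matrix Y_b
theta = np.array([0.10, 0.00, -0.05])  # example voltage angles
lam = D @ theta                    # angle differences, lambda = D theta
flows = D.T @ Yb @ np.sin(lam)     # net power flow injected at each bus
\end{verbatim}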
We rewrite the dynamics~\eqref{eqn:swing-equations-dynamics} in terms of $\lambda$ and $\omega$ as \begin{subequations}\label{eqn:dynamics-2} \begin{align} \dot \lambda (t) &= D\omega(t),\label{eqn:dynamics-2a} \\ M_{i}\dot\omega_{i}(t) &= -E_{i}\omega_{i}(t)-[D^{T}Y_{b}]_{i}\sin\lambda(t)+u_{i}(t)+p_{i}, \ \forall i\in\mathfrak{C},\label{eqn:dynamics-2b} \\ M_{i}\dot\omega_{i}(t) &=-E_{i}\omega_{i}(t)-[D^{T}Y_{b}]_{i}\sin\lambda(t)+p_{i}, \ \forall i\in\mathcal{I}\backslash\mathfrak{C},\label{eqn:dynamics-2c} \end{align} \end{subequations} where $\sin\lambda(t)\in{\mathbb{R}}^{m}$ is the component-wise sine of $\lambda(t)$. Note that the transformation~\eqref{eqn:state-transformation} enforces $ \lambda(0) \in \operatorname{range}(D)$. We refer to an initial condition satisfying this condition as \emph{admissible}. When convenient, for conciseness, we use $ x(t)\triangleq \left(\lambda(t),\omega(t)\right)\in{\mathbb{R}}^{m+n}$ to denote the collection of all states, and we omit its dependence on $t$ when clear from the context. The trajectories $(\lambda(t),\omega(t))$ locally converge to a unique equilibrium point if all $u_{i}$'s are set to zero. Specifically, let $L\triangleq D^{T}Y_{b}D$ and let $L^{\dagger}$ be its pseudoinverse. Define $\omega^{\infty}\triangleq\frac{\sum_{i=1}^{n} p_{i}}{\sum_{i=1}^{n}E_{i}}$, $E\triangleq\text{diag}(E_{1},E_{2},\cdots,E_{n})$, and $\tilde p\triangleq p-\omega^{\infty}E\bold{1}_{n}$. If \begin{align}\label{ineq:sufficient-eq} \|L^{\dagger}\tilde p\|_{\mathcal{E},\infty}<1, \end{align} where $\|y\|_{\mathcal{E},\infty} \triangleq \max_{(i,j)\in\mathcal{E}} |y_{i}-y_{j}|$, then there exists $\lambda^{\infty}\in \Gamma \triangleq\setdef{\lambda}{|\lambda_{i}|<\pi/2}$, unique in $\Gamma_{\operatorname{cl}}\triangleq\setdef{\lambda}{|\lambda_{i}|\leqslant\pi/2}$, such that \begin{align}\label{eqn:lambda-solution} \tilde p = D^{T}Y_{b}\sin \lambda^{\infty} \text{ and } \lambda^{\infty} \in \operatorname{range} (D). \end{align} According to \cite[Lemma 2 and inequality (S17)]{FD-MC-FB:13}, for the system~\eqref{eqn:dynamics-2} with $u_{i}\equiv0$ for every $i\in\mathfrak{C}$, the equilibrium $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ is stable. Furthermore, $(\lambda(t),\omega(t))$ locally converges to $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ provided $\lambda(0) \in \operatorname{range}(D)$. Throughout the rest of the paper, we assume that condition~\eqref{ineq:sufficient-eq} holds. \subsection{Control goal} Our goal is to design a state-feedback controller for each bus $i\in\mathfrak{C}$ that guarantees that the transient frequency behavior stays within desired safety bounds while, at the same time, preserving the stability properties that the system~\eqref{eqn:dynamics-2} enjoys when no external input $u_{i}$ is present. We state these requirements explicitly next. \emph{Stability and convergence requirement:} Since the system~\eqref{eqn:dynamics-2} without $u_{i}$ is locally stable, we require that the same system with the proposed controller $u_{i}$ is also locally stable. Furthermore, for every admissible initial condition, the two systems should converge to the same equilibrium $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$, meaning that $u_{i}$ only affects the transient behavior. \emph{Frequency invariance requirement:} For each $i\in\mathfrak{C}$, let $\underline\omega_{i}\in{\mathbb{R}}$ and $\bar\omega_{i}\in{\mathbb{R}}$ be lower and upper safe frequency bounds, where $\underline\omega_{i}<\bar\omega_{i}$.
We require that the frequency $\omega_{i}(t)$ stays inside the safe region $[\underline\omega_{i},\bar\omega_{i}]$ for all $t> 0$, provided that the initial frequency $\omega_{i}(0)$ lies inside $[\underline\omega_{i},\bar\omega_{i}]$. This forward invariance requirement corresponds to underfrequency/overfrequency avoidance. \emph{Attractivity requirement:} If, for some $i\in\mathfrak{C}$, the initial frequency $\omega_{i}(0)\notin[\underline\omega_{i},\bar\omega_{i}]$, then after a finite time, $\omega_{i}$ enters the safe region and never leaves afterwards. This requirement corresponds to underfrequency/overfrequency recovery. In addition to these requirements, we also require the designed controller to be Lipschitz as a function of the state. This guarantees the existence and uniqueness of solutions for the closed-loop system and, at the same time, provides robustness for practical implementation against errors in state measurements. \begin{remark}\longthmtitle{Selection of buses with transient frequency specification}\label{rmk:selction-node} {\rm The set $\mathfrak{C}$ consists of buses belonging to either of the following two types: a) buses with a specified over/underfrequency requirement~\citep{PP-PSK-CWT:06} and b) buses whose transient frequency behavior is key in evaluating system performance or that are used as indices for load shedding schemes~\citep{NWM-KC-MS:11}. We assume each individual bus in $\mathfrak{C}$ is equipped with an external input directly tuning its transient behavior. We show later that this is a necessary condition to obtain frequency invariance guarantees.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} Note that the attractivity requirement is automatically satisfied once the controller meets the first two requirements, provided that $\omega^{\infty}\in (\underline\omega_{i},\bar\omega_{i})$. However, it is still of interest to provide estimates of how fast the frequency reaches the safe region. Our objective is to design a controller that satisfies the above three requirements simultaneously and is distributed, in the sense that each bus can implement it using its own information and that of its neighboring buses and transmission lines. \section{Constraints on controller design} In this section, we identify constraints on the controller design that provide sufficient conditions to ensure, on the one hand, the stability and convergence requirement and, on the other hand, the frequency invariance requirement. \subsection{Constraint ensuring stability and convergence} We establish a stability constraint by identifying an energy function and restricting the input so that the evolution of this function along every trajectory of the closed-loop dynamics is monotonically non-increasing. We select the following energy function~\citep{TLV-HDN-AM-JS-KT:17} \begin{align}\label{eqn:energy-func} V(\lambda,\omega)\triangleq\frac{1}{2}\sum_{i=1}^{n}M_{i} (\omega_{i}-\omega^{\infty})^{2} + \sum_{j=1}^{m}[Y_{b}]_{j,j}a(\lambda_{j}), \end{align} where $a(\lambda_{j}) \triangleq \cos\lambda_{j}^{\infty} - \cos\lambda_{j} - \lambda_{j}\sin\lambda_{j}^{\infty} + \lambda_{j}^{\infty}\sin\lambda_{j}^{\infty}$. The next result shows, via the LaSalle Invariance Principle, that a suitable constraint on the input guarantees stability and convergence. \begin{lemma}\longthmtitle{Sufficient condition for local stability and convergence}\label{lemma:sufficient-stability-convergence} Consider the system~\eqref{eqn:dynamics-2}.
Under condition~\eqref{ineq:sufficient-eq}, further suppose that, for every $i\in\mathfrak{C}$, $u_{i}:{\mathbb{R}}^{m+n}\times{\mathbb{R}}^{n}\rightarrow{\mathbb{R}},\ (x,y)\mapsto u_{i}(x,y)$ is Lipschitz in $x$. Let $c \triangleq \min_{ \lambda\in\partial\Gamma_{\operatorname{cl}}}V(\lambda,\omega^{\infty}\bold{1}_{n})$ and define \begin{align}\label{set:region} \Phi\triangleq\setdef{(\lambda,\omega)}{\lambda\in \Gamma_{\operatorname{cl}},\ V(\lambda,\omega)\leqslant c/\beta} \end{align} with $\beta\in{\mathbb{R}}_{>}$. If for every $i\in\mathfrak{C}$, $x\in{\mathbb{R}}^{m+n}$, and $p\in{\mathbb{R}}^{n}$, \begin{subequations}\label{ineq:stabilize-constraints-2} \begin{align} (\omega_{i}-\omega^{\infty})u_{i}(x,p) &\leqslant 0 \quad \text{if } \omega_{i}\neq\omega^{\infty},\label{ineq:stabilize-constraints-2a} \\ u_{i}(x,p) &=0 \quad \text{if }\omega_{i}=\omega^{\infty},\label{ineq:stabilize-constraints-2b} \end{align} \end{subequations} then the following results hold provided $\lambda(0) \in \operatorname{range} (D)$ and $(\lambda(0),\omega(0))\in\Phi$ for some $\beta>1$: \begin{enumerate} \item \label{item:solution} The solution of the closed-loop system exists and is unique for any $t\geqslant 0$; \item\label{item:invariance} $\lambda(t)\in \operatorname{range}(D)$ and $(\lambda(t),\omega(t))\in\Phi$ for any $t\geqslant 0$; \item\label{item:convergence-stability} $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ is stable, and $(\lambda(t),\omega(t))\rightarrow(\lambda^{\infty}, \omega^{\infty}\bold{1}_{n})$ as $t\rightarrow \infty$. \end{enumerate} \end{lemma} \begin{pf} To prove\emph{~\ref{item:solution}}, as $(x,y) \mapsto u_{i}(x,y)$ is Lipschitz in~$x$, there exists a unique local solution over $[0,\delta]$ for some $\delta>0$, according to~\cite[Theorem~3.1]{HKK:02}. Let $[0,T)$ be the maximal interval of existence. We then show that $\Phi$ is non-empty and compact, and that $(\lambda(t),\omega(t))$ lies entirely in $\Phi$ for any $t\in[0,T)$. These two facts together, by~\cite[Theorem~3.3]{HKK:02}, imply the existence and uniqueness of the solution for every $t\geqslant 0$. To show the non-emptiness of $\Phi$, note that in~\eqref{eqn:energy-func}, if $|\lambda_{i}|\leqslant \pi/2$ and $|\lambda_{i}^{\infty}|<\pi/2$, then $a(\lambda_{i})\geqslant 0$, which implies that $V(\lambda,\omega)\geqslant 0$ for every $\lambda\in\Gamma_{\operatorname{cl}}$ and every $ \omega\in{\mathbb{R}}^{n}$; moreover, every $\lambda\in\partial\Gamma_{\operatorname{cl}}$ has some component with $|\lambda_{j}|=\pi/2>|\lambda_{j}^{\infty}|$, so that $a(\lambda_{j})>0$, and hence $c>0$. Then $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})\in\Phi$, as $V(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})=0$. To show the compactness of $\Phi$, note that the set is clearly closed. Since the polytope $\Gamma_{\operatorname{cl}}$ is bounded, the variable $\lambda$ is bounded too. Therefore, $a(\lambda_{i})$ is bounded for every $i\in[1,m]_{\mathbb{N}}$. Since $ V(\lambda,\omega) \leqslant c/\beta$, we deduce that $\sum_{i=1}^{n}M_{i}(\omega_{i}-\omega^{\infty})^{2}$ is bounded, implying that $\omega$ is bounded. Hence, $\Phi$ is bounded. Regarding statement\emph{~\ref{item:invariance}}, note that $\lambda(t)\in \operatorname{range}(D)$ holds for every $t\geqslant 0$ since both $\lambda(0)$ and $\dot\lambda(t)$ lie in $\operatorname{range}(D)$.
To establish the invariance of $\Phi$, we examine the evolution of the function $V$ along the dynamics~\eqref{eqn:dynamics-2}, \begin{align*} \dot V(\lambda,\omega) &=\sum_{i=1}^{n}(\omega_{i}-\omega^{\infty}) \left(-E_{i}\omega_{i}-[D^{T}Y_{b}]_{i}\sin\lambda+p_{i}\right) \\ &+\sum_{i \in \mathfrak{C}}(\omega_{i}-\omega^{\infty})u_{i}(x,p) + \sum_{j=1}^{m}[Y_{b}]_{j,j}(\sin\lambda_{j} - \sin\lambda_{j}^{\infty})[D]_{j}\omega \\ &=-\sum_{i=1}^{n}E_{i}(\omega_{i}-\omega^{\infty})^{2} + \sum_{i\in\mathfrak{C}}(\omega_{i}-\omega^{\infty})u_{i}(x,p) \\ &\leqslant -\sum_{i=1}^{n}E_{i}(\omega_{i}-\omega^{\infty})^{2}\leqslant 0, \end{align*} where we have employed~\eqref{eqn:derivative-step-0} in the second equality. \begin{figure*}[htb!] \begin{align} &\sum_{i=1}^{n}(\omega_{i}-\omega^{\infty}) \left(-[D^{T}Y_{b}]_{i}\sin\lambda+p_{i}-\omega^{\infty} E_i\right) + \sum_{j=1}^{m}[Y_{b}]_{j,j}(\sin\lambda_{j}-\sin\lambda_{j}^{\infty})[D]_{j} \omega\notag \\ =&\sum_{i=1}^{n}(\omega_{i}-\omega^{\infty}) \left(-[D^{T}Y_{b}]_{i}\sin\lambda+p_{i}-\omega^{\infty} E_i\right) + \sum_{j=1}^{m}(\sin\lambda_{j} - \sin\lambda_{j}^{\infty})[Y_{b}D]_{j}(\omega-\omega^{\infty}\bold{1}_{n})\notag \\ =&\sum_{i=1}^{n}(\omega_{i}-\omega^{\infty} ) \left(p_{i}-\omega^{\infty} E_i\right) - \sum_{j=1}^{m}(\sin\lambda_{j}^{\infty})[Y_{b}D]_{j}(\omega-\omega^{\infty} \bold{1}_{n}) \notag \\ =&\sum_{i=1}^{n}(\omega_{i}-\omega^{\infty} ) \left(p_{i}-\omega^{\infty}E_i-[D^{T}Y_{b}\sin\lambda^{\infty}]_{i}\right) =(\omega-\omega^{\infty}\bold{1}_{n})^{T}(\tilde p-D^{T}Y_{b}\sin\lambda^{\infty})=0.\label{eqn:derivative-step-0} \end{align} \hrulefill \end{figure*} This monotonicity of $V$ implies that the constraint $V(\lambda,\omega)\leqslant c/\beta$ defining $\Phi$ can never be violated. Now, if there exists a time $t_{1}>0$ such that $(\lambda(t_{1}),\omega(t_{1}))\notin\Phi$, then it must be the case that $\lambda(t_{1})\notin\Gamma_{\operatorname{cl}}$. By the continuity of the trajectory, there must exist another time $t_{2}$ before $t_{1}$ such that $\lambda(t_{2})\in\partial\Gamma_{\operatorname{cl}}$, in which case $V(\lambda(t_{2}),\omega(t_{2}))\geqslant V(\lambda(t_{2}),\omega^{\infty}\bold{1}_{n})\geqslant c>c/\beta$, which is a contradiction. Hence $\Phi$ is invariant. To prove~\emph{\ref{item:convergence-stability}}, notice that, first, $\dot V(\lambda,\omega)\leqslant 0$ for any $ (\lambda,\omega)\in\Phi$; second, $V(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})=0$; and third, $V(\lambda,\omega)>0$ for every $(\lambda,\omega)\in\Phi$ with $(\lambda,\omega)\neq(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$. By~\cite[Theorem~4.1]{HKK:02}, $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ is stable. Finally, to establish convergence, let \begin{align}\label{set:omega} \Omega \triangleq \Phi \cap \setdef{(\lambda,\omega)}{\lambda \in \operatorname{range}(D)}. \end{align} Note that $(\lambda(0),\omega(0))\in\Omega$. Clearly, the set $\Omega$ is compact and invariant with respect to the dynamics~(\ref{eqn:dynamics-2a})-(\ref{eqn:dynamics-2c}) with any controller satisfying~(\ref{ineq:stabilize-constraints-2}). Noticing that $\dot V(\lambda,\omega)=0$ implies $\omega=\omega^{\infty}\bold{1}_{n}$, let $S\triangleq \setdef{(\lambda,\omega)}{\omega=\omega^{\infty}\bold{1}_{n}}\bigcap\Omega$. It is easy to see that no solution can identically stay in $S$ other than the trivial solution $(\lambda(t),\omega(t))\equiv(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$.
The conclusion then follows from the LaSalle Invariance Principle~\cite[Theorem~4.4]{HKK:02}. \qed \end{pf} \begin{remark}\longthmtitle{Computation of the region of attraction}\label{rmk:app-invairance} {\rm The set~$\Phi$ is an estimate of the region of attraction, but its explicit computation requires the solution of a non-convex optimization problem to determine the value of~$c$. We can equivalently compute $c$ by solving $2m$ convex problems. For each $j\in[1,m]_{{\mathbb{N}}}$, let \begin{align*} \bar c_{j} \triangleq \hspace*{-2ex} \min_{ \begin{subarray}{c} \lambda_{j}=\pi/2 \\ |\lambda_{i}|\leqslant \pi/2, \, \forall i \neq j \end{subarray} } V( \lambda,\omega^{\infty}\bold{1}_{n}), \quad \underline c_{j} \triangleq \hspace*{-2ex} \min_{ \begin{subarray}{c} \lambda_{j}=-\pi/2 \\ |\lambda_{i}|\leqslant \pi/2, \, \forall i \neq j \end{subarray} } V( \lambda,\omega^{\infty}\bold{1}_{n}). \end{align*} Note that these problems are convex, as the Hessian of $\lambda\mapsto V(\lambda, \omega^{\infty}\bold{1}_{n})$, namely $\nabla^{2}V = \operatorname{diag}([Y_{b}]_{1,1} \cos\lambda_{1}, \cdots, [Y_{b}]_{m,m}\cos\lambda_{m})$, is positive semidefinite on $\Gamma_{\operatorname{cl}}$ (and positive definite on its interior $\Gamma$), and the feasible set is a closed convex subset of $\Gamma_{\operatorname{cl}}$. One can easily see that $c=\min_{j\in[1,m]_{{\mathbb{N}}}}\{\bar c_{j},\underline c_{j}\}$. On the other hand, although it is easy to check whether a given initial state belongs to $\Phi$, it is difficult to characterize its geometric shape. The work~\citep{TLV-HDN-AM-JS-KT:18} shows that, for a suitable $\bar c>0$ determined via a convex quadratic program, the ellipsoid \begin{align*} \bar\Phi\triangleq \setdef{(\lambda,\omega)}{\bar V(\lambda,\omega)\leqslant \bar c} \end{align*} is a subset of $\Phi$ (here $\bar V(\lambda,\omega) \triangleq \frac{1}{2}\sum_{i=1}^{n}M_{i} (\omega_{i}-\omega^{\infty})^{2} + \frac{1}{2} \sum_{j=1}^{m}[Y_{b}]_{j,j} (\lambda_{j}-\lambda_{j}^{\infty})^{2}$ is quadratic). Lemma~\ref{lemma:sufficient-stability-convergence} remains valid if $\Phi$ is replaced by~$\bar\Phi$. } \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{Constraint ensuring frequency invariance}\label{sec:constraint-freq} We next focus our attention on the frequency invariance requirement. We start by defining the invariant sets we are interested in, \begin{align}\label{eqn:barrier-function-unsymmetric} \bar{\mathcal{C}}_{i}\triangleq\setdef{x}{\omega_{i}-\bar\omega_{i}\leqslant 0}, \quad \underline{\mathcal{C}}_{i} \triangleq\setdef{x}{ \underline\omega_{i}-\omega_{i}\leqslant 0}. \end{align} The characterization stated in the next result directly follows from Nagumo's Theorem. \begin{lemma}\longthmtitle{Necessary and sufficient condition for frequency invariance}\label{lemma:frequency-invariance} Assume that the solution of~\eqref{eqn:dynamics-2} exists and is unique for every admissible initial condition.
Then, for any $i\in\mathfrak{C}$, the sets $\bar{\mathcal{C}}_{i}$ and $\underline{\mathcal{C}}_{i}$ are invariant if and only if for every $x\in{\mathbb{R}}^{m+n}$ and $p\in{\mathbb{R}}^{n}$, \begin{subequations}\label{ineq:invariance-condition-unsymmetric-1} \begin{align} u_{i}(x,p)-q_{i}(x,p)\leqslant 0 \quad \text{if } \omega_{i}=\bar\omega_{i},\label{ineq:invariance-condition-unsymmetric-1a} \\ -u_{i}(x,p)+q_{i}(x,p)\leqslant 0 \quad \text{if }\omega_{i}=\underline \omega_{i},\label{ineq:invariance-condition-unsymmetric-1b} \end{align} \end{subequations} where $ q_{i}(x,p)\triangleq E_{i}\omega_{i}+[D^{T}Y_{b}]_{i}\sin\lambda-p_{i}$. \end{lemma} \begin{pf} For simplicity, we only deal with the case of $\bar{\mathcal{C}}_{i}$ (the other case follows similarly). For each $i \in \mathfrak{C}$, let $\bar l_{i}, \underline l_{i}: {\mathbb{R}}^{m+n} \rightarrow {\mathbb{R}}$ be defined by $\bar l_{i}(x) \triangleq\omega_{i}-\bar\omega_{i}$ and $\underline l_{i}(x)\triangleq-\omega_{i}+\underline\omega_{i}$. Notice that, by letting $s=-\bold{1}_{m+n}$ and $\phi(x)\equiv-\bold{1}_{m+n}$, one has that $\bar l_{i}(x)+\nabla \bar l_{i}(x)^{T}s<0$ for every $x\in\bar{\mathcal{C}}_{i}$ and $\nabla \bar l_{i}(x)^{T}\phi(x)<0$ for every $x\in\partial\bar{\mathcal{C}}_{i}$, and hence the assumptions in Nagumo's Theorem hold. Denote by $f(t,x)$ the right-hand side of the dynamics~\eqref{eqn:dynamics-2}. Then $\bar{\mathcal{C}}_{i}$ is invariant if and only if $\nabla\bar l_{i}(x)^{T}f(t,x)\leqslant 0$ when $\omega_{i}(t)=\bar\omega_{i}$, which is equivalent to~\eqref{ineq:invariance-condition-unsymmetric-1a}. \qed \end{pf} From Lemma~\ref{lemma:frequency-invariance}, one sees that if some bus $j\in\mathfrak{C}$ does not possess an external control input (i.e., $u_{j}\equiv0$), then one cannot guarantee the invariance of $\bar{\mathcal{C}}_{j}$ and $\underline{\mathcal{C}}_{j}$, since without an active control signal, condition~\eqref{ineq:invariance-condition-unsymmetric-1} can easily be violated. The characterization of Lemma~\ref{lemma:frequency-invariance} only constrains the value of the input on the boundary of $\bar{\mathcal{C}}_{i}$ and $\underline{\mathcal{C}}_{i}$. However, having a controller that is only nonvanishing at such points is undesirable, as the actuator effort would be discontinuous, affecting the system evolution. A more sensible policy is to have the controller become active as the system state gets closer to the boundary of these sets, and do so in a gradual way. This is captured by the following result. \begin{lemma}\longthmtitle{Sufficient condition for frequency invariance}\label{lemma:sufficent-frequecy-invariance} Assume that the solution of~\eqref{eqn:dynamics-2} exists and is unique for every admissible initial condition. For each $i\in\mathfrak{C}$, let $\bar\omega_{i}^{\operatorname{th}},\ \underline\omega_{i}^{\operatorname{th}}\in{\mathbb{R}}$ be such that $\underline\omega_{i}<\underline\omega_{i}^{\operatorname{th}} < \bar\omega_{i}^{\operatorname{th}}<\bar\omega_{i}$, and let $\bar\alpha_{i}$ and $\underline\alpha_{i}$ be functions of class-$\mathcal{K}$.
If for every $x\in{\mathbb{R}}^{m+n}$ and $p\in{\mathbb{R}}^{n}$, \begin{subequations}\label{ineq:invariance-condition-unsymmetric-4} \begin{align} (\omega_{i}-\bar\omega_{i}^{\operatorname{th}})(u_{i}(x,p)-q_{i}(x,p))\leqslant -\bar\alpha_{i}(\omega_{i}-\bar\omega_{i}), \label{ineq:invariance-condition-unsymmetric-3-a} \end{align} if $\bar\omega_{i}^{\operatorname{th}}<\omega_{i}\leqslant \bar\omega_{i}$, and \begin{align} (\underline\omega_{i}^{\operatorname{th}}-\omega_{i})(-u_{i}(x,p)+q_{i}(x,p))\leqslant -\underline\alpha_{i}(\underline\omega_{i}-\omega_{i}), \label{ineq:invariance-condition-unsymmetric-3-b} \end{align} \end{subequations} if $ \underline\omega_{i}\leqslant \omega_{i}< \underline\omega_{i}^{\operatorname{th}}$, then $\bar{\mathcal{C}}_{i}$ and $\underline{\mathcal{C}}_{i}$ are invariant. \end{lemma} The proof of Lemma~\ref{lemma:sufficent-frequecy-invariance} follows by noting that, when $\omega_{i}=\bar\omega_{i}$ (resp. $\omega_{i}=\underline\omega_{i}$), condition~(\ref{ineq:invariance-condition-unsymmetric-3-a}) (resp.~(\ref{ineq:invariance-condition-unsymmetric-3-b})) becomes~(\ref{ineq:invariance-condition-unsymmetric-1a}) (resp.~(\ref{ineq:invariance-condition-unsymmetric-1b})). The introduction of class-$\mathcal{K}$ functions enables the design of controllers that gradually kick in as the margin for satisfying the frequency invariance requirement gets increasingly small. In fact, using~\eqref{eqn:dynamics-2}, we can equivalently write~\eqref{ineq:invariance-condition-unsymmetric-3-a} as \begin{align}\label{ineq:derivative-class-K} M_{i}\dot\omega_{i}\leqslant - \bar\alpha_{i}(\omega_{i}-\bar\omega_{i})/(\omega_{i} - \bar\omega_{i}^{\operatorname{th}}), \quad \text{if } \bar\omega_{i}^{\operatorname{th}}<\omega_{i}\leqslant \bar\omega_{i}. \end{align} Notice that, as $\omega_{i}$ grows from the threshold $\bar\omega_{i}^{\operatorname{th}}$ to the safe bound $\bar\omega_{i}$, the value of $-\bar\alpha_{i}(\omega_{i}-\bar\omega_{i})/(\omega_{i} - \bar\omega_{i}^{\operatorname{th}})$ monotonically decreases to 0. Thus, the constraint on $\dot\omega_{i}$ becomes tighter (while still allowing $\dot\omega_{i}$ to be positive) as $\omega_{i}$ approaches $\bar\omega_{i}$, and, when $\omega_{i}$ hits $\bar\omega_{i}$, prescribes $\dot\omega_{i}$ to be nonpositive to ensure invariance. It is interesting to point out the trade-offs present in the choice of class-$\mathcal{K}$ functions. A function with a large derivative, for instance, corresponds to a controller design that allows the frequency derivative to remain significant near the boundary, at the risk of increasing the sensitivity to changes in the state. We re-examine this point later after introducing our specific controller design. \section{Distributed controller synthesis} In this section we introduce a distributed controller design that meets the stability and convergence condition~\eqref{ineq:stabilize-constraints-2} as well as the frequency invariance condition~\eqref{ineq:invariance-condition-unsymmetric-4}. Our next result formally introduces this controller and characterizes its continuity properties. \begin{proposition}\longthmtitle{Distributed frequency controller}\label{prop:L} For each $i \in \mathfrak{C}$, let $\bar\alpha_{i}$ and $\underline\alpha_{i}$ be Lipschitz functions of class-$\mathcal{K}$. Then, \begin{align}\label{eqn:stability-transient-controller-Lipschitz-4} u_{i}(x,p) \!=\! \begin{cases} \min\{0,\frac{-\bar\alpha_{i}(\omega_{i}-\bar\omega_{i})}{\omega_{i} - \bar\omega_{i}^{\operatorname{th}}}+q_{i}(x,p)\} & \omega_{i}>\bar\omega_{i}^{\operatorname{th}}, \\ 0 & \underline\omega_{i}^{\operatorname{th}}\leqslant \omega_{i}\leqslant \bar\omega_{i}^{\operatorname{th}}, \\ \max\{0,\frac{\underline\alpha_{i}(\underline\omega_{i} - \omega_{i})}{\underline\omega_{i}^{\operatorname{th}}-\omega_{i}} + q_{i}(x,p)\} & \omega_{i}<\underline\omega_{i}^{\operatorname{th}}, \end{cases} \end{align} is Lipschitz in its first argument. \end{proposition} \begin{pf} Let $i \in \mathfrak{C}$. We show that for any $x\in{\mathbb{R}}^{m+n}$, there exist $L,r\in{\mathbb{R}}_{>}$ such that $|u_{i}(y,p)-u_{i}(z,p)|\leqslant L \|y-z\|_{2}$ for any $y,z\in B_{r}(x)$. Notice that this condition holds true for $x$ belonging to $\mathbb{H} \triangleq \setdef{x\in{\mathbb{R}}^{m+n}}{\omega_{i}\neq\bar\omega_{i}^{\operatorname{th}},\ \omega_{i}\neq\underline\omega_{i}^{\operatorname{th}}}$, in that, around any $x\in\mathbb{H}$, $u_{i}$ is given by a single case of~\eqref{eqn:stability-transient-controller-Lipschitz-4}, the expressions $x\mapsto\frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}}+q_{i}(x,p)$ and $x \mapsto \frac{\underline\alpha_{i} (\underline\omega_{i}-\omega_{i})}{\underline\omega_{i}^{\operatorname{th}}-\omega_{i}} + q_{i}(x,p)$ are Lipschitz around such points, and the $\min$ (resp. $\max$) operator preserves Lipschitz continuity. Hence we only need to establish Lipschitzness for $x\not\in\mathbb{H}$. For simplicity, we only reason for the case when $x$ satisfies $\omega_{i}=\bar\omega_{i}^{\operatorname{th}}$. Denote $ r_{0}\triangleq \min\{ \tfrac{1}{2}(\bar\omega_{i} - \bar\omega_{i}^{\operatorname{th}}),\ \tfrac{1}{2} (\bar\omega^{\operatorname{th}}_{i} - \underline\omega_{i}^{\operatorname{th}}) \}\in{\mathbb{R}}_{>}$. One can see that for any $x'\in B_{r_{0}}(x)$, its frequency component satisfies $\underline\omega_{i}^{\operatorname{th}}\leqslant \omega_{i}$. Next we show that there always exists $r\leqslant r_{0}$ such that \begin{align} \frac{-\bar\alpha_{i}(\omega_{i} -\bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}} + q_{i}(x',p)>0, \label{eq:auxx} \end{align} for all $ x'\in B_{r}(x) \cap \setdef{x'}{\omega_{i}>\bar\omega_{i}^{\operatorname{th}}}$. Notice that for any $x'\in B_{r}(x)$, $\omega_{i}-\bar\omega_{i}\leqslant \bar\omega_{i}^{\operatorname{th}} + r-\bar\omega_{i}\leqslant\bar\omega_{i}^{\operatorname{th}} + (\bar\omega_{i}-\bar\omega_{i}^{\operatorname{th}})/2-\bar\omega_{i} = -(\bar\omega_{i}-\bar\omega_{i}^{\operatorname{th}})/2<0$, and $q_{i}(x',p)=E_{i}\omega_{i}+[D^{T}Y_{b}]_{i}\sin\lambda-p_{i}$ is bounded below on $B_{r_{0}}(x)$ by continuity. Therefore, it holds that \begin{align*} \frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}} + q_{i}(x',p)\geqslant \frac{-\bar\alpha_{i}(\omega_{i}-\bar\omega_{i})}{2r}+ \inf_{x''\in B_{r_{0}}(x)}q_{i}(x'',p). \end{align*} It is easy to see that for any $ x'\in B_{r}(x) \cap \setdef{x'}{\omega_{i}>\bar\omega_{i}^{\operatorname{th}}}$, the first term can be made arbitrarily large by reducing $r$, while the second term is bounded uniformly in $r\leqslant r_{0}$; therefore, there exists $r>0$ small enough such that~\eqref{eq:auxx} holds. By~\eqref{eqn:stability-transient-controller-Lipschitz-4}, this implies that $u_{i}(x',p)=0$ for any $ x'\in B_{r}(x)$, and hence $u_i$ is Lipschitz in $x$.
\qed \end{pf} \begin{remark}\longthmtitle{Distributed character and practical implementation}\label{rmk:control-realization} {\rm The controller~\eqref{eqn:stability-transient-controller-Lipschitz-4} is distributed since, for each controlled bus $i \in \mathfrak{C}$, $u_{i}$ only utilizes $\omega_{i}$, $p_{i}$, and information from the buses it is connected to in the power network in order to compute $[D^{T}Y_{b}]_{i}\sin\lambda$. This term corresponds to the aggregate power flow injected at node~$i$ from its neighboring nodes. In turn, this means that, instead of measuring $\lambda_{j}$ and the corresponding susceptance for each neighboring node $j$, in practice each node can simply measure the signed power flow on each of its adjacent transmission lines and sum them up, which also equals $[D^{T}Y_{b}]_{i}\sin\lambda$.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} The next result shows that the proposed distributed controller achieves the objectives identified in Section~\ref{section:problem-statement} regarding stability, convergence, and frequency invariance. \begin{theorem}\longthmtitle{Transient frequency control with stability guarantees}\label{thm:decentralized-controller} Under condition~\eqref{ineq:sufficient-eq}, let $\omega^{\infty}\in(\underline\omega^{\operatorname{th}}_{i},\bar\omega^{\operatorname{th}}_{i})$ for every $i\in\mathfrak{C}$, and consider the closed-loop system~\eqref{eqn:dynamics-2} with the controller~\eqref{eqn:stability-transient-controller-Lipschitz-4}. If $\lambda(0)\in \operatorname{range}(D)$ and $(\lambda(0),\omega(0))\in\Phi$ for some $\beta>1$, then \begin{enumerate} \item\label{item:exist-unique} The solution exists and is unique for every $t\geqslant 0$; \item\label{item:invariance-K} $\lambda(t)\in \operatorname{range}(D)$ and $(\lambda(t),\omega(t))\in\Phi$ for any $t\geqslant 0$; \item\label{item:convergence-stability-K} $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ is stable, and $(\lambda(t),\omega(t))\rightarrow(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ as $t\rightarrow \infty$; \item\label{item:finite-time-active} The controllers become inactive in finite time, i.e., there exists a time $t_{0}>0$ such that $u_{i}(x(t),p)=0$ for all $t\geqslant t_{0}$ and all $i\in\mathfrak{C}$; \item\label{item:frequency-invariant} For any $i\in\mathfrak{C}$, if $\omega_{i}(0)\in[\underline\omega_{i},\bar\omega_{i}]$, then $\omega_{i}(t)\in[\underline\omega_{i},\bar\omega_{i}]$ for all $t> 0$; \item\label{item:frequency-attraction} For any $i\in\mathfrak{C}$, if $\omega_{i}(0)\not\in[\underline\omega_{i},\bar\omega_{i}]$, then $\omega_{i}(t)$ monotonically approaches $[\underline\omega_{i},\bar\omega_{i}]$. Furthermore, there exists a finite time $t_{1}>0$ such that $\omega_{i}(t)\in[\underline\omega_{i},\bar\omega_{i}]$ for all $t\geqslant t_{1}$. \end{enumerate} In addition, if~\ref{item:exist-unique} holds for $(\lambda(0),\omega(0))\not\in\Phi$, then~\ref{item:frequency-invariant} and the monotonic convergence in~\ref{item:frequency-attraction} still hold, but with no guarantee on the existence of a finite $t_{1}$. \end{theorem} \begin{pf} It is easy to see that~\eqref{eqn:stability-transient-controller-Lipschitz-4} guarantees $u_{i}(x,p)\leqslant 0$ if $\omega_{i}>\bar\omega_{i}^{\operatorname{th}}$, $u_{i}(x,p)=0$ if $\omega_{i}\in[\underline\omega_{i}^{\operatorname{th}}, \bar\omega_{i}^{\operatorname{th}}]$, and $u_{i}(x,p)\geqslant 0$ if $\omega_{i}<\underline\omega_{i}^{\operatorname{th}}$.
Therefore,~\eqref{ineq:stabilize-constraints-2} holds, as $\omega^{\infty}\in(\underline\omega^{\operatorname{th}}_{i}, \bar\omega^{\operatorname{th}}_{i})$. Hence \emph{\ref{item:exist-unique}-\ref{item:convergence-stability-K}} directly follow from Lemma~\ref{lemma:sufficient-stability-convergence} (Proposition~\ref{prop:L} justifies the Lipschitzness of the controller). To prove\emph{~\ref{item:finite-time-active}}, we use the convergence established in\emph{~\ref{item:convergence-stability-K}}. For $\epsilon = \min_{i\in\mathfrak{C}}\{\bar\omega_{i}^{\operatorname{th}} - \omega^{\infty},\omega^{\infty}-\underline\omega_{i}^{\operatorname{th}}\}$, there exists $t_{0}\in{\mathbb{R}}_{>}$ such that $\|(\lambda(t),\omega(t)) - (\lambda^{\infty},\omega^{\infty}\bold{1}_{n})\|_{2} < \epsilon$ for $ t\geqslant t_{0}$. Therefore, for any $i\in\mathfrak{C}$ and any $t\geqslant t_{0}$, $|\omega_{i}(t)-\omega^{\infty}| \leqslant \|(\lambda(t),\omega(t))-(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})\|_{2}< \epsilon\leqslant \min\{\bar\omega_{i}^{\operatorname{th}}- \omega^{\infty},\omega^{\infty}-\underline\omega_{i}^{\operatorname{th}}\}$, which implies $\underline\omega_{i}^{\operatorname{th}}\leqslant \omega_{i}(t)\leqslant\bar\omega_{i}^{\operatorname{th}}$ for $ t\geqslant t_{0}$. The result now follows from the definition~\eqref{eqn:stability-transient-controller-Lipschitz-4} of the controller. Regarding~\emph{\ref{item:frequency-invariant}}, the controller~\eqref{eqn:stability-transient-controller-Lipschitz-4} satisfies~\eqref{ineq:invariance-condition-unsymmetric-3-a} if $\bar\omega_{i}^{\operatorname{th}}<\omega_{i}\leqslant \bar\omega_{i}$ and satisfies~\eqref{ineq:invariance-condition-unsymmetric-3-b} if $ \underline\omega_{i}\leqslant \omega_{i}< \underline\omega_{i}^{\operatorname{th}}$; hence, by Lemma~\ref{lemma:sufficent-frequecy-invariance}, both $\bar{\mathcal{C}}_{i}$ and $\underline{\mathcal{C}}_{i}$ are invariant. Proving monotonicity in~\emph{\ref{item:frequency-attraction}} is equivalent to showing that $\dot\omega_{i}(t)\leqslant 0$ when $\omega_{i}(t)>\bar\omega_{i}$ and $\dot\omega_{i}(t)\geqslant 0$ when $\omega_{i}(t)<\underline\omega_{i}$. For simplicity we only prove the first case. Note that $u_{i}(x,p)\leqslant \frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}} + q_{i}(x,p)$. Plugging this into~(\ref{eqn:dynamics-2b}) and using $\omega_{i}>\bar\omega_{i}$, one has \begin{align}\label{ineq:dynamics-bound} M_{i}\dot\omega_{i}\leqslant \frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}}\leqslant 0, \end{align} establishing monotonicity (notice that the inequality holds even if the initial condition does not belong to~$\Phi$). Finally, since $\omega^{\infty}\in(\underline\omega^{\operatorname{th}}_{i}, \bar\omega^{\operatorname{th}}_{i})$ and $\omega_{i}(t)\rightarrow\omega^{\infty}$ for every $i\in\mathcal{I}$, there exists $t_{1}$ such that $\omega_{i}(t_{1}) \in [\underline\omega^{\operatorname{th}}_{i},\bar\omega^{\operatorname{th}}_{i}]\subset[\underline\omega_{i},\bar\omega_{i}]$, which, by~\emph{\ref{item:frequency-invariant}}, further implies that $\omega_{i}(t) \in [\underline\omega_{i},\bar\omega_{i}]$ for every $t\geqslant t_{1}$.
\qed \end{pf} \begin{remark}\longthmtitle{Performance trade-offs via selection of class-$\mathcal{K}$ functions}\label{rmk:linear-class-K} {\rm As pointed out in Section~\ref{sec:constraint-freq}, the choice of class-$\mathcal{K}$ functions affects the system behavior. To illustrate this, consider the linear choice $\bar\alpha_{i}=\underline\alpha_{i}:{\mathbb{R}}\rightarrow{\mathbb{R}},\ s \mapsto \gamma_{i}s$, where $\gamma_{i}>0$ is a design parameter. A smaller $\gamma_{i}$ leads to more stringent requirements on the derivative of the frequency. This is because $u_{i}(x,p)$ can be non-zero only when either of the following happens, \begin{align*} \frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}} + q_{i}(x,p)<0\text{ and } \omega_{i}>\bar\omega_{i}^{\operatorname{th}}, \\ \frac{\underline\alpha_{i}(\underline\omega_{i} - \omega_{i})}{\underline\omega_{i}^{\operatorname{th}}-\omega_{i}} + q_{i}(x,p)>0\text{ and } \omega_{i}<\underline\omega_{i}^{\operatorname{th}}. \end{align*} In the first case, the term $\frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}} = \frac{\gamma_{i}(\bar\omega_{i} - \omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}}>0$ becomes smaller as $\gamma_{i}$ decreases, making its sum with $q_{i}(x,p)$ more likely to be negative, and resulting in an earlier activation of~$u_{i}$. The second case follows similarly. A small $\gamma_{i}$ may also lead to a high control magnitude, because it prescribes a smaller bound on the frequency derivative, which in turn may require a larger control effort. However, choosing a large $\gamma_{i}$ may cause the controller to be highly sensitive to $\omega_{i}$. This is because the absolute value of the partial derivative of $\frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i}-\bar\omega_{i}^{\operatorname{th}}}$ (resp. $\frac{\underline\alpha_{i}(\underline\omega_{i} - \omega_{i})}{\underline\omega_{i}^{\operatorname{th}}-\omega_{i}}$) with respect to $\omega_{i}$ grows proportionally with $\gamma_{i}$; consequently, when $u_{i}(x,p)$ is non-zero, its sensitivity to $\omega_{i}$ increases as $\gamma_{i}$ grows, resulting in low tolerance against slight changes in~$\omega_{i}$. In the limit, as $\gamma_{i}\rightarrow \infty$, this yields \begin{align}\label{eqn:stability-transient-controller-infinite} u_{i}^{\infty}(x,p) = \begin{cases} \min\{0,q_{i}(x,p)\} & \omega_{i}= \bar\omega_{i}, \\ 0 & \underline\omega_{i}< \omega_{i}< \bar\omega_{i}, \\ \max\{0, q_{i}(x,p)\} & \omega_{i}=\underline\omega_{i}, \end{cases} \end{align} which in general is discontinuous. We illustrate in simulation the dependence of the controller on the choice of linear class-$\mathcal{K}$ functions in Section~\ref{sec:simulations}.} \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \section{Closed-loop performance analysis}\label{sec:performance} In this section, we characterize additional properties of the closed-loop system under the proposed distributed controller beyond stability and frequency invariance. We characterize the attractivity rate of trajectories for initial conditions outside the safe frequency region, the boundedness of the control effort prescribed by the controller along the system trajectories, and its robustness against measurement and parameter uncertainty.
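Throughout this analysis, it is useful to keep the concrete form of the closed-loop input in mind. The following Python sketch implements the controller~\eqref{eqn:stability-transient-controller-Lipschitz-4} at a single controlled bus with the linear class-$\mathcal{K}$ choice of Remark~\ref{rmk:linear-class-K}; the function and parameter names are our own illustrative assumptions, and the flow term is assumed to be measured as described in Remark~\ref{rmk:control-realization}. \begin{verbatim}
def u_i(omega_i, flow_i, p_i, E_i, pars):
    """Controller (14) at one bus, with linear class-K functions
    alpha(s) = gamma * s.  'flow_i' is the measured aggregate flow
    [D^T Yb sin(lambda)]_i; 'pars' collects the (assumed) design
    choices: w_lo, w_hi, w_lo_th, w_hi_th, gamma."""
    q_i = E_i * omega_i + flow_i - p_i
    if omega_i > pars['w_hi_th']:       # upper-threshold region
        barrier = pars['gamma'] * (pars['w_hi'] - omega_i) \
                  / (omega_i - pars['w_hi_th'])
        return min(0.0, barrier + q_i)
    if omega_i < pars['w_lo_th']:       # lower-threshold region
        barrier = pars['gamma'] * (pars['w_lo'] - omega_i) \
                  / (pars['w_lo_th'] - omega_i)
        return max(0.0, barrier + q_i)
    return 0.0                          # inactive between the thresholds
\end{verbatim} In line with Remark~\ref{rmk:linear-class-K}, decreasing the gain $\gamma_{i}$ shrinks the barrier term and hence activates the controller earlier, while increasing it makes the returned value more sensitive to small variations in~$\omega_{i}$.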
\subsection{Estimation of the attractivity rate}\label{subsection:att-rate} Here we provide an estimate of the convergence rate to the safe region (cf. Theorem~\ref{thm:decentralized-controller}\ref{item:frequency-attraction}) when the frequency of a node is initially outside it. The next result identifies a specific trajectory bounding the frequency evolution. \begin{lemma}\longthmtitle{Upper bound on frequency evolution}\label{lemma:frequency-attractivity} With the notation of Theorem~\ref{thm:decentralized-controller}, assume that for some $i\in\mathfrak{C}$, $\omega_{i}(0)>\bar\omega_{i}$. Let $z_{i}(t)$ be the unique solution of \begin{align}\label{eqn:frequency-upper-bound} M_{i}\dot z_{i}(t)= \frac{-\bar\alpha_{i}(z_{i}(t)-\bar\omega_{i})}{z_{i}(t)-\bar\omega_{i}^{\operatorname{th}}},\ z_{i}(0)=\omega_{i}(0). \end{align} Then it holds that $\omega_{i}(t)\leqslant z_{i}(t)$ for any $t\geqslant 0$. Furthermore, $z_{i}(t)$ converges to $\bar\omega_{i}$ monotonically without reaching it in finite time. \end{lemma} \begin{pf} It is easy to check that if $z_{i}(0)>\bar\omega_{i}$, then there exists a unique solution of~(\ref{eqn:frequency-upper-bound}) for every $t\geqslant 0$. Since~\eqref{ineq:dynamics-bound} holds for every $i\in\mathfrak{C}$, by the Comparison Lemma~\cite[Lemma~3.4]{HKK:02}, one has that $\omega_{i}(t)\leqslant z_{i}(t)$ for any $t\geqslant 0$. On the other hand, one can easily prove via Lemma~\ref{lemma:Nagumo} that the set $\left\{ z_{i} \big| \bar\omega_{i}-z_{i}\leqslant0 \right\}$ is invariant, which, together with the fact that $z_{i}(0)>\bar\omega_{i}$, implies $z_{i}(t)\geqslant \bar\omega_{i}$ for every $t\geqslant 0$. By the dynamics~\eqref{eqn:frequency-upper-bound}, we deduce $\dot z_{i}(t)\leqslant 0$ for every $t\geqslant 0$ and the monotonicity follows. Since $z_{i}(t)$ is monotonically decreasing and lower-bounded, $z_{i}(t)$ is convergent, with limit $\bar\omega_{i}$ (since $\dot z_{i}(t)<0$ if $z_{i}(t)\neq \bar\omega_{i}$). Finally, since the uniqueness of trajectories is guaranteed by the Lipschitzness of the dynamics~\eqref{eqn:frequency-upper-bound} and $\bar\omega_{i}$ is an equilibrium, it follows that $z_{i}(t)>\bar\omega_{i}$ for any $t\geqslant 0$. \qed \end{pf} A similar statement holds for the case when the initial frequency is lower than the lower safe bound, but we omit it for brevity. When $\bar\alpha_{i}$ is linear, the next result provides an explicit expression for the bounding trajectory. \begin{corollary}\longthmtitle{Estimation of frequency convergence rate with linear class-$\mathcal{K}$ function}\label{cor:frequency-upper-bound-linear} With the notation of Lemma~\ref{lemma:frequency-attractivity}, if $\bar\alpha_{i}(s)=\bar\gamma_{i} s$ with $\bar\gamma_{i}>0$, then $z_{i}(t)$ is uniquely determined by \begin{align}\label{ineq:upper-bound-z-4} z_{i}(t)+(\bar\omega_{i}-\bar\omega_{i}^{\operatorname{th}}) \ln\left(\frac{z_{i}(t)-\bar\omega_{i}}{\omega_{i}(0)-\bar\omega_{i}}\right) = -\bar\gamma_{i}t/M_{i}+\omega_{i}(0). \end{align} Furthermore, it holds that for any $t\geqslant 0$, \begin{align*} z_{i}(t)\leqslant \bar\omega_{i} + (\omega_{i}(0)-\bar\omega_{i}) \exp \Big(\frac{-\bar\gamma_{i} t/M_{i}+ \omega_{i}(0)-\bar\omega_{i}}{\bar\omega_{i} -\bar\omega_{i}^{\operatorname{th}}} \Big) . \end{align*} \end{corollary} \begin{pf} In the case where $\bar\alpha_{i}(s)=\bar\gamma_{i}s$, by separation of variables, one has that~\eqref{eqn:frequency-upper-bound} is equivalent to \begin{align*} \frac{z_{i}-\bar\omega_{i}^{\operatorname{th}}}{z_{i}-\bar\omega_{i}}\text{d}z_{i} = -\bar\gamma_{i}\text{d}t/M_{i},\ z_{i}(0)=\omega_{i}(0). \end{align*} Equation~\eqref{ineq:upper-bound-z-4} follows by integrating the above differential equation. Since, by Lemma~\ref{lemma:frequency-attractivity}, $z_{i}(t)\geqslant\bar\omega_{i}$ for every $t\geqslant 0$, it holds that \begin{align*} \bar\omega_{i}+(\bar\omega_{i} - \bar\omega_{i}^{\operatorname{th}}) \ln\left(\frac{z_{i}(t)-\bar\omega_{i}}{\omega_{i}(0)-\bar\omega_{i}}\right)\leqslant -\bar\gamma_{i}t/M_{i}+\omega_{i}(0), \end{align*} concluding the proof. \qed \end{pf} \begin{remark}\longthmtitle{Estimation of safe-frequency entry time}\label{rmk:incap-finite-time} {\rm Corollary~\ref{cor:frequency-upper-bound-linear} establishes the exponential rate at which the frequency evolution converges to the safe region, but it does not provide an estimate of the finite entry time $t_1$ stated in Theorem~\ref{thm:decentralized-controller}\ref{item:frequency-attraction}. This is because the upper-bound signal $z_{i}$ never hits $\bar\omega_{i}$ in finite time. This drawback is caused by the fact that the existence of $t_{1}$ is justified by the combination of frequency invariance and convergence of the closed-loop system (cf. the proof of Theorem~\ref{thm:decentralized-controller}\ref{item:frequency-attraction}), and we do not utilize the latter in obtaining the upper-bound signal. To fix this, one may replace $\bar\omega_{i}$ by $\bar\omega_{i}-\epsilon_{i}$ in~\eqref{eqn:stability-transient-controller-Lipschitz-4}, with $\epsilon_{i}\in{\mathbb{R}}_{>}$, and determine $t_{1}$ by solving $z_{i}(t_{1})=\bar\omega_{i}$ along the correspondingly modified dynamics~(\ref{eqn:frequency-upper-bound}). Note that, although this procedure does not jeopardize any statement in Theorem~\ref{thm:decentralized-controller}, it actually puts a stricter frequency invariance requirement on the controller. } \relax\ifmmode\else\unskip\hfill\fi\oprocendsymbol \end{remark} \subsection{Bounds on controller magnitude}\label{subsection:magnitude} Here, we provide bounds on the amplitude of the proposed controller~\eqref{eqn:stability-transient-controller-Lipschitz-4} along the system trajectories for a given constant power injection profile~$p$. Our approach is to constrain the allowable initial conditions by employing the energy function $V$ as a measure of how far an initial state can be from the equilibrium point. Formally, let \begin{align*} \hat\Phi(\eta)\triangleq\setdef{(\lambda,\omega)}{\lambda\in \Gamma_{\operatorname{cl}},\ V(\lambda,\omega)\leqslant \eta} \end{align*} be the collection of allowable initial states, where $0\leqslant \eta<c$. The next result bounds the control input as a function of~$\eta$. \begin{lemma}\longthmtitle{Lower bound on control effort}\label{lemma:lower-bound} For $i\in\mathfrak{C}$, let $g_{i}(\lambda,\omega)\triangleq \frac{-\bar\alpha_{i}(\omega_{i} - \bar\omega_{i})}{\omega_{i} -\bar\omega_{i}^{\operatorname{th}}} + q_{i}(x,p)$, where $x=(\lambda,\omega)$, and $d_{i}\triangleq\frac{1}{2}M_{i}(\bar\omega_{i}^{\operatorname{th}} - \omega^{\infty})^{2}$.
Let $(\lambda^{*},\omega^{*})$ be the optimal solution~of \begin{subequations}\label{sube:opti-orig} \begin{alignat}{2} \mathbf{(Q)} \hspace{15mm} &\min_{(\lambda,\omega)} & \quad & g_{i}(\lambda,\omega)\notag \\ &\text{s.t.}&\quad & (\lambda,\omega)\in\hat\Phi(\eta)\label{opti-orig-1} \\ &&& \lambda\in \operatorname{range}(D) \label{opti-orig-2} \\ &&& \omega_{i}>\bar\omega_{i}^{\operatorname{th}} \label{opti-orig-3} \end{alignat} \end{subequations} and define \begin{align} u_{i}^{\min}(\eta)\triangleq \begin{cases} 0 & \hspace{1.2cm}\text{if $0\leqslant \eta\leqslant d_{i}$,} \\ \min\{0,g_{i}(\lambda^{*},\omega^{*})\} & \hspace{1.2cm} \text{if $d_{i}<\eta<c$.} \end{cases}\label{case:u-min} \end{align} Then, for any $(\lambda(0),\omega(0))\in\hat\Phi(\eta)$ with $\lambda(0)\in \operatorname{range}(D)$, \begin{align}\label{ineq:lower-bound} u_{i}(x(t),p)\geqslant u_{i}^{\min}(\eta), \end{align} for any $t\geqslant 0$, and there exist initial states such that equality holds at some $t\geqslant 0$. \end{lemma} \begin{pf} Note that by Theorem~\ref{thm:decentralized-controller} with $\beta=c/\eta>1$, one has $(\lambda(t),\omega(t))\in\hat\Phi(\eta)$ and $\lambda(t)\in \operatorname{range}(D)$ for every $t> 0$, provided they hold at~$t=0$. Therefore, to show~\eqref{ineq:lower-bound} for every $t\geqslant 0$, it suffices to show that it holds for~$t=0$. If $0\leqslant \eta\leqslant d_{i}$, then $\frac{1}{2}M_{i}(\omega_{i}(0)-\omega^{\infty})^{2}\leqslant V(\lambda(0),\omega(0))\leqslant d_{i}= \frac{1}{2}M_{i}(\bar\omega_{i}^{\operatorname{th}}-\omega^{\infty})^{2}$, which implies $\omega_{i}(0)\leqslant \bar\omega_{i}^{\operatorname{th}}$; therefore, $u_{i}(x(0),p)\geqslant 0$ follows by~\eqref{eqn:stability-transient-controller-Lipschitz-4}. Also, $u_{i}(x(0),p)$ can be $0$, e.g., when $x(0)=(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$. In the other case, if $d_{i}<\eta<c$, then $u_{i}(x(0),p)$ is lower bounded by the optimal value of \begin{alignat}{2}\label{sube:u-bound} \mathbf{(\hat{Q})} \hspace{15mm} &\min_{\lambda,\omega} & \quad & u_{i}(x,p)\notag \\ &\text{s.t.}&\quad &~\eqref{opti-orig-1}\text{ and}~\eqref{opti-orig-2}. \end{alignat} Denote this optimal value by $v_{i}(\eta)$. Also, the value of $u_{i}(x(0),p)$ can be exactly $v_{i}(\eta)$, e.g., when $x(0)$ is an optimal solution of $(\hat Q)$. Note that $v_{i}(\eta) \leqslant 0$, as $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ satisfies the constraints of~\eqref{sube:u-bound} and $u_{i}((\lambda^{\infty},\omega^{\infty}\bold{1}_{n}),p)=0$. Since it holds that a) $u_{i}(x,p)\geqslant 0$ for any $\omega_{i}\leqslant \bar\omega_{i}^{\operatorname{th}}$, and b) $u_{i}(x,p)\leqslant 0$ for any $\omega_{i}\geqslant \bar\omega_{i}^{\operatorname{th}}$, one can, without changing the optimal value, replace $u_{i}(x,p)$ by $\min\{0,g_{i}(\lambda,\omega)\}$ in $(\hat Q)$ and meanwhile add the additional constraint~\eqref{opti-orig-3}. A short argument then shows that the optimal value of this new optimization problem is exactly $\min\{0,g_{i}(\lambda^{*},\omega^{*})\}$. \qed \end{pf} Note that the lower bound $u_{i}^{\min}(\eta)$ on the control amplitude depends nonlinearly on the power injection~$p$. This is because, although the objective function of the optimization problem $(Q)$ depends linearly on $p$, the optimal value depends nonlinearly on $p$ through the constraint~\eqref{opti-orig-1}. This is due to the fact that the equilibrium $(\lambda^{\infty},\omega^{\infty}\bold{1}_{n})$ depends on $p$ through the transcendental equation~\eqref{eqn:lambda-solution}.
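Since the dependence on $p$ enters through~\eqref{eqn:lambda-solution}, a simple numerical route to $\lambda^{\infty}$ is to parametrize $\lambda = D\theta$ (so that $\lambda\in\operatorname{range}(D)$ holds by construction), fix a reference angle, and solve the resulting square system of equations. The following Python sketch is one such route; it assumes $D$, $Y_{b}$, and $\tilde p$ are available as numpy arrays, that condition~\eqref{ineq:sufficient-eq} holds, and it is not a unique or prescribed method. \begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def lambda_inf(D, Yb, p_tilde):
    """Solve p_tilde = D^T Yb sin(D theta) with theta_n = 0 as the
    reference angle.  Both sides sum to zero (1^T D^T = 0 and
    1^T p_tilde = 0), so the last equation is redundant and dropped."""
    n = D.shape[1]
    def residual(th):
        theta = np.append(th, 0.0)          # append the reference angle
        return (D.T @ Yb @ np.sin(D @ theta) - p_tilde)[:-1]
    th = fsolve(residual, np.zeros(n - 1))  # start from the flat profile
    return D @ np.append(th, 0.0)           # lambda_inf lies in range(D)
\end{verbatim}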
A similar result can be stated for an upper bound on the controller magnitude, but we omit it for brevity. The problem $(Q)$ is non-convex due to the non-convexity of the objective function. We next show that its optimal value equals that of another optimization problem with a convex objective function and a non-convex feasible set. Define the function $h_{i}:{\mathbb{R}}^{m}\times{\mathbb{R}}^{n}\rightarrow{\mathbb{R}},\ (z,\omega)\mapsto h_{i}(z,\omega)$ exactly as $g_{i}$, but replacing $\sin\lambda$ by $z$ in the definition of~$q_i$. In this way, $h_{i}(\sin\lambda,\omega) = g_{i}(\lambda,\omega)$. Let $\mathcal{D}_{i}^{+}\triangleq\left\{ j \big| [D^{T}Y_{b}]_{ij}> 0 \right\}$ and $\mathcal{D}_{i}^{-}\triangleq\left\{ j \big| [D^{T}Y_{b}]_{ij}<0 \right\}$. Consider the optimization problem \begin{subequations}\label{sube:opti:lossless-relax} \begin{alignat}{2} \mathbf{(R)} \hspace{15mm} &\min_{(z,\lambda,\omega)} & \quad & h_{i}(z,\omega)\notag \\ &\text{s.t.}&\quad &\sin\lambda_{j}\leqslant z_{j},\ \forall j\in\mathcal{D}^{+}_{i},\label{ineq:positive-relax} \\ &&& \sin\lambda_{j}\geqslant z_{j},\ \forall j\in\mathcal{D}^{-}_{i}, \label{ineq:negative-relax} \\ &&& \eqref{opti-orig-1}\text{ to}~\eqref{opti-orig-3}. \end{alignat} \end{subequations} We claim that the optimal value of this problem is the same as that of $(Q)$. The claim holds if every optimal solution of $(R)$, denoted $(z^{\sharp},\lambda^{\sharp},\omega^{\sharp})$, satisfies~\eqref{ineq:positive-relax} and~\eqref{ineq:negative-relax} with equality signs. This has to be the case since, for instance, if $\sin\lambda_{k}^{\sharp}<z_{k}^{\sharp}$ for some $k\in\mathcal{D}^{+}_{i}$, then $(z^{\sharp},\lambda^{\sharp},\omega^{\sharp})$ can no longer be an optimal solution: the point $(\hat z^{\sharp},\lambda^{\sharp},\omega^{\sharp})$, where $\hat z^{\sharp}$ differs from $z^{\sharp}$ only in its $k$th component, $\hat z^{\sharp}_{k} = \sin\lambda_{k}^{\sharp}$, satisfies $h_{i}(\hat z^{\sharp},\omega^{\sharp})<h_{i}(z^{\sharp},\omega^{\sharp})$, violating optimality. Our next step is to convexify $(R)$. Here we assume that $\omega_{i}\mapsto \frac{-\bar\alpha_{i} (\omega_{i}-\bar\omega_{i})}{\omega_{i} - \bar\omega_{i}^{\operatorname{th}}}$ is convex in the region $\omega_{i}>\bar\omega_{i}^{\operatorname{th}}$, which suffices to guarantee the convexity of $(z,\omega)\mapsto h_{i}(z,\omega)$ under the constraints in~\eqref{sube:opti:lossless-relax} (this convexity assumption holds if, for instance, $\bar\alpha_{i}$ is a linear function). To handle the non-convexity of the constraints~\eqref{ineq:positive-relax} and~\eqref{ineq:negative-relax}, in the following two results we separately provide inner and outer approximations, leading to upper and lower approximations of the optimal value of~$(R)$, and equivalently~$(Q)$. \begin{lemma}\longthmtitle{Upper bound of optimal value}\label{lemma:upper-optimal} Define $\mathcal{H}^{+} \triangleq \{ (a,b)\big|\ |a|\leqslant\pi/2,\ \sin a\leqslant b\text{ if }a\in[-\pi/2,0),\text{ and } a\leqslant b\text{ if } a\in[0,\pi/2] \}$ and $\mathcal{H}^{-}\triangleq \{ (a,b)\big|\ |a|\leqslant\pi/2,\ a\geqslant b\text{ if }a\in[-\pi/2,0),\text{ and } \sin a\geqslant b\text{ if } a\in[0,\pi/2] \}$.
Consider the convex optimization problem \begin{subequations}\label{sube:opti:lossless-relax-2} \begin{alignat}{2} \mathbf{(\bar R)} \hspace{15mm} &\min_{(z,\lambda,\omega)} & \quad & h_{i}(z,\omega)\notag \\ &\text{s.t.}&\quad &(\lambda_{j}, z_{j})\in\mathcal{H}^{+},\ \forall j\in\mathcal{D}^{+}_{i},\label{ineq:positive-relax-2} \\ &&& (\lambda_{j}, z_{j})\in\mathcal{H}^{-},\ \forall j\in\mathcal{D}^{-}_{i}, \label{ineq:negative-relax-2} \\ &&& \eqref{opti-orig-1}\text{ to}~\eqref{opti-orig-3}, \end{alignat} \end{subequations} and denote its optimal solution by $(z^{o},\lambda^{o},\omega^{o})$. Then it holds that $h_{i}(z^{o},\omega^{o})\geqslant g_{i}(\lambda^{o},\omega^{o})\geqslant g_{i}(\lambda^{*},\omega^{*})$. \end{lemma} \begin{pf} The second inequality holds since $(\lambda^{o},\omega^{o})$ satisfies \eqref{opti-orig-1} to~\eqref{opti-orig-3}, making it a feasible point of~$(Q)$. To show the first inequality, one can easily check that for any $j\in\mathcal{D}^{+}_{i}$, if $(\lambda_{j},z_{j})\in\mathcal{H}^{+}$, then $\sin \lambda_{j}\leqslant z_{j}$ (cf. Figure~\ref{fig:sin-inner}); therefore,~\eqref{ineq:positive-relax-2} is stricter than~\eqref{ineq:positive-relax}. Similarly,~\eqref{ineq:negative-relax-2} is stricter than~\eqref{ineq:negative-relax}. Therefore, $[D^{T}Y_{b}]_{ij}z^{o}_{j}\geqslant [D^{T}Y_{b}]_{ij}\sin\lambda_{j}^{o}$ holds for any $j\in[1,m]_{{\mathbb{N}}}$, completing the proof, since then $h_{i}(z^{o},\omega^{o})\geqslant h_{i}(\sin\lambda^{o},\omega^{o})=g_{i}(\lambda^{o},\omega^{o})$. \qed \end{pf} \begin{lemma}\longthmtitle{Lower bound of optimal value} \label{lemma:lower-optimal} Define $\mathcal{M}^{+}_{0}\triangleq \{ (a,b)\big|\ -\pi/2<a\leqslant 0,\ \sin a\leqslant b\}$, $\mathcal{M}^{+}_{1}\triangleq \{ (a,b)\big|\ 0\leqslant a\leqslant \pi/2,\ 2a/\pi \leqslant b\}$, $\mathcal{M}^{-}_{0}\triangleq\{ (a,b)\big|\ -\pi/2<a\leqslant 0,\ 2a/\pi \geqslant b\}$, and $\mathcal{M}^{-}_{1}\triangleq\{ (a,b)\big|\ 0\leqslant a\leqslant \pi/2,\ \sin a \geqslant b\}$. For each $\mu \triangleq \{\mu_{j}\}_{j\in\mathcal{D}^{+}_{i}\bigcup\mathcal{D}^{-}_{i}}$ with $\mu_{j}\in\{0,1\}$, consider the convex optimization problem \begin{subequations}\label{sube:opti:relax-3} \begin{alignat}{2} \mathbf{(\underline R^{\mu})} \hspace{15mm} &\min_{(z,\lambda,\omega)} & \quad & h_{i}(z,\omega)\notag \\ &\text{s.t.}&\quad &(\lambda_{j}, z_{j})\in\mathcal{M}^{+}_{\mu_{j}},\ \forall j\in\mathcal{D}^{+}_{i},\label{ineq:positive-relax-3} \\ &&& (\lambda_{j}, z_{j})\in\mathcal{M}^{-}_{\mu_{j}},\ \forall j\in\mathcal{D}^{-}_{i}, \label{ineq:negative-relax-3} \\ &&& \eqref{opti-orig-1}\text{ to}~\eqref{opti-orig-3}, \end{alignat} \end{subequations} and denote its optimal solution by $(\underline z^{\mu},\underline\lambda^{\mu},\underline\omega^{\mu})$. Let $\mu^{*}\triangleq\arg\min_{\mu}h_{i}(\underline z^{\mu},\underline\omega^{\mu})$; then $h_{i}(\underline z^{\mu^{*}},\underline\omega^{\mu^{*}})\leqslant g_{i}(\lambda^{*},\omega^{*})$. \end{lemma} \begin{pf} Define \begin{subequations}\label{opti:relax-4} \begin{alignat}{2} \mathbf{(\underline R)} \hspace{15mm} &\min_{(z,\lambda,\omega)} & \quad & h_{i}(z,\omega)\notag \\ &\text{s.t.}&\quad &(\lambda_{j}, z_{j})\in\mathcal{M}^{+}_{0} \cup \mathcal{M}^{+}_{1},\ \forall j\in\mathcal{D}^{+}_{i},\label{ineq:positive-relax-4} \\ &&& (\lambda_{j}, z_{j})\in\mathcal{M}^{-}_{0} \cup \mathcal{M}^{-}_{1},\ \forall j\in\mathcal{D}^{-}_{i}, \label{ineq:negative-relax-4} \\ &&& \eqref{opti-orig-1}\text{ to}~\eqref{opti-orig-3}. \end{alignat} \end{subequations} One can easily see that~\eqref{ineq:positive-relax}-\eqref{ineq:negative-relax} are stricter than~\eqref{ineq:positive-relax-4}-\eqref{ineq:negative-relax-4} (cf. Figure~\ref{fig:sin-outer}). Hence the optimal value of $(\underline R)$ lower bounds $g_{i}(\lambda^{*},\omega^{*})$. Notice that~\eqref{ineq:positive-relax-3}-\eqref{ineq:negative-relax-3} simply split~\eqref{ineq:positive-relax-4}-\eqref{ineq:negative-relax-4} into convex regions, and hence $(\underline z^{\mu^{*}},\underline\lambda^{\mu^{*}},\underline\omega^{\mu^{*}})$ is also an optimal solution of $(\underline R)$. \qed \end{pf} \begin{figure}[htb] \centering% \subfigure[\label{fig:sin-inner}]{\includegraphics[width=.49\linewidth]{epsfiles/convexification-sin.png}} \subfigure[\label{fig:sin-outer}]{\includegraphics[width=.49\linewidth]{epsfiles/convexification-sin-outer.png}} \caption{Tightening and relaxation of a sinusoidal non-convex constraint. In plot~\subref{fig:sin-inner}, within $|a|\leqslant\pi/2$, by ignoring the gray region delimited by $b=a$, $b=\sin (a)$ and $a=\pi/2$, the non-convex set characterized by $\sin(a)\leqslant b$ appearing in~\eqref{ineq:positive-relax} contains the red convex subset~$\mathcal{H}^{+}$. On the other hand, in plot~\subref{fig:sin-outer}, this non-convex set is contained in the blue region. Each of the blue regions separated by the dotted line at $a=0$ is convex.}\label{fig:sin-appr} \end{figure} Together, Lemmas~\ref{lemma:upper-optimal} and~\ref{lemma:lower-optimal} provide us with efficient ways of approximating the bound~$u_{i}^{\min}(\eta)$ on the control effort. \subsection{Robustness to measurement and parameter uncertainty} Here we study the controller performance under measurement and parameter uncertainty. This is motivated by scenarios where the state or the power injection may not be precisely measured, or where some system parameters, like the damping coefficients, are only approximately known. Formally, we let $\hat x=(\hat\lambda,\hat\omega)$, $\hat p$, and $\hat E$ be the measured or estimated state, power injection, and damping parameters, respectively. For every $i\in\mathfrak{C}$, we introduce the error variables \begin{align*} \epsilon^{\omega}_{i}&\triangleq \hat\omega_{i}-\omega_{i}, & \epsilon^{\lambda}_{i}&\triangleq [D^{T}Y_{b}]_{i}\sin\hat\lambda-[D^{T}Y_{b}]_{i}\sin\lambda, \\ \epsilon^{p}_{i}&\triangleq \hat p_{i}-p_{i}, & \epsilon^{E}_{i}&\triangleq \hat E_{i}-E_{i}. \end{align*} We make the following assumption regarding the errors. \begin{assumption}\longthmtitle{Bounded uncertainties}\label{assumption:bounded-uncertain} For each $i\in\mathfrak{C}$, \begin{enumerate} \item\label{item:uncertain-bound} the uncertainties are piecewise continuous and bounded: $|\epsilon^{\omega}_{i}(t)|\leqslant \bar\epsilon^{\omega}_{i}$, $|\epsilon^{\lambda}_{i}(t)|\leqslant \bar\epsilon^{\lambda}_{i}$, $|\epsilon^{p}_{i}(t)|\leqslant \bar\epsilon^{p}_{i}$, and $|\epsilon^{E}_{i}(t)|\leqslant \bar\epsilon^{E}_{i}$ for all $t\geqslant 0$; \item\label{item:robust-frequency-bound} $\omega^{\infty} \in (\underline\omega_{i}^{\operatorname{th}} + \bar\epsilon^{\omega}_{i},\bar\omega_{i}^{\operatorname{th}}-\bar\epsilon^{\omega}_{i})$; \item\label{item:uncertain-freqeuncy-bound} $\bar\epsilon^{\omega}_{i}< \min\{\bar\omega_{i}-\bar\omega_{i}^{\operatorname{th}}, \underline\omega_{i}^{\operatorname{th}}-\underline\omega_{i}\}$.
\end{enumerate} \end{assumption} Condition~\emph{\ref{item:uncertain-bound}} provides uniform bounds on the uncertainties;~\emph{\ref{item:robust-frequency-bound}} ensures that, even with uncertainty, the control input is identically 0 around the equilibrium; and~\emph{\ref{item:uncertain-freqeuncy-bound}} guarantees that the control input is always non-singular. For convenience, we use~$\hat u_{i}(\hat x,\hat p(t))$ to refer to the controller with the same functional expression as~\eqref{eqn:stability-transient-controller-Lipschitz-4} but implemented with the approximate parameter values and evaluated at the inaccurate state $\hat x$ and power injection $\hat p(t)$. Notice that $\hat p(t)$ can be time-varying. The next result shows that $\hat u_{i}$ still stabilizes the power network and enforces the satisfaction of a relaxed frequency invariance condition. For simplicity, we restrict our attention to linear class-$\mathcal{K}$ functions in the controller design. \begin{proposition}\longthmtitle{Robust stability and frequency invariance under uncertainty}\label{prop:robust-uncertainty} Under condition~\eqref{ineq:sufficient-eq} and Assumption~\ref{assumption:bounded-uncertain}, consider the evolution of the system~\eqref{eqn:dynamics-2} with the controller $\hat u_{i}$ for each $i\in\mathfrak{C}$. Then the following results hold, provided $\lambda(0) \in \operatorname{range}(D)$ and $(\lambda(0),\omega(0))\in\Phi$ for some $\beta>1$: \begin{enumerate} \item\label{item:robust-sol-existence} The solution exists and is unique for every $t\geqslant 0$. \item\label{item:robust-invariance-K} $\lambda(t)\in \operatorname{range}(D)$ and $(\lambda(t),\omega(t))\in\Phi$ for any $t\geqslant 0$. \item\label{item:robust-stability} $(\lambda^{\infty},\ \omega^{\infty}\mathbf{1}_{n})$ is stable, and $\left(\lambda(t),\ \omega(t)\right)$ converges to $(\lambda^{\infty},\ \omega^{\infty}\mathbf{1}_{n})$. \item\label{item:robust-finite-time} There exists a finite time $t_{2}$ such that $\hat u_{i}(\hat x(t),\hat p(t))=0$ for every $t\geqslant t_{2}$ and every $i\in\mathfrak{C}$. \item\label{item:robust-invariance} Suppose $\bar\alpha_{i}(s) = \underline\alpha_{i}(s)=\gamma_{i}s$ for every $i\in\mathfrak{C}$. Then, if there exists $\Delta>0$ satisfying \begin{subequations}\label{sube:ineq:robust-invariance} \begin{align} \hspace{-1.2cm} \frac{-\gamma_{i}(\bar\epsilon^{\omega}_{i} + \Delta)}{\bar\omega_{i}-\bar\omega_{i}^{\operatorname{th}} + \Delta+\bar\epsilon^{\omega}_{i}}+\bar\epsilon^{E}_{i}(\Delta+\bar\omega_{i}) + \hat E_{i}\bar\epsilon^{\omega}_{i}+\bar\epsilon^{\lambda}_{i} + \bar\epsilon^{p}_{i}\leqslant 0,\label{sube:ineq:robust-invariance-a} \\ \hspace{-1.2cm} \frac{-\gamma_{i}(\bar\epsilon^{\omega}_{i} + \Delta)}{\underline\omega_{i}^{\operatorname{th}}-\underline\omega_{i} + \Delta + \bar\epsilon^{\omega}_{i}} + \bar\epsilon^{E}_{i}(\Delta-\underline\omega_{i})+\hat E_{i}\bar\epsilon^{\omega}_{i} + \bar\epsilon^{\lambda}_{i}+\bar\epsilon^{p}_{i}\leqslant 0,\label{sube:ineq:robust-invariance-b} \end{align} then $\omega_{i}(t) \in [\underline\omega_{i}-\Delta, \bar\omega_{i}+\Delta]$ for all $t> 0$, provided $\omega_{i}(0) \in [\underline\omega_{i}-\Delta,\bar\omega_{i}+\Delta]$; and, if $\omega_{i}(0)\not\in [\underline\omega_{i}-\Delta,\bar\omega_{i}+\Delta]$, then there exists a finite time $t_{3}$ such that $\omega_{i}(t) \in [\underline\omega_{i}-\Delta, \bar\omega_{i}+\Delta]$ for all $t\geqslant t_{3}$.
\end{subequations} \end{enumerate} \end{proposition} \begin{pf} The proofs of~\emph{\ref{item:exist-unique}-\ref{item:robust-stability}} follow similar arguments as the proofs of Theorem~\emph{\ref{item:exist-unique}-\ref{item:convergence-stability-K}}. For stability, one can show that $\frac{d}{dt}V(\omega(t),\lambda(t)) = -\tilde\omega^{T}(t)E\tilde\omega(t) +\sum_{i\in\mathfrak{C}}\tilde\omega_{i}(t)\hat u_{i}(\hat x(t),\hat p(t))$. By Assumption~\ref{assumption:bounded-uncertain} and the definition of $\hat u_{i}$, it holds that $\sum_{i\in\mathfrak{C}}\tilde\omega_{i}(t)\hat u_{i}(\hat x(t),\hat p(t))\leqslant 0$, implying $\frac{d}{dt}V(\lambda(t),\omega(t))\leqslant 0$. The convergence follows by LaSalle Invariance Principle and noticing that $\hat u_{i}(\hat x,\hat p (t))$ is identically 0 so long as $\omega_{i}\in [\underline\omega_{i}^{\operatorname{th}}+\bar\epsilon^{\omega}_{i},\bar\omega_{i}^{\operatorname{th}}-\bar\epsilon^{\omega}_{i}]$, which, together with the convergence, implies that $\hat u_{i}(\hat x(t),\hat p (t))$ is 0 after a finite time. For~\emph{\ref{item:robust-invariance}}, to prove the invariance of $[\underline\omega_{i}-\Delta,\bar\omega_{i}+\Delta]$, by Lemma~\ref{lemma:frequency-invariance}, we only need to show that \begin{subequations} \begin{align} \hat u_{i}(\hat x,\hat p(t))-q_{i}(x,t)\leqslant 0,\ \text{if } \omega_{i}=\bar\omega_{i}+\Delta,\label{ineq:robust-invariance-1a} \\ -\hat u_{i}(\hat x,\hat p(t))+q_{i}(x,t)\leqslant 0,\ \text{if }\omega_{i}=\underline \omega_{i}-\Delta.\label{ineq:robust-invariance-1b} \end{align}\label{ineq:robust-invariance} \end{subequations} For simplicity, we only show that~\eqref{sube:ineq:robust-invariance-a} implies~\eqref{ineq:robust-invariance-1a} (the fact that~\eqref{sube:ineq:robust-invariance-b} implies~\eqref{ineq:robust-invariance-1b} follows similarly). Notice that if $\omega_{i}=\bar\omega_{i}+\Delta$, then $\hat u_{i}(\hat x,\hat p(t))-q_{i}(x,t)$ equals \begin{align} \frac{-\gamma_{i}(\Delta+\epsilon^{\omega}_{i})}{\bar\omega_{i} - \bar\omega_{i}^{\operatorname{th}} + \Delta+\epsilon^{\omega}_{i}}+\epsilon^{E}_{i}(\bar\omega_{i}+\Delta) + \hat E_{i}\epsilon^{\omega}_{i} + \epsilon^{\lambda}_{i} + \epsilon^{p}_{i},\label{exp:omega-dot} \end{align} which, by Assumption~\ref{assumption:bounded-uncertain}, is smaller than or equal to the left-hand side of~\eqref{sube:ineq:robust-invariance-a} by letting the uncertainties take their individual bounds; hence~\eqref{ineq:robust-invariance-1a} holds. Finally, the existence of $t_{3}$ follows a similar proof in Theorem\emph{~\ref{item:frequency-attraction}}. \qed \end{pf} One should look at~\eqref{sube:ineq:robust-invariance} as a condition that, independently of the specific realization of the uncertainty, guarantees that the invariance of the frequency interval is ensured. \begin{figure}[tb!] \centering% \includegraphics[width=1.1\linewidth]{epsfiles/IEEE39bus.pdf} \caption{IEEE 39-bus power network.}\label{fig:IEEE39bus} \end{figure} \section{Simulations}\label{sec:simulations} We illustrate the performance of our control design in the IEEE 39-bus power network displayed in Figure~\ref{fig:IEEE39bus}. The network consists of 46 transmission lines and 10 generators, serving a load of approximately 6GW. We take the values of susceptance $b_{ij}$ and rotational inertia~$M_{i}$ for generator nodes from the Power System Toolbox~\citep{KWC-JC-GR:09}. 
We use this toolbox to assign the initial power injection $p_{i}(0)$ for every bus (although the analytical results hold for constant power injections, in simulation we have also tested the more general time-varying case). We assign all non-generator buses a uniform small inertia $M_{i}=0.1$. The damping parameter is $E_{i}=1$ for all buses. The initial state $(\lambda (0),\omega(0))$ is chosen to be the unique equilibrium with respect to the initial power injection. We implement the distributed controller~\eqref{eqn:stability-transient-controller-Lipschitz-4} on the generators with indices $\mathfrak{C}=\{30,31,32\}$ to tune their transient frequency behavior. The controller parameters are as follows: for every $i \in \mathfrak{C}$, we let $\bar\alpha_{i}(s) = \underline\alpha_{i}(s)=\gamma_{i}s$, with $\gamma_{i}=2$, $\bar\omega_{i} = -\underline\omega_{i} = 0.2$Hz and $\bar\omega_{i}^{\operatorname{th}} = -\underline\omega_{i}^{\operatorname{th}} = 0.1$Hz. The nominal frequency is 60Hz, and hence the safe frequency region is $[59.8\text{Hz},\ 60.2\text{Hz}]$. \begin{figure*}[tbh!] \centering \includegraphics[width=1\linewidth,height=0.25\linewidth]{epsfiles/IEEE39-generator-loss-traj.png} \caption{Frequency and control input trajectories at node 30 corresponding to the power supply loss of generator G9 during [10,40]s. The frequency trajectory without the transient controller goes beyond the safe bounds during the contingency, while this is avoided with the proposed controller. Notice that the latter only takes effect when the frequency is close to the safe bound.}\label{fig:generator-loss} \end{figure*} \begin{figure*}[tbh!] \centering \subfigure[\label{frequency-response-no-control-generator}]{\includegraphics[width=.23\linewidth]{epsfiles/IEEE39-frequency-response-no-control-generator.png}} \subfigure[\label{frequency-response-with-control-generator}]{\includegraphics[width=.23\linewidth]{epsfiles/IEEE39-frequency-response-with-control-generator.png}} \subfigure[\label{input-trajectories}]{\includegraphics[width=.23\linewidth]{epsfiles/IEEE39-input-trajectories.png}} \subfigure[\label{input-trajectories-robust}]{\includegraphics[width=.23\linewidth]{epsfiles/IEEE39-frequency-response-with-uncertainty-control-generator.png}} \caption{Frequency and control input trajectories with and without the transient controller. Plot~\subref{frequency-response-no-control-generator} shows the frequency trajectories of the generators 30, 31, and 32 without the transient controller~\eqref{eqn:stability-transient-controller-Lipschitz-4}, with all of them going beyond the lower safe frequency bound. With the transient controller, plot~\subref{frequency-response-with-control-generator} shows that all frequency trajectories stay within the safe bound. Plot~\subref{input-trajectories} shows the corresponding trajectories of the control inputs. Plot~\subref{input-trajectories-robust} shows the controller performance under parameter uncertainty and errors in the power injection approximation.}\label{fig:trajectories} \end{figure*} We first show how the proposed controller maintains the targeted generator frequencies within the safe region provided that these frequencies are initially in it. For our first scenario, we consider a generator loss and recovery process. Specifically, we set the power injection of node 38 (i.e., generator G9) to zero during the time interval [10,40]s.
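To make the simulation setup concrete, the controller parameters and the generator-loss timing just described can be encoded as in the following minimal Python sketch. The sketch uses only the values stated in the text; the network model, the toolbox data, and the controller expression~\eqref{eqn:stability-transient-controller-Lipschitz-4} are not reproduced, and the function name is ours. \begin{verbatim}
# Controller parameters for the actuated generators C = {30, 31, 32}.
gamma = 2.0        # slope of the linear class-K functions
omega_bar = 0.2    # safe bound (Hz, relative to the 60 Hz nominal)
omega_th = 0.1     # activation threshold (Hz, relative to 60 Hz)
# Safe frequency region: [60 - omega_bar, 60 + omega_bar] = [59.8, 60.2] Hz.

def p_node38(t, p_nominal):
    """Power injection at node 38 (generator G9): lost during [10, 40] s."""
    return 0.0 if 10.0 <= t <= 40.0 else p_nominal
\end{verbatim}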
As shown in Figure~\ref{fig:generator-loss}, without the transient controller~\eqref{eqn:stability-transient-controller-Lipschitz-4}, the frequency of node 30 first gradually goes down, exceeding the safe bound 59.8Hz a few times, and even tends to converge to a frequency below it. As node 38 recovers its power supply at 40s, the frequency comes back to 60Hz. In comparison, with the transient controller, the frequency trajectory never goes beyond 59.8Hz during the transient. For our second scenario, we perturb all non-generator nodes by a sinusoidal power injection whose magnitude is proportional to the corresponding node's initial power injection. Specifically, for every $i\in\{1,2,\cdots,29\}$, \begin{align*} p_{i}(t)= \begin{cases} p_{i}(0) & \text{if $t\geqslant 30$,} \\ \left(1+0.3\sin(\frac{\pi t}{30} )\right)p_{i}(0) & \text{otherwise.} \end{cases} \end{align*} For $i\in\{30,31,\cdots,39\}$, $p_{i}(t)$ remains constant at all times. Figure~\ref{fig:trajectories}\subref{frequency-response-no-control-generator} shows the frequency responses of generators $30$, $31$, and $32$ without the transient controller. One can see that all trajectories exceed the 59.8Hz lower frequency bound. For comparison, Figure~\ref{fig:trajectories}\subref{frequency-response-with-control-generator} shows the trajectories with the transient controller~\eqref{eqn:stability-transient-controller-Lipschitz-4}, where all remain within the safe frequency region. Figure~\ref{fig:trajectories}\subref{input-trajectories} displays the corresponding input trajectories, which converge to 0 in finite time, as stated in Theorem~\ref{thm:decentralized-controller}\emph{\ref{item:finite-time-active}}. We also illustrate the robustness of the controller against uncertainty. We have each controller employ $\hat E_{i}=2$ and $\hat p_{i}(t)=1.1p_{i}(t)$, corresponding to $100\%$ and $10\%$ deviations on the droop coefficients and power injections, respectively. Figure~\ref{fig:trajectories}\subref{input-trajectories-robust} illustrates the frequency trajectories of the 3 controlled generators. Since condition~\eqref{sube:ineq:robust-invariance} is satisfied with $\Delta=0.1$Hz, Proposition~\ref{prop:robust-uncertainty} ensures that the invariant frequency interval is now $[59.7\text{Hz},60.3\text{Hz}]$. Next, we examine the effect of the choice of class-$\mathcal{K}$ function on the behavior of the transient frequency. We focus our attention on bus $30$ and simulate the network behavior for a linear function with $\gamma_{30}=0.1,2,10,$ and $+\infty$ (the latter corresponding to the discontinuous controller in~\eqref{eqn:stability-transient-controller-infinite}). Figure~\ref{fig:trajectories-vs-gamma} shows the corresponding frequency and control input trajectories for the first 30 seconds at node 30. From Figure~\ref{fig:trajectories-vs-gamma}\subref{IEEE39-omega-trajectories-vs-gamma}, one can see that the frequency trajectory with $\gamma_{30}=0.1$ tends to stay away from the lower safe bound (overprotection), compared with the trajectories with $\gamma_{30}=2,10,$ and $+\infty$, and this results in a larger control input, cf. Figure~\ref{fig:trajectories-vs-gamma}\subref{IEEE39-input-trajectories-vs-gamma}. As $\gamma_{30}$ increases, the control input is triggered later.
On the other hand, choosing a large $\gamma_{30}$ leads to higher sensitivity, as observed in Figure~\ref{fig:trajectories-vs-gamma}\subref{IEEE39-input-trajectories-vs-gamma}, where the input trajectory with large $\gamma_{30}$ grows faster at the time when the control input first becomes non-zero. In fact, the controller with $\gamma_{30}=10$ exhibits a sharp change around $t=9$s, similar to the discontinuous controller~\eqref{eqn:stability-transient-controller-infinite}. The discontinuity of the latter is more evident under state measurement errors. In Figure~\ref{fig:trajectories-vs-noisy-gamma}, we run the same simulation but with $\hat\omega_{30}(t)=\omega_{30}(t)+0.001\sin(200\pi t)$ as the measured frequency. One can observe the high-frequency fluctuation in the control input trajectory around $9.4$s for $\gamma_{30}=+\infty$, whereas this does not happen for $\gamma_{30}=2$ due to its Lipschitz continuity. These simulations validate the observations of Remark~\ref{rmk:linear-class-K}. \begin{figure}[tbh!] \centering \subfigure[\label{IEEE39-omega-trajectories-vs-gamma}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-omega-trajectories-vs-gamma.png}} \subfigure[\label{IEEE39-input-trajectories-vs-gamma}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-input-trajectories-vs-gamma.png}} \caption{Frequency and control input trajectories at node $30$ with linear class-$\mathcal{K}$ functions of slope $\gamma_{30}=0.1,2,10$ and $+\infty$, respectively. We observe from plot~\subref{IEEE39-omega-trajectories-vs-gamma} that the frequency trajectory with small $\gamma_{30}$ tends to stay away from the safe frequency bound, at the cost of having a large control input, as shown in plot~\subref{IEEE39-input-trajectories-vs-gamma}. A large $\gamma_{30}$ causes the controller to be sensitive to $\omega_{30}$, making the input change rapidly around 9s. }\label{fig:trajectories-vs-gamma} \end{figure} \begin{figure}[tbh!] \centering \subfigure[\label{IEEE39-input-noisy-trajectories-gamma-2}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-input-trajectories-vs-gamma-2.png}} \subfigure[\label{IEEE39-input-noisy-trajectories-gamma-infty}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-input-trajectories-vs-gamma-bangbang.png}} \caption{Control input trajectories at node $30$ with linear class-$\mathcal{K}$ functions of slope $\gamma_{30}=2$ and $+\infty$, respectively, under state measurement errors in $\omega_{30}$. The controller with $\gamma_{30}=2$ is Lipschitz continuous (cf. plot~\subref{IEEE39-input-noisy-trajectories-gamma-2}), whereas the controller with $\gamma_{30}=+\infty$ (cf. plot~\subref{IEEE39-input-noisy-trajectories-gamma-infty}) is discontinuous.}\label{fig:trajectories-vs-noisy-gamma} \end{figure} Next, we simulate the case where some of the generator frequencies are initially outside the safe region to show how the transient controller brings the frequencies back to it. We use the same setup as in Figure~\ref{fig:trajectories}, but we only turn on the distributed controller after~$t=12$s. Figure~\ref{fig:delayed-trajectories}\subref{freuency-response-with-delayed-control-generator} shows the frequency trajectories of generators $30$, $31$, and $32$. As the controller is disabled for the first $12$s, all 3 frequency trajectories are below 59.8Hz at $t=12$s.
After $t=12$s, all of them return to the safe region in a monotonic way, and once they are in the region, they never leave, in accordance with Theorem~\ref{thm:decentralized-controller}\emph{\ref{item:frequency-attraction}}. Figure~\ref{fig:delayed-trajectories}\subref{delayed-input-trajectories} shows the corresponding control input trajectories. Finally, we illustrate the bounds on the control amplitude from Section~\ref{subsection:magnitude}. Let $\eta=0.5$ and $i=30$. By Lemma~\ref{lemma:lower-bound}, the control input is lower bounded by $u_{i}^{\min}(\eta)$, whose computation requires $g_{i}(\lambda^{*},\omega^{*})$. The numerical computation of the upper bound $g_{i}(\lambda^{o},\omega^{o})$ (cf. Lemma~\ref{lemma:upper-optimal}) and the lower bound $h_{i}(\underline z^{\mu^{*}} ,\underline\omega^{\mu^{*}})$ (cf. Lemma~\ref{lemma:lower-optimal}) both yield $-5.8686$. Figure~\ref{fig:IEEE39bus-control-bnd}\subref{control-bound-lower} shows 100 input trajectories with initial states randomly selected around $(\lambda^{o},\omega^{o})$, all lower bounded by $-5.8686$. \begin{figure}[tbh!] \centering \subfigure[\label{freuency-response-with-delayed-control-generator}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-frequency-response-with-delayed-control-generator.png}} \subfigure[\label{delayed-input-trajectories}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-delayed-input-trajectories.png}} \caption{Frequency and control input trajectories with the transient controller available only after $t=12$s. Plot~\subref{freuency-response-with-delayed-control-generator} shows the frequency trajectories of generators 30, 31, and 32. Due to the disturbance, and without the transient controller, all 3 frequency trajectories violate the 59.8Hz safe bound at $t=12$s. As the transient controller kicks in, the unsafe trajectories come back to the safe region and never leave afterwards. Plot~\subref{delayed-input-trajectories} shows the control input trajectories.}\label{fig:delayed-trajectories} \end{figure} \begin{figure}[htb] \centering \subfigure[\label{control-bound-lower}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-control-bound.png}} \subfigure[\label{control-bound-upper}]{\includegraphics[width=.47\linewidth]{epsfiles/IEEE39-control-bound-upper.png}} \caption{Control input trajectories at node 30 corresponding to 100 different initial states. In plot~\subref{control-bound-lower}, with all initial states randomly selected around the worst-case scenario, all 100 trajectories are lower bounded by $-5.8686$ (denoted by the dashed line), as guaranteed by Lemma~\ref{lemma:lower-bound}. A similar result is illustrated in plot~\subref{control-bound-upper}, where another 100 trajectories with random initial states are upper bounded by $5.8494$.}\label{fig:IEEE39bus-control-bnd} \end{figure} \section{Conclusions} We have proposed a distributed transient power frequency controller that is able to maintain the nodal frequency of actuated buses within a desired safe region and to recover from undesired initial conditions. We have proven that the control input vanishes in finite time, so that the closed-loop system possesses the same equilibrium and local stability and convergence guarantees as the open-loop one. We have characterized the smoothness and robustness properties of the proposed controller.
Future work will investigate the incorporation of economic cost, the exploitation of the trade-offs in the choice of class-$\mathcal{K}$ functions for controller design, the optimization of control effort by letting controlled nodes access information beyond their immediate neighbors, and the characterization of the connection between actuation effort and network connectivity. {\small \bibliographystyle{plainnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{s1} As we know, the Dirac equation is a relativistic wave equation that plays a very important role for relativistic particles in molecular physics, quantum chemistry, nuclear physics, condensed matter, high energy physics, and particle physics. The Dirac equation is exactly or quasi-exactly solvable for central potentials such as the Hulth\'{e}n potential \cite{Jian-2003, Ikhdair-2010}, the Woods-Saxon potential \cite{Guo-2005}, the Eckart potential \cite{Sari-2015}, the Morse potential \cite{Morse-1929, Berkdemir-2006, Ikhdair-2011, Zhang-2016}, the P\"{o}schl-Teller potential \cite{Wei-2009}, the Manning-Rosen potential \cite{Wei-2008}, the hyperbolic potential \cite{Jia-2009}, the Rosen-Morse potential \cite{Oyewumi-2010}, and the pseudoharmonic potential \cite{Gang-2004}, which have been applied to different physical systems by various methods, such as the Darboux transformation and the supersymmetry approach \cite{Halberg-2019, Halberg-2020, Amani-2012}. The Dirac equation is written with two potentials, $V(r)$ and $S(r)$, which are introduced as a repulsive vector potential and an attractive scalar potential, respectively. The relativistic Dirac equation describes the motion of a spin-half particle within the approaches of spin symmetry and pseudospin symmetry, which originate from deformed and superdeformed nuclei in nuclear physics and are SU(2) symmetries of the Dirac Hamiltonian. It should be noted that a spin symmetry occurs whenever the difference between the vector and scalar potentials is equal to a constant, whereas a pseudospin symmetry arises whenever their sum is equal to a constant \cite{Ginocchio-2004}. It is interesting that the Dirac Hamiltonian is invariant under the SU(2) algebra for the two aforesaid symmetries, in which the scalar potential is coupled to the mass and the vector potential is coupled to the energy \cite{Smith-1971, Bell-1975}. In this paper, we focus on pseudospin symmetry: when an electron travels through a solid, its motion approximately behaves like that of an electron with an effective mass traveling unperturbed through free space. In other words, the electron acquires a modified mass, and the resulting entity is called a quasiparticle. One application of quasiparticles is in graphene, where they play an important role as relativistic fermionic quasiparticles living in two-dimensional space. The graphene structure is a honeycomb lattice consisting of a single layer of carbon atoms, in which the states in the neighbourhood of the Fermi level are located at the edges of the first Brillouin zone, the so-called Dirac points. For the massless Dirac equation, the conduction and valence bands meet at these six points, yielding a zero-gap semiconductor; for the massive Dirac equation with a potential term, an energy gap opens between the conduction and valence bands, and the corresponding graphene is called gapped graphene. It should be noted that real graphene behaves as gapped graphene with a non-zero energy gap, since electron-electron interactions, substrates, and impurities have a significant effect on its electronic structure \cite{Pedersen-2009, Zhu-2009, Klimchitskaya-2017}. Graphene is thus a bridge between condensed matter and high energy physics, and its electronic properties play a central role in this study \cite{Novoselov-2004, Neto-2009, Nair-2008}.
Therefore, the motion of these quasiparticles resembles the motion of electrons in graphene as relativistic fermionic pseudospin carriers. In that case, it is possible to consider a gravitational system as the holographic dual of an adapted model of graphene \citep{Ketab-2018, Zali-2019}. With this view, we obtain the eigenvalues and eigenfunctions of the quasiparticles in pseudospin symmetry. Moreover, we subject the system to a potential barrier, for which we use the Morse potential in pseudospin symmetry; the potential shape will be given in Sec. \ref{III}. The choice of the Morse potential is motivated by its important applications in atomic and molecular physics for the potential energy of a diatomic molecule. This potential provides an appropriate model to describe the interatomic interaction of linear molecules. The Morse potential can also be used to model other interactions, such as the interaction between an atom and a surface, which can have a significant effect on gapped graphene. We note that the Dirac equation has been solved with the Morse potential by the Nikiforov-Uvarov method and the asymptotic iteration method, which are based on solving hypergeometric-type polynomial equations \cite{Berkdemir-2006, Ikhdair-2011, Bayrak-2007}. The Nikiforov-Uvarov and asymptotic iteration methods are well suited to solving quantum problems, but the calculation of the bound states and wavefunctions is somewhat longer than in the approach used here; in particular, the wavefunction there is assumed to have a pre-defined form. This motivates us to use the factorization method, which is a very powerful tool for solving the Dirac equation in terms of special functions. In order to obtain the energy spectrum and the spinor wavefunctions, we will write the Dirac equation as two second-order differential equations for the two spinor wavefunctions in polar coordinates, and then compare the corresponding differential equations with the confluent Heun equation so that the eigenvalues and eigenfunctions can be obtained. Heun's equation comes in different forms, namely, the normal form of Heun's equation, the confluent Heun equation, the doubly-confluent Heun equation, the biconfluent Heun equation, and the triconfluent Heun equation \cite{Olver-2010}. Also, various types of Heun functions have been used to solve a variety of single-particle quantum mechanical problems \cite{Downing-2016, Downing1-2016, Downing-2017}. In this paper, we will use the confluent Heun equation, whose details are given in Sec. \ref{III}. We will see that the spinor wavefunctions are obtained in terms of the confluent Heun function. We have organized the present work as follows: In Sec. \ref{II}, we present the general form of the massive Dirac equation in the presence of scalar and vector potentials. In Sec. \ref{III}, by applying the Morse potential, we calculate the eigenvalues and wavefunctions using the confluent Heun equation in pseudospin symmetry; the energy spectrum is then calculated in terms of the spin-orbit quantum numbers, and we plot the wavefunctions in terms of the radial coordinate. In Sec. \ref{IV}, we present the electronic properties of gapped graphene. Finally, in Sec. \ref{V}, we give a summary of this work.
\section{Massive Dirac equation}\label{II} In this section, we consider the Dirac equation with both a scalar potential $S(r)$ and a vector potential $V(r)$ for the nuclear motion of a diatomic molecule, given by \begin{equation}\label{diraceq1} H= v_F (\vec{\tilde{\alpha}}.\vec{p}) + \tilde{\beta} (m c^2 + S(r)) + V(r), \end{equation} where $\tilde{\alpha}= \left( \begin{array}{cc} 0 & \sigma \\ \sigma & 0 \end{array} \right)$ and $\tilde{\beta}= \left( \begin{array}{cc} I & 0 \\ 0 & -I \end{array} \right)$ are $4 \times 4$ matrices, in which $\sigma$ denotes the Pauli matrices and $I$ is the $2 \times 2$ unit matrix, $\vec{p}$ is the linear momentum operator, $v_F\simeq 10^6 \,\mathrm{m/s}$ is the Fermi velocity in graphene, and $m$ is the particle mass. In order to solve the above Dirac equation, we take the radial coordinate $r$ in the two-dimensional $x$-$y$ plane. In this case, the first term is written in terms of the coordinates $x$ and $y$ as $\vec{\sigma}\cdot\vec{p}=\sigma_{x}p_{x}+\sigma_{y}p_{y}$, in which $\sigma_x= \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)$, $\sigma_y= \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right)$, $p_{x} = -i \hbar \frac{\partial}{\partial x}$ and $p_{y} = -i\hbar\frac{\partial}{\partial y}$. We can now write the Dirac Hamiltonian \eqref{diraceq1} as the eigenvalue equation \begin{equation}\label{Hpsi1} H \Psi(r, \phi) = E \Psi(r, \phi), \end{equation} where $E$ is the energy eigenvalue and $\Psi$ is the wave function. The wave function is a four-spinor, so it is convenient to split it into two two-component spinors $\psi_I$ and $\psi_{II}$ in the following form \cite{Greiner-2000} \begin{equation}\label{psi1} \Psi(r, \phi) = {\psi_I(r, \phi) \choose \psi_{II}(r, \phi)} = \frac{1}{r} {\psi_1(r) \, e^{i k \phi} \choose i \, \psi_2(r) \, e^{i (k + 1) \phi}}, \end{equation} where the indices $1$ and $2$ denote the radial parts of the wave functions, and the spin-orbit quantum number $k \in \mathbb{Z}$ is a constant. By converting the Cartesian coordinates $x$-$y$ to the polar coordinates $r$-$\phi$, with $\partial_{x}=\cos\phi\, \frac{\partial}{\partial r}-\frac{\sin \phi}{r}\frac{\partial}{\partial \phi}$ and $\partial_{y}=\sin \phi\, \frac{\partial}{\partial r}+\frac{\cos \phi}{r}\frac{\partial}{\partial \phi}$, we can write the corresponding Hamiltonian in terms of the two-component spinors in the following form \begin{equation}\label{rimat12} \left(\begin{array}{ccc} \frac{d \psi_1}{d r} \\ \frac{d \psi_2}{dr} \\ \end{array}\right)= \left(\begin{array}{ccc} -\frac{k}{r} & \widetilde{E} + \widetilde{m} - ( \widetilde{V}(r)- \widetilde{S}(r)) \\ -\widetilde{E} + \widetilde{m} + (\widetilde{V}(r) + \widetilde{S}(r)) & \frac{k}{r} \\ \end{array}\right) \left(\begin{array}{ccc} \psi_1 \\ \psi_2 \\ \end{array}\right), \end{equation} which is a system of first-order differential equations for $\psi_1$ and $\psi_2$, with $\widetilde{E} = \frac{E}{v_F \hbar}$, $\widetilde{m} = \frac{m c^2}{v_F \hbar}$, $\widetilde{V} = \frac{V}{v_F \hbar}$ and $\widetilde{S} = \frac{S}{v_F \hbar}$.
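For completeness, we note the intermediate step behind Eq.~\eqref{rimat12} (added here to make the derivation explicit): in polar coordinates the kinetic term takes the form \begin{equation*} \sigma_{x} p_{x} + \sigma_{y} p_{y} = -i\hbar \left(\begin{array}{cc} 0 & e^{-i\phi}\left(\frac{\partial}{\partial r} - \frac{i}{r}\frac{\partial}{\partial \phi}\right) \\ e^{i\phi}\left(\frac{\partial}{\partial r} + \frac{i}{r}\frac{\partial}{\partial \phi}\right) & 0 \end{array}\right), \end{equation*} so that, acting on the ansatz \eqref{psi1}, the azimuthal derivatives of $e^{i k \phi}$ and $e^{i (k + 1) \phi}$ produce the $\pm k/r$ terms appearing in \eqref{rimat12}.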
We can obtain the wave functions $\psi_1$ and $\psi_2$ in terms of each other as \begin{subequations}\label{psi21} \begin{eqnarray} \psi_{1} = \frac{\frac{d}{dr}-\frac{k}{r}}{(-\widetilde{E} + \widetilde{m} + U(r))}\psi_{2},\label{psi21-1}\\ \psi_{2} = \frac{\frac{d}{dr}+\frac{k}{r}}{(\widetilde{E} + \widetilde{m} - W(r))}\psi_{1},\label{psi21-2} \end{eqnarray} \end{subequations} where $U(r) = \widetilde{V}(r) + \widetilde{S}(r)$ and $W(r) = \widetilde{V}(r) - \widetilde{S}(r)$. Next, this system of differential equations is rewritten as two second-order differential equations of the form \begin{subequations}\label{psi12} \begin{eqnarray} \left(\frac{d^{2}}{dr^{2}} - \frac{k(k+1)}{r^{2}}\right) \psi_1 + (\widetilde{E} + \widetilde{m} - W(r)) \left(\widetilde{E} - \widetilde{m} - U(r)\right)\psi_1 = 0,\label{psi12-1}\\ \left(\frac{d^{2}}{dr^{2}} - \frac{k (k - 1)}{r^{2}}\right) \psi_2 + (\widetilde{E} + \widetilde{m} - W(r)) \left(\widetilde{E} - \widetilde{m} - U(r)\right)\psi_2 = 0,\label{psi12-2} \end{eqnarray} \end{subequations} where $k (k+1) = l (l+1)$ and $k (k-1) = \widetilde{l} (\widetilde{l}+1)$, in which $l$ and $\widetilde{l}$ are the orbital angular momenta for spin symmetry and pseudospin symmetry, respectively. The total angular momentum is $j = l + s$ for spin symmetry and $\widetilde{j} = \widetilde{l} + \widetilde{s}$ for pseudospin symmetry, where $s = \widetilde{s} = \pm \frac{1}{2}$. For spin symmetry we have the relationships \begin{subequations}\label{spinsym} \begin{eqnarray} \textrm{aligned spin}: k = -(l+1),\,\, j = l + \frac{1}{2},\,\, k < 0,\\ \textrm{unaligned spin}: k = +l,\,\, j = l - \frac{1}{2},\,\, k > 0, \end{eqnarray} \end{subequations} where $k = +1, \pm 2, \pm 3, \cdot \cdot \cdot$, while for pseudospin symmetry \begin{subequations}\label{psespinsym} \begin{eqnarray} \textrm{aligned spin}: k = -\widetilde{l},\,\, j = \widetilde{l} - \frac{1}{2},\,\, k < 0,\\ \textrm{unaligned spin}: k = \widetilde{l} + 1,\,\, j = \widetilde{l} + \frac{1}{2},\,\, k > 0, \end{eqnarray} \end{subequations} where $k = -1, \pm 2, \pm 3, \cdot \cdot \cdot$. When the sum of the scalar and vector potentials is a constant, $U(r) = C_p$, the Dirac equation has pseudospin symmetric solutions; the pseudospin symmetry is an exact symmetry of the Dirac Hamiltonian under the condition $\frac{d U(r)}{dr} = 0$, i.e., for constant $U(r)$. Likewise, for $W(r) = C_s$ the Dirac equation has spin symmetric solutions, and the spin symmetry is an exact symmetry of the Dirac Hamiltonian under the condition $\frac{d W(r)}{dr} = 0$, i.e., for constant $W(r)$ \cite{Gupta-2008, Hassanabadi-2012, Arda-2015}. Since the spin in graphene plays the role of a pseudospin, the eigenvectors of bilayer graphene are introduced as a pseudospin. Thus, in this work we only consider pseudospin symmetry, with $U(r) = C_p$ \cite{Min-2008, Jose-2009, Tuan-2014}. In the next section, we will obtain the corresponding eigenvalues and eigenvectors for the Morse potential in Eqs. \eqref{psi12}. \section{Bound states with Morse potential}\label{III} In this section, we explore the corresponding system with the Morse potential for quasi-particles as charge carriers; that is, the electrons propagating through the graphene hexagonal lattice acquire an effective mass and behave as relativistic fermionic quasi-particles.
For this purpose, we implement the Dirac equation in the pseudospin symmetry approach with a potential barrier, instead of the Schr\"{o}dinger equation. Thus, we take the potential barrier to be the Morse potential \cite{Morse-1929, Berkdemir-2006, Zhang-2016}, written in terms of the scalar and vector potentials ($\widetilde{V}(r) = -\widetilde{S}(r)$) in the following form \begin{equation}\label{Morse1} W(r) = \widetilde{V}(r) - \widetilde{S}(r) = D_e\, \left(1-e^{-\alpha(r - r_e)}\right)^{2}, \end{equation} where $D_e$ is the dissociation energy, $\alpha$ is the width of the potential well, and $r_e$ is the equilibrium bond length. By inserting Eq. \eqref{Morse1} into Eq. \eqref{psi12-2} we have \begin{equation}\label{psi23} \frac{d^{2}\psi_2}{dr^{2}}-\frac{k(k-1)}{r^{2}} \psi_2 - \varepsilon_1 D_e \left(1 - e^{-\alpha (r-r_e)}\right)^2 \psi_2 + \varepsilon \psi_2 = 0, \end{equation} where $\varepsilon = \varepsilon_1 (\widetilde{E} + \widetilde{m})$ and $\varepsilon_1 = \widetilde{E} - \widetilde{m} - C_p$. Now, in order to solve analytically for the spinor wave functions $\psi_1$ and $\psi_2$, we use the series expansion \begin{equation}\label{cent1} \frac{e^{-\alpha r}}{(1-e^{-\alpha r})^{2}} = \frac{1}{(\alpha {r})^{2}}-{\frac {1}{12}} + {\frac {1}{240}}{(\alpha {r})}^{2} - {\frac{1}{6048}}{(\alpha {r})}^{4} + O \left((\alpha {r})^{6} \right). \end{equation} In this case, we can approximate the centrifugal term in Eq. \eqref{psi23} for $\alpha r \ll 1$ as \begin{equation}\label{cent2} \frac{1}{r^{2}}\simeq \frac{\alpha^{2}e^{-\alpha r}}{(1-e^{-\alpha r})^{2}} + \frac{\alpha^2}{12}, \end{equation} so Eq. \eqref{psi23} can be solved analytically by substituting this approximate expression, since the graphs of both sides of Eq. \eqref{cent2} nearly coincide, as shown in Fig. \ref{fig0}. Also, if $r-r_e$ is replaced by $r$ in Eq. \eqref{cent2}, the Morse potential \eqref{Morse1} is close to its approximation. Note that this approximation is appropriate for fairly large values of $k$ and for vibrations of small amplitude around the equilibrium bond length $r_e$. \begin{figure}[h] \begin{center} {\includegraphics[scale=.3]{eq122r.eps}} \caption{Graphs of both sides of Eq. \eqref{cent2} for $\alpha = 0.988879$.}\label{fig0} \end{center} \end{figure} By substituting Eq. \eqref{cent2} into Eq. \eqref{psi23} we obtain \begin{equation}\label{diffwave1} \frac{d^{2}\psi_2}{dr^{2}} - \frac{\alpha^2 k (k-1) e^{-\alpha r}}{\left(1-e^{-\alpha r}\right)^{2}} \psi_2 - \frac{\alpha^2 k (k-1)}{12} \psi_2 - \varepsilon_1 D_e \left(1-e^{-\alpha (r - r_e)}\right)^{2} \psi_2 + \varepsilon \psi_2 = 0. \end{equation} Changing the variable to $z = e^{-\alpha r}$, this equation becomes \begin{equation}\label{diffwave2} \frac{d^{2}\psi_2(z)}{dz^{2}} + \frac{1}{z}\frac{d\psi_2(z)}{dz} - \left(\frac{k(k-1)}{z (1-z)^{2}} + \frac{k (k-1)}{12 z^2} + \frac{\varepsilon_1 D_e z_e^2}{\alpha^2} - \frac{2 \varepsilon_1 D_e z_e}{\alpha^2 z} + \frac{\varepsilon_1 D_e - \varepsilon}{\alpha^2 z^2}\right)\psi_2(z)=0, \end{equation} where $z_e = e^{\alpha r_e}$ is a constant. Next, using the method of separation of variables, we take the wave function as \begin{equation}\label{psi2F} \psi_2(z) = F(z) H(z), \end{equation} where $F(z)$ is an arbitrary function of $z$ and $H(z)$ is the confluent Heun function. Inserting this wave function into Eq.
\eqref{diffwave2}, we obtain \begin{eqnarray}\label{diffwave3} \frac{d^{2}H(z)}{dz^{2}} &+& \left(\frac{1}{z} + \frac{2 F'}{F}\right)\frac{dH(z)}{dz} \\\nonumber & + &\left(\frac{F''}{F} + \frac{1}{z} \frac{F'}{F} - \frac{k(k-1)}{z (1-z)^{2}} - \frac{k (k-1)}{12 z^2} - \frac{\varepsilon_1 D_e z_e^2}{\alpha^2} + \frac{2 \varepsilon_1 D_e z_e}{\alpha^2 z} - \frac{\varepsilon_1 D_e - \varepsilon}{\alpha^2 z^2}\right) H(z)=0. \end{eqnarray} The confluent Heun function satisfies the differential equation \begin{equation}\label{Heunfunc1} \frac{d^{2}H(z)}{dz^{2}} + \left(a+\frac{b+1}{z}+\frac{c+1}{z-1}\right) \frac{dH(z)}{dz} + \left(\frac{\mu}{z} + \frac{\nu}{z-1}\right) H(z)=0, \end{equation} where \begin{equation}\label{Heunfunc2} H(z) = HeunC(a,b,c,\delta,\eta,z) = \sum^{\infty}_{n=0}\lambda_{n}(a,b,c,\delta,\eta) \, z^{n},\,\,\,\,\,\textrm{with radius of convergence}\,\, |z| < 1, \end{equation} the normalization of the Heun function is $HeunC(a,b,c,\delta,\eta,0) = 1$, and $\mu =\frac{1}{2} (a - b - c + ab - bc) - \eta $ and $\nu = \frac{1}{2} (a + b + c + ac + bc) + \delta + \eta $ (see Ref. \cite{Fiziev-2009} for more details). By inserting the Heun polynomial into the Heun differential equation, we obtain a three-term recurrence relation for the expansion coefficients $\lambda_{n}(a, b, c, \delta, \eta)$, \begin{equation} P_{n} \lambda_{n} = Q_{n}\lambda_{n-1} + R_{n}\lambda_{n-2}, \end{equation} whose coefficients, with the initial conditions $ \lambda_{-1} = 0$ and $\lambda_{0} = 1$, read \begin{subequations}\label{rec1} \begin{eqnarray} P_{n} &=& 1+\frac{b}{n},\label{rec1-1}\\ Q_{n} &=& 1 + \frac{1}{n} (-a +b+c-1) + \frac{1}{n^{2}} \left(\eta + \frac{1}{2} (a - b - c - ab + bc)\right),\label{rec1-2}\\ R_{n} &=& \frac{a}{n^{2}}\left(\frac{\delta}{a}+\frac{b+c}{2}+n-1\right).\label{rec1-3} \end{eqnarray} \end{subequations} When $n \rightarrow \infty$ we have $P_{n} \rightarrow 1$, $Q_{n} \rightarrow 1$ and $R_{n} \rightarrow 0$. Now, by comparing the second terms of Eqs. \eqref{diffwave3} and \eqref{Heunfunc1}, we obtain the arbitrary function $F(z)$ in the following form \begin{equation}\label{Arbifunc1} F(z)= F_{0}\,e^{\frac{az}{2}}\,z^{\frac{b}{2}}\,(z-1)^{\frac{c+1}{2}}, \end{equation} where $F_0$ is an integration constant. An important feature of this solution is that the wave function obeys the boundary conditions $\psi(r \rightarrow 0) = 0$ and $\psi(r \rightarrow \infty) = 0$. Substituting Eq. \eqref{Arbifunc1} into the third term of Eq. \eqref{diffwave3} and comparing the third terms of Eqs.
\eqref{diffwave3} and \eqref{Heunfunc1}, we obtain constraints among the coefficients and find the eigenvalues of the bound states as \begin{subequations}\label{coef1} \begin{eqnarray} & a^{2} = \frac{4\,\varepsilon_1\, D_e\, z_e^{2}}{\alpha^{2}},\label{coef1-1}\\ & b^2 = \frac{4\, D_e\, \varepsilon_1 + 4\, \varepsilon}{\alpha^2} + \frac{k (k-1)}{3},\label{coef1-2}\\ & c^2 = (2 k - 1)^2,\label{coef1-3}\\ & \mu = \frac{1}{2} (b + 1) \left(a - c - 1\right) - k (k - 1) + \frac{a^2}{2 z_e} ,\label{coef1-4}\\ & \nu = \frac{1}{2}\, (c + 1) \left(a + b + 1\right) + k (k-1),\label{coef1-5} \end{eqnarray} \end{subequations} and these relationships lead to the following expressions \begin{subequations}\label{coef3} \begin{eqnarray} & \mu + \nu = \frac{a}{2} \left(\frac{a}{z_e} + b + c +2\right),\label{coef3-1}\\ & \eta = k (k-1) - \frac{a^{2}}{2 z_e} + \frac{1}{2},\label{coef3-2}\\ & \delta = \frac{a^{2}}{2 z_e}.\label{coef3-3} \end{eqnarray} \end{subequations} In order to describe the bound states of the present model, we should write the system energy in terms of the quantum numbers. Hence, we return to the confluent Heun function $HeunC(a, b, c, \delta, \eta, z)$, which reduces to a confluent Heun polynomial of degree $N$ provided the series terminates at $n = N + 2$. This requires two conditions. The first condition (see Refs. \cite{Fiziev-2009, Downing-2013} for more details) arises from $R_{N+2} = 0$, which yields \begin{equation}\label{condition1} \mu + \nu + N a = 0, \end{equation} and is equivalent to $\frac{\delta}{a} + \frac{b + c}{2} + N + 1 = 0$. The second condition arises from the recurrence relation as $\lambda_{N+1} = 0$, which gives rise to a tridiagonal determinant of the form \begin{equation}\label{condition2} \Delta_{N + 1} (\mu) = 0, \end{equation} and this condition provides a constraint among the aforesaid parameters, namely, the potential parameters and the Heun function coefficients: \begin{equation}\label{large} \begin{vmatrix} \mu - q_1 & (1+b) & 0 & \dots & 0 & 0 & 0 \\ N a & \mu - q_2 + a & 2(2+ b) & \dots & 0 & 0 & 0\\ 0 & (N-1) a & \mu-q_3+ 2 a & \dots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & \mu - q_{N-1} + (N-2) a & (N-1)(N-1+b) & 0\\ 0 & 0 & 0 & \dots & 2a & \mu - q_N + (N-1)a & N(N+b)\\ 0 & 0 & 0 & \dots & 0 & a & \mu - q_{N+1} + N a \\ \end{vmatrix} = 0, \end{equation} where $q_N = (N-1)(N+b+c)$. Now, using the first condition, we can write the energy spectrum $\widetilde{E} \equiv \widetilde{E}_{N k}$ as follows: \begin{eqnarray}\label{Enk1} 2 \alpha \left(N \pm k + 1 \mp \frac{1}{2}\right) \sqrt{D_{e} (\widetilde{E}_{N k} - \widetilde{m} - C_p)} + \alpha^2 \left(N \pm k + 1 \mp \frac{1}{2}\right)^2 - \frac{\alpha^2 k (k-1)}{12} \\\nonumber + \left(\widetilde{E}_{N k} + \widetilde{m}\right)\left(\widetilde{E}_{N k} - \widetilde{m} - C_p\right) = 0, \end{eqnarray} where the upper sign corresponds to unaligned spin ($k > 0$) and the lower sign to aligned spin ($k < 0$). We can now calculate the values of the energy spectrum using the quantum numbers $N$ and $k$ for pseudospin symmetry, as given in Table \ref{tab1}. Note that $l$, $\widetilde{l}$ and $\widetilde{j}$ are obtained from Eq. \eqref{psespinsym}, and the labels $N L_{\widetilde{j}}$ and $(N-1) L_{\widetilde{j}}$ are used for aligned spin ($k < 0$) and unaligned spin ($k > 0$), respectively. Also, we can see from Table
\ref{tab1} that the energy is degenerate under $k \rightarrow -k+1$, i.e., $E_{N\, k} = E_{N\, -k+1}$. \begin{table}[h] \caption{The energy spectrum $\widetilde{E}_{N k}$ for the parameter values $D_e = 5 \,fm^{-1}$, $m = 10 \,fm^{-1}$, $\alpha = 0.988879 \,fm^{-1}$, $r_e = 2.40873 \,fm$ and $C_p = 0 \, fm^{-1}$ \cite{Berkdemir-2006}.} \centering \begin{tabular}{|| c | c | c | c | c | c | c | | c | c | c | c | c | c | c ||} \hline\hline \,$N$\, & \,\,\,$k$ & $\,\widetilde{l}\,$ & \,\,$\widetilde{j}$\,\, & \,$l$\, & $N L_{\widetilde{j}}\,/ \,(N-1)L_{\widetilde{j}}$ & $\widetilde{E}_{N k}\,(fm^{-1})$ & \,$N$\, & \,\,$k$ & $\,\widetilde{l}\,$ & \,\,$\widetilde{j}$\,\, & \,$l$\, & $NL_{\widetilde{j}}\,/ \,(N-1) L_{\widetilde{j}}$ & $\widetilde{E}_{N k}\,(fm^{-1})$ \\ \hline\hline $1$ & $-4$ & $4$ & $\frac{7}{2}$ & $3$ & $1f_{7/2}$ & \,\,$-9.264477593$\,\, & $2$ & $-4$ & $4$ & $\frac{7}{2}$ & $3$ & $2f_{7/2}$ & $\,\,-9.091901523\,\,$ \\ $1$ & $-3$ & $3$ & $\frac{5}{2}$ & $2$ & $1d_{5/2}$ & $-9.421012900$ & $2$ & $-3$ & $3$ & $\frac{5}{2}$ & $2$ & $2d_{5/2}$ & $-9.237705059$ \\ $1$ & $-2$ & $2$ & $\frac{3}{2}$ & $1$ & $1p_{3/2}$ & $-9.579518653$ & $2$ & $ -2$ & $2$ & $\frac{3}{2}$ & $1$ & $2p_{3/2}$ & $-9.399442093$\\ $1$ & $-1$ & $1$ & $\frac{1}{2}$ & $0$ & $1s_{1/2}$ & $-9.727001781$ & $2$ & $ -1$ & $1$ & $\frac{1}{2}$ & $0$ & $2s_{1/2}$ & $-9.564374480$\\ $1$ & $2$ & $1$ & $\frac{3}{2}$ & $2$ & $0d_{3/2}$ & $-9.727001781$ & $2$ & $2$ & $1$ & $\frac{3}{2}$ & $2$ & $1d_{3/2}$ & $-9.564374480$\\ $1$ & $3$ & $2$ & $\frac{5}{2}$ & $3$ & $0f_{5/2}$ & $-9.579518653$ & $2$ & $3$ & $2$ & $\frac{5}{2}$ & $3$ & $1f_{5/2}$ & $-9.399442093$\\ $1$ & $ 4$ & $3$ & $\frac{7}{2}$ & $4$ & $0g_{7/2}$ & $-9.421012900$ & $2$ & $ 4$ & $3$ & $\frac{7}{2}$ & $4$ & $1g_{7/2}$ & $-9.237705059$\\ $1$ & $ 5$ & $4$ & $\frac{9}{2}$ & $5$ & $0h_{9/2}$ & $-9.264477593$ & $2$ & $ 5$ & $4$ & $\frac{9}{2}$ & $5$ & $1h_{9/2}$ & $-9.091901523$\\ \hline \end{tabular} \label{tab1} \end{table} Fig. \ref{fig1} shows the energy spectrum versus the spin-orbit quantum number $k$. The figure shows that the energy increases as the spin-orbit quantum number increases, and that the energy spectrum depends linearly on $k$ at each level $N$. For the same $k$, the energy increases when the level $N$ increases. Since the approximation of the centrifugal term $k (k-1) / r^{2}$ in Eq. \eqref{psi23} is accurate, as shown in Fig. \ref{fig0}, the energy spectrum in Fig. \ref{fig1} is calculated for fairly large values of $k$. \begin{figure}[h] \begin{center} {\includegraphics[scale=.3]{E2k.eps}} \caption{Energy spectrum of confined states for $N = 1$ (solid circle) and $N = 2$ (solid diamond).}\label{fig1} \end{center} \end{figure} Now, by inserting Eq. \eqref{Arbifunc1} into Eq. \eqref{psi2F} we obtain the wavefunction $\psi_2$, and then by substituting it into Eq. \eqref{psi21-1} we obtain the wavefunction $\psi_1$. The variations of the wavefunctions $\psi_1$ and $\psi_2$ with the coordinate $r$ are displayed in Fig. \ref{fig2}, which shows that $\psi_1$ and $\psi_2$ tend to zero when the coordinate $r$ tends to zero or infinity. These wavefunctions are plotted for the $s$ and $p$ orbitals, i.e., the ground state and the first excited state, respectively: $N = 1$ and $k = -1$ correspond to the $s$ orbital, and $N = 1$ and $k = -2$ to the $p$ orbital.
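Returning briefly to the centrifugal approximation \eqref{cent2}, its accuracy can be checked numerically; the following Python sketch (with a radial grid chosen purely for illustration) compares both sides of the expression for the value of $\alpha$ used here. \begin{verbatim}
import numpy as np

alpha = 0.988879                   # 1/fm, as in Fig. 1 and Tab. 1
r = np.linspace(0.05, 2.5, 500)    # fm; grid chosen for illustration

lhs = 1.0 / r**2                   # exact centrifugal factor
rhs = alpha**2 * np.exp(-alpha * r) / (1.0 - np.exp(-alpha * r))**2 \
      + alpha**2 / 12.0            # right-hand side of the approximation

rel_err = np.abs(lhs - rhs) / lhs
print(rel_err[r <= 1.0].max())     # below one percent for alpha*r <~ 1
print(rel_err.max())               # grows to about 12% by alpha*r ~ 2.5
\end{verbatim} Consistent with the assumption $\alpha r \ll 1$, the relative deviation is negligible at small radii and grows with $\alpha r$.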
\begin{figure}[h] \begin{center} {\includegraphics[scale=.35]{psi1211.eps}} {\includegraphics[scale=.35]{psi1212.eps}} \caption{The real part of the wavefunctions $\psi_1$ (line) and $\psi_2$ (dash) for the ground state (left) and for the first excited state (right), along with their full probability density, i.e., $\psi_1^2 + \psi_2^2$.}\label{fig2} \end{center} \end{figure} \section{Electronic properties of gapped graphene}\label{IV} As we know, the carrier motion in graphene behaves as massless Dirac fermions in a two-dimensional honeycomb lattice. In that case, the conduction and valence bands in graphene touch each other at six points, which lie on the edge of the first Brillouin zone, the so-called Dirac points. This honeycomb structure is not a Bravais lattice but can be considered as a triangular lattice with a basis of two atoms per unit cell, whose lattice vectors, $a_1$ and $a_2$, and reciprocal lattice vectors, $b_1$ and $b_2$, are \begin{subequations} \begin{eqnarray}\label{unitcell} a_{1,2} = \frac{a_0}{2} (3, \pm \sqrt{3}),\\ b_{1,2} = \frac{2 \pi}{3 a_0} (1, \pm \sqrt{3}), \end{eqnarray} \end{subequations} where $a_0 = 1.42$~\AA\ is the lattice constant (see Ref. \cite{Neto-2009} for more details). Since in this work we consider massive relativistic fermionic quasi-particles with the Morse potential, an energy gap is expected to appear between the conduction and valence bands, i.e., we obtain gapped graphene. For this purpose, we derive the dispersion relation of this system, i.e., the energy in terms of the wavevectors $K_x$ and $K_y$ in the presence of the Morse potential. By using the Dirac Hamiltonian \eqref{diraceq1} with $p_x = \hbar K_x$ and $p_y = \hbar K_y$, and then inserting \eqref{psi1} into the resulting equations, we find the dispersion relation in the $x$-$y$ coordinates in the following form \begin{equation}\label{disper1} (\widetilde{E} - \widetilde{m} - C_p)(\widetilde{E} + \widetilde{m} - W) = K_x^2 + K_y^2, \end{equation} where $W$ is the Morse potential \eqref{Morse1}, which is a function of the lattice vector. In the absence of the mass and potential terms, the dispersion relation reduces to $E = \pm \hbar v_F \sqrt{K_x^2 + K_y^2}$ for spin-half particles, which is similar to the photon energy, $E = \hbar c K$, with the velocity of light replaced by the Fermi velocity; in this case, graphene behaves as massless Dirac fermions without an energy gap. If only the Morse potential is absent, the dispersion relation is $E = \pm \sqrt{m^2 c^4 + \hbar^2 v_F^2 (K_x^2 +K_y^2)}$, and graphene behaves as massive Dirac fermions with an energy gap. Therefore, with both the mass and Morse potential terms present, i.e., Eq. \eqref{disper1}, we see an energy gap between the corresponding bands, as shown in Fig. \ref{fig3}. Fig. \ref{fig3} shows that there are two energy bands: the valence band, for energies below zero, and the conduction band, for energies above zero. We also see that the energy spectrum is linear in the wave vectors, which is one of the important properties of graphene. In order to obtain the value of the energy gap, we first take the six Dirac points obtained in Ref.
\cite{Neto-2009} in the following form \begin{equation}\label{wavevectors1} K = \left(\pm \frac{2 \pi}{3 a_0}, \frac{2 \pi}{3 \sqrt{3} a_0}\right), \left(0, -\frac{4 \pi}{3 \sqrt{3} a_0}\right), \,\,\,\,\,K' = \left(\pm \frac{2 \pi}{3 a_0}, -\frac{2 \pi}{3 \sqrt{3} a_0}\right), \left(0, \frac{4 \pi}{3 \sqrt{3} a_0}\right), \end{equation} where $K$ and $K'$ are the wavevectors of the six Dirac points. By inserting these wavevectors into the dispersion relation \eqref{disper1}, we obtain the value of the energy gap as \begin{equation}\label{gappedener1} \Delta \widetilde{E} = \widetilde{E}^+ - \widetilde{E}^- = 11.47442062 - 7.695552073 = 3.778868546 \, fm^{-1}, \end{equation} where $\widetilde{E}^{+}$ and $\widetilde{E}^{-}$ denote the conduction and valence energy bands, respectively. \begin{figure}[h] \begin{center} {\includegraphics[scale=.35]{E2Kxy.eps}} \caption{The energy bands in terms of the wavevectors $K_x$ and $K_y$.}\label{fig3} \end{center} \end{figure} \section{Conclusion}\label{V} In this paper, we studied the massive Dirac equation with two potentials, a scalar potential and a vector potential. The Dirac Hamiltonian was written in polar coordinates, with radial coordinate $r$ and azimuthal coordinate $\phi$. Then, we expressed the corresponding Hamiltonian in terms of two spinors and the spin-orbit quantum number $k$. Afterward, the two-component spinor wavefunctions were written as two second-order differential equations for spin symmetry and pseudospin symmetry. The corresponding system was explored for arbitrary spin-orbit quantum number $k$ in spin and pseudospin symmetry, in which $k < 0$ and $k > 0$ represent aligned and unaligned spin, respectively. Also, we took the sum of the scalar and vector potentials to be constant, $U(r) = C_p$, for pseudospin symmetry and, in contrast, their difference to be constant, $W(r) = C_s$, for spin symmetry. Since the motion of electrons in graphene is that of relativistic fermionic quasi-particles, we considered the corresponding system from the perspective of pseudospin symmetry in the presence of the Morse potential. For this purpose, we set the difference of the vector and scalar potentials equal to the Morse potential. In order to solve for the corresponding wavefunctions, we used an approximation for the centrifugal term under the condition $\alpha r \ll 1$. Then, by making the change of variable $z = e^{-\alpha r}$ and using the separation of variables, we wrote the wavefunction in terms of the confluent Heun function. Afterward, by comparing the second-order differential equation for the wavefunction with that of the confluent Heun function, we obtained the eigenvectors and the eigenvalues. Next, we calculated the energy spectrum for arbitrary $N$ and $k$ in terms of the coefficients of the Dirac Hamiltonian and the Morse potential for pseudospin symmetry, as given in Table \ref{tab1}. Also, the $s$, $p$, $d$ and $f$ orbitals were found from the total angular momentum and the other quantum numbers. For a more complete picture, we plotted the energy spectrum in terms of the spin-orbit quantum number $k$ for $N = 1, 2$, and saw that the energy spectrum depends linearly on $k$ at each level $N$. We then plotted the variation of the spinor wavefunction components with the radial coordinate $r$ for the ground state ($s$ orbital) and the first excited state ($p$ orbital).
As we know, electron transport in graphene is described by relativistic quantum theory in a two-dimensional system. We showed that the graphene band structure has a linear dispersion relation and that, in the presence of the mass and Morse potential terms, an energy gap opens at the Dirac points, which are described in terms of relativistic fermionic carriers. For this purpose, we obtained the energy spectrum of the valence and conduction bands in terms of the wavevectors $K_x$ and $K_y$. Finally, we plotted the energy bands in terms of the wavevectors $K_x$ and $K_y$ and calculated the value of the energy gap at the Dirac points. As a result, massive Dirac fermions give rise to gapped graphene.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Protoplanetary disks around young stars contain the gas and dust from which planetary systems will form. In the midplanes of these disks, the temperature becomes so low that molecules freeze out from the gas phase onto dust grains. The radius at which this happens for a certain molecule is defined as its snowline. The position of a snowline depends both on the species-dependent sublimation temperature and disk properties (mass, temperature, pressure and dynamics). Snowlines play an important role in planet formation as increased particle size, surface density of solid material, and grain stickiness at a snowline location may enhance the efficiency of planetesimal formation \citep{Stevenson1988,Ciesla2006,Johansen2007,Chiang2010,Gundlach2011,Ros2013}. Furthermore, the bulk composition of planets may be regulated by the location of planet formation with respect to snowlines, as gas composition and ice reservoirs change across a snowline \citep{Oberg2011,Madhusudhan2014,Walsh2015,Eistrup2016}. Determining snowline locations is thus key to studying planet formation. \begin{figure*} \centering \includegraphics[width=17cm,trim={0 16cm .5cm 1cm},clip]{PhysicalStructure_paper.pdf} \caption{Gas density (cm$^{-3}$), gas temperature (K), and dust temperature (K) as a function of disk radius, \textit{r}, and scale height, \textit{z/r}, for the adopted model for the TW Hya disk. The temperature color range is limited to highlight values around the CO snow surface. The solid black contours indicate temperatures of 100, 200 and 500 K. The blue arrow indicates the location of the midplane CO snowline associated with a freeze-out temperature of 17 K, as determined by Q13, and the dashed contour marks the corresponding snow surface.} \label{fig:PhysicalStructure} \end{figure*} The CO snowline is of particular interest because CO ice is a starting point for prebiotic chemistry \citep{Herbst2009}. Assuming a disk around a solar-type star, the CO snowline occurs relatively far (a few tens of AU) from the central star due to the low freeze-out temperature of CO; hence, it is more accessible to direct observations than other snowlines. However, locating it is difficult because CO line emission is generally optically thick, so that the bulk of the emission originates in the warm surface layers. An alternative approach is to observe molecules whose emission is expected to peak around the snowline, or molecules that are abundant only when CO is depleted from the gas phase. Based on the former argument, DCO$^+$ has been used to constrain the CO snowline location \citep{Mathews2013,Oberg2015}, but may be affected by some DCO$^+$ also formed in warm disk layers \citep{Favre2015,Qi2015}. A species from the latter category is N$_2$H$^+$ \citep{Qi2013,Qi2015}. This molecule forms through proton transfer from H$_3^+$ to N$_2$, \begin{equation} \label{eq:N2H+formation} \mathrm{N_2 + H_3^+} \rightarrow \mathrm{N_2H^+ + H_2}, \end{equation} but provided that CO is present in the gas phase, its formation is impeded, because CO competes with N$_2$ for reaction with H$_3^+$, \begin{equation} \label{eq:HCO+formation} \mathrm{CO + H_3^+} \rightarrow \mathrm{HCO^+ + H_2}. \end{equation} Furthermore, reactions with CO are the dominant destruction pathway of N$_2$H$^+$: \begin{equation} \label{eq:N2H+destruction} \mathrm{N_2H^+ + CO} \rightarrow \mathrm{HCO^+ + N_2}. 
\end{equation} N$_2$H$^+$ is therefore expected to be abundant only in regions where CO is depleted from the gas phase, i.e., beyond the CO snowline. Observational evidence for the anti-correlation of N$_2$H$^+$ and gas-phase CO was initially provided for pre-stellar and protostellar environments \citep[e.g.][]{Caselli1999,Bergin2001,Jorgensen2004b}. However, survival of N$_2$H$^+$ is aided in these systems by the delayed freeze-out of N$_2$ as compared to CO, because gas-phase N$_2$ forms more slowly when starting from atomic abundances under diffuse cloud conditions \citep{Aikawa2001,Maret2006}. In protoplanetary disks, N$_2$ molecules are expected to be more abundant than N atoms because the higher gas density increases the N$_2$ formation rate, so this timescale effect is not important. So far, the results for protoplanetary disks seem inconclusive. Recent observations of C\element[][18]{O} in the disk of HD 163296 suggest a CO snowline location consistent with the observed N$_2$H$^+$ emission \citep{Qi2015}. On the other hand, several studies indicate a depletion of CO in the disk around TW Hya down to $\sim$10~AU \citep{Favre2013,Nomura2016,Kama2016,Schwarz2016}, inconsistent with the prediction that CO is depleted only beyond a snowline at $\sim$30~AU, based on modeling of N$_2$H$^+$ observations \citep[][hereafter Q13]{Qi2013}. In this work, we explore the robustness of the N$_2$H$^+$ line emission as a tracer of the CO snowline location in the disk midplane, using a physical model (constrained by observations) for the disk around TW Hya. TW Hya is the closest protoplanetary disk system \citep[$\sim$54 pc,][]{vanLeeuwen2007} and is considered an analog of the Solar Nebula based on its disk mass and size. The spatial distribution and emission of N$_2$H$^+$ are modeled for different CO and N$_2$ abundances and binding energies, as well as different cosmic ray ionization rates and degrees of dust settling, using a simple chemical network and full radiative transfer. \citet{Aikawa2015} have shown that analytical formulae for the molecular abundances give an N$_2$H$^+$ distribution similar to that obtained with a full chemical network. They also found that the N$_2$H$^+$ abundance can peak at temperatures slightly below the CO freeze-out temperature in a typical disk around a T Tauri star, but they did not invoke radiative transfer to make a prediction for the resulting N$_2$H$^+$ emission. The physical and chemical models used in this work are described in Sect.~\ref{sec:Models}, and Sect.~\ref{sec:Results} shows the predicted N$_2$H$^+$ distributions and emission. The simulated emission is compared with that observed by Q13 and convolved with a smaller beam ($0\farcs2\times0\farcs2$) to predict results for future higher angular resolution observations. This section also studies the dependence of the model outcome on CO and N$_2$ abundances, binding energies, cosmic ray ionization rate, and dust grain settling, and the use of multiple N$_2$H$^+$ transitions to further constrain the snowline location. Finally, the dependence of the outer edge of the N$_2$H$^+$ emission on chemical and physical effects is explored. In Sect.~\ref{sec:Discussion} the implications of the results are discussed, and in Sect.~\ref{sec:Conclusions} the conclusions are summarized. \section{Protoplanetary disk model} \label{sec:Models} \subsection{Physical model} \label{sec:Physicalmodel} For the physical structure we adopt the model for TW Hya from \citet{Kama2016}.
This model reproduces the dust spectral energy distribution (SED) as well as CO rotational line profiles, from both single-dish and ALMA observations, and spatially resolved CO $J$=3--2 emission from ALMA. The 2D physical-chemical code DALI \citep[Dust And LInes,][]{Bruderer2009,Bruderer2012,Bruderer2013} was used to create the model, assuming a stellar mass and radius of \mbox{$M_*$ = 0.74 $\mathrm{M}_{\sun}$} and \mbox{$R_*$ = 1.05 $\mathrm{R}_{\sun}$}, respectively. The disk is irradiated by UV photons and X-rays from the central star and UV photons from the interstellar radiation field. The stellar UV spectrum from \citet{Cleeves2015} is used (based on \citealt{Herczeg2002,Herczeg2004} and \citealt{France2014}), which is roughly consistent with a $\sim$4000~K blackbody with UV excess due to accretion. The X-ray spectrum is modeled as a thermal spectrum at \mbox{$3.2 \times 10^6$ K} with a total X-ray luminosity of \mbox{$1.4 \times 10^{30}$ erg s$^{-1}$}, and the cosmic ray ionization rate is taken to be low, 5 $\times$ 10$^{-19}$ s$^{-1}$ \citep{Cleeves2015}. Starting from an input gas and dust density structure, the code uses radiative transfer to determine the dust temperature and local radiation field. The chemical composition is obtained from a chemical network simulation based on a subset of the UMIST 2006 gas-phase network \citep{Woodall2007} and used in a non-LTE excitation calculation for the heating and cooling rates to derive the gas temperature (see \citealt{Bruderer2012} for details). As will be shown in Sect.~\ref{sec:Results} and Fig.~\ref{fig:PhysicalStructure}, N$_2$H$^+$ is predicted in the region where the gas and dust temperatures are coupled ($z/r \lesssim 0.25$). Hence, the temperature in the relevant disk region is not sensitive to changes in molecular abundances. The input gas density follows a radial power-law surface density profile, \begin{equation} \Sigma_{\mathrm{gas}} = 30 \, \mathrm{g \, cm}^{-2} \left( \frac{r}{35 \, \mathrm{ AU}} \right) ^{-1} \exp \left( \frac{-r}{35 \, \mathrm{ AU}} \right), \end{equation} and a Gaussian vertical distribution with dimensionless scale height \begin{equation} h = 0.1 \left( \frac{r}{35 \, \mathrm{ AU}} \right) ^{0.3}. \end{equation} To match the observations, the gas-to-dust mass ratio is set to 200. Two different dust populations are considered: small grains (0.005--1 $\mu$m) represent 1\% of the dust surface density, whereas the bulk of the dust surface density is composed of large grains (0.005--1000 $\mu$m). The vertical distribution of the dust is such that large grains are settled toward the midplane with a settling parameter $\chi$ of 0.2, i.e. extending to 20\% of the scale height of the small grains: \begin{equation} \rho_{\mathrm{dust,small}} = \frac{0.01\Sigma_{\mathrm{dust}}}{\sqrt{2\pi}Rh} \exp \left[ -\frac{1}{2} \left( \frac{\pi/2-\theta}{h} \right) ^2 \right] \hspace{0.2cm} \mathrm{g \, cm}^{-3}, \mathrm{and} \end{equation} \begin{equation} \rho_{\mathrm{dust,large}} = \frac{0.99\Sigma_{\mathrm{dust}}}{\sqrt{2\pi}R\chi h} \exp \left[ -\frac{1}{2} \left( \frac{\pi/2-\theta}{\chi h} \right) ^2 \right] \hspace{0.2cm} \mathrm{g \, cm}^{-3}, \end{equation} where $\theta$ is the vertical latitude coordinate measured from the pole ($\theta = 0$) to the equator, i.e. the midplane ($\theta = \pi/2$; \citealt{Andrews2012}). In the inner 4 AU, the gas and dust surface density is lowered by a factor of 100 with respect to the outer disk to represent the gap detected in the inner disk \citep{Calvet2002,Hughes2007}. A schematic implementation of this density structure is sketched below.
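For concreteness, this density parametrization is compact enough to evaluate directly. The following Python sketch is purely illustrative (it is not part of DALI; the function names, the AU-to-cm conversion, and the folding of the inner-gap depression into the surface density are assumptions made here):

\begin{verbatim}
import numpy as np

AU_CM = 1.496e13  # 1 AU in cm

def sigma_gas(r_au):
    """Gas surface density (g cm^-2): the tapered power law given
    above, with the factor-100 depression inside 4 AU."""
    sig = 30.0 * (r_au / 35.0)**-1.0 * np.exp(-r_au / 35.0)
    return np.where(r_au < 4.0, sig / 100.0, sig)

def scale_height(r_au):
    """Dimensionless scale height h = H/r."""
    return 0.1 * (r_au / 35.0)**0.3

def rho_dust(r_au, theta, chi=0.2, gas_to_dust=200.0):
    """Small- and large-grain dust densities (g cm^-3); theta runs
    from the pole (0) to the midplane (pi/2), chi is the settling
    parameter of the large grains."""
    sig_d = sigma_gas(r_au) / gas_to_dust
    h = scale_height(r_au)
    R = r_au * AU_CM  # radius in cm, so the density is in g cm^-3
    arg_s = -0.5 * ((np.pi / 2.0 - theta) / h)**2
    arg_l = -0.5 * ((np.pi / 2.0 - theta) / (chi * h))**2
    small = 0.01 * sig_d / (np.sqrt(2.0 * np.pi) * R * h) * np.exp(arg_s)
    large = 0.99 * sig_d / (np.sqrt(2.0 * np.pi) * R * chi * h) * np.exp(arg_l)
    return small, large
\end{verbatim}

Calling \texttt{rho\_dust(35.0, np.pi / 2.0)} gives the midplane densities of the two grain populations at the characteristic radius of 35~AU; setting \texttt{chi=0.8} reproduces the less settled configuration used in Sect.~\ref{sec:GrainSettling}.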
Recent observations indicate that the dust distribution in this inner region is more complicated \citep{Andrews2016}, but this will not affect the N$_2$H$^+$ distribution in the outer disk. In Sect.~\ref{sec:GrainSettling} we examine the influence of grain settling on the N$_2$H$^+$ distribution and emission by using a model with $\chi$ = 0.8, i.e. the large grains extending to 80\% of the small grain scale height. The resulting density and thermal structure of the disk are shown in Fig.~\ref{fig:PhysicalStructure} and used in the chemical modeling described in Sect.~\ref{sec:Chemicalmodel}. A midplane temperature of 17 K corresponds to a radius of 27.5 AU, consistent with the CO snowline properties derived by Q13. In their analysis, Q13 fit ALMA observations using a power law for the radial distribution of the N$_2$H$^+$ column density, with an inner radius presumed to coincide with the CO snowline. \begin{figure}[!t] \resizebox{\hsize}{!}{\includegraphics[trim={0 1cm 0cm .4cm},clip]{ChemicalNetwork_v3_paper.pdf}} \caption{Schematic representation of the chemical network used to model N$_2$H$^+$ (red). Freeze-out and desorption products are highlighted in purple and photodissociation products are shown in blue. The processes responsible for the anti-correlation between N$_2$H$^+$ and CO are highlighted with red arrows.} \label{fig:ChemicalNetwork} \end{figure} \subsection{Chemical model} \label{sec:Chemicalmodel} If CO is abundant in the gas phase, N$_2$H$^+$ formation is slowed down (Eqs.~\ref{eq:N2H+formation}~and~\ref{eq:HCO+formation}) and N$_2$H$^+$ destruction is enhanced (Eq.~\ref{eq:N2H+destruction}). On the other hand, gas-phase N$_2$ is required to form N$_2$H$^+$ (Eq.~\ref{eq:N2H+formation}). Based on these considerations, the simplest method to predict the distribution of N$_2$H$^+$ is to calculate the balance between freeze-out and desorption for N$_2$ and CO at every position in the disk. Assuming a constant total abundance, i.e. $n_\mathrm{g}$(CO) + $n_\mathrm{s}$(CO) = $n$(CO), the steady state gas phase and ice abundances ($n_\mathrm{g}$ and $n_\mathrm{s}$, respectively) are then given by \begin{eqnarray} \label{eq:n_gas} n_\mathrm{g}(\mathrm{CO}) &=& \frac{n(\mathrm{CO})}{k_\mathrm{f}/k_\mathrm{d} + 1} \hspace{0.1cm} \mathrm{cm}^{-3}, \mathrm{and} \\n_\mathrm{s}(\mathrm{CO}) &=& n(\mathrm{CO}) - n_\mathrm{g}(\mathrm{CO}) \hspace{0.1cm} \mathrm{cm}^{-3}, \label{eq:n_ice} \end{eqnarray} where $k_\mathrm{f}$ and $k_\mathrm{d}$ are the freeze-out and desorption rates, respectively. Similar equations hold for N$_2$. Thermal desorption is considered here as the only desorption process, which is appropriate for volatile molecules such as CO and N$_2$. However, the dust density in the outer disk may be low enough for UV photons to penetrate to the disk midplane, such that photodesorption may become effective. Photodesorption is therefore included when studying the outer edge of the N$_2$H$^+$ emission in Sect.~\ref{sec:OuterEdge}. The thermal desorption rate depends on the specific binding energy for each molecule, $E_\mathrm{b}$, and for CO and N$_2$ values of 855 and 800 K \citep{Bisschop2006} are adopted, respectively. Expressions for the freeze-out and desorption rates, and a discussion of the adopted parameters, can be found in Appendix~\ref{ap:chemmodel}. Solving for the gas and ice abundances time-dependently shows that equilibrium is reached within $10^5$ years, so steady state is a reasonable assumption for a typical disk lifetime of $10^6$ years.
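To illustrate how a midplane snowline follows from this balance, the sketch below evaluates the gas-phase fraction $n_\mathrm{g}/n = (k_\mathrm{f}/k_\mathrm{d} + 1)^{-1}$ on a temperature grid and picks out the temperature at which half of the CO resides in the gas (the snow surface criterion defined below). The actual rate expressions live in Appendix~\ref{ap:chemmodel}, so standard textbook forms with round-number dust parameters and a canonical pre-exponential frequency are assumed here; the resulting temperature is indicative only:

\begin{verbatim}
import numpy as np

K_B = 1.381e-16   # Boltzmann constant (erg K^-1)
AMU = 1.661e-24   # atomic mass unit (g)

def k_freeze(T_gas, ndust_sigma=1e-7, mass_amu=28.0, S=0.9):
    """Freeze-out rate (s^-1): sticking coefficient times thermal
    speed times grain cross section per unit volume (ndust_sigma,
    cm^-1; an assumed round number, not the actual model value)."""
    v_th = np.sqrt(8.0 * K_B * T_gas / (np.pi * mass_amu * AMU))
    return S * v_th * ndust_sigma

def k_desorb(T_dust, E_b=855.0, nu0=1e12):
    """Thermal desorption rate (s^-1), with a canonical
    pre-exponential frequency nu0 of order 1e12 s^-1."""
    return nu0 * np.exp(-E_b / T_dust)

T = np.linspace(10.0, 40.0, 3001)
f_gas = 1.0 / (k_freeze(T) / k_desorb(T) + 1.0)  # gas-phase fraction
T_snow = T[np.argmin(np.abs(f_gas - 0.5))]       # 50% gas, 50% ice
print("CO freeze-out temperature ~ %.1f K" % T_snow)
\end{verbatim}

With these illustrative numbers the 50\% point falls around 25~K, in the neighborhood of the $\sim$20~K found for the actual disk model in Sect.~\ref{sec:CompareModels}; the corresponding midplane radius then follows from the temperature profile of Fig.~\ref{fig:PhysicalStructure}. Because both rates are linear in the species density, the 50\% temperature is independent of the total abundance, in line with the statement below.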
The snow surface is defined as the position in the disk where 50\% of a species is present in the gas phase and 50\% is frozen onto the grains. From Eq.~\ref{eq:n_gas} the snow surfaces for CO and N$_2$ can thus be predicted. Note that the freeze-out and desorption rates (Eqs. \ref{eq:k_freezeout} and \ref{eq:k_desorption}), and therefore the fraction of a species that is present in the gas or ice (e.g. $n_g$(CO)/$n$(CO); see Eq.~\ref{eq:n_gas}) at a certain temperature, do not depend on abundance. Hence the locations of the midplane snowlines are independent of the total, i.e. gas plus ice, CO and N$_2$ abundances. As a first approximation, N$_2$H$^+$ can be considered to be present between the CO and N$_2$ snow surfaces. Comparison with the result from the chemical model described below shows that the N$_2$H$^+$ layer extends beyond the N$_2$ snow surface, and the outer boundary is better described by the contour where only 0.05\% of the N$_2$ has desorbed while the bulk remains frozen out. We will refer to the N$_2$H$^+$ layer bounded by the CO snow surface and the contour where 0.05\% of the N$_2$ has desorbed as model ``FD'' (Freeze-out and Desorption). \begin{table*} \caption{Reactions, rate data and related parameters for the N$_2$H$^+$ chemical network. \label{tab:RateCoefficients}} \centering \begin{tabular}{l c c c c c c c c } \hline\hline \\[-.3cm] Reaction & $\zeta$\tablefootmark{a} & $\alpha$\tablefootmark{b} & $\beta$\tablefootmark{b} & $\gamma$\tablefootmark{b} & $S$\tablefootmark{c} & $E_\mathrm{b}$\tablefootmark{d} & $Y$\tablefootmark{e} & $k_0(r,z)$\tablefootmark{f} \\ & s$^{-1}$ & cm$^3$ s$^{-1}$ & & K & & K & photon$^{-1}$ & s$^{-1}$ \\ \hline \\[-.3cm] H$_2$ + cosmic ray $\rightarrow$ H$_2^+$ + e$^-$ & $1.20\times10^{-17}$ & ... & ...& ... & ...& ...& ...& ...\\ H$_2^+$ + H$_2 \rightarrow$ H$_3^+$ + H & ... & $2.08\times10^{-9}$ & 0 & 0 & ... & ... & ... & ... \\ H$_3^+$ + e$^- \rightarrow$ H$_2$ + H & ... & $2.34\times10^{-8}$ & -0.52 & 0 & ... & ... & ... & ... \\ N$_2$ + H$_3^+ \rightarrow$ N$_2$H$^+$ + H$_2$ & ... & $1.80\times10^{-9}$ & 0 & 0 & ...&...&... & ...\\ CO + H$_3^+ \rightarrow$ HCO$^+$ + H$_2$ & ... & $1.36\times10^{-9}$ & -0.14 & -3.4 & ... & ...&... & ...\\ N$_2$H$^+$ + CO $\rightarrow$ HCO$^+$ + N$_2$ & ... & \hspace{0.1cm}$8.80\times10^{-10}$ & 0 & 0 & ... & ...&... &...\\ HCO$^+$ + e$^- \rightarrow$ CO + H & ... & $2.40\times10^{-7}$ & -0.69 & 0 & ... & ... & ... & ... \\ N$_2$H$^+$ + e$^- \rightarrow$ N$_2$ + H & ... & $2.77\times10^{-7}$ & -0.74 & 0 & ... & ... & ... & ...\\ CO $\rightarrow$ CO (ice) & ... & ... & ... & ... & 0.90 & ... & ... & ... \\ N$_2$ $\rightarrow$ N$_2$ (ice) & ... & ... & ... & ... & 0.85 & ... & ... & ... \\ CO (ice) $\rightarrow$ CO & ... & ... & ... & ... & ... & 855 & ... & ... \\ N$_2$ (ice) $\rightarrow$ N$_2$ & ... & ... & ... & ... & ... & 800 & ... & ... \\ {\color[gray]{.4}CO (ice) + h$\nu \rightarrow$ CO} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4} $1.4\times10^{-3}$} & {\color[gray]{.4}...} \\ {\color[gray]{.4}N$_2$ (ice) + h$\nu \rightarrow$ N$_2$} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4}...} & {\color[gray]{.4} $2.1\times10^{-3}$} & {\color[gray]{.4}...} \\ CO + h$\nu$ $\rightarrow$ C + O & ... & ... & ... & ... & ... & ... & ... & $4.4\times10^{-7}$ \\ N$_2$ + h$\nu$ $\rightarrow$ 2 N & ... 
& ... & ... & ... & ... & ... & ... & $3.9\times10^{-7}$ \\ \hline \end{tabular} \tablefoot{Equations for the reaction rate coefficients or reaction rates can be found in Appendix~\ref{ap:chemmodel}. Photodesorption processes are shown in grey and are only considered in model CH-PD. For photodissociation the unshielded rates are listed. \\ \tablefoottext{a} \mbox{Cosmic} ray ionization rate taken from \citet{Cravens1978}. \tablefoottext{b} \mbox{Values} taken from the \textsc{Rate}12 release of the UMIST database for Astrochemistry \citep{McElroy2013}. \tablefoottext{c} \mbox{Lower} limits for the sticking coefficients taken from \citet{Bisschop2006}. \tablefoottext{d} \mbox{Binding} energies for pure ices taken from \citet{Bisschop2006}. \tablefoottext{e} \mbox{Photodesorption} yields. For CO, the yield is taken from \citet{Paardekooper2016} for CO ice at 20~K. For N$_2$, the result from \citet{Bertin2013} for mixed ices with CO:N$_2$ = 1:1 in protoplanetary disks is used. The yield for CO under these conditions is similar to the one reported by \citet{Paardekooper2016}. \tablefoottext{f} \mbox{Unattenuated} photodissociation rates for the adopted radiation field at a disk radius of 25 AU. Unshielded photodissociation rates for CO are taken from \citet{Visser2009} and for N$_2$ from \citet{Li2013} and \citet{Heays2014}.} \end{table*} \begin{table} \addtolength{\tabcolsep}{-2pt} \caption{Overview of models and adopted parameters. \label{tab:Models}} \centering \begin{tabular}{l c c c c c c c c} \hline\hline \\[-.3cm] Model & $\chi$ \tablefootmark{a} & $E_\mathrm{b}$(CO) \tablefootmark{b} & $E_\mathrm{b}$(N$_2$) \tablefootmark{b} & $\zeta_{\mathrm{CR}}$ \tablefootmark{c} & Photo- \\ & & [K] & [K] & [s$^{-1}$] & desorption \\ \hline \\[-.3cm] CH & 0.2 & 855 & 800 & $1.2 \times 10^{-17}$ & \\ CH-Eb1 & 0.2 & 1150 & 1000 & $1.2 \times 10^{-17}$ & \\ CH-Eb2 & 0.2 & 1150 & 800 & $1.2 \times 10^{-17}$ & \\ CH-CR1 & 0.2 & 855 & 800 & $1.0 \times 10^{-19}$ & \\ CH-CR2 & 0.2 & 855 & 800 & $5.0 \times 10^{-17}$ & \\ CH-PD & 0.2 & 855 & 800 & $1.2 \times 10^{-17}$ & yes \\ CH-$\chi$0.8 & 0.8 & 855 & 800 & $1.2 \times 10^{-17}$ & \\ \hline \end{tabular} \tablefoot{\tablefoottext{a} \mbox{Large} grain settling parameter. \tablefoottext{b} \mbox{Binding} energy. \tablefoottext{c} \mbox{Cosmic} ray ionization rate.} \addtolength{\tabcolsep}{+2pt} \end{table} Prediction of the N$_2$H$^+$ abundance itself requires solving a chemical model. To avoid uncertainties associated with full chemical network models, a reduced chemical network, incorporating the key processes affecting the N$_2$H$^+$ abundance, including the freeze-out and thermal desorption of CO and N$_2$, is adopted. This network is similar to that used by \citet{Jorgensen2004a} for protostellar envelopes, but with freeze-out, thermal desorption and photodissociation of CO and N$_2$ included (see Fig. \ref{fig:ChemicalNetwork}). It resembles the analytical approach applied by \citet{Aikawa2015}. The most important aspects are described below and a more detailed description can be found in Appendix~\ref{ap:chemmodel}. Incorporation of CO and N$_2$ destruction by photodissociation in the surface and outer layers of the disk is necessary because depletion of the parent molecule, and a possible change in N$_2$/CO ratio, may affect the N$_2$H$^+$ abundance. For CO and N$_2$, photodissociation occurs through line absorption, and shielding by H$_2$ and self-shielding are important. 
For CO, photodissociation cross sections and shielding functions were taken from \citet{Visser2009}, and for N$_2$ from \citet{Li2013} and \citet{Heays2014}. For a given radiation field, both photodissociation rates are accurate to better than 20\%, and the difference in unshielded rates ($2.6\times10^{-10}$ versus $1.7\times10^{-10}$ s$^{-1}$ in the general interstellar radiation field) turns out to be significant. Note that gas-phase formation of CO and N$_2$ is ignored, such that the model predicts a steep cutoff in the gas-phase abundances in the disk atmosphere. However, this should not affect the freeze-out and desorption balance around the snow surfaces, as they are located deeper within the disk. The system of ordinary differential equations dictating the reduced chemistry was solved using the Python function \texttt{odeint}\footnote{The function \texttt{odeint} is part of the \texttt{SciPy} package (http://www.scipy.org/) and uses \texttt{lsoda} from the FORTRAN library \texttt{odepack}.} up to a typical disk lifetime of $10^{6}$ yr. As an initial condition, all CO and N$_2$ is considered to be frozen out, while all other abundances (except H$_2$) are set to zero. In Sect.~\ref{sec:Abundance} the effect of the CO and N$_2$ abundances, and of the N$_2$/CO ratio, is studied by varying the total, i.e. gas plus ice, abundances between $10^{-7}$ and $10^{-4}$ (with respect to H$_2$) such that the N$_2$/CO ratio ranges between 0.01 and 100. We will refer to these models as model ``CH'' (simple CHemical network). The adopted parameters are listed in Table~\ref{tab:RateCoefficients}. \begin{figure*} \centering \includegraphics[width=17cm,trim={0 16.2cm 0cm 1.1cm},clip]{N2H+distribution_modelCH_paper.pdf} \caption{Distributions of CO gas, N$_2$ gas and N$_2$H$^+$ in the simple chemical model (model \mbox{CH}) with CO and N$_2$ abundances of $3 \times 10^{-6}$. To focus on the region around the CO snow surface, the vertical scale is limited to a scale height $z/r \leq$ 0.2. The rightmost panel highlights the region where N$_2$H$^+$ is present near the disk midplane. The dashed and dash-dotted contours represent the CO and N$_2$ snow surfaces, respectively, and the corresponding midplane snowlines are indicated by arrows below the horizontal axis of the rightmost panel. The midplane radius with the highest N$_2$H$^+$ abundance is marked with a red arrow.} \label{fig:N2H+distribution} \end{figure*} The temperature at which a molecule freezes out depends on the gas density and on the binding energy for each molecule, $E_\mathrm{b}$. In the fiducial FD and CH models, binding energies for pure ices are used. When CO and N$_2$ are in contact with water ice, their binding energies are higher. Recent results from \citet{Fayolle2016} show that, as long as the ice morphology and composition are equivalent for both CO and N$_2$, the ratio of the binding energies remains the same ($\sim$0.9). The effect of different binding energies will be studied in Sect.~\ref{sec:Eb} by adopting values of 1150~K and 1000~K (model \mbox{CH-Eb1}) and 1150~K and 800~K (model \mbox{CH-Eb2}), for CO and N$_2$, respectively. The former values are for both CO and N$_2$ on a water ice surface \citep{Garrod2006}, i.e. representing a scenario in which all ices evaporate during disk formation and then recondense. The latter model represents a situation in which CO is in contact with water ice, while N$_2$ resides in a pure ice layer.
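Returning to the reduced network itself, the sketch below makes the structure of the \texttt{odeint} calculation explicit for a single disk location. It is a schematic of the approach rather than the implementation of Appendix~\ref{ap:chemmodel}: the gas-phase CO and N$_2$ densities are held fixed (i.e. the freeze-out/desorption and photodissociation balance is taken as already evaluated), the electron density follows from charge neutrality over the three ions, every cosmic ray ionization of H$_2$ is assumed to yield H$_3^+$ directly (the H$_2^+$ + H$_2$ step is fast), and the density, temperature and abundances are placeholder midplane values:

\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

def k2(alpha, beta, gamma, T):
    """Two-body rate coefficient k = alpha (T/300)^beta exp(-gamma/T)."""
    return alpha * (T / 300.0)**beta * np.exp(-gamma / T)

T, n_H2 = 20.0, 1.0e9       # placeholder midplane values (K, cm^-3)
zeta = 1.2e-17              # cosmic ray ionization rate (s^-1)
n_CO = 1.0e-8 * n_H2        # fixed gas-phase densities, chosen to
n_N2 = 3.0e-6 * n_H2        # mimic conditions outside the CO snowline

# Rate coefficients (alpha, beta, gamma) from the reaction table above
kr_H3  = k2(2.34e-8, -0.52,  0.0, T)   # H3+  + e-
kf_N2H = k2(1.80e-9,  0.00,  0.0, T)   # N2   + H3+ -> N2H+ + H2
kf_HCO = k2(1.36e-9, -0.14, -3.4, T)   # CO   + H3+ -> HCO+ + H2
kd_N2H = k2(8.80e-10, 0.00,  0.0, T)   # N2H+ + CO  -> HCO+ + N2
kr_HCO = k2(2.40e-7, -0.69,  0.0, T)   # HCO+ + e-
kr_N2H = k2(2.77e-7, -0.74,  0.0, T)   # N2H+ + e-

def rhs(y, t):
    n_H3, n_N2Hp, n_HCOp = y
    n_e = n_H3 + n_N2Hp + n_HCOp       # charge neutrality over the ions
    dH3 = zeta * n_H2 - n_H3 * (kr_H3 * n_e + kf_N2H * n_N2
                                + kf_HCO * n_CO)
    dN2Hp = (kf_N2H * n_N2 * n_H3
             - n_N2Hp * (kd_N2H * n_CO + kr_N2H * n_e))
    dHCOp = (kf_HCO * n_CO * n_H3 + kd_N2H * n_CO * n_N2Hp
             - kr_HCO * n_HCOp * n_e)
    return [dH3, dN2Hp, dHCOp]

t = np.linspace(0.0, 1.0e6 * 3.156e7, 500)  # up to 10^6 yr, in seconds
sol = odeint(rhs, [0.0, 0.0, 0.0], t)
print("x(N2H+) ~ %.1e" % (sol[-1, 1] / n_H2))
\end{verbatim}

Rerunning this with a larger \texttt{n\_CO} illustrates the competition described in Sect.~\ref{sec:Chemicalmodel}: the N$_2$H$^+$ abundance drops as reactions (\ref{eq:HCO+formation}) and (\ref{eq:N2H+destruction}) take over, which is the behavior that displaces the N$_2$H$^+$ peak beyond the snowline.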
Another important parameter in the simple chemical model is the cosmic ray ionization rate, since it controls the H$_3^+$ abundance, which is important for the formation of N$_2$H$^+$. Based on modeling of HCO$^+$ and N$_2$H$^+$ line fluxes and spatially resolved emission, \citet{Cleeves2015} have suggested that the cosmic ray ionization rate in TW Hya is very low, of order \mbox{$10^{-19}$ s$^{-1}$}. The importance of the cosmic ray ionization rate is addressed in Sect.~\ref{sec:ZetaCR} by adopting values of \mbox{$\zeta = 1 \times 10^{-19}$ s$^{-1}$} (CH-CR1) and \mbox{$\zeta = 5 \times 10^{-17}$ s$^{-1}$} (CH-CR2), as also used by \citet{Aikawa2015} in their study of N$_2$H$^+$. An overview of all CH models is given in Table~\ref{tab:Models}. \subsection{Line radiative transfer} Emission from the N$_2$H$^+$ $J$ = 4--3 (372~GHz), $J$ = 3--2 (279~GHz) and $J$ = 1--0 (93~GHz) transitions was simulated with the radiative transfer code LIME \citep[LIne Modeling Engine,][]{Brinch2010}, assuming a distance, inclination and position angle appropriate for TW Hya: 54 pc, 6$\degr$ and 155$\degr$, respectively \citep{Hughes2011,Andrews2012}. These are the same values as adopted by Q13. The LIME grid was constructed such that the grid points lie within and just outside the region where the N$_2$H$^+$ abundance exceeds $1\times10^{-13}$. In the disk region where N$_2$H$^+$ is predicted, the gas density is larger than the $J$=4--3 critical density of $\sim8\times10^6$~cm$^{-3}$ (see Fig.~\ref{fig:PhysicalStructure}), so to reduce CPU time, models were run in LTE. The simulated images were convolved with a 0\farcs63 $\times$ 0\farcs59 beam, similar to the reconstructed beam of Q13, and a 0\farcs2 $\times$ 0\farcs2 beam to anticipate future higher spatial resolution observations. For the \mbox{$J$ = 4--3} transition, the line profiles and the integrated line intensity profiles were compared to the observational data reduced by Q13. \section{Results} \label{sec:Results} \subsection{N$_2$H$^+$ distribution and emission} \label{sec:CompareModels} Figure~\ref{fig:N2H+distribution} shows the distribution of CO gas, N$_2$ gas and N$_2$H$^+$ as predicted by the simple chemical model (model CH). Abundance refers to fractional abundance with respect to H$_2$ throughout this work. CO and N$_2$ are frozen out in the disk midplane and destroyed by UV photons higher up in the disk. The snow surface is defined as the position in the disk where the gas-phase and ice abundances become equal (dashed and dash-dotted contours in Fig.~\ref{fig:N2H+distribution}, left panels), and the snowline is the radius at which this happens in the midplane. For the physical structure and fiducial binding energies adopted, the CO snowline is then located at 19~AU, which corresponds to a temperature for both the gas and dust of \mbox{$\sim$20~K}. This is smaller than the snowline location of 30~AU (corresponding to 17~K) as inferred by Q13, but in good agreement with recent results from \citet{Zhang2016}, who directly detect the CO snowline around 17~AU using \element[][13]C\element[][18]O observations. Although the N$_2$H$^+$ abundance starts to increase at the midplane CO snowline, it peaks \mbox{$\sim$10~AU} further out (red arrow in Fig.~\ref{fig:N2H+distribution}, rightmost panel). It thus seems that the reduction in CO gas abundance at the snowline is not sufficient to allow N$_2$H$^+$ to be abundant, but that an even higher level of depletion is required to favor N$_2$H$^+$ formation over destruction.
On the other hand, very low fractions of N$_2$ in the gas phase are sufficient to allow N$_2$H$^+$ formation, extending the N$_2$H$^+$ layer beyond the N$_2$ snow surface. In addition to the expected N$_2$H$^+$ layer, N$_2$H$^+$ is predicted to be abundant in a layer higher up in the disk where the N$_2$ abundance in the gas phase exceeds that of CO due to a slightly lower photodissociation rate of N$_2$ as compared with CO. The presence of N$_2$H$^+$ in the surface layers is also seen in full chemical models \citep{Walsh2010,Cleeves2014,Aikawa2015} and its importance is further discussed in Sect.~\ref{sec:SurfaceLayer}. The results from the simple chemical model thus deviate from the expectation that N$_2$H$^+$ is most abundant in a layer directly outside the CO snowline, as can also be seen from the radial column density profiles in Fig.~\ref{fig:Emission} (top panel). When considering only freeze-out and desorption (model FD) and assuming a constant N$_2$H$^+$ abundance of $3\times10^{-10}$ between the CO snow surface and the 0.05\% contour for N$_2$ gas, the N$_2$H$^+$ column density peaks only 2 AU outside the snowline. By contrast, in model CH this peak is located 11 AU further out in the disk, at the snowline location derived by Q13. In addition, the column density profile for model CH is flatter due to the N$_2$H$^+$ surface layer. \begin{figure} \centering \includegraphics[trim={0 6.8cm 0cm 1cm},clip]{Emission_Chemmodels_paper.pdf} \caption{N$_2$H$^+$ column density profile (top panel) and simulated \mbox{$J$ = 4--3} line emission (middle and bottom panel) for the N$_2$H$^+$ distributions predicted by the simple chemical model with CO and N$_2$ abundances of $3\times10^{-6}$ (model CH; red lines) and a model incorporating only freeze-out and desorption (model FD; black lines). Integrated line intensity profiles are shown after convolution with a $0\farcs63\times0\farcs59$ beam (middle panel) or a $0\farcs2\times0\farcs2$ beam (bottom panel). Observations by Q13 are shown in grey in the middle panel with the 3$\sigma$-error depicted in the lower right corner. The vertical grey line marks the position of the observed emission peak. The vertical blue line indicates the position of the midplane CO snowline inferred from these observations by Q13, while the red line indicates the location of the midplane CO snowline in the models.} \label{fig:Emission} \end{figure} \begin{figure} \sidecaption \includegraphics{ColumndensityPeak_paper.pdf} \caption{Position of the N$_2$H$^+$ column density peak in model CH for different CO and N$_2$ abundances. The best-fit model with abundances of $3\times10^{-6}$, as shown in Fig.~\ref{fig:N2H+distribution}, is indicated by a star and the color of the symbols represents the value of the N$_2$/CO ratio. The vertical red line marks the location of the CO snowline in the models.} \label{fig:AbundanceColDens} \end{figure} In order to determine whether this difference in N$_2$H$^+$ distribution is large enough to cause different emission profiles, emission from the N$_2$H$^+$ $J$~=~4--3 (372~GHz) transition was simulated. Model FD fits the observed emission peak reasonably well for an N$_2$H$^+$ abundance of $3 \times 10^{-10}$, although the simulated emission peak is located 7~AU closer to the star than observed. Variations in the assumed N$_2$H$^+$ abundance affect only the intensity, not the position of the peak.
On the other hand, model CH can reproduce the position of the emission peak for a CO and N$_2$ abundance of $3 \times 10^{-6}$ (Fig.~\ref{fig:Emission}, middle panel). The underprediction of the emission in the outer disk is further discussed in Sect.~\ref{sec:OuterEdge}. The difference between the models becomes more prominent at higher spatial resolution (Fig.~\ref{fig:Emission}, bottom panel). In that case, model FD predicts the emission peak 10~AU outside the snowline (instead of 17~AU), while this is 30~AU for model CH (instead of 24~AU) due to the flattened column density profile. An N$_2$H$^+$ column density peaking at 30~AU, 11~AU outside the snowline, can thus reproduce the observed emission peak, which is in agreement with Q13, unlike a column density profile peaking directly at the CO snowline. However, this is only the case for a low CO and N$_2$ abundance of $3\times10^{-6}$, as discussed further below. \subsection{Influence of CO and N$_2$ abundances}\label{sec:Abundance} To examine whether the exact amount of CO present in the gas phase is more important for the N$_2$H$^+$ distribution than the location of the CO snowline, as suggested above, the total CO and N$_2$ abundances in the simple chemical network were varied. Changing the CO abundance does not influence the N$_2$H$^+$ distribution via temperature changes, since the gas and dust are coupled in the region where N$_2$H$^+$ is present (see Sect.~\ref{sec:Physicalmodel} and Fig.~\ref{fig:PhysicalStructure}). Furthermore, recall that the location of the midplane CO snowline does not depend on abundance and thus remains at 19~AU for all models that adopt the fiducial binding energies. The position of the N$_2$H$^+$ column density peak, however, turns out to move further away from the snowline with increasing CO abundance (Fig.~\ref{fig:AbundanceColDens}). This reinforces the idea that the gas-phase CO abundance remains too high for N$_2$H$^+$ to be abundant after the 50\% depletion at the snowline. Instead, N$_2$H$^+$ peaks once the amount of CO in the gas phase drops below a certain threshold, which is reached further away from the snowline for higher CO abundances. This is in agreement with the conclusions from \citet{Aikawa2015}. Moreover, the position of the column density peak also depends on the N$_2$ abundance. For a fixed CO abundance, the position of the maximum N$_2$H$^+$ column density shifts outward with increasing N$_2$ abundance, since the amount of gas-phase N$_2$ remains high enough for efficient N$_2$H$^+$ formation at larger radii. The N$_2$H$^+$ distribution thus strongly depends on the amount of both CO and N$_2$ present in the gas phase, with the column density peaking \mbox{6--18 AU} outside the CO snowline for different abundances. \subsection{Importance of the N$_2$H$^+$ surface layer}\label{sec:SurfaceLayer} Besides the expected N$_2$H$^+$ layer outside the CO snow surface, model CH also predicts a layer higher up in the disk where N$_2$H$^+$ is abundant as a result of a slightly lower N$_2$ photodissociation rate compared with CO. Since both molecules can self-shield, the photodissociation rates depend on molecular abundances. Therefore, the CO and N$_2$ abundances influence the shape of the N$_2$H$^+$ surface layer, as shown in Fig.~\ref{fig:N2H+_abundances}. When N$_2$ is at least as abundant as CO, N$_2$H$^+$ can survive in the region where CO is photodissociated but N$_2$ is still present.
The higher the abundances, the closer to the disk surface a sufficiently high column density is reached for efficient self-shielding, and the more extended the N$_2$H$^+$ surface layer becomes (Fig.~\ref{fig:N2H+_abundances}, left panel). The inner boundary of the surface layer is set where CO photodissociation ceases to be effective. For lower CO and N$_2$ abundances, photodissociating photons can penetrate deeper into the disk, and the N$_2$H$^+$ surface layer is located closer to the star (Fig.~\ref{fig:N2H+_abundances}, middle panel). The layer then no longer extends to the disk outer radius because most N$_2$ is photodissociated in the outer regions. Finally, when CO is more abundant than N$_2$, the surface layer shrinks, until, for N$_2$/CO $\lesssim$ 0.2, CO becomes abundant enough everywhere above the snow surface to shift the balance towards N$_2$H$^+$ destruction (Fig.~\ref{fig:N2H+_abundances}, right panel). \begin{figure} \sidecaption \includegraphics[width=0.5\textwidth]{N2H+distribution_Abundance_paper.pdf} \caption{Distribution of N$_2$H$^+$ in the simple chemical model (model CH) for different N$_2$ and CO abundances as listed above the panels. To focus on the region around the CO snow surface, the vertical scale is limited to a scale height $z/r \leq$ 0.25. The dashed contour represents the CO snow surface.} \label{fig:N2H+_abundances} \end{figure} To address the influence of the N$_2$H$^+$ surface layer, \mbox{$J$=4--3} lines were simulated for model CH with different CO and N$_2$ abundances, with the CO snow surface set as an upper boundary. In other words, no N$_2$H$^+$ is present above the CO snow surface in these ``snow surface only'' models. Removing the N$_2$H$^+$ surface layer hardly affects the position of the column density peak (Fig.~\ref{fig:ColumndensityPeak}, top left panel), suggesting that the offset between the N$_2$H$^+$ column density peak and the CO snowline is not caused by the surface layer but rather is a robust chemical effect. The emission, however, is strongly influenced by the surface layer (Fig.~\ref{fig:EmissionPeak}, top left panel). In the full CH models, the emission peak is shifted away from the snowline for higher CO abundances by up to $\sim$50~AU, while in the snow surface only models, the emission traces the column density peak with an offset related to the beam size. Only for CO abundances~$\sim10^{-6}$ or N$_2$/CO ratios $\lesssim$ 1 (blue plus signs in Fig.~\ref{fig:EmissionPeak}) does the emission trace the column density in the full models, and only for even lower CO abundances ($\sim10^{-7}$) does the emission peak at the snowline. In addition to the N$_2$H$^+$ column density offset, the relation between CO snowline and N$_2$H$^+$ emission is thus weakened even more in models with N$_2$/CO $\gtrsim$ 0.2 due to the presence of an N$_2$H$^+$ surface layer that causes the emission to shift outward. Furthermore, the N$_2$H$^+$ surface layer contributes significantly to the peak integrated intensity. This intensity shows a linear correlation with the N$_2$/CO ratio, but the difference of \mbox{$\sim$600 mJy beam$^{-1}$ km s$^{-1}$} (for the $0\farcs63\times0\farcs59$ beam) between models with N$_2$/CO ratios of 0.01 and 100 reduces to only \mbox{$\sim$100 mJy beam$^{-1}$ km s$^{-1}$} in the snow surface only models (see Fig.~\ref{fig:EmissionIntensity}). For the TW Hya physical model adopted, a surface layer of N$_2$H$^+$, in addition to the midplane layer outside the CO snow surface, seems necessary to reproduce the observed integrated peak intensity.
This is in agreement with \citet{Nomura2016}, who suggest, based on the brightness temperature, that the N$_2$H$^+$ emission in TW Hya originates in the disk surface layer. \begin{figure*} \vspace{-0.4cm} \centering \includegraphics[width=17cm,trim={0 12.3cm 0cm 1.3cm},clip]{ColumndensityPeak_Overview_paper.pdf} \caption{Position of the N$_2$H$^+$ column density peak in the different models (listed in the lower right corner of each panel) for different CO and N$_2$ abundances. From left to right and top to bottom: the fiducial models (CH), models with both CO and N$_2$ binding energies increased (CH-Eb1), models with only CO binding energy increased (CH-Eb2), models with large grains settled to only 80\% of small grain scale height (CH-$\chi$0.8), models with a lower cosmic ray ionization rate ($1\times10^{-19}$~s$^{-1}$; CH-CR1) and models with a higher cosmic ray ionization rate ($5\times10^{-17}$~s$^{-1}$; CH-CR2). Models with N$_2$/CO ratios $<$ 1 are highlighted with blue plus signs. Red circles in the left panels represent the snow surface only models, i.e. N$_2$H$^+$ removed above the CO snow surface. The red lines mark the location of the CO snowline in the models. The grey line indicates the position of the observed emission peak.} \label{fig:ColumndensityPeak} \includegraphics[width=17cm,trim={0 12.3cm 0cm 1.3cm},clip]{EmissionPeak_Overview_paper.pdf} \caption{As Fig.~\ref{fig:ColumndensityPeak}, but for the position of the simulated N$_2$H$^+$ $J$=4--3 emission peak after convolution with a $0\farcs63\times0\farcs59$ beam.} \label{fig:EmissionPeak} \vspace{-0.1cm} \end{figure*} \subsection{Influence of CO and N$_2$ binding energies}\label{sec:Eb} The location of the CO snowline depends on the CO binding energy. To address whether the offset between the N$_2$H$^+$ peak and the CO snowline is a result of the adopted binding energies, models were run with a higher CO binding energy (1150 K), i.e. assuming CO on a water ice surface (model CH-Eb2). As the amount of N$_2$ also influences the N$_2$H$^+$ distribution, models were run with a higher binding energy for both CO and N$_2$ (1150 and 1000 K, respectively) as well (model CH-Eb1). The positions of the N$_2$H$^+$ column density and emission peaks for different CO and N$_2$ abundances are shown in the top middle and top right panels of Figs.~\ref{fig:ColumndensityPeak} and \ref{fig:EmissionPeak}, respectively. When the binding energy is increased for both species (model CH-Eb1), the results are similar to before. The N$_2$H$^+$ column density peaks \mbox{5--9 AU} outside the CO snowline, and the emission peak shifts to even larger radii with increasing CO abundance when an N$_2$H$^+$ surface layer is present (black circles in Fig.~\ref{fig:EmissionPeak}). Increasing only the CO binding energy, i.e. shifting the CO snowline inward but not affecting the N$_2$ snowline (model CH-Eb2), results in the N$_2$H$^+$ column density peaking \mbox{12--26 AU} from the CO snowline. The emission peaks, however, stay roughly at the same radii for both models, thus better tracing the column density maximum when the CO and N$_2$ snowlines are further apart. The peak integrated intensities are similar for all three sets of binding energies. The N$_2$H$^+$ column density thus peaks outside the CO snowline for all binding energies tested, and the offset is largest when the CO and N$_2$ snowlines are furthest apart. The offset between snowline and emission peak is roughly independent of the binding energies, except for CO abundances of $\sim10^{-4}$.
The position of the emission peak therefore does not uniquely determine the position of the column density peak. \subsection{Influence of the cosmic ray ionization rate}\label{sec:ZetaCR} The cosmic ray ionization rate controls the H$_3^+$ abundance, and may therefore have an effect on the N$_2$H$^+$ distribution. To address the importance of the cosmic ray ionization rate, model CH was run with \mbox{$\zeta = 5 \times 10^{-17}$ s$^{-1}$} (CH-CR2), as also used by \citet{Aikawa2015} in their study of N$_2$H$^+$, and \mbox{$\zeta = 1 \times 10^{-19}$ s$^{-1}$} (CH-CR1), as suggested by \citet{Cleeves2015}. The results for the N$_2$H$^+$ column density and \mbox{$J$=4--3} emission are presented in Figs.~\ref{fig:ColumndensityPeak} and \ref{fig:EmissionPeak}, respectively (bottom middle and right panels). The trends seen for the position of the column density and emission peak are roughly the same as for the fiducial models with \mbox{$\zeta = 1.2 \times 10^{-17}$ s$^{-1}$}, although both offsets are $\sim$10~AU larger for the lowest cosmic ray ionization rate. The very small radius at which the emission peaks for model CH-CR2 with a CO abundance of $\sim10^{-7}$ is due to the combination of a higher N$_2$H$^+$ abundance in the inner few tens of AU, as compared to models with higher CO abundances, and the $0\farcs6$ ($\sim$32~AU) beam. The strongest effect of the cosmic ray ionization rate is on the strength of the peak integrated intensity. Models CH-CR2 predict a higher peak integrated intensity than observed, while N$_2$ needs to be more than two orders of magnitude more abundant than CO to be consistent with the low cosmic ray ionization rate of \mbox{$10^{-19}$ s$^{-1}$} in models CH-CR1 (see Fig.~\ref{fig:Intensity_ZetaCR}). The cosmic ray ionization rate thus influences the distribution of N$_2$H$^+$ with respect to the snowline, with the column density peaking closest to the snowline for the highest values of $\zeta$ and the lowest CO abundances. However, the smallest offset remains 4~AU. \begin{figure} \centering \includegraphics[scale=0.9]{N2H+distribution_Grainsettling_paper.pdf} \vspace{-0.2cm} \caption{N$_2$H$^+$ distribution predicted by the simple chemical model for a physical structure with the large grains settled to only 80\% of the small grain scale height (model CH-$\chi$0.8). Abundances of $3\times10^{-6}$ are adopted for both CO and N$_2$.} \label{fig:N2H+Grainsettling} \end{figure} \subsection{Influence of grain settling}\label{sec:GrainSettling} In the physical model adopted so far, the large grains have settled toward the disk midplane. The distribution of the dust is important because it affects the UV penetration and the disk thermal structure, which is determined by the processing of UV radiation by the dust particles. Since the location of the CO snow surface is temperature dependent, grain settling may indirectly influence the location of the CO snowline. To examine whether this also influences the relation between N$_2$H$^+$ and the snowline, a physical model in which the large grains have only settled to 80\% of the small grain scale height is used. The N$_2$H$^+$ distribution predicted by the simple chemical model for CO and N$_2$ abundances of $3\times10^{-6}$ is presented in Fig.~\ref{fig:N2H+Grainsettling}. The CO snow surface is now located higher up in the disk as a consequence of the shallower temperature gradient near the midplane. In other words, the temperature stays below the CO freeze-out temperature at larger scale heights.
The resulting increase in the N$_2$H$^+$ column just outside the snowline, in combination with the smaller N$_2$H$^+$ surface layer, reduces the contribution of this layer. This is, for instance, reflected in the peak integrated intensity; the difference between full models and snow surface only models is now only a factor of $\sim$2 instead \mbox{of $\sim$5}. Figures~\ref{fig:ColumndensityPeak} and \ref{fig:EmissionPeak} (bottom left panels) show what this means for the positions of the N$_2$H$^+$ column density and emission peaks. Due to the different temperature structure, the CO snowline is located at 25~AU, but the N$_2$H$^+$ column density still peaks 6--22~AU further out. However, the offset between column density and emission peak is now different. The emission does trace the column density for CO abundances higher than $\sim5\times10^{-6}$, while for lower abundances the emission peaks at smaller radii than the column density. Again, when the surface layer is removed, the emission roughly traces the column density for all CO and N$_2$ abundances. Thus, the N$_2$H$^+$ emission seems sensitive not only to the chemical conditions, but also to the physical conditions in the disk and the UV penetration. Depending on the degree of grain settling, the emission traces the column density for different CO abundances, although the N$_2$H$^+$ column density peaks outside the CO snowline in all models. \begin{figure} \centering \includegraphics[width=17cm,trim={0 13.6cm 0cm 1.5cm},clip]{EmissionTrends_Transitions_paper.pdf} \caption{Position of the N$_2$H$^+$ $J$=4--3 (black circles), $J$=3--2 (blue plus signs) and $J$=1--0 (red crosses) emission peaks for different CO and N$_2$ abundances in the simple chemical model (model CH) (top panels) and the corresponding snow surface only models, i.e. N$_2$H$^+$ removed above the CO snow surface (bottom panels). The emission is convolved with a $0\farcs63\times0\farcs59$ beam (left panels) or $0\farcs2\times0\farcs2$ beam (right panels). The red lines mark the location of the CO snowline in the models.} \label{fig:Transitions} \end{figure} \subsection{Constraints provided by multiple N$_2$H$^+$ transitions}\label{sec:Transitions} For N$_2$/CO ratios larger than $\sim$0.2, the simple chemical network predicts that N$_2$H$^+$ is also abundant in a surface layer above the CO snow surface. The presence of this surface layer significantly influences the N$_2$H$^+$ \mbox{$J$=4--3} emission and complicates the relationship between N$_2$H$^+$ and the CO snowline. To assess whether a different N$_2$H$^+$ transition would be better suited to trace the CO snowline, emission was simulated for the \mbox{$J$=3--2} (279~GHz) and \mbox{$J$=1--0} (93~GHz) transitions for models CH and CH-$\chi$0.8. The results for the position of the N$_2$H$^+$ emission peaks in model CH are shown in Fig.~\ref{fig:Transitions}. For the full models with N$_2$/CO~$>$~0.2, the emission peak shifts outward with decreasing transition frequency (Fig.~\ref{fig:Transitions}, top panels), while all transitions peak at a similar radius for the models where the N$_2$H$^+$ surface layer has been removed (Fig.~\ref{fig:Transitions}, bottom panels) or is not present. When the emission is convolved with a $0\farcs2\times0\farcs2$ beam, the \mbox{$J$=1--0} transition peaks in some models at smaller radii than the other transitions.
That is because in these cases the structure caused by the surface layer can be resolved, revealing two components that are smeared into one broad feature by the $0\farcs63\times0\farcs59$ beam. Similar results are obtained for model CH-$\chi$0.8 (not shown). Observing multiple transitions thus seems to provide a good indication of whether or not a surface layer of N$_2$H$^+$ contributes to the emission, and thus how well the emission traces the column density. Although comparison of the emission-peak positions for different transitions may indicate the contribution of an N$_2$H$^+$ surface layer, no information is provided on how far the emission peak is then offset from the column density peak or actual CO snowline. To examine whether N$_2$H$^+$ line ratios may contribute to addressing this problem, the \mbox{$J$=4--3/$J$=3--2} and \mbox{$J$=4--3/$J$=1--0} ratios are calculated. Results for model CH and model CH-$\chi$0.8 with three different CO and N$_2$ abundances (as shown in Fig.~\ref{fig:N2H+_abundances}) are presented in Fig.~\ref{fig:Lineratios}. When the N$_2$H$^+$ surface layer is removed or not present at all, both line ratios are nearly constant throughout the disk at \mbox{$J$=4--3/$J$=3--2 $\approx$ 1.2} and \mbox{$J$=4--3/$J$=1--0 $\approx$ 20}. Only at $0\farcs2$ resolution does the \mbox{$J$=4--3/$J$=1--0} ratio increase in the inner $\sim$30~AU. In the fiducial model with the large grains settled to 20\% of the small grain scale height, both line ratios can distinguish between differently shaped N$_2$H$^+$ surface layers. The line ratios become steeper when the surface layer extends to about half the disk radius, and increase in value for a surface layer extending to the disk outer radius. In model CH-$\chi$0.8, the surface layer contributes less to the emission, and although the line ratios show an increase at around 40~AU when the surface layer is present, distinguishing differently shaped surface layers is not possible. N$_2$H$^+$ line ratios are thus sensitive to the distribution of N$_2$H$^+$ and, together with the positions of the different emission peaks, can provide modeling constraints that aid in pinning down the location of the CO snowline. \begin{table} \caption{Offset between the CO snowline and the N$_2$H$^+$ column density and $J$=4--3 emission peak in the different models. \label{tab:Results}} \centering \begin{tabular}{l c c c} \hline\hline \\[-.3cm] & Offset & \multicolumn{2}{c}{Offset $J$=4--3 emission} \\ Model & column density & $0\farcs63\times0\farcs59$ & $0\farcs2\times0\farcs2$ \\ & AU & AU & AU \\ \hline \\[-.3cm] CH & \,\,\,6--18 & \,\,\,\,\,\,\,4--53 * & \,\,\,\,\,\,\,2--50 * \\ CH-Eb1 & \,\,\,5--10 & \,\,\,\,\,\,\,2--45 * & \,\,8--43 \\ CH-Eb2 & 13--26 & \,\,5--40 & \,\,8--35 \\ CH-CR1 & 10--31 & 12--53 & 12--55\\ CH-CR2 & \,\,\,4--17 & \,\,\,\,\,\,\,2--50 * & \,\,\,\,\,\,\,2--53 * \\ CH-$\chi$0.8 & \,\,6--22 & \,\,\,\,\,\,\,1--28 * & \,\,\,\,\,\,\,4--22 * \\ \hline \end{tabular} \tablefoot{The CO snowline is located at 19 AU in models CH, CH-CR1 and CH-CR2, at 10 AU in models CH-Eb1 and CH-Eb2, and at 25 AU in models CH-$\chi$0.8. A value of ``0'' means coincidence with the CO snowline in the respective model. A star (*) indicates models for which the emission peaks inside the snowline for CO abundances $\leq2\times10^{-7}$.} \end{table} \subsection{Outer edge of N$_2$H$^+$ emission}\label{sec:OuterEdge} So far, we have focused on the peak of the N$_2$H$^+$ emission and its relation to the CO snowline.
The simple chemical model (model CH) produces a good fit to the emission peak, but underestimates the emission coming from the outer disk (further out than $\sim$60~AU). In this region, the density may have become low enough for UV radiation to penetrate the midplane and photodesorption to become effective. To address whether this can account for the observed emission, photodesorption is included in model CH-PD (see Fig.~\ref{fig:Emission_Photodesorption}). Although N$_2$H$^+$ is now present in the midplane at radii larger than $\sim$60~AU and this results in an increase in the column density at these radii, the $\sim$10~mJy~beam$^{-1}$~km~s$^{-1}$ gain in emission is not enough to explain the observations. Increasing the photodesorption rates by two orders of magnitude does not yield a higher intensity, so photodesorption alone cannot explain the N$_2$H$^+$ emission originating in the outer disk. Interestingly, the radius at which model and observations start to deviate ($\sim$60~AU) is equal to the radial extent of the millimeter grains \citep[see e.g.,][]{Andrews2012}. The absence of large grains in the outer disk, not accounted for in our model, may influence the temperature structure, such that thermal desorption becomes effective, as shown for CO by \citet{Cleeves2016}. An increase in CO and N$_2$ desorption may then cause an increase in N$_2$H$^+$ in the disk outer region. Photodissociation turns out to be an important process in N$_2$H$^+$ chemistry, so a logical question to ask is whether N$_2$ photodissociation is responsible for the outer edge of the N$_2$H$^+$ emission. N$_2$ self-shielding is not effective until the N$_2$ column density becomes $\gtrsim10^{15}$~cm$^{-2}$ \citep{Li2013}, so although the N$_2$H$^+$ layer below the CO snow surface extends over the entire disk in most models (see Fig.~\ref{fig:N2H+_abundances}), the N$_2$H$^+$ abundance outside $\sim$100~AU is two orders of magnitude lower for N$_2$ abundances $\lesssim 10^{-6}$. However, despite an N$_2$H$^+$ layer throughout the entire disk for N$_2$ abundances $> 10^{-6}$, the outer radius of the emission coincides with the outer boundary of the N$_2$H$^+$ surface layer, which is set by N$_2$ photodissociation. Only for N$_2$ abundances as high as $10^{-4}$ does the N$_2$H$^+$ emission extend over the entire disk. For lower abundances, the emission is thus truncated at the outer edge by N$_2$ photodissociation in this particular model. \section{Discussion} \label{sec:Discussion} To study the robustness of N$_2$H$^+$ as a tracer of the CO snowline, we model the N$_2$H$^+$ distribution for the disk around TW Hya using a simple chemical model and simulate the resulting emission with the radiative transfer code LIME. The N$_2$H$^+$ column density peaks $\sim$5--30~AU outside of the CO snowline, for all physical and chemical conditions tested. Furthermore, the N$_2$H$^+$ emission generally does not peak at the same radius as the column density, and can be offset by up to 53~AU from the CO snowline. Only for very low total, i.e. gas plus ice, CO abundances ($\sim$10$^{-7}$) can the emission peak inside the snowline, although the column density does not. Results for the different models are summarized in Table~\ref{tab:Results}. Fitting the N$_2$H$^+$ column density using a power law with the inner radius assumed to be at the CO snowline can thus generally only produce an outer boundary to the snowline location.
Triggered by the question of how N$_2$H$^+$ can be abundant in protoplanetary disks in spite of very similar freeze-out temperatures for CO and N$_2$, \citet{Aikawa2015} performed a chemical model study of the N$_2$H$^+$ distribution. They attributed its presence to the conversion of CO to less volatile species. However, the models presented in this work predict an N$_2$H$^+$ layer even for canonical CO abundances of $\sim10^{-4}$. Nonetheless, the conclusions that the absolute abundances of CO and N$_2$ are important, and that the N$_2$H$^+$ abundance can peak at a temperature below the CO and N$_2$ freeze-out temperatures, are reinforced by our models for many different CO and N$_2$ abundances. Results on the effect of the CO and N$_2$ binding energies and cosmic ray ionization rate are also in good agreement. \begin{figure*} \centering \includegraphics[width=17cm,trim={0 0cm 0cm 0.5cm},clip]{N2H+overview_final.pdf} \caption{Schematic representation of the distribution of gas-phase CO (blue) and N$_2$H$^+$ (red) in disks with either a steep vertical temperature profile, as for TW Hya (left), or a shallow vertical temperature profile (right). These differences can be due to different degrees of grain settling. To highlight the region around the CO snowline, the vertical direction depicts scale height, $z/r$. The dashed black contour represents the CO snow surface and the light blue area directly outside this contour shows that, at the snow surface, the gas phase abundance drops by 50\%. The N$_2$H$^+$ surface layer is indicated by dotted red lines. The predicted column density profiles are shown below. For N$_2$H$^+$, the column density profile is shown with (dotted line) and without (solid line) the surface layer. The vertical dashed black line indicates the location of the midplane CO snowline.} \label{fig:Discussion} \end{figure*} \citet{Aikawa2015} also report the presence of N$_2$H$^+$ in layers higher up in the disk in their full chemical model (in line with \citealt{Walsh2010} and \citealt{Cleeves2014}), but they do not perform a radiative transfer calculation to explore whether this contributes significantly to the resulting emission. Here we show that the discrepancy between column density and emission maxima is caused by such a surface layer, which is present in models where CO is less than $\sim$5 times as abundant as N$_2$, due to a small difference in the CO and N$_2$ photodissociation rates. Although CO is more than an order of magnitude more abundant than N$_2$ in the ISM, CO can be underabundant in the gas phase in protoplanetary disks. This underabundance used to be attributed to photodissociation and freeze-out \citep{Dutrey1997,vanZadelhoff2001}, but recent studies, concerning in particular TW Hya, suggest that, on top of these well-known processes, CO is also depleted in the warm molecular layer and indeed inside the snowline \citep{Favre2013,Nomura2016,Schwarz2016}. Moreover, observations of [C I] lines indicate a general carbon depletion in this disk \citep{Kama2016}. The results presented here show that N$_2$H$^+$ is very sensitive to the gas-phase CO abundance, and the best fit to the observed emission is obtained for a total CO abundance of $3\times10^{-6}$, consistent with the CO depletion scenario. To achieve such a low CO gas-phase abundance in the models, a low total CO abundance is required, as the amount of CO present in the gas phase depends on the available ice reservoir.
This suggests that CO frozen out in the outer disk may be trapped in the ice or converted to more complex species. Other possibilities are that it has been locked up in growing grains on their way to the inner disk, or in even larger bodies such as planetesimals. The overprediction of N$_2$H$^+$ emission inside the CO snowline as compared to the observations may indicate that some of the CO is trapped in other ices with higher binding energies, such as CO$_2$ and H$_2$O, since this would allow gradual release of additional CO when these species desorb from the grains at higher temperatures. The contribution of the surface layer to the total N$_2$H$^+$ emission seems to depend on the disk physical structure. In the TW Hya model, the high degree of dust settling results in a steep vertical temperature gradient. This confines the CO snow surface, and hence the associated N$_2$H$^+$ layer, close to the midplane. For a less settled disk, the vertical temperature gradient is shallower and the N$_2$H$^+$ layer resides higher up in the disk. The N$_2$H$^+$ column density just outside the CO snowline is much higher in the latter case and therefore the contribution from the N$_2$H$^+$ surface layer is significantly lower. This is further aided by the lower gas density in the surface layer due to its increased scale height (see Fig.~\ref{fig:Discussion}). For CO abundances $\gtrsim10^{-6}$ the N$_2$H$^+$ emission then traces the column density, while the emission is shifted to larger radii in models with substantial grain settling. Differences in disk vertical structure may also help explain why the CO snowline can be observed directly with CO isotopologues in some disks, but not in others. The higher up in the disk the CO snow surface resides, the larger the CO column density decrease across the snowline, simply because the CO-depleted region extends to larger heights (see Fig.~\ref{fig:Discussion}). This may explain why in TW Hya no sharp drop in CO column density is seen around the snowline, on top of the global CO depletion \citep{Nomura2016,Schwarz2016}. The rise in column density inward of 10 AU may be the result of release of trapped CO at the CO$_2$ and H$_2$O snowlines. On the other hand, in HD 163296, both C\element[][18]O and N$_2$H$^+$ emission can be reproduced by a sharp change in column density at roughly the same radius \citep{Qi2015}. The fitted CO freeze-out temperature occurs, for the physical model adopted for HD 163296 by these authors, at a radius of 85--90~AU, while the N$_2$H$^+$ emission can be reproduced by a column density profile with inner radius between 84 and 98 AU. These results are consistent with our finding that the N$_2$H$^+$ distribution peaks outside the CO snowline. The better agreement between CO and N$_2$H$^+$ emission could mean that the CO snow surface is located higher up in the disk. As the HD 163296 disk is inclined with respect to the line of sight \citep[e.g.,][]{Dominik2003}, this hypothesis could be tested by deriving from channel maps the height at which the N$_2$H$^+$ layer resides. Another possibility is that there is no N$_2$H$^+$ surface layer due to the much stronger UV radiation field of the Herbig Ae star HD 163296 as compared to the \mbox{T Tauri} star TW Hya, so that the N$_2$H$^+$ emission follows the column density. In addition, a strong drop in CO abundance may be easier to detect in a disk with a low global carbon and CO depletion.
The relationship between N$_2$H$^+$ and the CO snowline is thus more complicated than a direct coincidence, and a snowline location can generally not be derived from only a power law fit to the observed N$_2$H$^+$ emission. For disks with the CO snow surface high above the midplane, e.g. due to a low degree of grain settling, the N$_2$H$^+$ emission seems to generally trace the column density peak quite well. The outer boundary for the snowline obtained in this way can be improved if a CO column density profile can be derived from C\element[][18]O observations. On the other hand, when the N$_2$H$^+$ emission is dominated by a surface layer, e.g. in a very settled disk, chemical modeling is required. If the CO snow surface is close to the midplane, the CO column density change across the snowline will be small and C\element[][18]O observations will be less helpful (see Fig.~\ref{fig:Discussion}). Detailed knowledge of the disk vertical physical structure is thus required to translate N$_2$H$^+$ emission into a CO snowline location. Comparing emission from multiple N$_2$H$^+$ transitions can provide information on the extent to which the emission is dominated by an N$_2$H$^+$ surface layer, and thus how well it traces the column density. Higher spatial resolution may also reveal a significant contribution from a surface layer, as multiple components may be concealed in a broad emission peak at low resolution. \section{Conclusions} \label{sec:Conclusions} In this work, we modeled the N$_2$H$^+$ distribution and resulting emission for the disk around TW Hya using a simple chemical network. Our main conclusions regarding the robustness of N$_2$H$^+$ as a tracer of the CO snowline are listed below. \begin{enumerate} \item For the adopted physical structure and binding energies, freeze-out and thermal desorption predict a CO snowline at 19 AU, corresponding to a CO midplane freeze-out temperature of 20 K. This is smaller than the snowline radius inferred by \citet{Qi2013}. \item A simple chemical model predicts the N$_2$H$^+$ column density to peak at least $\sim$5~AU outside the CO snowline for all physical and chemical conditions tested. This offset increases with CO abundance, suggesting that the N$_2$H$^+$ distribution is dictated by the amount of CO present in the gas phase, rather than by the snowline location. \item In addition to the N$_2$H$^+$ layer outside the CO snow surface, N$_2$H$^+$ is predicted to be abundant in a surface layer where the gas-phase N$_2$ abundance exceeds that of CO due to a small difference in the photodissociation rates. Only in models with N$_2$/CO $\lesssim$ 0.2 is no surface layer present. \item The contribution of this surface layer to the total N$_2$H$^+$ emission depends on the disk vertical structure. For the adopted physical structure for TW Hya, in which the large grains have settled toward the midplane, the simulated N$_2$H$^+$ emission is dominated by the surface layer. This causes the emission to shift to even larger radii, up to $\sim$50~AU beyond the snowline. The influence of the surface layer is much smaller in a less settled disk, and in this case the N$_2$H$^+$ emission does roughly trace the column density. \item The extent of the N$_2$H$^+$ surface layer, and therefore the shift of the emission peak, also depends on the CO abundance. Moreover, the peak integrated intensity depends on the N$_2$/CO ratio.
Together, this suggests that N$_2$H$^+$ may help constrain the CO and N$_2$ abundances in protoplanetary disks, provided a representative model of the physical structure can be derived from existing observations. \item An N$_2$H$^+$ distribution based on the freeze-out and desorption balance for CO and N$_2$, and thus peaking directly at the CO snowline, produces an emission peak 7~AU closer to the star than observed. To reproduce the observed emission peak with the simple chemical model, a CO and N$_2$ abundance of $3\times10^{-6}$ is required. This is in agreement with a global CO and carbon depletion in TW Hya. The N$_2$H$^+$ surface layer predicted by the simple chemical model is necessary to fit both the location and the intensity of the N$_2$H$^+$ emission peak. \item The cosmic ray ionization rate influences both the N$_2$H$^+$ intensity and the positions of the column density and emission maxima, while only the peak positions change with different CO and N$_2$ binding energies. \item Underprediction of the emission from the region depleted of millimeter grains (radii larger than $\sim$60~AU) reinforces the idea that N$_2$H$^+$ may be very sensitive to the physical structure of the disk. \end{enumerate} The relationship between the N$_2$H$^+$ distribution and the CO snowline location is thus more complicated than initially assumed, and simple parametrized column density fits provide only upper boundaries for the snowline radius. Instead, more detailed modeling is needed to derive the CO snowline location from N$_2$H$^+$ emission, and as shown in this work, a simple chemical model seems to be sufficient. However, detailed knowledge of the disk physical structure is required. On the other hand, the sensitivity to the CO and N$_2$ abundances and to the physical structure suggests that N$_2$H$^+$ may be a more versatile probe, capable of constraining CO and N$_2$ abundances as well as the thermal structure of protoplanetary disks. \begin{acknowledgements} We thank Michiel Hogerheijde for sharing the reduced image cube of the ALMA N$_2$H$^+$ $J$ = 4--3 observations of TW Hya, Ilse Cleeves for fruitful discussions, and the anonymous referee for useful comments that improved the manuscript. Astrochemistry in Leiden is supported by the European Union A-ERC grant 291141 CHEMPLAN, by the Netherlands Research School for Astronomy (NOVA) and by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize. M.L.R.H acknowledges support from a Huygens fellowship from Leiden University. C.W. acknowledges support from the Netherlands Organisation for Scientific Research (NWO, program number 639.041.335). This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00340.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\glsresetall
\newacronym{cegis}{CEGIS}{Counterexample-Guided Inductive Synthesis}
\newacronym{csp}{CSP}{Constraint Satisfiability Problem}
\newacronym{cp}{CP}{Constraint Programming}
\newacronym{smt}{SMT}{Satisfiability Modulo Theories}
\newacronym{lp}{LP}{Linear Programming}
\newacronym{milp}{MILP}{Mixed-Integer Linear Programming}
\newacronym{ips}{IPS}{Intelligent Physical System}
\newacronym{ltl}{LTL}{Linear Temporal Logic}
\newacronym{rtl}{RTL}{Temporal Logic over Reals}
\newacronym{stl}{STL}{Signal Temporal Logic}
\newacronym{mpc}{MPC}{Model Predictive Control}
\newacronym{itmp}{ITMP}{Integrated Task and Motion Planning}
\newacronym{ai}{AI}{Artificial Intelligence}
\newacronym{ff}{FF}{fast forward}
\newacronym{idtmp}{IDTMP}{iteratively deepened task and motion planning}
\newacronym{cosmop}{CoSMoP}{composition of safe motion primitives}
\newacronym{mld}{MLD}{mixed logical dynamic}
\newacronym{pomdp}{POMDP}{partially observable Markov decision process}
\newacronym{prstl}{PrSTL}{probabilistic signal temporal logic}
\newacronym{socp}{SOCP}{Second-Order Cone Programming}
\newacronym{rhc}{RHC}{receding horizon control}
\newacronym{kf}{KF}{Kalman filter}
\newacronym{ukf}{UKF}{unscented Kalman filter}
\newacronym{ekf}{EKF}{extended Kalman filter}
\newacronym{smc}{SMC}{sequential Monte-Carlo}
\newacronym{lqr}{LQR}{Linear Quadratic Regulator}
\newacronym{lqg}{LQG}{Linear Quadratic Gaussian}
\newacronym{zoh}{ZOH}{zero order hold}
\newacronym{ir}{IR}{infrared}
\newacronym{cps}{CPS}{cyber-physical system}
\newacronym{dof}{DOF}{degrees of freedom}
\newacronym{rrt}{RRT}{Rapidly-exploring Random Tree}
\newacronym{ltlopt}{LTLOpt}{optimal control with linear temporal logic specifications}
\newacronym{ros}{ROS}{Robot Operating System}
\newacronym{bsc}{BSC}{Bounded Satisfiability Checking}
\newacronym{ompl}{OMPL}{Open Motion Planning Library}
\newacronym{dfa}{DFA}{deterministic finite automata}
\newacronym{dba}{DBA}{deterministic Büchi automata}
\newacronym{iis}{IIS}{Irreducibly Inconsistent Set}
\newacronym{dnf}{DNF}{Disjunctive Normal Form}
\newacronym{bmc}{BMC}{Bounded Model Checking}
\newacronym{idrtl}{idRTL}{iterative deepening Real-time Temporal Logic}
\newacronym{sat}{SAT}{Satisfiability}
\newacronym{mlo}{MLO}{Maximum Likelihood Observation}
\makenoidxglossaries
\usepackage{xifthen}
\usepackage{xspace}
\makeatletter
\newcommand{\pushright}[1]{\ifmeasuring@#1\else\omit\hfill$\displaystyle#1$\fi\ignorespaces}
\newcommand{\pushleft}[1]{\ifmeasuring@#1\else\omit$\displaystyle#1$\hfill\fi\ignorespaces}
\makeatother
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\newcommand{\eye}[4]{%
\draw[rotate around={#4:(#2,#3)}] (#2,#3) -- ++(-.5*55:#1) (#2,#3) -- ++(.5*55:#1);
\draw (#2,#3) ++(#4+55:.75*#1) arc (#4+55:#4-55:.75*#1);
\draw[fill=gray] (#2,#3) ++(#4+55/3:.75*#1) arc (#4+180-55:#4+180+55:.28*#1);
\draw[fill=black] (#2,#3) ++(#4+55/3:.75*#1) arc (#4+55/3:#4-55/3:.75*#1);
}
\newcommand{\tstar}[5]{%
\pgfmathsetmacro{\starangle}{360/#3}
\draw[#5] (#4:#1)
\foreach \x in {1,...,#3} {
-- (#4+\x*\starangle-\starangle/2:#2) -- (#4+\x*\starangle:#1)
}
-- cycle;
}
\newcommand{\mMatrix}[1]{\ensuremath{\uppercase{\boldsymbol{#1}}}}
\newcommand{\mEdge}[1]{\raisebox{-0.5ex}{\scalebox{0.8}{\ensuremath{\displaystyle \xrightarrow{#1}\;}}}}
\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}
\newcommand{\mSMTAtom}[1]{\ensuremath{\llparenthesis #1 \rrparenthesis}}
\newcommand{\mSMTBitvec}[1]{\ensuremath{\llbracket #1 \rrbracket}}
\newcommand{\mSMTFormula}[1]{\ensuremath{\left|\left[#1\right]\right|}}
\newcommand{\mSMTFormulaBase}[1]{\ensuremath{\left|\left[#1\right]\right|^{base}}}
\newcommand{\mSMTFormulaKind}[1]{\ensuremath{\left|\left[#1\right]\right|^{K_{ind}}}}
\newcommand{\mSMTFormulaKdep}[1]{\ensuremath{\left|\left[#1\right]\right|^{K_{dep}}}}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\newgeometry{top=1in,left=0.75in,right=0.75in,bottom=0.75in, includefoot}
\afterpage{\aftergroup\restoregeometry}
\title{idSTLPy: A Python Toolbox for Active Perception and Control \\ \thanks{This work was supported in part by the National Science Foundation under Grant IIS-1724070, Grant CNS-1830335 and Grant IIS-2007949.} }
\author{Rafael Rodrigues da Silva$^1$, Kunal Yadav$^1$, and Hai Lin$^{1} \thanks{$^{1}$ All authors are with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA.
{\tt\small (rrodri17@nd.edu;~kyadav2@nd.edu;~hlin1@nd.edu)}} } \maketitle \begin{abstract} This paper describes a Python toolbox for active perception and control synthesis from probabilistic signal temporal logic (PrSTL) formulas for switched linear systems with additive Gaussian disturbances and measurement noise. We implement a counterexample-guided synthesis strategy that combines Bounded Model Checking, linear programming, and sampling-based motion planning techniques. We illustrate our approach and the toolbox throughout the paper with a motion planning example for a vehicle with noisy localization. The code is available at \url{https://codeocean.com/capsule/0013534/tree}. \end{abstract} \section{Introduction} The past decade has seen more intelligent systems in our day-to-day lives, but many of them are still pre-programmed or only work well in controlled environments. Next-generation intelligent systems need to recognize their surrounding environments, make predictions of the environment behavior, and purposely take actions to improve confidence in their belief of the environment states. This process is known as active perception: the intelligent system explicitly explores the environment to collect more information about the environmental behavior \cite{5968}. \par Since the process of active perception involves both actions and perceptions, we propose to specify an active perception task as \gls*{prstl} formulas, which combine real-time temporal logic with chance constraints. The active perception problem can then be solved as a controller design problem for a given \gls*{prstl} specification under uncertainty and differential constraints. Existing \gls*{prstl} controller synthesis methods include mixed-integer \gls*{socp} \cite{sadigh2015safe,zhong2017fast}, sampling-based optimization \cite{dey2016fast}, and heuristic-search-based methods \cite{yoo2015control}. \gls*{socp} and sampling-based methods provide satisfying controllers for a convex fragment of \gls*{prstl}, but do not incorporate a perception model in the system dynamics. Thus these algorithms cannot synthesize controls to gather more information, and are therefore not considered active perception methods. In this paper, we introduce idSTLPy: a software toolbox for active perception and control developed based on our recent work in \cite{da2019active,dasilva2021active}. This toolbox is an open-source software package for designing the trajectory of an intelligent system with active perception from temporal specifications and hybrid dynamics. Unlike other methods, this toolbox synthesizes controllers that consider the effects of observations on the belief dynamics. Hence, the planned trajectory includes motions that reduce the uncertainty about the state variables in order to achieve the task, i.e., active perception. Our development is inspired by several toolboxes for symbolic control in the literature, such as TuLip \cite{7587949TuLip}, Linear Temporal Logic MissiOn Planning (LTLMoP) \cite{LTLMoP5650371} and the Open Motion Planning Library \cite{he2015towards}, which support the design of controllers for deterministic hybrid systems from \gls*{ltl} formulas. However, to the best of our knowledge, idSTLPy is the first toolbox that tackles active perception and control for stochastic systems. The current version of idSTLPy models the stochastic system behavior as a switched linear system with Gaussian noises. This model allows us to inherit the computational efficiency and soundness of Kalman filtering.
Additionally, such systems can represent the complex behaviors of physical systems interacting with logical rules or controllers, and therefore allow us to model several real-life problems. \par The software is written in Python. Our basic idea is to combine \gls*{bmc} with sampling-based motion planning to separate the logical and dynamical constraints. We propose abstractions that approximate the belief dynamics during planning and permit us to use these techniques. We show through a simple example that the system can track the planned trajectory during the execution. The main goal of this paper is thus to introduce the newly developed toolbox through a motion planning example under uncertain localization. This paper is structured as follows: \cref{sec:preliminaries} briefly describes the preliminaries, and \cref{sec:overview,sec:user-guide} give an overview of the toolbox with an example. Finally, \cref{sec:conclusion} concludes the work. \section{Preliminaries}\label{sec:preliminaries} \subsection{System}\label{sec:prsystem} We consider switched linear control systems of the form \begin{equation}\label{eq:prsystem} \begin{aligned} \boldsymbol{x}_{k+1} = & A_{q_k} \boldsymbol{x}_k + B_{q_k} \boldsymbol{u}_k + W_{q_k} \boldsymbol{W}_k, & \boldsymbol{W}_k \sim \mathcal{N}(0, I_n)\\ \boldsymbol{y}_k = & C_{q_k} \boldsymbol{x}_k + n_{q_k}(\boldsymbol{x}_k) \boldsymbol{V}_k, & \boldsymbol{V}_k \sim \mathcal{N}(0, I_p), \end{aligned} \end{equation} where $\boldsymbol{x}_k \in \mathbb{R}^n$ are the state variables, $\boldsymbol{u}_k \in \mathcal{U}$ are the input variables, $\mathcal{U} \subseteq \mathbb{R}^m$ is a polytope, and $\boldsymbol{y}_k \in \mathbb{R}^p$ are the output variables. Each system location $q \in Q = \{1, 2, \dots, N \}$ is defined by a noise function $n_q : \mathbb{R}^n \rightarrow \mathbb{R}^{p \times p}$ and constant matrices $A_q \in \mathbb{R}^{n \times n}$, $B_q \in \mathbb{R}^{n \times m}$, and $C_q \in \mathbb{R}^{p \times n}$ of proper dimensions. We assume that the system is subject to mutually uncorrelated, zero-mean, stationary, additive Gaussian disturbances $\boldsymbol{W}_k \sim \mathcal{N}(0, I_n)$ and measurement noises $\boldsymbol{V}_k \sim \mathcal{N}(0, I_p)$, where $I_n$ is the identity matrix of dimension $n$. Note that this dynamical system can arise from the linearization and sampling of a more general continuous system. In such a case, we denote the sampling period as $T_s$, where $T_s = t_{k+1} - t_k$ for all $k \in \mathbb{N}_{\geq 0}$. We assume that the uncertainty is stable, meaning that it does not grow unboundedly over time. \subsection{Trajectory} The system in \cref{eq:prsystem} is probabilistic. This means that the dynamics result in a random process $\boldsymbol{X}_k$ that represents a probability distribution over the state variables, $prob(\boldsymbol{X}_k = \boldsymbol{x}_k)$. We call this random process the belief state. A belief trajectory $\boldsymbol{\beta}$ is defined as a sequence $\boldsymbol{X}_0 \xrightarrow{q_0, \boldsymbol{u}_0, \boldsymbol{y}_1} \boldsymbol{X}_1 \dots$. A transition $\boldsymbol{X}_k\xrightarrow{q_k, \boldsymbol{u}_k, \boldsymbol{y}_{k+1}}\boldsymbol{X}_{k+1}$ represents the process of applying a command $q_k \in Q$ and an input $\boldsymbol{u}_k \in \mathcal{U}$ at instant $k$ and waiting for an observation $\boldsymbol{y}_{k+1}$ at instant $k + 1$ to update the next belief state $\boldsymbol{X}_{k+1}$.
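For Gaussian beliefs, such a transition reduces to one Kalman filter predict-update step. The sketch below is a plain NumPy illustration of one belief transition under the maximum likelihood observation assumption introduced later; it is our own sketch, not toolbox code, and the function and argument names are ours.
\begin{lstlisting}[language=Python]
import numpy as np

def belief_transition(x_hat, Sigma, A, B, W, C, n_fn, u):
    # Predict: push the Gaussian belief through the linear dynamics.
    x_pred = A @ x_hat + B @ u
    S_pred = A @ Sigma @ A.T + W @ W.T
    # Maximum likelihood observation: y equals the predicted output,
    # so the innovation is zero and only the covariance contracts.
    V = n_fn(x_pred)                        # state-dependent noise matrix
    S_yy = C @ S_pred @ C.T + V @ V.T       # innovation covariance
    K = S_pred @ C.T @ np.linalg.inv(S_yy)  # Kalman gain
    x_new = x_pred                          # zero innovation under MLO
    Sigma_new = (np.eye(x_hat.size) - K @ C) @ S_pred
    return x_new, Sigma_new
\end{lstlisting}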
\subsection{Probabilistic Signal Temporal Logic} We specify the requirements of a system belief trajectory using \gls*{prstl} formulas. These formulas are defined recursively according to the following grammar: \begin{align*} \phi := & \pi^\mu_\epsilon | \pi^\mathbb{Q} | \pi^{\mathbb{Q}_1} \ensuremath{\vee} \pi^{\mathbb{Q}_2} | \phi_1 \ensuremath{\wedge} \phi_2 \\ \ensuremath{\varphi} := & \phi | \ensuremath{\varphi}_1 \ensuremath{\wedge} \ensuremath{\varphi}_2 | \ensuremath{\varphi}_1 \ensuremath{\vee} \ensuremath{\varphi}_2 | \ensuremath{\varphi}_1 \ensuremath{\boldsymbol{U}}_{[a,b]} \ensuremath{\varphi}_2 | \ensuremath{\varphi}_1 \ensuremath{\boldsymbol{R}}_{[a,b]} \ensuremath{\varphi}_2, \end{align*} where $\pi$ is a predicate, $\ensuremath{\varphi}$, $\ensuremath{\varphi}_1$, and $\ensuremath{\varphi}_2$ are \gls*{prstl} formulas, and $\phi$, $\phi_1$, and $\phi_2$ are \gls*{prstl} state formulas. Predicates can be one of two types: atomic and probabilistic. An atomic predicate $\pi^\mathbb{Q}$ is a statement about the system locations and is defined by a subset $\mathbb{Q} \subseteq Q$ of locations. A probabilistic predicate $\pi^\mu_\epsilon$ is a statement about the belief $\boldsymbol{X}_k$ defined by a linear function $\mu : \mathbb{R}^n \rightarrow \mathbb{R}$ and a tolerance $\epsilon \in [0,0.5]$. The operators $\ensuremath{\wedge}$ and $\ensuremath{\vee}$ are the Boolean conjunction and disjunction operators, respectively. The temporal operators $\ensuremath{\boldsymbol{U}}$ and $\ensuremath{\boldsymbol{R}}$ are the \gls*{ltl} operators until and release, respectively. In \gls*{prstl}, these operators are defined over an interval $[a,b] \subseteq \mathbb{N}_{\geq 0}$. We assume that \gls*{prstl} state formulas form a full-dimensional region in the state space $\mathbb{R}^n$. We denote the fact that a belief trajectory $\boldsymbol{\beta}$ satisfies a \gls*{prstl} formula $\ensuremath{\varphi}$ by $\boldsymbol{\beta} \ensuremath{\vDash} \ensuremath{\varphi}$. Furthermore, we write $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}$ if the trajectory $\boldsymbol{X}_k \xrightarrow{q_k, \boldsymbol{u}_k, \boldsymbol{y}_{k+1}} \boldsymbol{X}_{k + 1} \dots$ satisfies $\ensuremath{\varphi}$. Formally, the following semantics define the validity of a formula $\ensuremath{\varphi}$ with respect to the trajectory $\boldsymbol{\beta}$: \begin{itemize} \item $\boldsymbol{\beta} \ensuremath{\vDash}_k \pi^\mathbb{Q}$ if and only if $k=0$ or $q_{k-1} \in \mathbb{Q}$, \item $\boldsymbol{\beta} \ensuremath{\vDash}_k \pi^\mu_\epsilon$ if and only if $prob\big(\mu(\boldsymbol{x}_k) \leq 0\big) \geq 1 - \epsilon$, \item $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_1 \ensuremath{\wedge} \ensuremath{\varphi}_2$ if and only if $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_1$ and $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_2$, \item $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_1 \ensuremath{\vee} \ensuremath{\varphi}_2$ if and only if $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_1$ or $\boldsymbol{\beta} \ensuremath{\vDash}_k \ensuremath{\varphi}_2$, \item $\boldsymbol{\beta} \ensuremath{\vDash}_{k} \ensuremath{\varphi}_1 \ensuremath{\boldsymbol{U}}_{[a,b]} \ensuremath{\varphi}_2$ if and only if $\exists k^\prime$ s.t.
$k+a \leq k^\prime \leq k+b$, $\boldsymbol{\beta} \ensuremath{\vDash}_{{k^\prime}}\ensuremath{\varphi}_2$, and $\boldsymbol{\beta} \ensuremath{\vDash}_{{k^{\prime\prime}}}\ensuremath{\varphi}_1$, $\forall k + a \leq k^{\prime\prime} < k^\prime$; \item $\boldsymbol{\beta} \ensuremath{\vDash}_{k} \ensuremath{\varphi}_1 \ensuremath{\boldsymbol{R}}_{[a,b]} \ensuremath{\varphi}_2$ if and only if $\exists k^\prime$ s.t. $k+a \leq k^\prime \leq k+b$, $\boldsymbol{\beta} \ensuremath{\vDash}_{{k^\prime}}\ensuremath{\varphi}_1$, and $\boldsymbol{\beta} \ensuremath{\vDash}_{{k^{\prime\prime}}}\ensuremath{\varphi}_2$, $\forall k + a \leq k^{\prime\prime} \leq k^\prime$, or $\boldsymbol{\beta} \ensuremath{\vDash}_{{k^\prime}}\ensuremath{\varphi}_2$, $\forall k + a \leq k^\prime \leq k+b$, \item $\boldsymbol{\beta} \ensuremath{\vDash} \ensuremath{\varphi}$ if and only if $\boldsymbol{\beta} \ensuremath{\vDash}_0 \ensuremath{\varphi}$, \end{itemize} where the temporal operators are indexed by their delay $a \in \mathbb{N}_{\geq 0}$ and deadline $b$, with $a < b \leq \infty$. We can derive other operators such as \emph{true} ($\top = \pi^Q$), \emph{false} ($\perp = \pi^\emptyset$), \emph{always} ($\ensuremath{\square}_{[a,b]} \ensuremath{\varphi} = \perp \ensuremath{\boldsymbol{R}}_{[a, b]} \ensuremath{\varphi}$) and \emph{eventually} ($\ensuremath{\Diamond}_{[a,b]} \ensuremath{\varphi} = \top \ensuremath{\boldsymbol{U}}_{[a, b]} \ensuremath{\varphi}$). \subsection{Problem Formulation} A practical problem definition for active perception and control synthesis from a \gls*{prstl} specification is a feasibility problem of the form \begin{equation} \begin{aligned} \text{find } & \boldsymbol{\xi} \\ \text{s.t. } & \boldsymbol{\xi} \ensuremath{\vDash} \ensuremath{\varphi}, \\ & \boldsymbol{X}_0 \sim \mathcal{N}(\bar{\boldsymbol{x}}, \bar{\Sigma}^x), \\ & \boldsymbol{X}_{k+1} = A_{q_k} \boldsymbol{X}_k + B_{q_k} \boldsymbol{u}_k + W_{q_k}\boldsymbol{W}_k, \\ & \boldsymbol{Y}_{k} = C_{q_k} \boldsymbol{X}_k + n_{q_k}(\boldsymbol{x}_k)\boldsymbol{V}_k, \\ & \boldsymbol{y}_{k+1} = \argmax_{\boldsymbol{y}_{k+1}}prob(\boldsymbol{Y}_{k+1} = \boldsymbol{y}_{k+1}| \boldsymbol{X}_k, q_k, \boldsymbol{u}_k), \\ & q_k \in Q, \boldsymbol{u}_k \in \mathcal{U}, \boldsymbol{W}_k \sim \mathcal{N}(0, I_n), \boldsymbol{V}_k \sim \mathcal{N}(0, I_p), \end{aligned} \end{equation} where $\boldsymbol{\xi}$ is a belief trajectory, $\ensuremath{\varphi}$ is a \gls*{prstl} formula, $\boldsymbol{X}_0 \sim \mathcal{N}(\bar{\boldsymbol{x}}, \bar{\Sigma}^x)$ is the initial condition (the a priori belief), and $\argmax_{\boldsymbol{y}_{k+1}}prob(\boldsymbol{Y}_{k+1} = \boldsymbol{y}_{k+1}| \boldsymbol{X}_k, q_k, \boldsymbol{u}_k)$ is a practical approximation called the \gls*{mlo} \cite{platt2010belief,dasilva2021active}. \section{idSTLPy Overview}\label{sec:overview} Our toolbox implements the approach of \cite{dasilva2021active}, illustrated in \cref{fig:diag1}. The basic idea is to construct deterministic abstractions (i.e., $\widehat{TS}$ and $\widetilde{TS}$) and to use \textit{counterexample-guided synthesis} \cite{alur2013syntax,reynolds2015counterexample} to satisfy both the \gls*{prstl} specification $\ensuremath{\varphi}$ and the dynamics of System (\ref{eq:prsystem}). Two interacting layers, discrete and continuous, work together to efficiently overcome nonconvexities in the logical specification $\ensuremath{\varphi}$.
At the discrete layer, a discrete planner acts as a \textit{proposer}, generating discrete plans by solving a \gls*{bmc} problem \cite{biere2006linear,dasilva2021automatic} for the given specification (i.e., $(\ensuremath{\varphi})_{LTL}$): $\widetilde{TS} \times \breve{TS}_{fair,1} \times \dots \times \breve{TS}_{fair,N} \ensuremath{\vDash} \boldsymbol{E} (\ensuremath{\varphi})_{LTL}$. We use an iterative deepening search to look first for shorter satisfying plans, thus minimizing undue computation. We pass the satisfying discrete plans to the continuous layer, which acts as a \textit{teacher}. In the continuous layer, a sampling-based search is applied to check whether a discrete plan is feasible. If the feasibility test does not pass, we construct a counterexample (i.e., $\breve{TS}_{fair,i}$) to discard infeasible trajectories. Then we add this counterexample to the discrete planner and repeat this process until we find a solution or no more satisfying plans exist. In this approach, we proposed a variant of SPARSE-RRT \cite{littlefield2013efficient}, a sampling-based motion planner, for active perception. The execution of this method is governed by a timeout in seconds ($rrt\_timeout$), a distance below which two states are considered near ($delta\_near$), a distance for draining near states ($delta\_drain$), a goal bias ($goal\_bias$), and a minimum ($min\_num\_of\_steps$) and maximum ($max\_num\_of\_steps$) number of steps for each iteration. Intuitively, for each candidate solution, we execute the proposed RRT for $rrt\_timeout$ seconds. During the execution, we randomly sample a state and take an existing trajectory whose last state is sufficiently near ($delta\_near$) and has less uncertainty (i.e., active perception). Next, we randomly select a target state, which lies in the goal with probability $goal\_bias$ (i.e., task planning), and synthesize control inputs for a horizon between $min\_num\_of\_steps$ and $max\_num\_of\_steps$. If the new trajectory ends in a state with less uncertainty than other near ($delta\_drain$) trajectories, we remove the latter from the set of stored trajectories. If we find at least one trajectory that satisfies the \gls*{prstl} formula, we finish the search. Otherwise, we create a counterexample and find another candidate solution.
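The proposer/teacher interaction can be summarized by the following schematic loop. This is our paraphrase of the procedure above, not toolbox code: $bmc\_solve$, $rrt\_check$, and $make\_counterexample$ stand in for the BMC call, the sampling-based feasibility check, and the counterexample construction, respectively, and are passed in as callables.
\begin{lstlisting}[language=Python]
def synthesize(spec, system, max_bound,
               bmc_solve, rrt_check, make_counterexample):
    """Counterexample-guided synthesis loop (schematic)."""
    counterexamples = []
    for bound in range(1, max_bound + 1):   # iterative deepening
        while True:
            # Discrete proposer: BMC over abstraction + counterexamples.
            plan = bmc_solve(spec, counterexamples, bound)
            if plan is None:
                break                       # no plan at this bound; deepen
            # Continuous teacher: sampling-based feasibility check.
            trajectory = rrt_check(plan, system)
            if trajectory is not None:
                return trajectory           # satisfies spec and dynamics
            counterexamples.append(make_counterexample(plan))
    return None                             # no satisfying plan found
\end{lstlisting}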
\begin{figure} \tikzstyle{block} = [draw, rounded corners=1mm, color=blue, text=black, line width=0.5mm, rectangle, minimum height=3em, minimum width=6em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block, minimum width=11em] (det) {Construct an Approximated System}; \draw[->] ([yshift=1cm] det.north) -- node [pos=0.5, right] {$\text{System } (\ref{eq:prsystem}), \mathcal{N}(\bar{\boldsymbol{x}}, \bar{\Sigma}^x), \ensuremath{\varphi}$} (det.north); \node[block, minimum width=11em,below=1.25cm of det] (abs) {Abstract the Approximated System and the Specification}; \draw[->] (det.south) -- node [pos=0.5, right] {$\widehat{TS}, \ensuremath{\varphi}$} (abs.north); \node[block, minimum width=11em,below=1.25cm of abs] (dplan) {\begin{tabular}{c}Bounded Model Checking (BMC) \\ $\widetilde{TS} \times \breve{TS}_{fair,1} \times \dots \times \breve{TS}_{fair,N} \ensuremath{\vDash} \boldsymbol{E} (\ensuremath{\varphi})_{LTL}$ \end{tabular}}; \draw[->] (abs.south) -- node [pos=0.5, right] {$\widetilde{TS}, (\ensuremath{\varphi})_{LTL}$} (dplan.north); \node[block, minimum width=11em,below=1.25cm of dplan] (reachsearch) {Dynamical Feasibility Check}; \draw[->] (det.south) |- ++(-4.5cm,-0.5cm) |- (reachsearch); \draw[->,color=blue] ([xshift=-14mm] dplan.south) -- node[left,near end] {\color{blue} Example $\tilde{\boldsymbol{\beta}}_{K,L}$} node[right,pos=0.1] {\color{blue} \textbf{sat}} ([xshift=-14mm] reachsearch.north); \draw[->,color=red] ([xshift=10mm]reachsearch.north) -- node[right,pos=0.85] {\color{red} Counter-example $\widetilde{TS}_{cex,i}$} node[left,near start] {\color{red} \textbf{infeas}} ([xshift=10mm] dplan.south); \draw[->,thick,color=blue] (reachsearch.east) -- node[right, pos=1] {\color{blue} \begin{tabular}{l} \textbf{a trajectory that} \\ \textbf{satisfies the specification} \end{tabular}} ++(0.5cm,0); \draw[->,thick,color=red] (dplan.east) -- node[right,pos=1] {\color{red} \textbf{No solution}} ++(0.5cm,0); \end{tikzpicture} \caption{Pictorial representation of the proposed approach. } \label{fig:diag1} \vspace{-0.5cm} \end{figure} \section{User Guide for idSTLPy}\label{sec:user-guide} We will illustrate the main idea of the design methods through a simple motion planning example where the robot position in the workspace is uncertain. In this scenario, we assume that the robot localization depends on the amount of light at its current position, as shown in \cref{fig:light-dark-example-workspace}. We can model the robot dynamics in the workspace plane, i.e., $\boldsymbol{x} \in \mathbb{R}^2$, as a first-order system controlled by velocity, $\boldsymbol{u} \in \mathbb{R}^2$: $\boldsymbol{x}_{k+1} = \boldsymbol{x}_k + 0.25 \boldsymbol{u}_k$. The observation function is the identity, $\boldsymbol{y}_k = \boldsymbol{x}_k + n(\boldsymbol{x}_k) \boldsymbol{v}_k$, with a zero-mean Gaussian noise whose magnitude is given by \begin{equation} n(\boldsymbol{x}) = 0.1(5 - x_1)^2 + const, \end{equation} where $\boldsymbol{x} = [x_1, x_2]^\intercal$. We do not know the robot's exact position in the workspace; our initial belief is an isotropic Gaussian distribution centered at position $[0, 2.5]^\intercal$ with covariance $diag(0.1, 0.1)$.
We can specify the motion planning requirements as a \gls*{prstl} formula as follows: \begin{equation} \ensuremath{\varphi} = safe \boldsymbol{U}_{[0,240]} \ensuremath{\square}_{[0,40]} target, \end{equation} where $safe = \pi_{0.01}^{-x_1 - 1} \ensuremath{\wedge} \pi_{0.01}^{x_1 - 5} \ensuremath{\wedge} \pi_{0.01}^{-x_2 - 1} \ensuremath{\wedge} \pi_{0.01}^{x_2 - 4}$ and $target = \pi_{0.05}^{-x_1 - 0.25} \ensuremath{\wedge} \pi_{0.05}^{x_1 - 0.25} \ensuremath{\wedge} \pi_{0.05}^{-x_2 - 0.25} \ensuremath{\wedge} \pi_{0.05}^{x_2 - 0.25}$. In plain English, the robot must satisfy each safety boundary with $99\%$ confidence until it reaches the target region with $95\%$ confidence within $240$ time instants, and it must stay in the target for $40$ time instants. We can solve this problem using the idSTLPy toolbox as shown in \cref{lst:light-dark}. In the subsequent sections, we will explain this code in more detail. \begin{lstlisting}[language=Python,float=*,caption=Motion Planning Example,label=lst:light-dark, basicstyle=\ttfamily\scriptsize]
import idstlpy as stl
import numpy as np

q = stl.mk_variable(size=1, dtype=int)
x = stl.mk_variable(size=2, dtype=float)
u = stl.mk_variable(size=2, dtype=float)
problem = stl.Problem(
    switched_system=stl.mk_switched_sys([
        stl.dynamical_system.mk_lbs(
            A=np.identity(x.size), B=0.25 * np.eye(x.size, u.size),
            W=np.zeros((2, 2)), C=np.identity(2),
            V=lambda state: ((1 / 10) * (5 - state[0]) ** 2 + 0.001) * np.identity(2))]),
    control_domain=stl.logical_and(u[0] >= -1.0, u[0] <= 1.0, u[1] >= -1.0, u[1] <= 1.0).region,
    initial_state=stl.to_belief(mean=np.array([0, 2.5]), cov=np.diag([0.1, 0.1])),
    stl_formula=stl.until(
        stl.logical_and(q == 0, stl.prob(x[0] >= -1) >= 1 - 0.01, stl.prob(x[0] <= 5) >= 1 - 0.01,
            stl.prob(x[1] >= -1) >= 1 - 0.01, stl.prob(x[1] <= 4) >= 1 - 0.01, name='free_space'),
        0, 240,
        stl.always(0, 40,
            stl.logical_and(q == 0, stl.prob(x[0] >= -0.25) >= 1 - 0.05, stl.prob(x[0] <= 0.25) >= 1 - 0.05,
                stl.prob(x[1] >= -0.25) >= 1 - 0.05, stl.prob(x[1] <= 0.25) >= 1 - 0.05, name='target'),
        ))
)
solution = problem.solve(rrt_timeout=60, delta_near=2, delta_drain=0.5, goal_bias=0.25,
    min_num_of_steps=3, max_num_of_steps=15)

for i in range(problem.switched_system.system_modes.size):
    problem.switched_system.system_modes[i].compute_finite_horizon_lqr(
        horizon=5, Q_final=np.identity(2), Q=np.identity(2), R=0.05*np.identity(2))
xi_sim, xi_real = problem.switched_system.simulate(
    reference_trajectory=solution, num_of_steps=solution.num_of_steps, real_initial_state=np.array([0.5, 2.75]),
    real_system=stl.mk_switched_sys([stl.mk_lcs(A=np.identity(2), B=0.25 * np.identity(2))]))
\end{lstlisting} \subsection{Systems} A system (a.k.a. switched system) is composed of a finite set of system modes. Each system mode is also a dynamical system that inherits the behavior of a linear control system (LCS, i.e., $\boldsymbol{x}_{k+1} = A \boldsymbol{x}_k + B \boldsymbol{u}_k$), as illustrated in \cref{fig:dynamical-system-diagram}. There are three types of dynamical behavior. If the output dimension is zero, i.e., $p = 0$, the system mode is a linear belief system (LBS). In this case, there is no active perception because we assume no observation. Otherwise, if the output dimension is non-zero (i.e., $p > 0$), we have a partially observable linear belief system (POLBS).
In turn, a POLBS can have a linear noise function (POLBSWithLinNoise, i.e., $n(\boldsymbol{x}) = V$, where $V \in \mathbb{R}^{p \times p}$ is a constant matrix) or a nonlinear noise function (POLBSWithNonLinNoise). We can create any one of the linear belief systems using the function $mk\_lbs(A, B, W, C=None, V=None)$, where $C$ and $V$ are optional parameters, and $V$ can be a constant matrix or a function that returns a matrix. Similarly, we can construct a switched system using the function $mk\_switched\_sys(system\_modes)$, where $system\_modes$ is a list of linear belief systems. We declare a dynamical system in our example in \cref{lst:light-dark} lines 8-12. \begin{figure} \tikzstyle{block} = [draw, color=black, text=black, line width=0.1mm, rectangle, minimum height=2em, minimum width=4em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block] (dyn) {$DynamicalSystem$}; \node[block, below right=0.25cm and -0.25cm of dyn] (lcs) {$LCS$}; \node[block, below right=0.25cm and 0.25cm of lcs] (lbs) {$LBS$}; \node[block, above right=0.25cm and -0.25cm of lbs] (polbs) {$POLBS$}; \node[block, above right=0.25cm and -0.25cm of polbs] (POLBSWithLinNoise) {$POLBSWithLinNoise$}; \node[block, below right=0.25cm and -0.25cm of polbs] (POLBSWithNonLinNoise) {$POLBSWithNonLinNoise$}; \node[block, below right=2.25cm and -0.25cm of dyn] (ss) {$SwitchedSystem$}; \draw[->] (dyn.south) to[out=-90,in=180] (lcs.west); \draw[->] (dyn.south) to[out=-90,in=180] (ss.west); \draw[->] (lcs.south) to[out=-90,in=180] (lbs.west); \draw[->] (lbs.north) to[out=90,in=180] (polbs.west); \draw[->] (polbs.north) to[out=90,in=180] (POLBSWithLinNoise.west); \draw[->] (polbs.south) to[out=-90,in=180] (POLBSWithNonLinNoise.west); \end{tikzpicture} \caption{Class diagram representation of the dynamical system types and their inheritance. } \label{fig:dynamical-system-diagram} \vspace{-0.5cm} \end{figure} \subsection{Variables} Due to the hybrid nature of switched systems, we have two variable types: real-valued and discrete. As illustrated in \cref{fig:variable-diagram}, a real-valued variable is also a linear expression. A linear expression over a variable $\boldsymbol{x} \in \mathbb{R}^n$ is a multiplication between a constant vector $\boldsymbol{h} \in \mathbb{R}^n$ and the variable, plus a constant $c \in \mathbb{R}$: $\boldsymbol{h}^\intercal \boldsymbol{x} + c$. For example, a variable $\boldsymbol{x} \in \mathbb{R}^2$ is also a vector of linear expressions: $\big((1, 0)^\intercal \boldsymbol{x} + 0.0, (0, 1)^\intercal \boldsymbol{x} + 0.0\big)$. A discrete variable is a variable which can assume a finite set of values. For example, in \cref{lst:light-dark}, $q$ is a discrete variable that has one valid value (i.e., $q \in \{ 0 \}$). We can declare a variable with the function $mk\_variable(size, dtype)$, where $dtype$ is one of these two types: $float$ for real-valued variables, $int$ for discrete variables. If $dtype$ is $float$, $size$ is the variable dimension. If $dtype$ is $int$, $size$ is the cardinality of the variable's set of values. We declare variables in our example in \cref{lst:light-dark} lines 4-6.
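To make the linear expression semantics concrete, the following toy class stores the pair $(\boldsymbol{h}, c)$ and supports addition, scaling, and evaluation. It is purely illustrative; it is not the toolbox's actual LinearExpression implementation.
\begin{lstlisting}[language=Python]
import numpy as np

class LinearExpr:
    """Toy h^T x + c container; illustrative, not the toolbox class."""
    def __init__(self, h, c=0.0):
        self.h = np.asarray(h, dtype=float)
        self.c = float(c)
    def __add__(self, other):    # (h1 + h2)^T x + (c1 + c2)
        return LinearExpr(self.h + other.h, self.c + other.c)
    def __rmul__(self, a):       # scaling by a scalar constant
        return LinearExpr(a * self.h, a * self.c)
    def __call__(self, x):       # evaluate at a concrete state x
        return float(self.h @ x) + self.c

x1 = LinearExpr([1.0, 0.0])                      # picks out x_1
expr = 2.0 * x1 + LinearExpr([0.0, 1.0], -5.0)   # 2*x_1 + x_2 - 5
print(expr(np.array([1.0, 2.0])))                # prints -1.0
\end{lstlisting}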
\begin{figure} \tikzstyle{block} = [draw, color=black, text=black, line width=0.1mm, rectangle, minimum height=2em, minimum width=4em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block] (expr) {$LinearExpression$}; \node[block, right=1cm of expr] (rvar) {$RealValuedVariable$}; \node[block, below=0.25cm of expr] (dvar) {$DiscreteVariable$}; \draw[->] (expr) -- (rvar); \end{tikzpicture} \caption{Class diagram representation of the variable types and their inheritance. } \label{fig:variable-diagram} \vspace{-0.5cm} \end{figure} \subsection{Constraints} A \gls*{prstl} formula atom is a constraint over the discrete and real-valued variables. We illustrate the inheritance of the different constraint objects in \cref{fig:convex-constraints-diagram}. We call a constraint defined as an equality over discrete variables, $q = \alpha$ with $\alpha \in \mathbb{N}$, a discrete predicate (DiscretePredicate, which implements $\pi^{\{\alpha\}}$). On the other hand, if the constraint is defined over a real-valued variable, this constraint is convex (ConvexConstraint). In particular, a linear inequality over the variable (i.e., $\boldsymbol{h}^\intercal \boldsymbol{x} + c \leq 0$) is a linear predicate (LinearPredicate). Similarly, an inequality over the probability of a linear predicate (i.e., $prob(\boldsymbol{h}^\intercal \boldsymbol{x} + c \leq 0) \geq 1 - \epsilon$) is a probabilistic linear predicate (ProbabilisticLinearPredicate, which implements $\pi^{\boldsymbol{h}^\intercal \boldsymbol{x} +c}_\epsilon$). \begin{figure} \tikzstyle{block} = [draw, color=black, text=black, line width=0.1mm, rectangle, minimum height=2em, minimum width=4em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block] (cc) {$ConvexConstraint$}; \node[block, above right=0.25 and 0.25cm of cc] (lpi) {$LinearPredicate$}; \node[block, right=0.5cm of lpi] (plpi) {$ProbabilisticLinearPredicate$}; \node[block, below=1cm of cc] (dpi) {$DiscretePredicate$}; \node[block, below right=0.25 and 0.25cm of cc] (region) {$ConvexRegion$}; \draw[->] (lpi) -- (plpi); \draw[->] (cc.north) to[out=90,in=180] (lpi.west); \draw[->] (cc.south) to[out=-90,in=180] (region.west); \end{tikzpicture} \caption{Class diagram representation of the constraint types and their inheritance. } \label{fig:convex-constraints-diagram} \vspace{-0.5cm} \end{figure} We can apply the Boolean conjunction operator $\ensuremath{\wedge}$ over linear and probabilistic linear predicates (i.e., the function $logical\_and(*args)$\footnote{In the Python language, $*args$ denotes a variable number of arguments.}). The resulting constraint is a convex region over the state space (if the arguments are LinearPredicates) or over the belief state space (if the arguments are ProbabilisticLinearPredicates). A convex region has a property that defines a polytope or a belief cone, as illustrated in \cref{fig:polytope-diagram}. We declare a convex region and take its region property, which defines the input domain (a polytope), in our example in \cref{lst:light-dark} line 13.
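Checking a probabilistic linear predicate against a Gaussian belief reduces to a deterministic inequality, as made precise in the belief cone derivation below. A minimal sketch of this check (our illustration using SciPy's standard normal quantile function, not the toolbox implementation):
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.stats import norm

def prob_predicate_holds(h, c, x_hat, Sigma, eps):
    # prob(h^T x + c <= 0) >= 1 - eps  for  x ~ N(x_hat, Sigma)
    margin = h @ x_hat + c + norm.ppf(1 - eps) * np.sqrt(h @ Sigma @ h)
    return margin <= 0.0

# Does the initial belief of the running example satisfy
# prob(x_1 - 5 <= 0) >= 0.99?
h, c = np.array([1.0, 0.0]), -5.0
print(prob_predicate_holds(h, c, np.array([0.0, 2.5]),
                           np.diag([0.1, 0.1]), 0.01))  # True
\end{lstlisting}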
\begin{figure} \tikzstyle{block} = [draw, color=black, text=black, line width=0.1mm, rectangle, minimum height=2em, minimum width=4em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block] (poly) {$Polytope$}; \node[block, above right=0.25cm and -0.25cm of poly] (fullpoly) {$FullDimensionalPolytope$}; \node[block, below right=0.25cm and 0.25cm of poly] (belief_cone) {$BeliefCone$}; \node[block, below right=0.25cm and -0.25cm of fullpoly] (full_belief_cone) {$FullDimensionalBeliefCone$}; \draw[->] (poly.north) to[out=90,in=180] (fullpoly.west); \draw[->] (poly.south) to[out=-90,in=180] (belief_cone.west); \draw[->] (fullpoly.south) to[out=-90,in=180] (full_belief_cone.west); \end{tikzpicture} \caption{Class diagram representation of the polytope types and their inheritance. } \label{fig:polytope-diagram} \vspace{-0.5cm} \end{figure} \subsubsection{Polytopes} A \emph{polytope} $\mathcal{X} \subseteq \mathbb{R}^n$ is a set in $\mathbb{R}^n$ defined by the intersection of a finite number of closed half-spaces, i.e., $\mathcal{X} := \cap_i \{ \mu_i(\boldsymbol{x}) \leq 0 \}$, where $\mu_i \in H(\mathcal{X})$ is a linear function $\mu_i : \mathbb{R}^n \rightarrow \mathbb{R}$ such that $\mu_i(\boldsymbol{x}) := \boldsymbol{h}_i^\intercal \boldsymbol{x} + c_i$, $H(\mathcal{X})$ is the set of linear functions that define the polytope $\mathcal{X}$, and $\boldsymbol{h}_i \in \mathbb{R}^n$ and $c_i \in \mathbb{R}$ are constants. We can also represent a compact polytope $\mathcal{X} \subset \mathbb{R}^n$ as the convex hull of its vertices, i.e., $\mathcal{X} = \text{conv}\big(V(\mathcal{X})\big)$, where $V(\mathcal{X})$ is the set of vertices. \subsubsection{Belief Cone} We call the counterparts of polytopes in the belief state space belief cones. These cones $\mathcal{B} \subseteq \mathbb{R}^{n (n + 1)}$ are intersections of a finite set of second-order cones over the multivariate Gaussian distribution parameters (i.e., the mean $\hat{\boldsymbol{x}} \in \mathbb{R}^n$ and covariance $\Sigma^x \in \mathbb{R}^{n \times n}$) that satisfy a conjunction of a finite set of probabilistic linear predicates (i.e., $\bigwedge_i prob(\mu_i(\boldsymbol{x}) \leq 0) \geq 1 - \epsilon_i$). For simplicity, we will denote the fact that a Gaussian random variable $\boldsymbol{X}_k \sim \mathcal{N}(\hat{x}_k, \Sigma_k^x)$ satisfies a probabilistic linear predicate $prob(\mu_i(\boldsymbol{x}) \leq 0) \geq 1 - \epsilon_i$ by $\boldsymbol{X}_k \ensuremath{\vDash} prob(\mu_i(\boldsymbol{x}) \leq 0) \geq 1 - \epsilon_i$. Therefore, a belief cone is defined as: \begin{equation} \begin{aligned} \mathcal{B} := & \{ \boldsymbol{b} \in \mathbb{R}^{n (n+1)}: \bigwedge_i \boldsymbol{X} \ensuremath{\vDash} prob(\boldsymbol{h}_i^\intercal \boldsymbol{x} + c_i \leq 0) \geq 1 - \epsilon_i\} \\ :=& \cap_i \{ \boldsymbol{h}_i^\intercal \hat{\boldsymbol{x}}_k + c_i + \Phi^{-1}(1 - \epsilon_i) \sqrt{\boldsymbol{h}_i^\intercal \Sigma_k^x \boldsymbol{h}_i} \leq 0 \}, \end{aligned} \end{equation} where $\boldsymbol{b} \in \mathbb{R}^{n (n+1)}$ is the Gaussian distribution parameter variable, $\boldsymbol{h}_i \in \mathbb{R}^n$, $c_i \in \mathbb{R}$ and $\epsilon_i \in [0, 0.5]$ are constants, and $\Phi(v)$ and $\Phi^{-1}(p)$ are the cumulative distribution and quantile functions of the standard Gaussian distribution $V \sim \mathcal{N}(0,1)$, i.e., $\Phi(v) = prob(V \leq v)$ and $\Phi^{-1}(p) \leq v$ if and only if $p \leq \Phi(v)$.
We can easily see that $\boldsymbol{X} \ensuremath{\vDash} prob(\boldsymbol{h}_i^\intercal \boldsymbol{x} + c_i \leq 0) \geq 1 - \epsilon_i$ if and only if $\boldsymbol{h}_i^\intercal \hat{\boldsymbol{x}}_k + c_i + \Phi^{-1}(1 - \epsilon_i) \sqrt{\boldsymbol{h}_i^\intercal \Sigma_k^x \boldsymbol{h}_i} \leq 0$; this follows from the properties of Gaussian distributions under linear transformations and from the definition of the quantile function. \subsection{PrSTL formula} We implement a \gls*{prstl} formula as the classes shown in \cref{fig:prstl-diagram}. An STLAtomicProposition implements a \gls*{prstl} state formula $\phi$: it represents a conjunction (i.e., $logical\_and(*args)$) over a list of ProbabilisticLinearPredicates together with a Boolean formula (built with $logical\_and(*args)$ or $logical\_or(*args)$) over DiscretePredicates, and it is therefore defined by a convex region and a set of valid system modes. In plain English, a trajectory $\boldsymbol{\xi}$ satisfies an STLAtomicProposition at instant $k$ if it reaches a belief state in the convex region using one of the valid system modes. We declare two STLAtomicPropositions in our example in \cref{lst:light-dark} lines 16-17 and lines 20-21. An STLAnd object represents the conjunction (i.e., $logical\_and(*args)$) of a list of STLFormulas containing at most one STLAtomicProposition. On the other hand, an STLOr object is a disjunction (i.e., $logical\_or(*args)$) of a list of STLFormulas containing an arbitrary number of STLAtomicPropositions. An STLUntil object is a \gls*{prstl} formula with the until operator $\ensuremath{\varphi}_1 \ensuremath{\boldsymbol{U}}_{[a,b]} \ensuremath{\varphi}_2$, and an STLRelease object is a \gls*{prstl} formula with the release operator $\ensuremath{\varphi}_1 \ensuremath{\boldsymbol{R}}_{[a,b]} \ensuremath{\varphi}_2$. We obtain these formulas using the functions $until(\ensuremath{\varphi}_1, a, b, \ensuremath{\varphi}_2)$, $eventually(a, b, \ensuremath{\varphi})$, $release(\ensuremath{\varphi}_1, a, b, \ensuremath{\varphi}_2)$, and $always(a, b, \ensuremath{\varphi})$. We declare a formula in our example in \cref{lst:light-dark} lines 15-22. \begin{figure} \tikzstyle{block} = [draw, color=black, text=black, line width=0.1mm, rectangle, minimum height=2em, minimum width=4em] \centering \begin{tikzpicture}[auto, >=latex', scale=0.75, transform shape] \node[block] (phi) {$STLFormula$}; \node[block, above right=0.25cm and 0.25cm of phi] (tphi) {$STLTemporalFormula$}; \node[block, below right=0.25cm and 0.25cm of phi] (pi) {$STLAtomicProposition$}; \node[block, above left=0.25cm and 0.25cm of phi] (phi_and) {$STLAnd$}; \node[block, below left=0.25cm and 0.25cm of phi] (phi_or) {$STLOr$}; \node[block, above right=0.25cm and 0.25cm of tphi] (phi_until) {$STLUntil$}; \node[block, below right=0.25cm and 0.25cm of tphi] (phi_release) {$STLRelease$}; \draw[->] (phi.north) to[out=90,in=180] (tphi.west); \draw[->] (tphi.north) to[out=90,in=180] (phi_until.west); \draw[->] (phi.south) to[out=-90,in=180] (pi.west); \draw[->] (phi.north) to[out=90,in=0] (phi_and.east); \draw[->] (phi.south) to[out=-90,in=0] (phi_or.east); \draw[->] (tphi.south) to[out=-90,in=180] (phi_release.west); \end{tikzpicture} \caption{Class diagram representation of the \gls*{prstl} formula types and their inheritance. } \label{fig:prstl-diagram} \vspace{-0.5cm} \end{figure} \subsection{Solution} The object Problem wraps the implementation of the approach presented in \cref{sec:overview}.
A solution can be empty ($None$ in Python) if the algorithm did not find a trajectory that satisfies the specification. If such a trajectory is found, the algorithm returns an object that implements this trajectory. Since the solution is a trajectory of the approximated transition system \cite{dasilva2021active}, the returned trajectory is a ProbabilisticTSTrajectory. This object implements methods to extract data from the trajectory, such as $get\_mean$, $get\_cov$, $get\_control$, and $get\_action$, which return the belief state mean, the belief state covariance, the control, and the action, respectively. We can use these methods to execute a trajectory tracking strategy. We declare a problem in our example in \cref{lst:light-dark} lines 7-23 and solve it in \cref{lst:light-dark} lines 24-25. We can simulate the execution of this planned trajectory. The planned trajectory is the result of an approximated belief dynamics in which the observations are the \gls*{mlo}. Hence, we propose to use a linear feedback law to adjust the belief state during the execution to track the planned trajectory. Specifically, we implemented a \gls*{rhc} strategy with a finite-horizon discrete-time \gls*{lqr} to track the belief mean values. We initialize this strategy by calling the method $compute\_finite\_horizon\_lqr(horizon, Q\_final, Q, R)$ for each system mode, where $Q_{final}, Q \in \mathbb{R}^{n \times n}$ are positive semidefinite constant matrices and $R \in \mathbb{R}^{m \times m}$ is a positive definite matrix. Next, we call the method $simulate$ of the SwitchedSystem object. This method simulates the system $real\_system$ initialized at $real\_initial\_state$ for $num\_of\_steps$ steps, while using the linear feedback law to track the planned trajectory $reference\_trajectory$. In the running example, we track the mean of the estimated belief with the cost function $J = \hat{\boldsymbol{x}}_h^\intercal Q_{final} \hat{\boldsymbol{x}}_h + \sum_{k=0}^{h - 1} \hat{\boldsymbol{x}}_k^\intercal Q \hat{\boldsymbol{x}}_k + \boldsymbol{u}_k^\intercal R \boldsymbol{u}_k$, where $Q = I_2$, $R = 0.05 I_2$ and the horizon $h = 5$, as shown in \cref{lst:light-dark} lines 27-32. The result is shown in \cref{fig:light-dark-example-workspace}. The blue trajectory is the planned trajectory in the belief space. This trajectory approximates the observations by their maximum likelihood values. During the execution, however, we receive very different observations, shown as the purple trajectory in the figure. As a result, the belief trajectory during the execution, the orange trajectory in the figure, is slightly different. Nonetheless, this trajectory satisfies the specification, and the resulting state (in red) is also within the expected region. Since the maximum likelihood approximation is close to the most likely belief trajectory, a simple tracking strategy is, in general, sufficient to enforce the planned trajectory during execution. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{light-dark-example-ws.png} \caption{Light-dark example. The shading in the workspace represents the amount of light at each position, and the black box is the target.} \label{fig:light-dark-example-workspace} \vspace{-0.5cm} \end{figure} \section{Conclusions and future work}\label{sec:conclusion} In this work, we presented a Python toolbox for controller synthesis from PrSTL specifications. We considered problems with switched linear systems with Gaussian noises.
We illustrated our approach on a simulation of robot motion planning under noisy localization. In this example, we showed that the planned trajectory satisfied both the task and the active perception requirements. We will focus on two directions in future work. One direction is to extend the toolbox to other probabilistic hybrid systems, including probabilistic switching. Another direction is to drop the \gls*{mlo} approximation during planning without resorting to conservative assumptions. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction} In its simplest form, the Rashba spin-orbit coupling (RSOC) \cite{bychkov_properties_1984} occurring in systems with broken inversion symmetry lifts the spin degeneracy of a parabolic band pair by shifting the two bands oppositely along the wave vector $k$ \cite{manchon_new_2015}. On each of the two corresponding concentric circular Fermi contours, spins are tangential (and thus locked perpendicular to $k$) and curl clockwise for one contour and counter-clockwise for the other. The first direct visualization of Rashba-split bands using angle-resolved photoemission spectroscopy (ARPES) was reported at the surface of heavy metals such as Au \cite{lashell_spin_1996,varykhalov_ir111_2012} or of their alloys \cite{ast_giant_2007}, where the RSOC coefficient $\alpha_R$ can take values in the range of a few $\si{\electronvolt \cdot \angstrom}$. The spin-momentum locking of Rashba systems was later harnessed to interconvert spin and charge currents (direct and inverse Rashba-Edelstein effects) at interfaces between heavy metals \cite{rojas-sanchez_spin--charge_2013} and also in oxide-based two-dimensional electron gases (2DEGs) \cite{lesne_highly_2016,vaz2019mapping}. Just a few years after their discovery \cite{ohtomo_high-mobility_2004}, a finite RSOC was indeed identified through weak antilocalization (WAL) in LaAlO$_3$/SrTiO$_3$ (LAO/STO) 2DEGs \cite{caviglia_tunable_2010}, which are non-centrosymmetric systems. $\alpha_R$ was found to amount to a few tens of meV$\cdot$\AA{}, considerably lower than at the surface of heavy metals, and, quite remarkably, to be tuneable by a gate voltage \cite{caviglia_tunable_2010,vaz_determining_2020}. Aside from a report of a giant RSOC at the surface of STO \cite{santander-syro_giant_2014} that later studies did not observe \cite{mckeown_walker_absence_2016}, ARPES could not provide a direct visualization of Rashba-split bands in STO 2DEGs due to limitations in energy resolution. This moderate $\alpha_R$, combined with the long scattering time of such 2DEGs, was however successfully used to achieve spin-charge interconversion with very high efficiency \cite{lesne_highly_2016,vaz2019mapping}. Another family of oxide 2DEGs is based on KTaO$_3$ (KTO) instead of STO \cite{gupta_ktao3_2022}. Bulk STO and KTO share several features: they are both quantum paraelectrics and become n-type conductors when doped with minute amounts of impurities \cite{wemple_transport_1965}. Just like STO, KTO may also harbor 2DEGs when interfaced with appropriate materials such as LAO \cite{zhang_unusual_2019}, LaVO$_3$ \cite{wadehra_planar_2020} or reactive metals such as Al (which locally create oxygen vacancies that act as n-type dopants) \cite{moreno2021admat}. One major difference, though, is that Ta is much heavier than Ti, so the RSOC in KTO 2DEGs should be significantly stronger than in STO 2DEGs. Indeed, WAL data yield $\alpha_R \approx$ 300 meV$\cdot$\AA{} \cite{zhang_unusual_2019}, and compatible values were recently derived from bilinear magnetoresistance experiments \cite{moreno2021admat}. Despite several attempts to use ARPES to map the band structure of KTO 2DEGs \cite{santander-syro_orbital_2012,king_subband_2012}, Rashba-split bands were never observed, although band structure calculations on KTO \cite{zhang_unusual_2019} and related systems \cite{kim2016strongly} do predict a strong Rashba splitting of the Ta \textit{d}\textsubscript{xz/yz} bands.
Therefore, the band structure of (001)-oriented KTO 2DEGs remains elusive to this day, as does the direct visualization of Rashba splitting in any oxide, except for surface states in delafossite single crystals \cite{sunko_maximal_2017}. In this paper, we report the synthesis of (001)-oriented KTO 2DEGs through the deposition of 1 $\si{\angstrom}$ of Al by molecular beam epitaxy. We use ARPES to measure the band dispersion and associated Fermi surfaces. The data reveal a pair of Rashba-split \textit{d}\textsubscript{xz/yz} bands with an $\alpha_R$ consistent with values extracted from magnetotransport \cite{zhang_unusual_2019,moreno2021admat} and earlier density-functional theory (DFT) calculations \cite{zhang_unusual_2019}. We fit the ARPES data with a 14-band tight-binding (TB) Hamiltonian (see Methods), determine the corresponding spin and orbital textures, and compute the Edelstein effect for each band pair. We finally discuss the role of symmetry and anisotropy in the Edelstein effect. \section*{Results} \subsection*{Band structure of KTO(001) 2DEGs} To create an electron gas in a (001)-oriented single crystal of KTO, we deposited 1-2 $\mathrm{\mathring{A}}$ of Al by molecular beam epitaxy (MBE) following the same protocol as detailed in Refs. \cite{rodel_universal_2016,moreno2021admat}. The sample was then transferred in ultra-high vacuum to the ARPES measurement chamber (see Methods). Fig.~\ref{fig:figure1} displays the detailed band structure of KTO//Al(1 $\mathrm{\mathring{A}}$) near $\Gamma$\textsubscript{002} (corresponding to a photon energy of 31 eV at normal emission). Measurements on 2 $\mathrm{\mathring{A}}$ samples gave very similar results, albeit with a poorer signal-to-noise ratio. We observed a metallic state at the surface, with nearly parabolic bands crossing the Fermi level $\epsilon$\textsubscript{F}. By varying the photon energy we found that the probed states did not significantly disperse with \textit{k}\textsubscript{z}, thereby confirming the quasi-2D nature of the electron gas. In Fig.~\ref{fig:figure1}\textbf{a} we show the band dispersion along the (100) in-plane direction with linear horizontal (LH) polarization of the photon beam. On the right side, the same data are compared with theoretical fits obtained with our TB model. As detailed in the Methods, it comprises a total of 14 bands, out of which four band pairs fall within the measurement window. The model includes the orbital mixing of the 5\textit{d} orbitals of Ta and the strong SOC. For clarity, we associate a specific color with each band pair. The pink, green and orange band pairs result from different linear combinations of the three \textit{t}\textsubscript{2g} orbitals. The first two are predicted to display mainly a \textit{d}\textsubscript{xy} component and a low effective mass \textit{m}\textsubscript{e}= 0.23\textit{m}\textsubscript{0}, with \textit{m}\textsubscript{0} the free electron mass. We note that although the green bands are not very visible here, they are clearly present in similar data taken at room temperature \cite{moreno2021admat} and in earlier ARPES studies of KTO 2DEGs \cite{santander-syro_orbital_2012,king_subband_2012}, hence we include them in our model. The orange bands are instead mainly formed by mixed \textit{d}\textsubscript{xz} and \textit{d}\textsubscript{yz} orbitals and display a larger effective mass \textit{m}\textsubscript{e}= 0.52\textit{m}\textsubscript{0}.
The cyan band pair is introduced in our model as additional \textit{d}\textsubscript{xy} subbands originating from the quantum confinement of the carriers along the \textit{z} direction. However, some uncertainty remains regarding the actual dominant orbital character (i.e. \textit{d}\textsubscript{xy} or \textit{d}\textsubscript{xz, yz}) of this band pair. Note that in order to describe the Rashba-like band splitting theoretically, we also had to add the \textit{e}\textsubscript{g} states \textit{d}\textsubscript{z\textsuperscript{2}} and \textit{d}\textsubscript{x\textsuperscript{2}-y\textsuperscript{2}} to our tight-binding model, even though these states lie above the Fermi energy and are thus not observed in the ARPES measurement. By analogy with Ref.~\cite{kim2016strongly}, in Eq.~12 in the Methods section we explain that the orange band pair can be mapped to a Rashba Hamiltonian near the $\Gamma$ point. In this effective 2-band model, the Rashba term is proportional to the orbital mixing coefficient of the \textit{d}\textsubscript{z\textsuperscript{2}} and \textit{d}\textsubscript{x\textsuperscript{2}-y\textsuperscript{2}} states, indicating the necessity to consider the \textit{e}\textsubscript{g} states as well. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{Figure1.png} \caption{\textbf{Electronic band structure of KTO(001) 2DEGs.} \textbf{a} Band dispersion of the Al/KTO(001) surface measured by ARPES. The electrons are collected at normal emission with linearly polarized photons at 31 eV ($\Gamma$\textsubscript{002} of bulk KTO). The tight-binding fits are overlaid on the data in the right panel. A specific color is associated with each pair of bands. The yellow box highlights the energy range in panels \textbf{b}-\textbf{d}. \textbf{b} (\textbf{c}) Sum of spectra obtained with two orthogonal linearly (circularly) polarized photon beams, i.e. linear horizontal and vertical (LV + LH), and circular left and right (CL + CR), respectively. \textbf{d} Same as \textbf{c} with overlaid tight-binding fits. Constant energy maps and theoretical contours at \textbf{e} 5 meV, \textbf{f} 25 meV and \textbf{g} 50 meV below the Fermi level. The energies are indicated by gray horizontal lines on the dispersion in panel \textbf{a}.} \label{fig:figure1} \end{figure} In the measured spectra we can identify two branches close to $\epsilon$\textsubscript{F}, which are nicely fitted with a visibly Rashba-split band pair (orange). Panels \textbf{b}-\textbf{d} display high-resolution measurements to better visualize the splitting of these bands; the binding-energy span corresponds to the yellow box in Fig.~\ref{fig:figure1}\textbf{a}. By symmetry, \textit{d}\textsubscript{xz} and \textit{d}\textsubscript{yz} states can be excited only with linear horizontal (LH) and linear vertical (LV) polarization, respectively. Thus, the complete band structure results from the sum of the LV and LH spectra (Fig.~\ref{fig:figure1}\textbf{b}). Summing circular left and right polarizations (CL + CR) yields a comparable intensity distribution, shown in Fig.~\ref{fig:figure1}\textbf{c}, where one can clearly appreciate two \textit{k}-split bands matching the theoretical ones (Fig.~\ref{fig:figure1}\textbf{d}).
In Fig.~\ref{fig:figure1}\textbf{e}, \textbf{f}, \textbf{g} we show constant energy maps in the \textit{k}\textsubscript{x}\textit{k}\textsubscript{y} plane and the corresponding contours derived from our TB fit at binding energies of 5 meV, 25 meV and 50 meV, respectively (gray horizontal lines in panel \textbf{a}). The good agreement between experiment and theory validates the model and the parameters extracted from the fitting of the band dispersions, e.g. orbital mixing, spin-orbit coupling, hopping amplitudes and on-site energies (discussed in Methods). \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{fig2.pdf} \caption{\textbf{Spin and orbital textures of electronic states.} Constant energy lines with spin and orbital textures at selected energies. Panels \textbf{a}-\textbf{d}: Iso-energy lines slightly above the band edges of the four band pairs. The upper quadrants display the spin texture (red arrows), the lower quadrants the orbital expectation values (blue arrows). Left quadrants correspond to the outer band of a pair, right quadrants to the inner band. The contours for which the spin or orbital textures are shown are drawn as slightly thicker lines. \textbf{e} Detail of panel \textbf{c}. Here, all quadrants show the textures of both bands. \textbf{f} Detail of panel \textbf{d}; only the second band pair (green) is shown.} \label{fig:figure2} \end{figure} \subsection*{Spin and orbital textures} Next, we analyse the spin and orbital textures of the 2DEG by calculating the expectation values of the spin and orbital moment operators (Eqs.~\eqref{eq:spin_operator} and \eqref{eq:lambda_xyz}) using the eigenstates of the TB Hamiltonian discussed in the Methods section (Eq.~\eqref{eq:Hamiltonian}). Fig.~\ref{fig:figure2} depicts the spin and orbital textures at selected iso-energy lines. Here, we chose energies slightly above the band edge of each band pair. Near the band edge, each band pair has a Rashba-like band structure with almost circular iso-energy lines. The first (pink, Fig.~\ref{fig:figure2}\textbf{a}), second (green, Fig.~\ref{fig:figure2}\textbf{b}), and fourth (cyan, Fig.~\ref{fig:figure2}\textbf{d}) band pairs exhibit circular, Rashba-like spin and orbital textures, with the orbital moments pointing in the opposite direction to the spin moments. Due to time-reversal and mirror symmetries, both textures are completely in-plane. Near the band edges, the effective Rashba parameters of these bands, characterizing the band splitting and defined by $\alpha_R = \nicefrac{\Delta k \hbar ^2}{2 m_\mathrm e}$ with $\Delta k$ the difference between the average $k$ of the bands and $\hbar$ the reduced Planck constant, are of the order of 10 $\si{\milli\electronvolt\angstrom}$. The third band pair (orange) exhibits a much larger splitting, characterized by an effective Rashba parameter of $\alpha_R \approx 320$ $\si{\milli\electronvolt\angstrom}$, which is in good agreement with values deduced from magnetotransport \cite{zhang_unusual_2019,moreno2021admat}. However, quite unexpectedly, the spin and orbital textures deviate from the pure, linear Rashba model (Fig.~\ref{fig:figure2}\textbf{c} and \textbf{e}). Although the orbital moments show a circular texture, their absolute values at the outer and inner bands differ and are comparatively small. The spin texture deviates even more from the standard linear Rashba model.
Spins have comparatively small absolute values (up to $0.09 \hbar$ instead of almost $0.5 \hbar$ near the other bands' edges). Further, since the system under consideration deviates from a pure Rashba system, and the corresponding multi-band Hamiltonian also contains higher-order terms in $k$, the spin expectation values at the outer band perform an in-plane rotation of $6 \pi$ along the whole iso-energy line, whereas in conventional Rashba systems a $2 \pi$ rotation occurs. Due to the hybridized orbital character of approximately $50\%$ $d_{zx}$ and $50\%$ $d_{yz}$, these strongly Rashba-split bands can approximately be described by a linear Rashba Hamiltonian in the basis $\{ \frac{1}{\sqrt{2}}(\ket{d_{zx\downarrow}}+i\ket{d_{yz\downarrow}}), \frac{1}{\sqrt{2}}(\ket{d_{zx\uparrow}}-i\ket{d_{yz\uparrow}}) \}$. However, this means that the Rashba-like texture of this band pair occurs in the pseudospin space $\vec \tau = \vec s_{d_{zx}}-\vec s_{d_{yz}}$, and not in the actual spin space $\vec s = \vec s_{d_{zx}}+\vec s_{d_{yz}}$, which leads to a strong suppression of the spin expectation values~\cite{kim2016strongly} (more details in the Methods section). Here, $\vec s_{d_{zx}}$ and $\vec s_{d_{yz}}$ are the spins of the corresponding orbitals. Also, since this band pair is predominantly formed by $d_{zx}$ and $d_{yz}$ states, the orbital expectation values are strongly reduced as well (the corresponding matrix elements in the $\lambda_x$ and $\lambda_y$ matrices are zero; see Methods section). At higher energies, the band structure deviates from the Rashba-like parabolic shape, and the iso-energy lines are no longer circles. Due to hybridization, the spin and orbital textures become more complicated. Most noticeably, their amplitudes vary along an iso-energy line (see e.g. Fig.~\ref{fig:figure2}\textbf{d}), and the amplitude of the orbital moments strongly increases at higher energies. The larger variation in the absolute values of the orbital moments in comparison to the spin expectation values can be explained qualitatively by the larger range of values of the corresponding quantum numbers. While there are only two possibilities of projecting the spin onto a quantization axis ($s=\frac{1}{2}$), there are five magnetic quantum numbers for the orbital angular momentum of the $d$ electrons under consideration~\cite{johansson2021spin}. This larger range of magnetic quantum numbers is also reflected in more strongly varying expectation values of the orbital moment. At approximately $-26$ $\si{\milli\electronvolt}$ the two bands of the second band pair (green) intersect in the $\left\langle10 \right\rangle$ direction, leading to unconventional textures at energies $>-26$ $\si{\milli\electronvolt}$: Fig.~\ref{fig:figure2}\textbf{f} indicates that the spin expectation values rotate by $10 \pi$ along the whole iso-energy line, and the orbital expectation values of the inner band rotate by $6 \pi$ due to deviations of the 2DEG from a pure Rashba system. \subsection*{Spin and orbital Edelstein effects} We now discuss the Edelstein effect corresponding to these bands and their spin/orbital textures. We define the Edelstein efficiency tensor $\chi$ by the magnetic moment $\vec m$ per 2D unit cell, induced by the electric field $\vec E$, \begin{align}\label{eq:Edelstein_efficiency} \frac{A_0}{A} \vec m = \chi \vec E = (\chi^\text s + \chi^\text l) \vec E \ .
\end{align} Here, $A_0$ is the area of the 2D unit cell, $A$ is the area of the sample, and $\chi^\text s$ and $\chi^\text l$ are the efficiencies of the spin (SEE)\cite{edelstein_spin_1990} and orbital Edelstein effect (OEE)\cite{levitov_magnetoelectric_1985,yoda_current-induced_2015,go_toward_2017,salemi_orbitally_2019}, respectively. The Edelstein efficiency is calculated using the semi-classical Boltzmann approach and a constant relaxation time approximation. For symmetry reasons, $\chi_{xy}=-\chi_{yx}$ are the only nonzero tensor elements. Details are discussed in the Methods section. \begin{figure}[ht] \centering \includegraphics[width=0.5\linewidth]{fig3.pdf} \caption{\textbf{Edelstein effect.} Edelstein conversion efficiency $\chi_{xy}$ representing the magnetic moment per surface unit cell along the $x$ direction induced by an electric field in the $y$ direction. \textbf{a} Contribution from the spin moments (red) and contribution of each band pair (colors as in Fig.~\ref{fig:figure1}). $\epsilon_\text{max}$ corresponds to the energy where the total spin Edelstein efficiency is maximal; $\epsilon_\text{max}^{5,6}$ is the energy at which the third band pair's (orange) contribution to the Edelstein efficiency is maximal. \textbf{b} Contribution from the orbital moments (blue) and contribution of each band pair. \textbf{c} Total Edelstein efficiency (gray).} \label{fig:figure3} \end{figure} Fig.~\ref{fig:figure3} depicts the band-resolved and total Edelstein conversion efficiency of the KTO(001) 2DEG versus the Fermi level position. Panels \textbf{a} and \textbf{b} show each band pair's contribution to the SEE and the OEE, respectively, and the total efficiencies as sums over the band pairs (red/blue). Panel \textbf{c} shows the total Edelstein conversion efficiency. Consistent with results on SrTiO$_3$ interfaces~\cite{johansson2021spin}, the OEE dominates the total Edelstein effect, mainly because the orbital moments are in general larger than the spin moments, and the amplitudes of the orbital moments of neighboring bands differ more, as discussed above. Usually, the two bands of a pair contribute oppositely to the SEE and OEE, and their contributions partially compensate. If the expectation values differ in magnitude, this compensation is reduced, and the resulting Edelstein effect is enhanced. When a new band pair is filled, it contributes positively or negatively to the total SEE and OEE, depending on the chirality of the corresponding moments (see Fig.~\ref{fig:figure2}). The fourth band pair (cyan) contributes with a positive sign to the SEE, although it has the same spin chirality as the other bands. Here, the inner band's contribution dominates over the outer band's because of slightly larger spin moments, which inverts the sign of the SEE (see Fig.~\ref{fig:figure2}\textbf{d}). At $\varepsilon=-26$ $\si{\milli\electronvolt}$, $\chi_{xy}^\text s$ for the second band pair (green) exhibits a sharp kink, which is also visible in the total SEE. This kink is related to the crossing of the two bands of this pair, and the related strong modification of the spin texture (see Fig.~\ref{fig:figure2}\textbf{f} and the discussion above). Unexpectedly, the lowest band pair (pink) and not the strongly Rashba-split band pair (orange) dominates the SEE as well as the OEE, although the effective Rashba parameter of the latter exceeds $\alpha_R$ of the lowest band pair by one order of magnitude.
This can be directly understood by considering the spin and orbital textures of the strongly Rashba-split band pair (Fig.~\ref{fig:figure2}\textbf{e}). Since the Rashba splitting occurs in the subspace of the pseudospin $\vec \tau$, and not in the spin subspace (see Methods for details), the spin values are comparatively small and deviate from the spin texture of a conventional Rashba system. Due to the orbital character of this band pair (mainly $d_{zx}$ and $d_{yz}$), the orbital texture is small as well. The lowest band pair (pink) has the highest density of states and therefore dominates the total SEE and OEE efficiencies. We note that the maximum spin and orbital EE found here exceed those calculated for STO 2DEGs \cite{johansson2021spin} by factors of $\sim 2$ and $\sim 4$, respectively. \section*{Discussion} As discussed above, the SEE and OEE provided by the third band pair are not as large as one would expect by just considering the giant Rashba-like splitting of this band pair. For symmetry reasons, the states of both bands have equal amounts of $d_{yz}$ and $d_{zx}$ character, which leads to an almost complete cancellation of the spin expectation values. In order to enhance the corresponding Edelstein efficiency, we reduce the symmetry of the system by introducing an anisotropy with respect to the $d_{yz}$ and $d_{zx}$ orbitals. The simplest way to simulate anisotropy in our particular model Hamiltonian is to assume different on-site energies of the corresponding orbitals, $\Delta \epsilon_{yz} \neq \Delta \epsilon_{zx}$. In a realistic system, strain-induced anisotropy would also influence other parameters of the model, but we decided to modify only one parameter in order to demonstrate the general influence of anisotropy, and not to model the details of a strained system. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{fig4.pdf} \caption{\textbf{Influence of anisotropy on the iso-energy lines and the calculated Edelstein efficiency.} \textbf{a} Iso-energy lines of a system with $\Delta \epsilon_{yz}-\Delta \epsilon_{zx}=30$ $\si{\milli\electronvolt}$ at $\epsilon= 5 \si{\milli\electronvolt}$, which corresponds to the energy of the maximum contribution of the third band pair (orange) to the spin Edelstein effect. \textbf{b} Same as \textbf{a}, but for the isotropic system. \textbf{c} Spin and \textbf{d} orbital Edelstein efficiency $\chi_{xy}$ for an anisotropic system with $\Delta \epsilon_{yz}-\Delta \epsilon_{zx}=30 \si{\milli\electronvolt}$.} \label{fig:figure4} \end{figure} Fig.~\ref{fig:figure4}\textbf{a} and \textbf{b} show the spin and orbital textures of an anisotropic ($\Delta \epsilon_{yz}-\Delta \epsilon_{zx}= 30 \si{\milli\electronvolt}$) and the isotropic 2DEG, respectively. Here, we chose the energy of the third band pair's maximum contribution to the SEE (labeled $\epsilon_\text{max}^{5,6}$). In comparison to the isotropic system, the spin and orbital expectation values of the anisotropic system are remarkably increased, as anticipated from the low-energy expansion of the third band pair (Eq.~\eqref{eq:expansion}). While the character of these bands at the band edge is approximately $50 \%$ each of $d_{yz}$ and $d_{zx}$ in the isotropic case, this ratio strongly changes with anisotropy ($62 \%$ of $d_{zx}$ and $35 \%$ of $d_{yz}$ for $\Delta \epsilon_{yz}-\Delta \epsilon_{zx}= 30 \si{\milli\electronvolt}$).
Hence, the $\vec k$ dependent spin expectation values are increased, because the contributions from both orbitals compensate less in the anisotropic case. Further, the admixture of $d_{xy}$ states ($2\%$ at the band edges versus $0.2\%$ in the isotropic case) leads to enhanced orbital expectation values. Fig.~\ref{fig:figure4}\textbf{c} and \textbf{d} show the SEE and OEE efficiencies versus the Fermi level for an anisotropic system with $\Delta \epsilon_{yz}-\Delta \epsilon_{zx}= 30 \si{\milli\electronvolt}$. In comparison to the SEE and OEE in the isotropic case (Fig.~\ref{fig:figure3}), the contribution of the third band pair (orange) to the total SEE and OEE signal is indeed increased by a factor of $\sim$3 due to the enhanced $\vec k$ dependent spin and orbital expectation values. However, the anisotropy also affects the other band pairs, and the total SEE and OEE, which are superpositions of all bands' contributions, are reduced. Obviously, the Edelstein tensor becomes anisotropic in the presence of strain, $\chi_{xy} \neq - \chi_{yx}$, and in general the magnitudes of $\chi_{xy}$ and $\chi_{yx}$ are affected oppositely by the anisotropy.

Our work shows that in 2DEGs based on $d$-element perovskite oxides, even when the host material contains a heavy element such as Ta and accordingly displays a substantial Rashba splitting, the orbital symmetry may lead to strong compensation effects, limiting the spin-charge interconversion efficiency through the EE and inverse EE. The compensation effects are three-fold: (i) the EE from the $d_{xz/yz}$ bands is weaker than expected due to the opposite spin textures of the $d_{xz}$ and $d_{yz}$ components; (ii) the EE from the $d_{xy}$ bands and the $d_{xz/yz}$ bands typically have opposite signs; (iii) the spin EE and orbital EE generally have opposite signs, the total response being dominated by the orbital EE. Our calculations indicate that introducing in-plane anisotropy is necessary to unleash the potential of the $d_{xz/yz}$ Rashba-split bands for spin-charge interconversion in KTO 2DEGs. This can be achieved from a cubic material like KTO by making it orthorhombic, e.g. by growing it on orthorhombic substrates such as rare-earth scandates, which have a good lattice match with KTO. Another approach to lift the $d_{xz/yz}$ orbital degeneracy is to design a 2DEG from a material that, unlike KTO, is orthorhombic in the bulk. Candidates include SrZrO$_3$, CaZrO$_3$ or CaHfO$_3$. DFT simulations predict that this latter compound, which includes the 5$d$ element Hf, can be made metallic through doping with oxygen vacancies \cite{alay-e-abbas_chemical_2014}, similar to STO and KTO, which suggests that it may host a 2DEG when interfaced with another oxide. To suppress the second compensation effect, it may be attractive to generate 2DEGs in oxide materials in which crystal field splitting causes the $d_{xz/yz}$ bands to lie below the $d_{xy}$ bands. Recent ARPES measurements suggested that such a band-order inversion occurs at interfaces between STO and $\gamma$-Al$_2$O$_3$ \cite{chikina_band-order_2021}. It would be interesting to see if the same inversion could be engineered in KTO 2DEGs. We note that in anatase TiO$_2$, which was shown to be a suitable 2DEG host \cite{sarkar_electron_2015}, the $d_{xz/yz}$ bands also lie below the $d_{xy}$ bands. This material and related members of its family, if properly engineered, thus look promising for efficient spin-charge interconversion by the EE and inverse EE.
Finally, although this is not true at all energies, in KTO 2DEGs the spin EE and the orbital EE typically compete with each other. In classical spin pumping experiments, only a spin current is injected and the conversion to a charge current is mostly driven by the spin EE \cite{vaz2019mapping}. However, in more advanced schemes aiming to harness orbital-charge interconversion as well \cite{go_orbitronics_2021}, it would be beneficial to work with systems in which the spin EE and orbital EE have the same sign. Materials with more than half-filled orbitals, in which the spin and orbital moments should be parallel, could represent a fruitful research direction. \section*{Methods} \subsection*{Sample preparation} After preannealing the KTO at 300$^\circ$C for 1 h in UHV, we grew 1-2 $\si{\angstrom}$ of Al at room temperature at a pressure of 7$\times$10$^{-10}$ mbar, using a Knudsen cell heated to 1000$^\circ$C and a growth rate of 0.011 $\si{\angstrom\per\second}$. The samples were then transferred in UHV to the connected ARPES chamber. Low-energy electron diffraction performed on the samples before and after Al deposition showed sharp diffraction spots corresponding to a square lattice, attesting to the high structural coherence of the surface. In situ X-ray photoelectron spectroscopy showed that Al deposition onto KTO led to the reduction of the Ta valence from the nominal 5+, consistent with 2DEG formation \cite{moreno2021admat}, and that Al was fully oxidized to Al$^{3+}$. \subsection*{Angle-resolved photoemission spectroscopy} High-resolution angle-resolved photoemission spectroscopy (ARPES) spectra were collected at the Cassiopée beamline of Synchrotron SOLEIL (France), with a Scienta R4000 electron energy analyser. The beamline allows control of the energy and polarization of the VUV photons. Data presented in the manuscript were collected with different polarizations to probe electron states with different orbital symmetries: linear horizontal (i.e. parallel to the scattering plane) or linear vertical (LH or LV), and circular left or right (CL or CR). Considering the sample surface plane \textit{xy} and a vertical slit along \textit{y} (LV), for a normal-emission geometry the scattering plane is defined as the mirror plane \textit{xz}. This allows probing only orbitals of odd symmetry with respect to the \textit{xz} plane, corresponding to the \textit{d}\textsubscript{xy} and \textit{d}\textsubscript{yz} orbitals. By using a horizontal slit (LH), only \textit{d}\textsubscript{xz} orbitals are selected. The sum of the LH and LV spectra thus contains contributions from all the \textit{t}\textsubscript{2g} bands. The sample was kept at 15 K by liquid He in order to minimize thermal noise. The energy and angular resolutions were 15 meV and <0.25$^\circ$, respectively. \subsection*{Tight-binding model} We diagonalize the tight-binding Hamiltonian \begin{align}\label{eq:Hamiltonian} H=H_\mathrm{hop}+H_\mathrm{SOC}+H_\mathrm{mix} \end{align} to describe the electron gas that is confined at the interface. In the model, the electron gas consists of a single-layer square lattice (lattice constant $a=4.0\,\mathrm{\mathring{A}}$) formed by the Ta atoms. Only the $5d$ electrons of the Ta atoms have been considered, since they form the states close to the Fermi energy. The model is similar to our previous works on STO 2DEGs~\cite{vaz2019mapping,johansson2021spin} but instead of considering only the three $t_{2g}$ orbitals, we also consider the $d_{z^2}$ and $d_{x^2-y^2}$ orbitals.
For the $d_{xy}$ orbital we consider two additional subbands. In total, the Hamiltonian is a $14\times 14$ matrix, which results in 14 bands in the band structure. The basis is \begin{align}\label{eq:basis} \{ \ket{d_{z^2\uparrow}},\ket{d_{z^2\downarrow}},\ket{d_{x^2-y^2\uparrow}},\ket{d_{x^2-y^2\downarrow}},\ket{d_{xy1\uparrow}},\ket{d_{xy1\downarrow}} ,\ket{d_{xy2\uparrow}},\ket{d_{xy2\downarrow}} ,\ket{d_{xy3\uparrow}},\ket{d_{xy3\downarrow}},\ket{d_{zx\uparrow}},\ket{d_{zx\downarrow}},\ket{d_{yz\uparrow}},\ket{d_{yz\downarrow}} \}. \end{align} Since we consider a two-dimensional system with hoppings along $x$ and $y$, the hopping matrix $H_\mathrm{hop}$ is diagonal except for a mixing of $d_{z^2}$ and $d_{x^2-y^2}$ states, \begin{align} H_\mathrm{hop}=\begin{pmatrix} \epsilon_{z^2}&m_{z^2,x^2-y^2}&0&0&0&0&0\\ m_{z^2,x^2-y^2}&\epsilon_{x^2-y^2}&0&0&0&0&0\\ 0&0&\epsilon_{xy1}&0&0&0&0\\ 0&0&0&\epsilon_{xy2}&0&0&0\\ 0&0&0&0&\epsilon_{xy3}&0&0\\ 0&0&0&0&0&\epsilon_{zx}&0\\ 0&0&0&0&0&0&\epsilon_{yz}\\ \end{pmatrix} \otimes \begin{pmatrix} 1&0\\0&1 \end{pmatrix}. \end{align} The elements of this matrix have been determined using the Slater-Koster formalism: \begin{align} \epsilon_{z^2}&=\frac{t_\sigma+3t_\delta}{2}\left[\cos(ak_x)+\cos(ak_y)\right]+\Delta\epsilon_{z^2}\\ \epsilon_{x^2-y^2}&=\frac{3t_\sigma+t_\delta}{2}\left[\cos(ak_x)+\cos(ak_y)\right]+\Delta\epsilon_{x^2-y^2}\\ \epsilon_{xy\{1,2,3\}}&=2t_\pi\cos(ak_x)+2t_\pi\cos(ak_y)+\Delta\epsilon_{xy\{1,2,3\}}\\ \epsilon_{zx}&=2t_\pi\cos(ak_x)+2t_\delta\cos(ak_y)+\Delta\epsilon_{zx}\\ \epsilon_{yz}&=2t_\delta\cos(ak_x)+2t_\pi\cos(ak_y)+\Delta\epsilon_{yz}\\ m_{z^2,x^2-y^2}&=\frac{\sqrt{3}}{2}(t_\delta-t_\sigma)\left[\cos(ak_x)-\cos(ak_y)\right] \end{align} The independent hopping amplitudes are $t_\sigma=-0.46\,\mathrm{eV}$, $t_\pi=-1.37\,\mathrm{eV}$ and $t_\delta=-0.07\,\mathrm{eV}$. To account for the band shifts due to the broken inversion symmetry at the interface, we take into account the on-site energies $\Delta\epsilon_{z^2}=3.670\,\mathrm{eV}$, $\Delta\epsilon_{x^2-y^2}=21.45\,\mathrm{eV}$, $\Delta\epsilon_{xy1}=5.180\,\mathrm{eV}$, $\Delta\epsilon_{xy2}=5.275\,\mathrm{eV}$, $\Delta\epsilon_{xy3}=5.515\,\mathrm{eV}$, $\Delta\epsilon_{zx}=2.885\,\mathrm{eV}$ and $\Delta\epsilon_{yz}=2.885\,\mathrm{eV}$.
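As a cross-check of these expressions, the hopping block can be assembled and diagonalized numerically. The following minimal sketch uses only the quoted amplitudes and on-site energies and omits the spin-doubling identity factor (since $H_\mathrm{hop}$ is spin diagonal); it reproduces, at the $\Gamma$ point and in the absence of SOC and interface mixing, the near-$\epsilon_\mathrm{F}$ ordering of the $d_{xy}$ and $d_{zx/yz}$ levels described in the Results section.

\begin{verbatim}
# Sketch: assemble H_hop(k) from the Slater-Koster expressions above
# (energies in eV, k in 1/Angstrom). The tensor product with the 2x2
# identity is omitted because H_hop is spin diagonal.
import numpy as np

a = 4.0
t_sig, t_pi, t_del = -0.46, -1.37, -0.07
de = {"z2": 3.670, "x2y2": 21.45, "xy1": 5.180,
      "xy2": 5.275, "xy3": 5.515, "zx": 2.885, "yz": 2.885}

def h_hop(kx, ky):
    cx, cy = np.cos(a * kx), np.cos(a * ky)
    diag = [(t_sig + 3 * t_del) / 2 * (cx + cy) + de["z2"],
            (3 * t_sig + t_del) / 2 * (cx + cy) + de["x2y2"],
            2 * t_pi * (cx + cy) + de["xy1"],
            2 * t_pi * (cx + cy) + de["xy2"],
            2 * t_pi * (cx + cy) + de["xy3"],
            2 * t_pi * cx + 2 * t_del * cy + de["zx"],
            2 * t_del * cx + 2 * t_pi * cy + de["yz"]]
    H = np.diag(diag)
    m = np.sqrt(3) / 2 * (t_del - t_sig) * (cx - cy)   # d_z2 / d_x2-y2 mixing
    H[0, 1] = H[1, 0] = m
    return H

# At Gamma the mixing vanishes and the d_zx / d_yz levels are degenerate;
# the d_xy subbands come out lowest, consistent with the ordering in Fig. 1:
print(np.round(np.linalg.eigvalsh(h_hop(0.0, 0.0)), 3))
# [-0.3   -0.205  0.005  0.005  0.035  3.    20.  ]
\end{verbatim}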
The matrix $H_\mathrm{SOC}$ with $\lambda=0.16\,\mathrm{eV}$ describes on-site spin-orbit coupling and mixes different spins and even different orbitals: \setcounter{MaxMatrixCols}{14} \begin{align} H_\mathrm{SOC}=\frac{2}{3}\lambda\begin{pmatrix} 0&0&0&0&0&0&0&0&0&0&0&-\frac{\sqrt{3}}{2}&0&i\frac{\sqrt{3}}{2}\\ 0&0&0&0&0&0&0&0&0&0&\frac{\sqrt{3}}{2}&0&i\frac{\sqrt{3}}{2}&0\\ 0&0&0&0&-i&0&-i&0&-i&0&0&\frac{1}{2}&0&\frac{i}{2}\\ 0&0&0&0&0&i&0&i&0&i&-\frac{1}{2}&0&\frac{i}{2}&0\\ 0&0&i&0&0&0&0&0&0&0&0&-\frac{i}{2}&0&\frac{1}{2}\\ 0&0&0&-i&0&0&0&0&0&0&-\frac{i}{2}&0&-\frac{1}{2}&0\\ 0&0&i&0&0&0&0&0&0&0&0&-\frac{i}{2}&0&\frac{1}{2}\\ 0&0&0&-i&0&0&0&0&0&0&-\frac{i}{2}&0&-\frac{1}{2}&0\\ 0&0&i&0&0&0&0&0&0&0&0&-\frac{i}{2}&0&\frac{1}{2}\\ 0&0&0&-i&0&0&0&0&0&0&-\frac{i}{2}&0&-\frac{1}{2}&0\\ 0&\frac{\sqrt{3}}{2}&0&-\frac{1}{2}&0&\frac{i}{2}&0&\frac{i}{2}&0&\frac{i}{2}&0&0&-\frac{i}{2}&0\\ -\frac{\sqrt{3}}{2}&0&\frac{1}{2}&0&\frac{i}{2}&0&\frac{i}{2}&0&\frac{i}{2}&0&0&0&0&\frac{i}{2}\\ 0&-i\frac{\sqrt{3}}{2}&0&-\frac{i}{2}&0&-\frac{1}{2}&0&-\frac{1}{2}&0&-\frac{1}{2}&\frac{i}{2}&0&0&0\\ -i\frac{\sqrt{3}}{2}&0&-\frac{i}{2}&0&\frac{1}{2}&0&\frac{1}{2}&0&\frac{1}{2}&0&0&-\frac{i}{2}&0&0 \end{pmatrix} \end{align} Due to the gradient potential at the interface, the oxygen $p$ orbitals are displaced away from the bond connecting two Ta atoms. This allows for hopping terms that are forbidden in the bulk. The effective hopping amplitude in a hopping network between two neighboring Ta $d$ orbitals, via an intermediate hopping to an oxygen $p$ orbital, is finite and antisymmetric~\cite{khalsa2013theory,zhong2013theory} when mixing $d_{zx}$ or $d_{yz}$ with $d_{xy}$, $d_{z^2}$ or $d_{x^2-y^2}$ orbitals. This gives rise to~\cite{kim2016strongly} \begin{align*} H_\mathrm{mix}=&2i \begin{pmatrix} 0&0&0&0&0&-g_2\sin(ak_x)&-g_2\sin(ak_y)\\ 0&0&0&0&0&-g_3\sin(ak_x)&g_3\sin(ak_y)\\ 0&0&0&0&0&g_{1}\sin(ak_y)&g_{1}\sin(ak_x)\\ 0&0&0&0&0&g_{1}\sin(ak_y)&g_{1}\sin(ak_x)\\ 0&0&0&0&0&g_{1}\sin(ak_y)&g_{1}\sin(ak_x)\\ g_2\sin(ak_x)&g_3\sin(ak_x)&-g_{1}\sin(ak_y)&-g_{1}\sin(ak_y)&-g_{1}\sin(ak_y)&0&0\\ g_2\sin(ak_y)&-g_3\sin(ak_y)&-g_{1}\sin(ak_x)&-g_{1}\sin(ak_x)&-g_{1}\sin(ak_x)&0&0 \end{pmatrix}\\ &\otimes \begin{pmatrix} 1&0\\0&1 \end{pmatrix} \end{align*} with the amplitudes $g_{1}=0.005\,\mathrm{eV}$, $g_{2}=0.5\,\mathrm{eV}$ and $g_{3}=0.002\,\mathrm{eV}$. Note that we did not observe the $d_{x^2-y^2}$ and $d_{z^2}$ bands in the ARPES measurements, as they are several eV above the Fermi energy. On the one hand, considering the $d_{x^2-y^2}$ orbital was not useful for improving the fit, which is why we practically disregarded this band pair by using a large on-site energy $\Delta\epsilon_{x^2-y^2}$. Still, we wanted to include this orbital for the sake of completeness. On the other hand, considering $d_{z^2}$ improved the fit significantly. In fact, without this orbital we were not able to reproduce the large Rashba splitting of the lower $d_{zx/yz}$ bands. The origin of this effect has been explained in Ref.~\cite{kim2016strongly} for a monolayer of BaHfO$_3$: Near $\Gamma$ the Hamiltonian can be expanded in $\vec{k}$.
For the band pair with the strong Rashba splitting (the lower $d_{zx/yz}$ bands), we get \begin{align}\label{eq:expansion} H_\mathrm{eff}= h(\vec{k}) \begin{pmatrix}1&0\\0&1\end{pmatrix}+ \frac{2\sqrt{3}g_2\lambda}{\epsilon_{z^2}-\epsilon_{yz/zx}}\left(\vec{\tau}\times\vec{k}\right)\cdot\vec{e}_z, \end{align} if we treat these two bands individually in the basis $\{ \frac{1}{\sqrt{2}}(\ket{d_{zx\downarrow}}+i\ket{d_{yz\downarrow}}), \frac{1}{\sqrt{2}}(\ket{d_{zx\uparrow}}-i\ket{d_{yz\uparrow}}) \}$. The second term describes the Rashba splitting. It is quantified by $g_2$ (the orbital mixing amplitude of $d_{z^2}$ with $d_{yz}$ and $d_{zx}$), as well as by $\epsilon_{z^2}-\epsilon_{yz/zx}$ (the energy difference between the $d_{yz}/d_{zx}$ band pair and the $d_{z^2}$ band at the $\Gamma$ point). $\vec{\tau}$ describes the pseudospin $\vec{\tau}=\vec{s}_{d_{zx}}-\vec{s}_{d_{yz}}$, which forms a Rashba spin texture on the Fermi line. However, the actual spin $\vec{s}=\vec{s}_{d_{zx}}+\vec{s}_{d_{yz}}$ is compensated. This explains why the spin texture of the band pair with the strong Rashba splitting is so weak in the full model. \subsection*{Calculation of the spin and orbital Edelstein effect} We use the semiclassical Boltzmann transport theory to calculate the spin and orbital Edelstein efficiencies defined by Eq.~\eqref{eq:Edelstein_efficiency}. The magnetization $\vec m_{\text{s}/\text{l}}$ originating from the spin and orbital moments, respectively, and induced by the external electric field $\vec E$ is calculated as~\cite{johansson2021spin} \begin{align}\label{eq:magnetization} \vec m_{\text s/\text l} = - \frac{g_{\text s / \text l} \mu_\text B}{\hbar} \sum \limits_{\vec k} f_{\vec k} \Braket{\vec{s}/\vec{l}}_{\vec k} \ . \end{align} Here, $g_{\text s/\text l}$ are the spin and orbital Land{\'e} $g$ factors, respectively, which we have set to $g_\text s = 2$ and $g_\text l = 1$ in our calculations. $\mu_\text B $ is the Bohr magneton, $f_{\vec k}$ is the distribution function, and $\Braket{\vec{s}/\vec{l}}_{\vec{k}}$ is the $\vec k$ dependent expectation value of the spin and orbital moment, respectively, which is calculated by \begin{align}\label{eq:sl_k} \Braket{\vec s/ \vec l}_{\vec k} = \Braket{\Psi_{\vec k} | \vec{s}/\vec{l} | \Psi_{\vec{k}}} \ . \end{align} $\Ket{\Psi_{\vec k}}$ are the eigenstates of the Hamiltonian~\eqref{eq:Hamiltonian}. For clarity, we have merged the crystal momentum $\hbar \vec{k}$ and the band index $n$ into the multi-index $\vec{k}$ here and in the following.
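The compensation mechanism behind Eq.~\eqref{eq:expansion} can be made explicit with a small numerical sketch. The script below is a toy construction (the Rashba prefactor is an arbitrary placeholder rather than the fitted $2\sqrt{3}g_2\lambda/(\epsilon_{z^2}-\epsilon_{yz/zx})$): it builds the four states $\ket{d_{zx\uparrow}}, \ket{d_{zx\downarrow}}, \ket{d_{yz\uparrow}}, \ket{d_{yz\downarrow}}$, projects $\vec\tau = \vec s_{d_{zx}}-\vec s_{d_{yz}}$ and $\vec s = \vec s_{d_{zx}}+\vec s_{d_{yz}}$ onto the two-dimensional basis of Eq.~\eqref{eq:expansion}, and verifies that the Rashba eigenstates carry a winding in-plane pseudospin while the physical in-plane spin vanishes identically in this two-level limit.

\begin{verbatim}
# Toy sketch of pseudospin (tau) vs. physical spin (s) in Eq. (12).
# Units of hbar; lam_R is an arbitrary assumed prefactor.
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)
P_zx, P_yz = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # orbital projectors

# 4-dim space ordered as (zx,up), (zx,dn), (yz,up), (yz,dn):
s_op = [0.5 * np.kron(I2, p) for p in (sx, sy, sz)]           # s_zx + s_yz
t_op = [0.5 * np.kron(P_zx - P_yz, p) for p in (sx, sy, sz)]  # s_zx - s_yz

b1 = np.array([0, 1, 0, 1j], complex) / np.sqrt(2)   # (|zx,dn> + i|yz,dn>)/sqrt2
b2 = np.array([1, 0, -1j, 0], complex) / np.sqrt(2)  # (|zx,up> - i|yz,up>)/sqrt2
B = np.column_stack([b1, b2])

tau = [B.conj().T @ t @ B for t in t_op]     # acts like hbar/2 Pauli matrices
spin = [B.conj().T @ s @ B for s in s_op]
print(np.allclose(spin[0], 0), np.allclose(spin[1], 0))   # True True

lam_R = 1.0
for phi in (0.0, 0.5 * np.pi):               # two directions of k on a circle
    kx, ky = np.cos(phi), np.sin(phi)
    H = lam_R * (ky * tau[0] - kx * tau[1])  # (tau x k) . e_z
    _, v = np.linalg.eigh(H)
    psi = B @ v[:, 0]                        # lower band, back in the 4-dim space
    print(np.round([np.real(psi.conj() @ o @ psi) for o in t_op + s_op], 3))
# The pseudospin winds perpendicular to k with magnitude hbar/2, while all
# physical spin components vanish; the small but finite values (~0.09 hbar)
# in Fig. 2 come from terms beyond this two-level approximation.
\end{verbatim}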
The operators of spin and orbital moment in the basis~\eqref{eq:basis} of the tight-binding Hamiltonian~\eqref{eq:Hamiltonian} are \begin{align}\label{eq:spin_operator} \vec s = \frac{\hbar}{2} \mathds{1}_{7} \otimes \vec \sigma , \ \ \ \ \vec l = \hbar \bm \lambda \otimes \mathds{1}_2 \end{align} with \begin{align}\label{eq:lambda_xyz} \begin{split} & \lambda_x = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{3}\, i\\ 0 & 0 & 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 &0 & - i & 0 \\ 0 & 0 & 0 & 0 &0 & - i & 0 \\ 0 & 0 & 0 & 0 &0 & - i & 0 \\ 0 & 0 & i & i & i & 0 & 0 \\ - \sqrt{3}\, i& - i & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \ , \ \ \lambda_y = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & - \sqrt{3}\, i & 0 \\ 0 & 0 & 0 & 0 & 0 & i & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 & 0 & i \\ 0 & 0 & 0 & 0 & 0 & 0 & i \\ \sqrt{3}\, i & - i & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & - i & - i & - i & 0 & 0 \end{pmatrix} \ , \\ & \lambda_z = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & - 2 i & - 2 i & - 2 i & 0 & 0 \\ 0 & 2 i & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 i & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 i & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & - i \\ 0 & 0 & 0 & 0 & 0 & i & 0 \end{pmatrix} \end{split} \ . \end{align} Here, $\mathds{1}_m$ is the $m \times m$ identity matrix. The distribution function $f_{\vec{k}}$ is calculated by solving the semiclassical Boltzmann equation for a stationary and spatially homogeneous system, \begin{align}\label{eq:Boltzmann} \dot{\vec k} \frac{\partial f_{\vec k}}{\partial \vec{k}} = \left( \frac{\partial f_{\vec{k}}}{\partial t} \right)_\text{scatt} \ . \end{align} The left-hand side corresponds to the influence of external fields on the distribution function. In the presence of an external electric field $\vec E$, it is given by the semiclassical equation of motion \begin{align}\label{eq:kdot} \dot{\vec{k}}= - \frac{e}{\hbar} \vec E \end{align} with $e$ the absolute value of the elementary charge. The right-hand side of Eq.~\eqref{eq:Boltzmann} corresponds to the scattering term. Using the constant relaxation time approximation \begin{align}\label{eq:scattering_term} \left( \frac{\partial f_{\vec{k}}}{\partial t} \right)_\text{scatt} = - \frac{1}{\tau_0} \left(f_{\vec{k}} - f_{\vec k} ^0 \right) \end{align} with $\tau_0$ the constant momentum relaxation time (set to $\tau_0=1\,\si{\pico\second}$ in our calculations) and $f_{\vec{k}}^0$ the equilibrium (Fermi-Dirac) distribution function, the Boltzmann equation~\eqref{eq:Boltzmann} is solved by \begin{align}\label{eq:Boltzmann_solution} f_{\vec k} = f_{\vec k}^0 + \frac{\partial f _{\vec k} ^0 }{\partial \epsilon} e \tau_0 \vec v_{\vec k} \cdot \vec E \ . \end{align} The group velocity $\vec v_{\vec{k}}$ is the expectation value of the velocity operator $\hat{\vec{v}}$, \begin{align}\label{eq:velocity} \vec v_{\vec{k}}= \Braket{\Psi_{\vec{k}} | \hat{\vec{v}}| \Psi_{\vec{k}}} \end{align} with $\hat{\vec{v}}= \nicefrac{i}{\hbar} \left[ H, \hat{\vec{r}}\right] = \nicefrac{1}{\hbar} \nicefrac{\partial H}{\partial \vec{k}}$ and $\hat{\vec{r}}= i \nicefrac{\partial }{\partial \vec{k}} $. \section*{Acknowledgements} This project received funding from the ERC Advanced Grant ``FRESCO'' ($\#$833973), the QuantERA project ``QUANTOX'' (ANR-18-QUAN-0014), the French ANR projects ``QUANTOP'' (ANR-19-CE47-0006-01) and ``CORNFLAKE'' (ANR-18-CE24-0015-01), and Intel's Science Technology Center – FEINMAN. M.B. thanks the Alexander von Humboldt Foundation for supporting his stays at Martin-Luther-Universität Halle.
The authors thank J.-P. Attané, L. Vila, C. Proust and D. Vignolles for useful discussions. \section*{Author contributions statement} M.B. proposed the study and led it with I.M. L.M.V.A. prepared the samples with help from J.R. and P.L. S.V. led the ARPES study with S.M. and L.M.V.A., with support from J.B., R.S., J.R., F.B. and P.L. S.V. analysed the ARPES data and discussed them with A.J., B.G., L.M.V.A., S.M., J.B., N.B., I.M. and M.B. B.G. performed the TB fits with help from A.J. and I.M. A.J. performed the EE calculations with help from B.G. and I.M. M.B., S.V., A.J. and B.G. wrote the paper with inputs from all authors. \section*{Additional information} The authors declare no competing interests.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Spintronics, or spin electronics, involves the study of the active control and manipulation of spin degrees of freedom in solid-state systems and is a rapidly growing field of science.\cite{Zutic04} The key purpose of these studies is the generation, control, and manipulation of spin-polarized currents. A useful tool for achieving this goal is the spin-orbit interaction, which couples the spin of an electron with its spatial motion in the presence of a certain asymmetry of the conductor. For example, the Rashba spin-orbit interaction is due to a lack of inversion symmetry in semiconductor heterostructures such as InAs or GaAs.\cite{Rashba60} The advantage of this type of interaction is that it can be tuned by means of electrostatic gates.\cite{Nitta97,Engels97} In truly single-mode quantum channels, spin-orbit interaction alone neither changes the electric current nor results in a spin current if no magnetic field or magnetic materials are involved. In this case, the spin-orbit interaction does not change the energy-band topology and can be simply eliminated from the Hamiltonian by means of a unitary transform.\cite{Levitov03} A prototypical scheme of a spin field-effect transistor based on the Rashba interaction and a single-mode ballistic channel with ferromagnetic electrodes was proposed by Datta and Das\cite{Datta90} more than two decades ago. Recently, such a device was experimentally realized.\cite{Koo09} The current in a single-mode quantum channel also depends on the spin-orbit interaction if a magnetic field is applied parallel to the channel or normal to the plane of the heterostructure (i.e. in the direction of the Rashba field).\cite{Pershin04} The interplay of the spin-orbit interaction with the magnetic field significantly modifies the band structure and produces an energy gap in the spectrum together with additional subband extrema. This results in a decrease in the charge current and a net spin current as the Fermi level passes through the gap. These effects were recently experimentally observed by Quay et al.\cite{Quay10}

Many authors have studied spin and charge transport in multimode quantum channels in the absence of a magnetic field or magnetic ordering. Governale and Z\"ulicke\cite{Governale02} considered a long channel with a parabolic confinement potential and took into account the mixing of different transverse-quantization subbands by the spin-orbit interaction. This mixing results in an asymmetric distortion of the dispersion curves but does not open any gaps in the spectrum. As a consequence, the spin-orbit interaction in the presence of a voltage drop across the channel results in a spin accumulation inside the channel but does not lead to a spin current or deviations from the standard conductance quantization. There are also a number of numerical calculations of the spin current,\cite{Eto05,Zhai07,Liu07,Zhai08} but these papers deal with stepwise constrictions, and the results are obscured by interference effects. A more realistic geometry of a saddle-point contact in a two-dimensional potential landscape was considered in Ref.~\onlinecite{Sablikov10}, but the Rashba interaction was taken into account there as a perturbation. In Refs.~\onlinecite{Sanchez06} and \onlinecite{Gelabert10}, a quasi-one-dimensional wire with a localized region of Rashba interaction was considered and a nonzero spin current was predicted for sufficiently sharp boundaries of the region.
Unusual trajectories, existing near these regions, were revealed by Silvestrov and Mishchenko\cite{Silvestrov06} within the quasiclassical approach. However, in all the above papers, the spin current and the deviations from perfect conductance quantization are related to the mixing of subbands in the transition areas between the quantum contact and the reservoirs by the spin-orbit interaction and crucially depend on the geometry and properties of these regions. It is hard to see any general regularities concerning the magnitude of the effect. In this paper, we propose a mechanism of spin current generation that relies on the energy band structure deep in the wire rather than on the reflection effects in the transition areas and leads to a 100\% spin-polarized current at definite positions of the Fermi level. This mechanism is reminiscent of the one in Ref.~\onlinecite{Pershin04} but requires no magnetic field. \section{The model} Consider a quasi-one-dimensional conducting channel formed in a two-dimensional electron gas by means of electrostatic gates. The transition between the reservoirs and the channel is assumed to be adiabatic, and the length of the channel is much larger than that of the transition regions. We assume that the Rashba spin-orbit interaction is present in the channel but absent in the reservoirs, so the spin current through the system is well-defined. The Hamiltonian of the system is of the form \begin{multline}\label{H} \hat{H}=\frac{\hat{p}^2_x}{2m} +\frac{\hat{p}^2_z}{2m}+U(x,z) \\ +\frac{\alpha(x)}{\hbar} \left(\hat{p}_x\hat{\sigma}_z-\hat{p}_z\hat{\sigma}_x\right) -\frac{i}{2}\,\frac{\partial\alpha}{\partial x}\,\hat{\sigma}_z, \end{multline} where $U(x,z)$ is the confining potential and $\alpha(x)$ is the parameter of spin-orbit coupling. Both quantities are smooth functions of the longitudinal coordinate $x$ that are constant almost throughout the whole length of the channel and vanish in the reservoirs. It is now straightforward to make use of the adiabatic approximation and introduce a complete set of eigenfunctions $\varphi_{n}(x,z)$ and eigenenergies $\varepsilon_{n}$ corresponding to the transverse motion of electrons in the $z$ direction. This leads to a set of coupled equations of the form \begin{multline} \left[ \frac{\hat{p}_x^2}{2m} + \frac{\alpha(x)}{\hbar}\,\hat{\sigma}_z\,\hat{p}_x - \frac{i}{2}\,\frac{d\alpha}{dx}\,\hat{\sigma}_z + \varepsilon_m \right] \bar{\psi}_m(x) \\ -\frac{\alpha(x)}{\hbar}\,\hat\sigma_x \sum_n \langle m|p_z|n \rangle\,\bar{\psi}_n(x) = \varepsilon\,\bar\psi_m(x) \label{Schr-multi} \end{multline} for the spinors $\bar{\psi}_n = (u_n,\, v_n)^T$ that describe the longitudinal dependence of the spin-up and spin-down amplitudes of the wave function in the $n$-th transverse quantum mode. \begin{figure}[t] \includegraphics[width=8.5cm]{fig1} \caption{\label{fig1} Dispersion curves for the two lowest subbands of a quantum wire with spin-orbit interaction and parabolic transverse confinement. The dispersion curves are distorted by level crossing but exhibit no local maxima. The color of the curve designates the dominant spin projection.} \end{figure} If the matrix elements of transverse momentum between different modes $\varphi_m$ and $\varphi_n$ were zero, the twofold spin degeneracy of these modes would be lifted by the spin-orbit interaction and one should see two sets of parabolic dispersion curves shifted along the $k_x$ axis that would correspond to the two possible projections of spin on the $z$ axis.
The curves of each set would have minima at $k_x=\pm 2m\alpha/\hbar^2$ and intersect without affecting each other. Nonzero matrix elements $\langle m|p_z|n \rangle$ result in anticrossing of the dispersion curves with different $n$ and spin projection and lead to an asymmetric distortion of them (see Fig.~\ref{fig1}). However, this does not give rise to new maxima and minima in these curves for the case of a standard parabolic confining potential.\cite{Moroz99} The reason is that the levels of transverse quantization are evenly spaced and it is impossible to isolate a pair of them with a small separation. In other words, the vertical separation of the anticrossing curves is too large as compared with their horizontal shifts. The failure of the approximate two-band model, which predicts a nonmonotonic behavior of the curves, may be understood as follows. In the absence of band mixing, the two curves corresponding to two subsequent transverse-quantization levels and different spin projections would cross at $k_x= \Delta\varepsilon/2\alpha$, where $\Delta\varepsilon = \varepsilon_{n+1} - \varepsilon_n$. To form a maximum, the crossing branches of these curves should have different signs of slope, $k_x+ 2m\alpha/\hbar^2>0$ and $k_x - 2m\alpha/\hbar^2 < 0$, at the intersection point, which results in the condition $\Delta\varepsilon < 4m\alpha^2/\hbar^2$. On the other hand, the band-mixing term $\alpha\langle n+1|p_z|n\rangle/\hbar$ would lead to a splitting of the curves at the crossing point of the order of $\Omega \sim \sqrt{m\Delta\varepsilon}\,\alpha/\hbar$. The two-band model is justified only if $\Omega \ll \Delta\varepsilon$, i.e. $\Delta\varepsilon \gg m\alpha^2/\hbar^2$, which is incompatible with the previous condition. Exact calculations\cite{Governale02} show that all the dispersion curves have only one minimum and hence the dependence of the conductance of the channel on the Fermi energy exhibits only the conventional $2e^2/h$ steps, while the spin current is absent. \begin{figure}[t] \includegraphics[width=8.5cm]{fig2} \caption{\label{fig2} The system under consideration. The current flows in the $x$ direction, and the negative voltage at the additional middle gate changes the confining potential from a one-well to a double-well shape. As the negative voltage increases, a maximum appears in the lower dispersion curve.} \end{figure} \begin{figure}[t] \includegraphics[width=8.5cm]{fig3} \caption{\label{fig3} Dispersion curves for a pair of closely spaced energy levels with a small matrix element of transverse momentum in a quantum wire with spin-orbit interaction. The level crossing results in the appearance of local maxima in the lower curves.} \end{figure} Things become different if the confinement is nonparabolic. Consider, e.g., a system in which $U(z)$ has the shape of a double potential well. Such a potential may be formed by means of a negatively biased central gate on top of the quantum wire (see Fig.~\ref{fig2}). In the case of a high impenetrable barrier between the wells, each of them would possess the same set of energy levels, so the levels of the whole system would be doubly degenerate. A finite tunneling through the barrier lifts the degeneracy, and therefore one gets a set of pairs of levels with very small spacings within each pair.
In this case, the two-band model is justified and, taking into account only the two lowest levels, one obtains the following expression for the resulting dispersion curves: \begin{multline} \varepsilon = \frac{\hbar^2 k_x^2}{2m} + \frac{\varepsilon_1+\varepsilon_2}{2} \\ \pm \frac{1}{2}\sqrt{(\varepsilon_2-\varepsilon_1\pm 2 \alpha k_x)^2+4 |p_{12}|^2 \alpha^2/\hbar^2}, \label{E(k)} \end{multline} where $p_{12}=\langle 1|p_z|2\rangle$. The upper sign at $2\alpha k_x$ under the square root corresponds to the mixture of $|1\uparrow\rangle$ and $|2\downarrow\rangle$ states, and the lower sign corresponds to the mixture of $|1\downarrow\rangle$ and $|2\uparrow\rangle$ states. The two pairs of the resulting curves are symmetric with respect to $k_x=0$. The lower dispersion curve may have one or two minima as a function of $k_x$ depending on the relations between $\Delta\varepsilon = \varepsilon_2 - \varepsilon_1$, $p_{12}$, and $\alpha$ (see Fig.~\ref{fig3}). The upper minimum disappears by merging with the local maximum, i.e. when the points where $d\varepsilon/dk_x =0$ and $d^2\varepsilon/dk_x^2=0$ coincide. Therefore it follows from Eq.~(\ref{E(k)}) that the second minimum exists if \begin{equation} \left| \frac{\hbar p_{12}}{m\alpha}\right|^{2/3}+\left( \frac{\hbar^2\Delta \varepsilon}{2m\alpha^2}\right)^{2/3}<1. \label{inequality} \end{equation} Evidently, one can meet this condition by making the overlap of the wave functions in the two wells sufficiently small. For example, if a square quantum well with infinitely high external walls is symmetrically cut by a $\delta$-like barrier in the middle, both $\Delta\varepsilon$ and $|p_{12}|$ are inversely proportional to the effective strength of the barrier $k_0$. \section{The conductance} The existence of a local maximum in the lower pair of the dispersion curves leads to significant changes in the conductance of the wire. If the Fermi level lies between the lower and upper minima $\mu_1$ and $\mu_2$ in the dispersion curves (see Fig.~\ref{fig3}), it intersects two branches with positive (negative) group velocity that correspond to the two different spin projections in the $z$ direction, and the conductance is $2e^2/h$, while the spin current is absent. If the Fermi level lies between the upper minimum $\mu_2$ and the local maximum $\mu_3$ in the lower curves or above the minimum $\mu_4$ in the two upper dispersion curves, it intersects two branches with positive (negative) group velocity and one spin projection and two branches with positive (negative) group velocity and the other spin projection. This results in a conductance of $4e^2/h$ and yields no spin current. However, if the Fermi level falls within the gap between the local maximum $\mu_3$ in the lower curves and the minimum $\mu_4$ in the upper curves, it intersects two branches with positive group velocities and one spin projection and two branches with negative velocities and the other spin projection. Therefore, in the case of a sufficiently long wire, the conductance exhibits a dip to $2e^2/h$ where the current is 100\% spin polarized.
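To make this concrete, the dispersion~(\ref{E(k)}) is easy to explore numerically. The following minimal sketch works in dimensionless units $\hbar = m = \alpha = 1$ (so that $k_x$ is measured in $m\alpha/\hbar^2$ and energies in $m\alpha^2/\hbar^2$), takes the parameters used below in Fig.~\ref{fig4}, $|p_{12}| = 0.08$ and $\Delta\varepsilon = 0.32$, checks condition~(\ref{inequality}), and locates the gap edges $\mu_3$ and $\mu_4$:

\begin{verbatim}
# Minimal numerical sketch of Eq. (E(k)) in units hbar = m = alpha = 1,
# for |p12| = 0.08 and Delta_eps = 0.32 (the parameter set of Fig. 4).
import numpy as np

p12, de = 0.08, 0.32
e1, e2 = 0.0, de

def bands(k, sign):
    # sign = +1 mixes |1 up>, |2 dn>; sign = -1 mixes |1 dn>, |2 up>
    root = 0.5 * np.sqrt((e2 - e1 + 2 * sign * k) ** 2 + 4 * p12 ** 2)
    mid = k ** 2 / 2 + (e1 + e2) / 2
    return mid - root, mid + root

# Condition (inequality) for the existence of the second minimum:
print(abs(p12) ** (2 / 3) + (de / 2) ** (2 / 3) < 1)        # True

k = np.linspace(-2.0, 2.0, 40001)
lo, hi = bands(k, -1)
jmax = np.where(np.diff(np.sign(np.diff(lo))) < 0)[0] + 1   # local maxima of lower curve
mu3 = lo[jmax].max()             # top of the lower curve (~0.094)
mu4 = hi.min()                   # bottom of the upper curve (~0.252)
print(mu3, mu4, mu4 - mu3)
# The gap mu4 - mu3 comes out close to the strong-interaction estimate
# Omega = 2 alpha |p12| / hbar = 0.16 in these units, centered near
# (e1 + e2)/2, in line with the dip discussed in the text.
\end{verbatim}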
To calculate the current through the wire, we use the Landauer--B\"uttiker formula\cite{Buttiker85} for the zero-temperature total electric conductance \begin{eqnarray} G = \frac{e^2}{h} \sum\limits_{n_L,n_R, \sigma_L,\sigma_R} |t_{n_R\sigma_R, n_L\sigma_L}|^2 \label{G} \end{eqnarray} where $t_{n_R\sigma_R, n_L\sigma_L}$ are the transmission amplitudes from a state in transverse mode $n_L$ with spin projection $\sigma_L$ in the left lead to a state in mode $n_R$ with spin projection $\sigma_R$ in the right lead. The spin conductance $G_z^s = I_s/V$ with respect to the $z$ axis is given by\cite{Zhai05} \begin{multline} G_{z}^s = -\frac{e}{4\pi} \sum\limits_{n_L,n_R, \sigma_L} \bigl( t^{*}_{n_R\uparrow,n_L\sigma_L} t_{n_R\uparrow,n_L\sigma_L} \\ - t^{*}_{n_R\downarrow,n_L\sigma_L} t_{n_R\downarrow,n_L\sigma_L} \bigr). \label{G_s} \end{multline} In general, the transmission amplitudes $t_{n_R\sigma_R, n_L\sigma_L}$ can be calculated only numerically. Analytical results may be obtained for the particular case of strong and nearly constant spin-orbit interaction if one neglects the reflection from the boundary regions where the interaction and the confining potential vanish. This is possible if both quantities go to zero in the leads sufficiently smoothly. To make this evident, we perform a unitary transformation of the Hamiltonian with the matrix\cite{Levitov03} \begin{align} \hat{S}(x) = \exp[-i\,\hat{\sigma}_z\,\xi(x)/2], \nonumber\\ \xi(x) = \frac{2m}{\hbar^2} \int_{-\infty}^x dx'\,\alpha(x'), \label{S} \end{align} which eliminates the term linear in $\hat{p}_x$ and brings Eqs.~(\ref{Schr-multi}) to the form \begin{subequations} \label{sys2} \begin{align} \frac{d^2\bar\psi_1}{dx^2} &+ (m^2\alpha^2\,\hbar^{-4} + \Delta k_1^2)\,\bar{\psi}_1 \nonumber\\=& -2m\alpha(x)\,p_{12}\,\hbar^{-3} \left( \hat{\sigma}_x\,\cos\xi - \hat{\sigma}_y \sin\xi \right)\, \bar\psi_2, \\ \frac{d^2\bar\psi_2}{dx^2} &+ (m^2\alpha^2\,\hbar^{-4} + \Delta k_2^2)\,\bar{\psi}_2 \nonumber\\=& -2m\alpha(x)\,p_{12}^{*}\,\hbar^{-3} \left( \hat{\sigma}_x\,\cos\xi - \hat{\sigma}_y \sin\xi \right)\, \bar\psi_1, \end{align} \end{subequations} where $\Delta k^2_{1,2} = 2m\,(\varepsilon - \varepsilon_{1,2})/\hbar^2$. Even though $\alpha$ and $U$ are smooth functions of $x$, the right-hand sides of equations (\ref{sys2}) contain the rapidly oscillating functions $\cos\xi$ and $\sin\xi$ that lead to interband scattering. These equations are similar to those of mechanical parametric resonance\cite{LL} and can be solved in a similar way. If the detuning in both bands is small and the interband coupling is weak, i.e. $\Delta k^2_{1,2} \ll m^2\alpha^2/\hbar^4$ and $|p_{12}| \ll m\alpha/\hbar$, the coupled components of the wave function may be presented in the form \begin{subequations} \begin{align} u_1 = A_1(x)\,e^{i\xi/2} + B_1\,e^{-i\xi/2}, \\ v_2 = C_2(x)\,e^{i\xi/2} + D_2\,e^{-i\xi/2}, \label{ansatz} \end{align} \end{subequations} where $A_1$, $B_1$, $C_2$, and $D_2$ are amplitudes that slowly vary on the scale of $\hbar^2/(m\alpha)$. Substituting Eqs.
(\ref{ansatz}) into (\ref{sys2}), neglecting the second derivatives of slowly varying quantities and collecting the terms proportional to $\exp(\pm i\xi/2)$ leads to a system of first-order equations \begin{subequations} \label{a1-d2} \begin{align} \frac{2im\alpha}{\hbar^{2}}\,\frac{dA_1}{dx}& = -\Delta k_1^2\,A_1 - \frac{2m\alpha p_{12}}{\hbar^{3}}\,D_2, \label{a1}\\ \frac{2im\alpha}{\hbar^{2}}\,\frac{dB_1}{dx}& = \Delta k_1^2\,B_1, \label{b1}\\ \frac{2im\alpha}{\hbar^{2}}\,\frac{dC_2}{dx}& = -\Delta k_2^2\,C_2, \label{c2}\\ \frac{2im\alpha}{\hbar^{2}}\,\frac{dD_2}{dx}& = \Delta k_2^2\,D_2 + \frac{2m\alpha p_{12}^{*}}{\hbar^{3}}\,A_1. \label{d2} \end{align} \end{subequations} While the standalone Eqs.~(\ref{b1}) and (\ref{c2}) have purely oscillating solutions for any choice of parameters, the solutions of the coupled equations (\ref{a1}) and (\ref{d2}) may exponentially grow or decay. Assuming them to be proportional to $e^{sx}$, one easily finds the roots of the characteristic equation of system (\ref{a1})-(\ref{d2}), \begin{align} &s_{1,2} = i\hbar^2\,\frac{\Delta k_1^2 - \Delta k_2^2}{4m\alpha} \pm \kappa, \nonumber\\ \kappa = &\sqrt{ \left|\frac{p_{12}}{\hbar}\right|^2 - \hbar^4 \left(\frac{\Delta k_1^2 + \Delta k_2^2}{4m\alpha}\right)^2 }. \label{roots} \end{align} Solving Eqs.~(\ref{a1})-(\ref{d2}) and similar equations for $u_2$ and $v_1$ results in the transmission amplitudes from the left to the right \begin{multline} |t_{1\uparrow,1\uparrow}|^2 = |t_{2\uparrow,2\uparrow}|^2 \\ = \frac { 16m^2\alpha^2 |p_{12}|^2 - \hbar^6\,(\Delta k_1^2 + \Delta k_2^2)^2 } { 16m^2\alpha^2 |p_{12}|^2\,\cosh^2(\kappa L) - \hbar^6\,(\Delta k_1^2 + \Delta k_2^2)^2 } \label{evanesc} \end{multline} with $ |t_{1\downarrow,1\downarrow}|^2 = |t_{2\downarrow,2\downarrow}|^2 = 1 $ and zero spin-mixing or band-mixing transmission amplitudes. Substituting these amplitudes into Eqs.~(\ref{G}) and (\ref{G_s}) shows that the electric conductance as a function of the Fermi energy has a dip at the second quantization plateau, which corresponds to a spike in the spin current. This is due to the blocking of the current from the left to the right for spin-up electrons inside the gap in the spectrum. In the strong-interaction approximation, the dip is centered at $\varepsilon = (\varepsilon_1+ \varepsilon_2)/2$ and has a width $\Omega =2\alpha|p_{12}|/\hbar$. \begin{figure}[t] \includegraphics[width=8.5cm]{fig4} \caption{\label{fig4}The dependence of the normalized conductance and spin current on the Fermi energy. The upper and lower solid curves show $G(\varepsilon)$ and $I_s$, numerically calculated for $|p_{12}| = 0.08m\alpha/\hbar$, $\Delta\varepsilon = 0.32m\alpha^2/\hbar^2$, and $L=20\hbar^2/m\alpha$. The dashed curve shows $G(\varepsilon)$ calculated by means of Eq.~(\ref{evanesc}).} \end{figure} \begin{figure}[t] \includegraphics[width=6cm]{fig5} \caption{\label{fig5} Spatial dependence of the gap in the spectrum. The lower curve shows the position of the maximum in the lower dispersion curve, and the upper curve shows the position of the minimum in the upper one. Electrons experience partial reflections at the points where the Fermi level crosses the gap in the spectrum. } \end{figure} Figure~\ref{fig4} shows the calculated electric conductance and spin current for $|p_{12}| = 0.08m\alpha/\hbar$, $\Delta\varepsilon = 0.32m\alpha^2/\hbar^2$, and $L=20\hbar^2/m\alpha$.\cite{estimate} Solid lines show the values obtained by a numerical solution of Eqs.
(\ref{sys2}) with account taken of the spatial variations of $\varepsilon_{1,2}$ and $\alpha$, and the dashed line shows the analytical results calculated by means of Eq.~(\ref{evanesc}). Both the numerically calculated conductance and the spin current exhibit an oscillatory behavior as the Fermi level approaches the spectral gap from below. This behavior is explained by quantum interference effects that arise due to the reflections of electrons from the top of the allowed band at the edges of the wire, where it moves down in energy (see Fig.~\ref{fig5}). The amplitude of the oscillations increases as the gap is approached because the reflection amplitude increases. \section{Conclusion} We have shown that the Rashba spin-orbit interaction may open additional gaps in the spectrum of a multichannel quantum wire if the transverse confining potential is chosen appropriately. This happens if the energy levels of transverse quantization come in pairs and the matrix elements of transverse momentum between the corresponding states are sufficiently small. In this case, the conductance of the wire exhibits a dip and the spin current exhibits a spike inside the gap. If the contact is sufficiently long, the conductance in the dip drops from $4e^2/h$ to $2e^2/h$ and the current is fully spin-polarized in the transverse in-plane direction. This effect may be used for designing an all-electrical spin transistor. By applying a negative voltage to the middle longitudinal gate, one may increase the degree of spin polarization of the current from zero to 100\% if the Fermi level is adjusted appropriately. One of the main advantages of using an electric bias for spin control is the ability to make it time-dependent. This can lead to non-trivial effects in transport.\cite{Sadreev13} In the future, it would be of interest to study the effects of a time-periodic bias in our model. \begin{acknowledgments} This work was supported by the Russian Foundation for Basic Research, grant 13-02-01238-a, and by the program of the Russian Academy of Sciences. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Details on the Evaluated Algorithms}
\label{sec:sota}
\noindent This section provides a detailed description, including parameter settings, of the state-of-the-art algorithms used for experimental evaluation (cf.\ Sec.~5 in the paper).
\subsection{3D Structure-based Localization}
\PAR{Active Search (AS).} Active Search \cite{Sattler17PAMI} accelerates 2D-3D descriptor matching via a prioritization scheme. It uses a visual vocabulary to quantize the descriptor space. For each query feature, it determines how many 3D point descriptors are assigned to the feature's closest visual word. This determines the number of descriptor comparisons needed for matching this feature. Active Search then matches the features in ascending order of the number of required descriptor comparisons. If a 2D-to-3D match is found, Active Search attempts to find additional 3D-to-2D correspondences for the 3D points surrounding the matching point. Correspondence search terminates once 100 matches have been found. For the Aachen Day-Night dataset, we trained a visual vocabulary containing 100k words using approximate k-means clustering \cite{Philbin07} on all upright RootSIFT~\cite{Arandjelovic12,Lowe04IJCV} descriptors found in 1,000 randomly selected database images contained in the 3D model. Similarly, we trained a vocabulary containing 10k words for the RobotCar Seasons dataset from the descriptors found in 1,000 randomly selected images contained in the reference 3D model. For the CMU Seasons dataset, we also trained a visual vocabulary consisting of 10k words, but used the SIFT~\cite{Lowe04IJCV} features corresponding to the 3D points in all sub-models instead of RootSIFT features. No vocabulary contains any information from the query images.

We use calibrated cameras rather than simultaneously estimating each camera's extrinsic and intrinsic parameters. We thereby exploit the known intrinsic calibrations provided by the intermediate model of the Aachen Day-Night dataset\footnote{Some of the day-time queries were taken with the same camera as the night-time queries and we enforced that the images taken with the same camera have consistent intrinsics. Thus, the intermediate model provides the intrinsic calibration of the night-time queries.} and the known intrinsics of the RobotCar Seasons and CMU Seasons datasets. Besides training new vocabularies and using calibrated cameras, we only changed the threshold on the re-projection error used by RANSAC to distinguish between inliers and outliers. For the Aachen Day-Night dataset, we used a threshold of 10 pixels, while we used 5 pixels for both the RobotCar Seasons and the CMU Seasons datasets. Otherwise, we used the standard parameters of Active Search.
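The prioritization scheme can be sketched in a few lines of Python. This is our illustration rather than the reference implementation: the quantizer \texttt{vocab\_assign} and the per-word descriptor lists are assumed inputs, and the 3D-to-2D back-matching step is omitted.
\begin{verbatim}
import numpy as np

def prioritized_matching(query_descs, vocab_assign, word_to_descs,
                         max_matches=100):
    """Illustration of Active Search-style prioritized 2D-to-3D matching.

    query_descs   : (n, 128) array of query feature descriptors
    vocab_assign  : function mapping a descriptor to its closest visual word
    word_to_descs : dict word -> list of (point3d_id, descriptor) pairs
    """
    # Matching cost of a feature = number of 3D descriptors in its word.
    words = [vocab_assign(d) for d in query_descs]
    costs = [len(word_to_descs.get(w, [])) for w in words]
    order = np.argsort(costs)  # cheapest features are matched first

    matches = []
    for i in order:
        cands = word_to_descs.get(words[i], [])
        if len(cands) < 2:
            continue
        dists = sorted((np.linalg.norm(query_descs[i] - d), pid)
                       for pid, d in cands)
        # Lowe's ratio test (threshold 0.7 for 2D-to-3D matching).
        if dists[0][0] < 0.7 * dists[1][0]:
            matches.append((i, dists[0][1]))
        if len(matches) >= max_matches:
            break  # early termination once enough matches are found
    return matches
\end{verbatim}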
\PAR{City-Scale Localization (CSL).} The City-Scale Localization algorithm~\cite{Svarm17PAMI} is an outlier rejection algorithm, i.e., a robust localization algorithm that can prune guaranteed outlier correspondences from a given set of 2D-3D correspondences. CSL is based on the following central insight: If the gravity direction and an approximate height of the camera above the ground plane are known, it is possible to calculate an upper bound on the maximum number of inliers that any solution containing a given 2D-3D correspondence as an inlier can have. At the same time, CSL also computes a lower bound on the number of inliers for a given correspondence by computing a pose for which this correspondence is an inlier. CSL thus computes this upper bound for each 2D-3D match and, similar to RANSAC, continuously updates the best pose found so far (which provides a lower bound on the number of inliers that can be found). All correspondences whose upper bound on the maximum number of inliers is below the number of inliers of the current best solution can be permanently discarded from further consideration. Once outliers have been discarded, three-point RANSAC~\cite{Fischler81CACM,Kneip11CVPR} is performed on the remaining correspondences. Notice that, unlike RANSAC, the outlier filter used by CSL is deterministic. The computational complexity of the filter is $\mathcal{O}(N^2\log N)$, where $N$ is the number of available 2D-3D correspondences.

In order to obtain an estimate for the gravity direction, we follow~\cite{Svarm17PAMI} and add noise to the gravity direction obtained from the ground truth poses. CSL iterates over a range of plausible height values, similar to~\cite{Zeisl15ICCV}. In these experiments, the height values cover an interval five meters high. This interval is centered on the camera height of the ground truth pose, with added noise. In the Aachen experiments, the height interval is divided into nine sections; for the Oxford and CMU experiments, it is divided into three sections. The 2D-3D correspondences are generated by matching the descriptors of all detected features in the query image to the descriptors of the 3D points using approximate nearest neighbour search. To account for the fact that each 3D point is associated with multiple descriptors, each 3D point is assigned a single descriptor vector equal to the mean of all its corresponding descriptors. This matching strategy yields the same number of correspondences as the number of detected features. As with Active Search, we use a re-projection error threshold of 10 pixels for the Aachen Day-Night dataset and 5 pixels for both the RobotCar Seasons and the CMU Seasons datasets.
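To make the input to the outlier filter concrete, the following sketch (our illustration; an exact k-d tree query stands in for the approximate nearest-neighbour search, and all names are ours) produces one 2D-3D correspondence per detected query feature using per-point mean descriptors.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def build_mean_descriptors(point_descs):
    """point_descs: dict point3d_id -> (k_i, 128) array of SIFT descriptors.
    Returns the point ids and a (n, 128) array of per-point mean vectors."""
    ids = sorted(point_descs)
    means = np.stack([point_descs[i].mean(axis=0) for i in ids])
    return np.array(ids), means

def match_all_features(query_descs, ids, means):
    """One 2D-3D correspondence per detected query feature (no ratio test),
    as used to feed the CSL outlier filter."""
    tree = cKDTree(means)   # exact NN stands in for approximate search
    _, nn = tree.query(query_descs)
    return [(f, ids[j]) for f, j in enumerate(nn)]
\end{verbatim}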
\subsection{2D Image-based Localization}
\begin{table} \centering \begin{tabular}{c|c} \textbf{Parameter} & \textbf{Value} \\ \hline Feature Type & Dense RootSIFT \\ \hline Vocabulary Size & 128 \\ (trained on SF) & ~ \\ \hline Descriptor Dimension & 4,096 \\ (after PCA \& whitening) & ~\\ \end{tabular} \caption{DenseVLAD parameters.} \label{tab:densevlad-parameters} \end{table}
\begin{table} \centering \begin{tabular}{c|c} \textbf{Parameter} & \textbf{Value} \\ \hline Network model & VGG-16 + NetVLAD \\ (trained on Pitts30k) & + whitening \\ \hline Descriptor Dimension & 4,096 \\ \end{tabular} \caption{NetVLAD parameters.} \label{tab:netvlad-parameters} \end{table}
\PAR{DenseVLAD and NetVLAD.} We use the original implementations of DenseVLAD~\cite{Torii-CVPR15} and NetVLAD~\cite{Arandjelovic16} provided by the authors. Images were processed at their original resolution unless any dimension exceeded $1920$ pixels, in which case they were downscaled accordingly. For DenseVLAD, we used the Dense SIFT implementation, followed by RootSIFT normalization~\cite{Arandjelovic12}, available in VLFeat~\cite{Vedaldi10a}. The visual vocabulary consisted of $128$ visual words (centroids) pre-computed on the San-Francisco (SF) dataset~\cite{Chen11b}, i.e., we used a general vocabulary trained on a different yet similar dataset. For NetVLAD, we used the pre-computed network ``Pitts30k'', trained on the Pittsburgh time-machine street-view image dataset~\cite{Arandjelovic16}. The network is therefore not fine-tuned on our datasets, i.e., we again used a general network trained on a different city. Given a DenseVLAD or NetVLAD descriptor, we find the most similar reference image by exhaustive nearest neighbor search. While this stage could be accelerated by approximate search, we found this to be unnecessary as the search for a single query descriptor typically takes less than 20ms. Tables~\ref{tab:densevlad-parameters} and~\ref{tab:netvlad-parameters} summarize the parameters used for DenseVLAD and NetVLAD in our experiments.
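Retrieval for DenseVLAD and NetVLAD thus reduces to an exhaustive nearest-neighbor search over 4,096-dimensional global descriptors. A minimal sketch, assuming L2-normalized descriptor rows (the names are ours):
\begin{verbatim}
import numpy as np

def retrieve_top_k(query_vlad, db_vlads, k=1):
    """Exhaustive nearest-neighbor search over L2-normalized 4,096-D
    DenseVLAD/NetVLAD descriptors; db_vlads has shape (num_images, 4096)."""
    sims = db_vlads @ query_vlad   # cosine similarity for unit vectors
    return np.argsort(-sims)[:k]   # indices of the most similar images
\end{verbatim}
For a few tens of thousands of database images, this brute-force matrix-vector product is fast enough, which is consistent with the sub-20ms search times reported above.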
\begin{table} \centering \begin{tabular}{c|c} \textbf{Parameter} & \textbf{Value} \\ \hline Feature Type & UprightSURF128 \\ \hline Aachen Vocabulary Size & 3585 \\ \hline RobotCar Vocabulary Size & 5031 \\ \hline CMU Vocabulary Size & 4847 \\ \hline $p\left(z_{i}\left|\right.\bar{e}_{i}\right)$ & 0 \\ \hline $p\left(\bar{z}_{i}\left|\right.e_{i}\right)$ & 0.61 \\ \hline $p\left(L_{\textrm{new}}\left|\right.Z^{k-1}\right)$ & 0.9 \end{tabular} \caption{FAB-MAP parameters.} \label{tab:fabmap-parameters} \end{table}
\PAR{FAB-MAP.} For FAB-MAP~\cite{Cummins08IJRR}, we trained a separate vocabulary for each location using Modified Sequential Clustering~\cite{teynor2007fast} on evenly spaced database images, resulting in 3,585 visual words for Aachen Day-Night, 5,031 for RobotCar Seasons, and 4,847 for CMU Seasons. A Chow-Liu tree was built for each dataset using the Bag-of-Words generated for each database image with the vocabulary. We used the mean field approximation for the new place likelihood (as additional training images were not available for the sampled approach used in \cite{cummins2011appearance}) and the fast lookup-table implementation of \cite{glover2012openfabmap} to perform image retrieval for each of the query locations. Tab.~\ref{tab:fabmap-parameters} summarizes the parameters used for the experiments.
\subsection{Optimistic Baselines}
\noindent As explained in Sec.~5 of the paper, we implemented two \emph{optimistic baselines}. Whereas all other localization algorithms evaluated in the paper use no prior information on a query image's pose, both optimistic baselines are given additional knowledge. For each query image, we provide a small set of reference images depicting the same part of the model. The remaining problem is to establish sufficiently many correspondences between the query and the selected reference images to facilitate camera pose estimation. Thus, both approaches measure an upper bound on the pose quality that can be achieved with a given type of local feature.
\PAR{LocalSfM.} Given a query image and its relevant set of reference images, LocalSfM first extracts upright RootSIFT~\cite{Lowe04IJCV,Arandjelovic12} features. Next, LocalSfM performs exhaustive feature matching between the relevant reference images as well as between the query and the relevant reference images. While Active Search and CSL both use Lowe's ratio test\footnote{Active Search uses a ratio test threshold of 0.7 for 2D-to-3D and a threshold of 0.6 for 3D-to-2D matching.}, LocalSfM uses neither the ratio test nor a threshold on the maximum descriptor distance. Instead, it only requires matching features to be mutual nearest neighbors. Given the known poses and intrinsics of the reference images, LocalSfM triangulates the 3D structure of the scene using the previously established 2D-2D matches. Notice that the resulting 3D model is automatically constructed in the global coordinate system of the reference 3D model. Finally, we use the known intrinsics of the query image and the feature matches between the query and the reference images to estimate the camera pose of the query.

For each query image, the relevant set of reference images is selected as follows: For the RobotCar Seasons and CMU Seasons datasets, we use the ground truth pose of each query image to identify a relevant set of reference images. More precisely, we select all reference images whose camera centers are within 5m of the ground truth position of the query and whose orientations are within 135$^\circ$ of the orientation of the query image. As explained in Sec.~3.2 of the paper, we manually selected a day-time query image taken from a similar viewpoint for each night-time query photo in the Aachen Day-Night dataset. The day-time queries were included when constructing the intermediate model. Thus, their ground truth poses as well as a set of 3D points visible in each of them are obtained from the intermediate Structure-from-Motion model. For each day-time query, we select up to 20 reference images that observe the largest number of the 3D points visible in the day-time query. These reference images then form the set of relevant images for the corresponding night-time query photo. LocalSfM is implemented using COLMAP~\cite{Schonberger-CVPR16}. It is rather straightforward to replace upright RootSIFT features with other types of local features. In order to encourage the use of our benchmark for the evaluation of local features, we will make our implementation publicly available.
\PAR{DenseSfM.} DenseSfM modifies the LocalSfM approach by replacing RootSIFT~\cite{Arandjelovic12} features extracted at DoG extrema~\cite{Lowe04IJCV} with features densely extracted from a regular grid~\cite{Bosch07a,Liu08}. The goal of this approach is to increase the robustness of feature matching between day- and night-time images~\cite{Torii-CVPR15,Zhou2016ECCVW}. We used convolutional layers (conv4 and conv3) of a VGG-16 network~\cite{Simonyan15}, pre-trained as part of the NetVLAD model (Pitts30k), as features. We generated tentative correspondences by matching the extracted features in a coarse-to-fine manner: We first match conv4 features and use the resulting matches to restrict the correspondence search for conv3 features. As for LocalSfM, we performed exhaustive pairwise image matching. The matches are verified by estimating up to two homographies between each image pair via RANSAC~\cite{Fischler81CACM}. The resulting verified feature matches are then used as input for COLMAP~\cite{Schonberger-CVPR16}. The reconstruction process is the same as for LocalSfM, i.e., we first triangulate the 3D points and then use them to estimate the pose of the night-time query. DenseSfM uses the same set of reference images for each query photo as LocalSfM.
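The coarse-to-fine matching used by DenseSfM can be sketched as follows. This is a simplified illustration, not the actual implementation: it assumes L2-normalized feature maps with conv3 at twice the spatial resolution of conv4, and it omits mapping the window-local conv3 indices back to image coordinates.
\begin{verbatim}
import numpy as np

def mutual_nn(desc_a, desc_b):
    """Mutual nearest neighbors between two sets of L2-normalized
    descriptors (one descriptor per row)."""
    sims = desc_a @ desc_b.T
    nn_ab = sims.argmax(axis=1)
    nn_ba = sims.argmax(axis=0)
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

def coarse_to_fine(conv4_a, conv4_b, conv3_a, conv3_b, radius=2):
    """Match conv4 features first, then restrict the conv3 search to a
    window around each coarse match. Feature maps are (H, W, C) arrays."""
    Ha, Wa, _ = conv4_a.shape
    Hb, Wb, _ = conv4_b.shape
    coarse = mutual_nn(conv4_a.reshape(Ha * Wa, -1),
                       conv4_b.reshape(Hb * Wb, -1))
    matches = []
    for ia, ib in coarse:
        ya, xa = divmod(ia, Wa)
        yb, xb = divmod(ib, Wb)
        # conv3 windows centered on the up-scaled coarse locations.
        wa = conv3_a[max(0, 2*ya-radius):2*ya+radius+1,
                     max(0, 2*xa-radius):2*xa+radius+1]
        wb = conv3_b[max(0, 2*yb-radius):2*yb+radius+1,
                     max(0, 2*xb-radius):2*xb+radius+1]
        wa = wa.reshape(-1, conv3_a.shape[2])
        wb = wb.reshape(-1, conv3_b.shape[2])
        if len(wa) and len(wb):
            # Indices are local to the windows; a full implementation
            # would map them back to image coordinates here.
            matches.extend(mutual_nn(wa, wb))
    return matches
\end{verbatim}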
\subsection{Localization from Multiple Images}
\PAR{Active Search + Generalized Cameras (Active Search+GC).} While most existing work on visual localization focuses on estimating the camera pose of an individual query image, this paper additionally evaluates the benefits of using multiple images simultaneously for pose estimation. To this end, we assume that the relative poses between multiple query images are known and model these images as a generalized camera. Given the matches found via Active Search for each individual image in a generalized camera, we use the 3-point-generalized-pose (GP3P) solver from \cite{Lee2015IJRR} to estimate the pose of the generalized camera. Together with the known relative poses, this provides us with a pose for each image in the generalized camera. We use these individual poses to evaluate the pose estimation accuracy. An inlier threshold of 12 pixels is used by RANSAC.

Active Search+GC is not evaluated on the Aachen Day-Night dataset as it only provides individual query images. For the RobotCar Seasons dataset, we evaluate two variants: \emph{Active Search+GC (triplet)} builds a generalized camera from images captured at the same point in time by the three cameras mounted on the RobotCar (left, rear, right). The resulting generalized cameras thus consist of three images each. \emph{Active Search+GC (sequence)} uses longer sequences taken with all three cameras. Each sequence consists of images taken consecutively in time under the same condition, around the 49 manually selected reference positions (cf.\ Sec.~3.2 in the paper). For the CMU Seasons dataset, we only evaluate Active Search+GC (sequence). All query images taken under the same condition for a given sub-model define one sequence.

In order to use the GP3P solver, the relative poses between the images in a generalized camera, as well as the scale of the relative translations between the images, need to be known. In our experiments, we extract the required relative poses directly from the ground truth camera poses. As a consequence, the results obtained with Active Search+GC (sequence) are optimistic in the sense that the method does not need to deal with the drift that normally occurs when estimating a trajectory via SLAM or SfM. Notice that we only use the relative poses; no information about the absolute pose of a generalized camera is used during pose estimation. Also notice that the results obtained for Active Search+GC (triplet) are realistic: In this case, we only use the known extrinsic calibration between the three cameras mounted on the RobotCar to define each generalized camera. We also experimented with relative poses generated by our own multi-camera visual odometry (VO) system. Tab.~6 in the paper compares the results obtained when using ground truth poses with those obtained when using poses estimated by our VO pipeline on the night-time images of the RobotCar Seasons dataset. As can be seen, using ground truth poses leads to better results, as generalized camera pose solvers are typically sensitive to calibration errors. Still, Active Search+GC with VO poses outperforms single image-based methods. We also evaluated Active Search+GC (sequence) on the CMU datasets, but found that the drift in the odometry was too severe to provide accurate camera poses. An interesting experiment would be to use only short sub-sequences (for which the drift is not too large) rather than the full sequences.
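For reference, the sketch below shows how per-image 2D-3D matches can be lifted into the common rig frame, which is the input expected by a generalized absolute pose solver such as GP3P; the RANSAC loop and the solver itself are omitted, and all names are ours.
\begin{verbatim}
import numpy as np

def matches_to_rig_rays(matches, K, R_cam2rig, t_cam2rig):
    """Convert one camera's 2D-3D matches into (origin, direction, X)
    rays expressed in the generalized-camera (rig) frame.

    matches    : list of ((u, v) pixel, X 3D point) pairs for this camera
    K          : (3, 3) intrinsic matrix of the camera
    R_cam2rig, t_cam2rig : known extrinsics of the camera within the rig
    """
    Kinv = np.linalg.inv(K)
    rays = []
    for (u, v), X in matches:
        d = Kinv @ np.array([u, v, 1.0])          # bearing in camera frame
        d = R_cam2rig @ (d / np.linalg.norm(d))   # rotate into rig frame
        rays.append((t_cam2rig, d, np.asarray(X)))
    return rays

# Rays from all images in the rig are pooled and fed to a GP3P solver
# inside RANSAC (inlier threshold: 12 pixels in the experiments above).
\end{verbatim}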
\begin{table} \centering \begin{tabular}{c|c} \textbf{Parameter} & \textbf{Value} \\ \hline Image Size & $48\times48$ ($144\times48$) \\ \hline Patch Size & $8\times8$ \\ \hline Sequence Length $d_s$ & 10 \end{tabular} \vspace{2mm} \caption{SeqSLAM parameters.} \label{tab:seqslam-parameters} \end{table}
\PAR{SeqSLAM.} We used the OpenSeqSLAM implementation from \cite{sunderhauf2013we} with default parameters for template learning and trajectory uniqueness. For each set of synchronized Grasshopper2 images, we downscale the original $1024\times1024$ resolution to $48\times48$, then concatenate all three images to form a $144\times48$ pixel composite. The trajectory length parameter $d_s$ was set to 10 images; as both the query and database images are evenly spaced, this corresponds to a trajectory length of 10 meters. Tab.~\ref{tab:seqslam-parameters} summarizes the parameters used for the RobotCar experiments.
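The image preprocessing described above can be sketched as follows; this is our reading of the OpenSeqSLAM pipeline, assuming square grayscale floating-point images, with the patch step following SeqSLAM's local contrast normalization.
\begin{verbatim}
import numpy as np

def preprocess(left, rear, right, size=48, patch=8):
    """Downscale the three synchronized camera images, concatenate them
    into a 144x48 composite, and patch-normalize the result."""
    def shrink(img):  # crude box-filter resize of a square grayscale image
        f = img.shape[0] // size
        return img[:f*size, :f*size].reshape(size, f, size, f).mean((1, 3))
    comp = np.concatenate([shrink(left), shrink(rear), shrink(right)],
                          axis=1)
    # Local (patch-wise) contrast normalization.
    for y in range(0, comp.shape[0], patch):
        for x in range(0, comp.shape[1], patch):
            blk = comp[y:y+patch, x:x+patch]
            comp[y:y+patch, x:x+patch] = (blk - blk.mean()) / (blk.std()
                                                               + 1e-6)
    return comp
\end{verbatim}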
\begin{table}[t] \begin{center} \setlength{\tabcolsep}{4pt} \footnotesize{ \begin{tabular}{|l|c|c|c|c|} \hline & & \multicolumn{3}{c|}{\# images} \\ condition & recorded & individual & triplets & sequences \\ \hline\hline overcast (reference) & 28 Nov 2014 & 20,862 & 8,707 & - \\ \hline\hline dawn & 16 Dec 2014 & 1,449 & 483 & 54 \\ dusk & 20 Feb 2015 & 1,182 & 394 & 48 \\ night & 10 Dec 2014 & 1,314 & 438 & 49 \\ night+rain & 17 Dec 2014 & 1,320 & 440 & 51 \\ overcast (summer) & 22 May 2015 & 1,389 & 463 & 52 \\ overcast (winter) & 13 Nov 2015 & 1,170 & 390 & 49 \\ rain & 25 Nov 2014 & 1,263 & 421 & 50 \\ snow & 3 Feb 2015 & 1,467 & 489 & 56 \\ sun & 10 Mar 2015 & 1,380 & 460 & 51 \\ \hline total query & - & 11,934 & 3,978 & 460\\ \hline \end{tabular} } \end{center} \caption{Detailed statistics for the \emph{RobotCar Seasons} dataset. We used images from the \emph{overcast (reference)} traversal to build a 3D scene model. For each of the query sequences, we report the total number of query images taken by all three individual cameras, the resulting number of triplets used for Active Search+GC (triplet), and the number of temporally continuous query sequences used for Active Search+GC (sequence).}% \label{tab:robotcar}% \end{table}
\begin{table}[t] \begin{center} \setlength{\tabcolsep}{4pt} \footnotesize{ \begin{tabular}{|l|c|c|c|} \hline & & \multicolumn{2}{c|}{\# images} \\ condition & recorded & individual & sequences \\ \hline\hline Sunny + No Foliage (reference) & 4 Apr 2011 & 7,159 & 17 \\ \hline\hline Sunny + Foliage & 1 Sep 2010 & 8,076 & 16 \\ Sunny + Foliage & 15 Sep 2010 & 7,260 & 17 \\ Cloudy + Foliage & 1 Oct 2010 & 7,185 & 17 \\ Sunny + Foliage & 19 Oct 2010 & 6,737 & 17 \\ Overcast + Mixed Foliage & 28 Oct 2010 & 6,744 & 17\\ Low Sun + Mixed Foliage & 3 Nov 2010 & 6,982 & 17 \\ Low Sun + Mixed Foliage & 12 Nov 2010 & 7,262 & 17 \\ Cloudy + Mixed Foliage & 22 Nov 2010 & 6,649 & 17 \\ Low Sun + No Foliage + Snow & 21 Dec 2010 & 6,825 & 17\\ Low Sun + No Foliage & 4 Mar 2011 & 6,976 & 17\\ Overcast + Foliage & 28 Jul 2011 & 4,639 & 17\\ \hline total query & - & 75,335 & 186\\ \hline \end{tabular} } \end{center} \caption{Detailed statistics for the \emph{CMU Seasons} dataset. We used images from the \emph{reference} traversal to build a 3D scene model. For each of the query sequences, we report the total number of query images taken and the number of temporally continuous query sequences used for Active Search+GC (sequence).}% \label{tab:cmu}% \end{table}
\begin{table}[t] \begin{center} \setlength{\tabcolsep}{4pt} \footnotesize{ \begin{tabular}{|l|c|c|} \hline Scene & Sub-model & \# images \\ \hline\hline Urban & 1 - 7 & 31,250\\ Suburban & 8 - 10 & 13,736\\ Park & 11 - 17 & 30,349\\ \hline total query & - & 75,335 \\ \hline \end{tabular} } \end{center} \caption{The type of scenery (urban, suburban, and park) depicted in the different sub-models of the \emph{CMU Seasons} dataset and the total number of query images for each type. In total, there are 31,250 urban, 13,736 suburban, and 30,349 park images.}% \label{tab:cmu2}% \end{table}
\section{Dataset Details}
\label{sec:datasets}
\noindent This section provides additional details on the RobotCar Seasons and CMU Seasons datasets. More specifically, Tab.~\ref{tab:robotcar} details the times at which the individual traversals were recorded, the number of images per traversal, as well as the number of triplets and sequences used for Active Search+GC. Tab.~\ref{tab:cmu} provides similar details for the CMU Seasons dataset. In addition to listing the conditions for the different recordings, Tab.~\ref{tab:cmu2} lists the respective scenery (urban, suburban, and park) of the different sub-models.
\begin{table*}[th] \begin{center} \footnotesize{ \begin{tabular}{l|c|c|c|c|c|c|c} & \multicolumn{2}{c|}{Aachen Day-Night} & \multicolumn{4}{c|}{RobotCar Seasons} & CMU Seasons\\ \cline{2-8} & & & \multicolumn{2}{c|}{Day} & \multicolumn{2}{c|}{Night} & \\ \cline{4-7} Method & Day & Night & full model & sub-models & full model & sub-models & All \\ \hline Active Search & 0.102 & 0.140 & 0.291 & 0.061 & 0.973 & 0.093 & 0.065 \\ \hline CSL & 168.6 & 206.2 & 32.9 & $90.3^{\dagger}$ & 66.3 & $90.3^{\dagger}$ & 30.7\\ \hline DenseVLAD \textbf{*} & 0.752 & 0.527 & 0.338 & - & 0.338 & - & 0.785\\ \hline NetVLAD \textbf{$^\diamond$} & 0.105 & 0.105 & 0.137 & - & 0.137 & - & 0.107\\ \hline FAB-MAP & 0.008 & 0.008 & 0.039 & - & 0.039 & - & 0.013\\ \hline \hline Active Search+GC (triplet) & - &- & 0.879 & 0.180 & 2.940 & 0.289 & - \\ \hline Active Search+GC (sequence) & - & - & 1.570 & 0.317 & 5.267 & 0.515 & 26.278 \\ \hline SeqSLAM & - & - & 0.251 & - & 0.251 & - & -\\ \hline \hline LocalSfM\enspace \textbf{*},\textbf{$^\diamond$} & - & 19.591 & \multicolumn{2}{c|}{-} & \multicolumn{2}{c|}{22.486} & 44.577 \\ \hline DenseSfM \textbf{*} & - & 16.719 & - & - & - & - & - \\ \hline \end{tabular} } \end{center} \caption{Average run-time per method on our three datasets. All timings are given in seconds. The timings include the time required for matching and (if applicable) spatial verification; feature extraction times are excluded. For Active Search+GC, which performs pose estimation using multiple cameras, run-times are typically dominated by the feature matching step (which is performed for each image that is part of a generalized camera). Methods marked with \textbf{*} are parallelized over multiple threads; all other methods utilize only a single CPU thread. Methods marked with a \textbf{$^\diamond$} symbol use the GPU, e.g., for feature matching. The two sub-model query times for CSL are marked with $\dagger$ since the day and night queries were not timed separately, and the reported time is the average time per query over all queries (both day and night).
} \label{tab:run-times}% \end{table*}
\section{Timing Results}
\label{sec:timings}
\noindent Tab.~\ref{tab:run-times} provides an overview of the run-times of the various methods used for the experimental evaluation on the three benchmark datasets. Timings are given in seconds and include feature matching and (if applicable) camera pose estimation. However, feature extraction times are not included, since most algorithms are independent of the underlying feature representation and, by extension, of the specific implementation used to extract the features.

We ran the different algorithms on different machines. For all variants of Active Search, a PC with an Intel Core i7-4770 CPU with 3.4GHz, 32GB of RAM, and an NVidia GeForce GTX 780 GPU was used. The same machine was used to run LocalSfM. Notice that due to their need to match multiple images, most of the run-time of Active Search+GC and LocalSfM is spent on feature matching. The increase in run-time for LocalSfM from the Aachen Day-Night to the RobotCar Seasons and CMU Seasons datasets is caused by the number of reference images considered for each dataset. For Aachen Day-Night, at most 20 reference images are considered per query, while more images are used on the other two datasets (with more reference images for the CMU dataset due to a higher sampling density of the reference images). CSL was run on a computer cluster using an Intel Xeon E5-2650 v3 with 3.2 GB RAM per CPU core. The fact that CSL is substantially slower on the Aachen Day-Night dataset than on the RobotCar Seasons and CMU Seasons datasets is due to the resolution of the query images: The query images of the Aachen Day-Night dataset have a higher resolution, which results in more detected local features. More features in turn lead to more matches and thus to a significant increase in run-time for CSL due to its computational complexity of $\mathcal{O}(N^2 \log N)$ for $N$ matches. Both FAB-MAP and SeqSLAM results were generated using a single core of an Intel Core i7-4790K CPU with 4.0GHz and 32GB of RAM. DenseVLAD, NetVLAD\footnote{The run-time for NetVLAD includes the intermediate convolutional layer computation, essentially corresponding to feature extraction.}, and DenseSfM were run on an Intel Xeon E5-2690 v4 with 2.60GHz with 256GB of RAM and an NVidia GeForce TitanX.
\section{Experimental Evaluation for All Conditions on CMU Seasons}
\label{sec:cmu}
\noindent Due to space constraints, Sec.~6.3 of the paper only evaluates two types of conditions on the CMU dataset: changes in foliage (foliage fully present, foliage somewhat present, no foliage) and differences in the type of scenery (urban, suburban, park), as these conditions are not covered by the other two datasets in our benchmark. Tab.~\ref{tab:full_cmu} provides the full evaluation of the different state-of-the-art algorithms on the CMU dataset.
\begin{table*}[th!] \begin{center} \begin{minipage}{.5\linewidth} \scriptsize{\input{cmu_table_foliage}}% \end{minipage}% \begin{minipage}{.5\linewidth} \scriptsize{\input{cmu_table_scenery}} \end{minipage} \vspace{0.5cm} \scriptsize{\input{cmu_table_weather}} \end{center} \caption{Full evaluation on the \textbf{CMU Seasons} dataset.
Besides evaluating the impact of foliage (top left) and the type of environment (top right) on the pose estimation accuracy, both of which were already presented in the main paper, we also evaluate the impact of different weather conditions.}% \label{tab:full_cmu}% \end{table*}
\section{Cumulative Distributions of Position and Orientation Errors}
\label{sec:distributions}
\noindent Fig.~4 in the paper shows the cumulative distributions of the position errors of the evaluated methods for the night-time queries of the Aachen Day-Night and RobotCar Seasons datasets. For completeness, Fig.~\ref{fig:results:distributions} shows the cumulative distributions of the position and orientation errors for all datasets. Notice that the results reported in the tables in the paper and the appendix are based on thresholding both the position \emph{and} the orientation error. Thus, the percentages of localized query images reported in the tables are lower than those given by the curves in Fig.~\ref{fig:results:distributions}, which are obtained by thresholding only the position or only the orientation error.
\begin{figure*}[tbp] \centering \includegraphics[width=0.98\linewidth]{figs_supp_eps/legend} \begin{tabular}{@{\hskip 2pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 2pt}} \includegraphics[height=2.6cm]{figs_supp_eps/Aachen_day_distance_new} & \includegraphics[height=2.6cm]{figs_supp_eps/Aachen_night_distance_new} & \includegraphics[height=2.6cm]{figs_supp_eps/RobotCar_Rear_day_distance_new} & \includegraphics[height=2.6cm]{figs_supp_eps/RobotCar_Rear_night_distance_new} & \includegraphics[height=2.6cm]{figs_eps/cmu_cdf_trans.eps} \\ \end{tabular} \begin{tabular}{@{\hskip 2pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 0pt}c@{\hskip 2pt}} \includegraphics[height=2.6cm]{figs_supp_eps/Aachen_day_orientation_new} & \includegraphics[height=2.6cm]{figs_supp_eps/Aachen_night_orientation_new} & \includegraphics[height=2.6cm]{figs_supp_eps/RobotCar_Rear_day_orientation_new} & \includegraphics[height=2.6cm]{figs_supp_eps/RobotCar_Rear_night_orientation_new} & \includegraphics[height=2.6cm]{figs_eps/cmu_cdf_rot.eps} \\ {\small (a) Aachen - day} & {\small (b) Aachen - night} & {\small (c) RobotCar - day} & {\small (d) RobotCar - night} & {\small (e) CMU - all}\\ \end{tabular} \vspace{1mm} \caption{Cumulative distribution of position and orientation errors for the three datasets. } \label{fig:results:distributions} \end{figure*}
\section{Introduction}
\noindent Estimating the 6DOF camera pose of an image with respect to a 3D scene model is key for visual navigation of autonomous vehicles and augmented/mixed reality devices. Solutions to this \emph{visual localization} problem can also be used to ``close loops'' in the context of SLAM or to register images to Structure-from-Motion (SfM) reconstructions.
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figs_eps/robotcar-projections-1_original_new-1}% \caption{Visual localization in changing urban conditions. We present three new datasets, \emph{Aachen Day-Night}, \emph{RobotCar Seasons} (shown), and \emph{CMU Seasons}, for evaluating 6DOF localization against a prior 3D map (top) using registered query images taken under a wide variety of conditions (bottom), including day-night variation, weather, and seasonal changes over long periods of time.
} \label{fig:robotcar-projection-1} \end{figure}
Work on 3D structure-based visual localization has focused on increasing efficiency~\cite{Li10ECCV,Sattler17PAMI,Larsson16,Lynen15RSS,Taira2018CVPR}, improving scalability and robustness to ambiguous structures~\cite{Zeisl15ICCV,Svarm17PAMI,Sattler-ICCV-2015,Li12ECCV}, reducing memory requirements~\cite{Sattler-ICCV-2015,Li10ECCV,Cao14CVPR}, and more flexible scene representations~\cite{Sattler2017CVPR}. All these methods utilize local features to establish 2D-3D matches. These correspondences are in turn used to estimate the camera pose. This data association stage is critical, as pose estimation fails without sufficiently many correct matches. There is a well-known trade-off between discriminative power and invariance for local descriptors. Thus, existing localization approaches will only find enough matches if both the query images and the images used to construct the 3D scene model are taken under similar viewing conditions.

Capturing a scene under all viewing conditions is prohibitive. Thus, the assumption that all relevant conditions are covered is too restrictive in practice. It is more realistic to expect that images of a scene are taken under a single or a few conditions. To be practically relevant, e.g., for life-long localization for self-driving cars, visual localization algorithms need to be robust under varying conditions (cf.\ Fig.~\ref{fig:robotcar-projection-1}). Yet, no work in the literature actually measures the impact of varying conditions on 6DOF pose accuracy.

One reason for the lack of work on visual localization under varying conditions is a lack of suitable benchmark datasets. The standard approach for obtaining ground truth 6DOF poses for query images is to use SfM. An SfM model containing both the database and query images is built, and the resulting poses of the query images are used as ground truth~\cite{Li10ECCV,Sun2017CVPR,Sattler2017CVPR}. Yet, this approach again relies on local feature matches and can only succeed if the query and database images are sufficiently similar~\cite{Radenovic2016CVPR}. The benchmark datasets constructed this way thus tend to only include images that are relatively easy to localize in the first place.

In this paper, we construct the first datasets for benchmarking visual localization under changing conditions. To overcome the above-mentioned problem, we heavily rely on manual work: We annotate matches between images captured under different conditions and verify the resulting ground truth poses. We create three complementary benchmark datasets based on existing data~\cite{Sattler12BMVC,Maddern2017IJRR,Badino_IV11}. All consist of a 3D model constructed under one condition and offer query images taken under different conditions: The \emph{Aachen Day-Night} dataset focuses on localizing high-quality night-time images against a day-time 3D model. The \emph{RobotCar Seasons} and \emph{CMU Seasons} datasets both consider automotive scenarios and depict the same scene under varying seasonal and weather conditions. One challenge of the RobotCar Seasons dataset is to localize low-quality night-time images. The CMU Seasons dataset focuses on the impact of seasons on vegetation and thus on the impact of scene geometry changes on localization.
This paper makes the following \textbf{contributions}: \emph{(i)} We create a new outdoor benchmark complete with ground truth and metrics for evaluating 6DOF visual localization under changing conditions such as illumination (day/night), weather (sunny/rain/snow), and seasons (summer/winter). Our benchmark covers multiple scenarios, such as pedestrian and vehicle localization, and localization from single and multiple images as well as sequences. \emph{(ii)}~We provide an extensive experimental evaluation of state-of-the-art algorithms from both the computer vision and robotics communities on our datasets. We show that existing algorithms, including SfM, have severe problems dealing with both day-night changes and seasonal changes in vegetated environments. \emph{(iii)} We show the value of querying with multiple images, rather than with individual photos, especially under challenging conditions. \emph{(iv)} We make our benchmarks publicly available at \url{visuallocalization.net} to stimulate research on long-term visual localization. \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{4pt} \scriptsize{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline & & Image & 3D SfM Model & \multicolumn{2}{c|}{\# Images} & \multicolumn{3}{c|}{Condition Changes} & 6DOF query\\\cline{5-9} Dataset & Setting & Capture & (\# Sub-Models) & Database & Query & Weather & Seasons & Day-Night & poses\\ \hline Alderley Day/Night~\cite{milford2012seqslam} & Suburban & Trajectory & & 14,607 & 16,960 & \checkmark & & \checkmark & \\ Nordland~\cite{sunderhauf2013we} & Outdoors & Trajectory & & \multicolumn{2}{c|}{143k} & & \checkmark & &\\ Pittsburgh~\cite{Torii15PAMI} & Urban & Trajectory & & 254k & 24k & & & &\\ SPED~\cite{Chen2017ICRA} & Outdoors & Static Webcams & & 1.27M & 120k & \checkmark & \checkmark & \checkmark & \\ Tokyo 24/7~\cite{Torii2015CVPR} & Urban & Free Viewpoint & & 75,984 & 315 & & & \checkmark & \\ \hline 7 Scenes~\cite{Shotton2013CVPR} & Indoor & Free Viewpoint & & 26,000 & 17,000 & & & & \checkmark\\ Aachen~\cite{Sattler12BMVC} & Historic City & Free Viewpoint & 1.54M / 7.28M (1) & 3,047 & 369 & & & & \\ Cambridge~\cite{Kendall2015ICCV} & Historic City & Free Viewpoint & 1.89M / 17.68M (5) & 6,848 & 4,081 & & & & \checkmark (SfM)\\ Dubrovnik~\cite{Li10ECCV} & Historic City & Free Viewpoint & 1.89M / 9.61M (1) & 6,044 & 800 & & & & \checkmark (SfM)\\ Landmarks~\cite{Li12ECCV} & Landmarks & Free Viewpoint & 38.19M / 177.82M (1k) & 204,626 & 10,000 & & & & \\ Mall~\cite{Sun2017CVPR} & Indoor & Free Viewpoint & & 682 & 2296 & & & & \checkmark\\ NCLT~\cite{CarlevarisBianco2016IJRR} & Outdoors \& Indoors & Trajectory & & \multicolumn{2}{c|}{about 3.8M} & & \checkmark & & \checkmark \\ Rome~\cite{Li10ECCV} & Landmarks & Free Viewpoint & 4.07M / 21.52M (69) & 15,179 & 1000 & & & & \\ San Francisco~\cite{Chen2011CVPR,Li12ECCV,Sattler2017CVPR} & Urban & Free Viewpoint & 30M / 149M (1) & 610,773 & 442 & & & & \checkmark (SfM)\\ Vienna~\cite{Irschara09CVPR} & Landmarks & Free Viewpoint & 1.12M / 4.85M (3) & 1,324 & 266 & & & & \\ \hline \textbf{Aachen Day-Night (ours)} & Historic City & Free Viewpoint & 1.65M / 10.55M (1) & 4,328 & 922 & & & \checkmark & \checkmark\\ \textbf{RobotCar Seasons (ours)} & Urban & Trajectory & 6.77M / 36.15M (49) & 20,862 & 11,934 & \checkmark & \checkmark & \checkmark & \checkmark \\ \textbf{CMU Seasons (ours)} & Suburban & Trajectory & 1.61M / 6.50M (17) & 7,159 & 75,335 & \checkmark & \checkmark & & \checkmark \\%old data base value 75,335 \hline \end{tabular} 
\vspace{-6pt} } \end{center} \caption{Comparison with existing benchmarks for place recognition and visual localization. ``Condition Changes'' indicates that the viewing conditions of the query images and database images differ. For some datasets, images were captured from similar camera trajectories. If SfM 3D models are available, we report the number of sparse 3D points and the number of associated features. Only our datasets provide a diverse set of changing conditions, reference 3D models, and, most importantly, ground truth 6DOF poses for the query images.} \label{tab:benchmarks}% \end{table*}
\vspace{-3pt}
\section{Related Work}
\label{sec:related_work}
\vspace{-6pt}
\PAR{Localization benchmarks.} Tab.~\ref{tab:benchmarks} compares our benchmark datasets with existing datasets for both visual localization and place recognition. Datasets for place recognition~\cite{milford2012seqslam,sunderhauf2013we,Torii-PAMI2015,Chen2017ICRA,Torii2015CVPR} often provide query images captured under conditions that differ from those of the database images. However, they provide neither 3D models nor 6DOF ground truth poses. Thus, they cannot be used to analyze the impact of changing conditions on pose estimation accuracy. In contrast, datasets for visual localization~\cite{Sattler12BMVC,Shotton2013CVPR,Kendall2015ICCV,Li10ECCV,Li12ECCV,Sun2017CVPR,Chen2011CVPR,Sattler2017CVPR,Irschara09CVPR} often provide ground truth poses. However, they do not exhibit strong changes between query and database images, due to relying on feature matching for ground truth generation. A notable exception is the Michigan North Campus Long-Term (NCLT) dataset~\cite{CarlevarisBianco2016IJRR}, which provides images captured over a long period of time and ground truth obtained via GPS and LIDAR-based SLAM. Yet, it does not cover all viewing conditions captured in our datasets, e.g., it does not contain any images taken at night or during rain. To the best of our knowledge, ours are the first datasets providing both a wide range of changing conditions and accurate 6DOF ground truth. Thus, ours is the first benchmark that measures the impact of changing conditions on pose estimation accuracy.

Datasets such as KITTI~\cite{geiger2013vision}, TorontoCity~\cite{Wang2017ICCV}, or the M\'{a}laga Urban dataset~\cite{blanco2014malaga} also provide street-level image sequences. Yet, they are less suitable for visual localization as only few places are visited multiple times.
\PAR{3D structure-based localization} methods~\cite{Li10ECCV,Li12ECCV,Sattler17PAMI,Svarm17PAMI,Zeisl15ICCV,Sattler-ICCV-2015,Liu2017ICCV} establish correspondences between 2D features in a query image and 3D points in an SfM point cloud via descriptor matching. These 2D-3D matches are then used to estimate the query's camera pose. Descriptor matching can be accelerated by prioritization~\cite{Li10ECCV,Choudhary12ECCV,Sattler17PAMI} and efficient search algorithms~\cite{Lynen15RSS,Donoser14CVPR}. In large or complex scenes, descriptor matches become ambiguous due to locally similar structures found in different parts of the scene~\cite{Li12ECCV}. This results in high outlier ratios of up to 99\%, which can be handled by exploiting co-visibility information~\cite{Li12ECCV,Sattler-ICCV-2015,Liu2017ICCV} or via geometric outlier filtering~\cite{Svarm17PAMI,Zeisl15ICCV,Camposeco2017CVPR}.
We evaluate \emph{Active Search}~\cite{Sattler17PAMI} and the \emph{City-Scale Localization} approach~\cite{Svarm17PAMI}, a deterministic geometric outlier filter based on a known gravity direction, as representatives of efficient and scalable localization methods, respectively.
\PAR{2D image-based localization} methods approximate the pose of a query image using the pose of the most similar photo retrieved from an image database. They are often used for place recognition \cite{Torii2015CVPR,Arandjelovic16,Sattler-CVPR16,Suenderhauf2015RSS,Chen2017ICRA,lowry2016visual} and loop-closure detection \cite{Cummins08IJRR,galvez2012bags,MurArtal2015TR}. They remain effective at scale~\cite{Arandjelovic14a,Torii-PAMI2015,Sattler-CVPR16,Sattler2017CVPR} and can be robust to changing conditions \cite{Arandjelovic16,Torii2015CVPR,Suenderhauf2015RSS,Chen2017ICRA,Naseer2017ICRA,Sattler2017CVPR}. We evaluate two compact VLAD-based~\cite{Jegou10} image-level representations: DenseVLAD~\cite{Torii2015CVPR} aggregates densely extracted SIFT descriptors~\cite{Arandjelovic12,Lowe04IJCV}, while NetVLAD~\cite{Arandjelovic16} uses learned features. Both are robust against day-night changes~\cite{Torii2015CVPR,Arandjelovic16} and work well at large scale~\cite{Sattler2017CVPR}. We also evaluate the de-facto standard approach for loop-closure detection in robotics \cite{engel2014lsd,linegar2015work}, where robustness to changing conditions is critical for long-term autonomous navigation~\cite{Chen2017ICRA,Naseer2017ICRA,milford2012seqslam,Suenderhauf2015RSS,Torii2015CVPR,Linegar2016ICRA}: FAB-MAP \cite{Cummins08IJRR} is an image retrieval approach based on the Bag-of-Words (BoW) paradigm~\cite{Sivic03} that explicitly models the co-occurrence probability of different visual words.
\PAR{Sequence-based} approaches for image retrieval are used for loop-closure detection in robotics~\cite{maddern2012cat, milford2012seqslam, naseer2014robust}. Requiring a matched sequence of images in the correct order significantly reduces false positive rates compared to single-image retrieval approaches, producing impressive results, including direct day-night matches with SeqSLAM~\cite{milford2012seqslam}. We evaluate OpenSeqSLAM~\cite{sunderhauf2013we} on our benchmark.

Multiple cameras with known relative poses can be modelled as a generalized camera~\cite{Pless2003CVPR}, i.e., a camera with multiple centers of projection. Approaches for absolute pose estimation for both multi-camera systems~\cite{Lee2015IJRR} and camera trajectories~\cite{Camposeco2016ECCV} from 2D-3D matches exist. Yet, they have never been applied for localization in changing conditions. In this paper, we show that using multiple images can significantly improve performance in challenging scenarios.
\PAR{Learning-based localization} methods have been proposed to solve both loop-closure detection~\cite{sunderhauf2015performance, Suenderhauf2015RSS, milford2015sequence, Chen2017ICRA} and pose estimation~\cite{Kendall2015ICCV,Walch2017ICCV,Clark2017CVPR,Schoenberger2018CVPR}. They learn features with stable appearance over time~\cite{Chen2017ICRA,muhlfellner2015summary, Naseer2017ICRA}, train classifiers for place recognition~\cite{Cao13,Gronat-IJCV16,Linegar2016ICRA,Weyand-ECCV16}, and train CNNs to regress 2D-3D matches~\cite{Brachmann2016CVPR,Brachmann2017CVPR,Shotton2013CVPR} or camera poses~\cite{Kendall2015ICCV,Walch2017ICCV,Clark2017CVPR}.
\vspace{-3pt}
\section{Benchmark Datasets for 6DOF Localization}
\label{sec:benchmarks}
\vspace{-2pt}
\noindent This section describes the creation of our three new benchmark datasets. Each dataset is constructed from publicly available data, allowing our benchmarks to cover multiple geographic locations. We add ground truth poses for all query images and build reference 3D models (cf.~Fig.~\ref{fig:all_models}) from images captured under a single condition.

All three datasets present different challenges. The \emph{Aachen Day-Night} dataset focuses on localizing night-time photos against a 3D model built from day-time imagery. The night-time images, taken with a mobile phone using software HDR post-processing, are of high quality. The dataset represents a scenario where images are taken with hand-held cameras, e.g., in an augmented reality application. Both the \emph{RobotCar Seasons} and the \emph{CMU Seasons} datasets represent automotive scenarios, with images captured from a car. In contrast to the Aachen Day-Night dataset, both exhibit less variability in viewpoints but a larger variance in viewing conditions. The night-time images of the RobotCar dataset were taken from a driving car with a consumer camera with auto-exposure. This results in significantly less well-lit images exhibiting motion blur, i.e., images that are significantly harder to localize (cf.\ Fig.~\ref{fig:queries}). The RobotCar dataset depicts a mostly urban scene with rather static scene geometry. In contrast, the CMU dataset contains a significant amount of vegetation. The changing appearance and geometry of the vegetation, due to seasonal changes, is the main challenge of this dataset.
\begin{table*}[th] \begin{center} \scriptsize{ \begin{tabular}{c|c|c|c|c|c} & \multicolumn{4}{c|}{reference model} & query images\\ & \# images & \# 3D points & \# features & condition & conditions (\# images) \\ \hline\hline Aachen Day-Night & 4,328 & 1.65M & 10.55M & day & day (824), night (98)\\ \hline RobotCar Seasons & 20,862 & 6.77M & 36.15M & overcast & dawn (1,449), dusk (1,182), night (1,314), night+rain (1,320), rain (1,263), \\ & & & & (November) & overcast summer / winter (1,389 / 1,170), snow (1,467), sun (1,380)\\ \hline CMU Seasons & 7,159 & 1.61M & 6.50M & sun / no foliage & sun (22,073), low sun (28,045), overcast (11,383), clouds (14,481), \\ & & & & (April) & foliage (33,897), mixed foliage (27,637), no foliage (13,801) \\ & & & & & urban (31,250), suburban (13,736), park (30,349) \\ \hline \end{tabular} \vspace{-6pt} } \end{center} \caption{Detailed statistics for the three benchmark datasets proposed in this paper. For each dataset, a reference 3D model was constructed using images taken under the same reference condition, e.g., ``overcast'' for the RobotCar Seasons dataset.\label{tab:datasets}}% \end{table*}
\vspace{-3pt}
\subsection{The Aachen Day-Night Dataset}
\label{sec:benchmarks:aachen}
\vspace{-3pt}
\noindent Our \emph{Aachen Day-Night} dataset is based on the Aachen localization dataset from~\cite{Sattler12BMVC}. The original dataset contains 4,479 reference and 369 query images taken in the old inner city of Aachen, Germany. It provides a 3D SfM model but does not have ground truth poses for the queries.
We augmented the original dataset with day- and night-time queries captured using standard consumer phone cameras. To obtain ground truth poses for the day-time queries, we used COLMAP~\cite{Schonberger-CVPR16} to create an intermediate 3D model from the reference and day-time query images. The scale of the reconstruction was recovered by aligning it with the geo-registered original Aachen model. As in~\cite{Li10ECCV}, we obtain the reference model for the Aachen Day-Night dataset by removing the day-time query images. 3D points visible in only a single remaining camera were removed as well~\cite{Li10ECCV}. The resulting 3D model has 4,328 reference images and 1.65M 3D points triangulated from 10.55M features.
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figs_eps/query-montage_new}% \\ \includegraphics[width=1\linewidth,clip]{figs_eps/CMUcollage2}% \caption{Example query images for the \emph{Aachen Day-Night} (top), \emph{RobotCar Seasons} (middle), and \emph{CMU Seasons} (bottom) datasets. } \label{fig:queries} \end{figure}
\PAR{Ground truth for night-time queries.} We captured 98 night-time query images using a Google Nexus5X phone with software HDR enabled. Attempts to include them in the intermediate model resulted in highly inaccurate camera poses due to a lack of sufficient feature matches. To obtain ground truth poses for the night-time queries, we thus hand-labelled 2D-3D matches. We manually selected a day-time query image taken from a similar viewpoint for each night-time query. For each selected day-time query, we projected its visible 3D points from the intermediate model into the image. Given these projections as reference, we manually labelled 10 to 30 corresponding pixel positions in the night-time query. Using the resulting 2D-3D matches and the known intrinsics of the camera, we estimated the camera poses using a 3-point solver \cite{Fischler81CACM,Kneip11CVPR} and non-linear pose refinement. To estimate the accuracy of these poses, we measured the mean reprojection error of our hand-labelled 2D-3D correspondences (4.33 pixels for $1600\times1200$ pixel images) and the pose uncertainty. For the latter, we computed multiple poses from subsets of the matches for each image and measured the differences between these poses and our ground truth poses. The resulting median position and orientation errors are 36cm and 1$^\circ$ on average. The absolute pose accuracy that can be achieved by minimizing a reprojection error depends on the distance of the camera to the scene. Given that the images were typically taken 15 or more meters from the scene, we consider the ground truth poses to be reasonably accurate.
\begin{figure*}[t!] \centering \includegraphics[width=0.25\linewidth]{figs_eps/aachen}\hspace{32pt}% \includegraphics[width=0.20\linewidth]{figs_eps/robotcar_dataset_rotated}\hspace{32pt}% \includegraphics[width=0.23\linewidth]{figs_eps/cmuSeasonsModel_cropped_rotated}% \caption{3D models of the \emph{Aachen Day-Night} (left, showing database (red), day-time query (green), and night-time query images (blue)), \emph{RobotCar Seasons} (middle), and \emph{CMU Seasons} (right) datasets. For RobotCar and CMU, the colors encode the individual submaps.\label{fig:all_models}} \end{figure*}
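The pose computation from hand-labelled matches can be summarized by the following hedged sketch, which uses OpenCV's PnP solvers purely for illustration; the function name and the RANSAC reprojection threshold are our choices, and the actual pipeline may differ in detail.
\begin{verbatim}
import numpy as np
import cv2  # OpenCV, assumed available for this illustration

def pose_from_labels(pts3d, pts2d, K):
    """Pose from hand-labelled 2D-3D matches: a 3-point solver inside
    RANSAC followed by non-linear refinement, plus the mean reprojection
    error used above as an accuracy estimate."""
    pts3d = np.asarray(pts3d, np.float64).reshape(-1, 1, 3)
    pts2d = np.asarray(pts2d, np.float64).reshape(-1, 1, 2)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, pts2d, K, None,
                                           flags=cv2.SOLVEPNP_P3P,
                                           reprojectionError=8.0)
    assert ok, "pose estimation failed"
    # Non-linear refinement of the RANSAC pose on all correspondences.
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None, rvec, tvec,
                                  useExtrinsicGuess=True,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    err = np.linalg.norm(proj - pts2d, axis=2).mean()
    return rvec, tvec, err
\end{verbatim}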
\vspace{-3pt}
\subsection{The RobotCar Seasons Dataset}
\label{sec:benchmarks:robotcar}
\vspace{-3pt}
\noindent Our \emph{RobotCar Seasons} dataset is based on a subset of the publicly available Oxford RobotCar Dataset \cite{Maddern2017IJRR}. The original dataset contains over 20M images recorded from an autonomous vehicle platform over 12 months in Oxford, UK. Out of the 100 available traversals of the 10km route, we select one reference traversal in overcast conditions and nine query traversals that cover a wide range of conditions (cf.\ Tab.~\ref{tab:datasets}). All selected images were taken with the three synchronized global shutter Point Grey Grasshopper2 cameras mounted to the left, rear, and right of the car. Both the intrinsics of the cameras and their relative poses are known.

The reference traversal contains 26,121 images taken at 8,707 positions, with 1m between successive positions. Building a single consistent 3D model from this data is very challenging, both due to its sheer size and due to the lack of visual overlap between the three cameras. We thus built 49 non-overlapping local submaps, each covering a 100m trajectory. For each submap, we initialized the database camera poses using the vehicle positions reported by the inertial navigation system (INS) mounted on the RobotCar. We then iteratively triangulated 3D points, merged tracks, and refined both structure and poses using bundle adjustment. The scale of the reconstructions was recovered by registering them against the INS poses. The reference model contains all submaps and consists of 20,862 reference images and 6.77M 3D points triangulated from 36.15M features.

We obtained query images by selecting reference positions inside the 49 submaps and gathering all images from the nine query traversals with INS poses within 10m of one of these positions. This resulted in 11,934 images in total, where triplets of images were captured at 3,978 distinct locations. We also grouped the queries into 460 temporal sequences based on the timestamps of the images.
\PAR{Ground truth poses for the queries.} Due to GPS drift, the INS poses cannot be directly used as ground truth. Again, there are not enough feature matches between day- and night-time images for SfM. We thus used the LIDAR scanners mounted on the vehicle to build local 3D point clouds for each of the 49 submaps under each condition. These models were then aligned to the LIDAR point clouds of the reference trajectory using ICP~\cite{besl1992method}. Many alignments needed to be manually adjusted to account for changes in scene structure over time (often due to building construction and road layout changes). The final median RMS errors between aligned point clouds were under 0.10m in translation and 0.5$^\circ$ in rotation across all locations. The alignments provided the ground truth poses for the query images.
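For illustration, a minimal point-to-point ICP in the spirit of~\cite{besl1992method} is sketched below; the production alignments above additionally required manual adjustment and robust correspondence handling, which this sketch omits, and all names are ours.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Minimal point-to-point ICP aligning point cloud src to dst.
    Returns (R, t) such that dst ~ R @ src + t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, nn = tree.query(cur)      # closest-point correspondences
        d = dst[nn]
        mu_s, mu_d = cur.mean(axis=0), d.mean(axis=0)
        H = (cur - mu_s).T @ (d - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T          # Kabsch rotation update
        dt = mu_d - dR @ mu_s
        cur = cur @ dR.T + dt        # apply the incremental transform
        R, t = dR @ R, dR @ t + dt   # accumulate the total transform
    return R, t
\end{verbatim}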
\vspace{-3pt}
\subsection{The CMU Seasons Dataset}
\label{sec:benchmarks:cmu}
\vspace{-3pt}
\noindent The \emph{CMU Seasons} dataset is based on a subset of the CMU Visual Localization Dataset~\cite{Badino_IV11}, which contains more than 100K images recorded by the Computer Vision Group at Carnegie Mellon University over a period of 12 months in Pittsburgh, PA, USA. The images were collected using a rig of two cameras mounted at 45 degree forward/left and forward/right angles on the roof of an SUV. The vehicle traversed an 8.5km long route through central and suburban Pittsburgh 16 times, with a temporal spacing between traversals ranging from two weeks to two months. Out of the 16 traversals, we selected the one from April 4 as the reference, and 11 query traversals that cover the range of variations in seasons and weather contained in the dataset.
\PAR{Ground truth poses for the queries.} As with the RobotCar dataset, the GPS poses are not accurate enough, and the CMU dataset is also too large to build a single 3D model from all the images. The full sequences were therefore split into 17 shorter sequences, each containing about 250 consecutive vehicle poses. For each short sequence, a 3D model was built using bundle adjustment of SIFT points tracked over several image frames. The resulting submaps of the reference route were merged with the corresponding submaps from the other traversals using global bundle adjustment and manually annotated image correspondences. Reprojection errors are within a few pixels for all 3D points, and the distances between estimated camera positions and expected ones (based on neighbouring cameras) are under 0.10m. The resulting reference model consists of 1.61M 3D points triangulated from 6.50M features in 7,159 database images. We provide 75,335 query images and 187 query sequences.
\vspace{-3pt}
\section{Benchmark Setup}
\vspace{-3pt}
\noindent We evaluate state-of-the-art localization approaches on our new benchmark datasets to measure the impact of changing conditions on camera pose estimation accuracy and to understand how hard robust long-term localization is.
\PAR{Evaluation measures.} We measure the \emph{pose accuracy} of a method as the deviation between the estimated and the ground truth pose. The \emph{position error} is measured as the Euclidean distance $\|c_\text{est} - c_\text{gt}\|_2$ between the estimated position $c_\text{est}$ and the ground truth position $c_\text{gt}$. The absolute \emph{orientation error} $|\alpha|$, measured as an angle in degrees, is computed from the estimated and ground truth camera rotation matrices $\mathtt{R}_\text{est}$ and $\mathtt{R}_\text{gt}$. We follow standard practice~\cite{Hartley2013IJCV} and compute $|\alpha|$ from $2\cos(|\alpha|)= \text{trace}(\mathtt{R}_\text{gt}^{-1}\mathtt{R}_\text{est}) - 1$, i.e., we measure the minimum rotation angle required to align both rotations~\cite{Hartley2013IJCV}.

We measure the percentage of query images localized within $X$m and $Y^\circ$ of their ground truth pose. We define three pose accuracy intervals by varying the thresholds: \emph{high-precision} (0.25m, 2$^\circ$), \emph{medium-precision} (0.5m, 5$^\circ$), and \emph{coarse-precision} (5m, 10$^\circ$). These thresholds were chosen to reflect the high accuracy required for autonomous driving. We use the intervals (0.5m, 2$^\circ$), (1m, 5$^\circ$), and (5m, 10$^\circ$) for the Aachen night-time queries to account for the higher uncertainty in our ground truth poses. Still, all regimes are more accurate than consumer-grade GPS systems.
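The two error measures and the precision regimes translate directly into code; a minimal sketch (the names are ours):
\begin{verbatim}
import numpy as np

def pose_errors(R_est, c_est, R_gt, c_gt):
    """Position error ||c_est - c_gt|| and orientation error |alpha| in
    degrees, with 2 cos(|alpha|) = trace(R_gt^{-1} R_est) - 1."""
    pos_err = np.linalg.norm(c_est - c_gt)
    cos_a = 0.5 * (np.trace(R_gt.T @ R_est) - 1.0)
    ang_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return pos_err, ang_err

# Precision regimes used in the evaluation (meters, degrees).
REGIMES = {"high": (0.25, 2.0), "medium": (0.5, 5.0),
           "coarse": (5.0, 10.0)}

def localized_within(pos_err, ang_err, regime="high"):
    m, deg = REGIMES[regime]
    return pos_err <= m and ang_err <= deg
\end{verbatim}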
The second approach exploits the known relative poses between the query images to model them as a generalized camera~\cite{Pless2003CVPR}. Given 2D-3D matches per individual image (estimated via Active Search), we estimate the pose via a generalized absolute camera pose solver~\cite{Lee2015IJRR} inside a RANSAC loop. We denote this approach as \emph{Active Search+GC} (AS+GC). We mostly use ground truth query poses to compute the relative poses that define the generalized cameras\footnote{Note that Active Search+GC only uses the relative poses between the query images to define the geometry of a generalized camera. It does \emph{not} use any information about the absolute poses of the query images.}. Thus, AS+GC provides an upper bound on the number of images that can be localized when querying with generalized cameras. The methods discussed above all perform localization from scratch, without any prior knowledge about the pose of the query. In order to measure how hard our datasets are, we also implemented two \emph{optimistic baselines}. Both assume that a set of relevant database images is known for each query; both perform pairwise image matching and use the known ground truth poses of the reference images to triangulate the scene structure. The feature matches between the query and reference images and the known intrinsic calibration are then used to estimate the query pose. The first optimistic baseline, \emph{LocalSfM}, uses upright RootSIFT features~\cite{Lowe04IJCV,Arandjelovic12}. The second, \emph{DenseSfM}, uses upright CNN features densely extracted on a regular grid from the same VGG-16 network~\cite{Simonyan15} as NetVLAD, with coarse-to-fine matching on conv4 and conv3 features. We select the relevant reference images for the two baselines as follows: For Aachen, we use the manually selected day-time image (\emph{c.f.}\ Sec.~\ref{sec:benchmarks:aachen}) to select up to 20 reference images sharing the most 3D points with that day-time photo. For RobotCar and CMU, we use all reference images within 5m and 135$^\circ$ of the ground truth query pose. We evaluated \emph{PoseNet}~\cite{Kendall2015ICCV} but were not able to obtain competitive results. We also attempted to train \emph{DSAC}~\cite{Brachmann2017CVPR} on KITTI, but the training did not succeed. Both PoseNet and DSAC were thus excluded from further evaluations. \vspace{-3pt} \section{Experimental Evaluation} \label{sec:evaluation} \vspace{-3pt} \noindent This section presents the second main contribution of this paper, a detailed experimental evaluation of the effect of changing conditions on the pose estimation accuracy of visual localization techniques. In the following, we focus on pose accuracy. Please see the appendix for experiments concerning computation time. \begin{figure}[tbp] \centering \includegraphics[width=0.98\linewidth]{figs_supp_eps/legend} \includegraphics[width=0.49\linewidth]{figs_supp_eps/Aachen_night_distance_new} \includegraphics[width=0.49\linewidth]{figs_supp_eps/RobotCar_Rear_night_distance_new} \caption{Cumulative distribution of position errors for the night-time queries of the Aachen (left) and RobotCar (right) datasets.} \label{fig:results:aachen} \end{figure} \begin{table*}[th!] \begin{center} \scriptsize{ \input{comp_table_aachen_CMU_new} } \end{center} \vspace{-12pt} \caption{Evaluation on the \textbf{Aachen Day-Night} dataset and a subset of the conditions of the \textbf{CMU Seasons} dataset.} \label{tab:results:cmu:comparison} \end{table*}
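As a concrete illustration of the evaluation measures defined in the benchmark setup, the following sketch computes the position and orientation errors and the fraction of queries localized within a precision regime. It is an illustration only; the function names and the regime table are ours and not part of any released benchmark tooling:
\begin{verbatim}
import numpy as np

def pose_errors(R_est, c_est, R_gt, c_gt):
    """Position error (m) and orientation error (deg) for one query."""
    t_err = np.linalg.norm(c_est - c_gt)
    # 2*cos(|alpha|) = trace(R_gt^{-1} R_est) - 1, with R_gt^{-1} = R_gt^T.
    cos_a = 0.5 * (np.trace(R_gt.T @ R_est) - 1.0)
    r_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return t_err, r_err

# Precision regimes: (max position error [m], max orientation error [deg]).
REGIMES = {"high": (0.25, 2.0), "medium": (0.5, 5.0), "coarse": (5.0, 10.0)}

def localized_percentage(errors, regime):
    """Percentage of (t_err, r_err) pairs within the given regime."""
    t_max, r_max = REGIMES[regime]
    ok = sum(1 for t, r in errors if t <= t_max and r <= r_max)
    return 100.0 * ok / len(errors)
\end{verbatim}
Clipping the cosine guards against numerical round-off when the estimated and ground truth rotations are nearly identical.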
\vspace{-3pt} \subsection{Evaluation on the Aachen Day-Night Dataset} \vspace{-3pt} \noindent The focus of the Aachen Day-Night dataset is on benchmarking the pose accuracy obtained by state-of-the-art methods when localizing night-time queries against a 3D model constructed from day-time imagery. In order to put the results obtained for the night-time queries into context, we first evaluate a subset of the methods on the 824 day-time queries. As shown in Tab.~\ref{tab:results:cmu:comparison}, the two structure-based methods are able to estimate accurate camera poses and localize nearly all images within the coarse-precision regime. We conclude that the Aachen dataset is not particularly challenging for the day-time query images. \PAR{Night-time queries.} Tab.~\ref{tab:results:cmu:comparison} also reports the results obtained for the night-time queries. We observe a significant drop in pose accuracy for both Active Search and CSL: from above 50\% in the high-precision regime for the day-time queries down to less than 50\% in the coarse-precision regime at night. Given that the night-time queries were taken from viewpoints similar to those of the day-time queries, this drop is caused solely by the day-night change. CSL localizes more images than Active Search (AS). This is not surprising, since CSL also uses matches that were rejected by AS as too ambiguous. Still, there is a significant gap to LocalSfM. CSL and AS both match features against the full 3D model, while LocalSfM only considers a small part of the model for each query. This shows that global matching frequently fails to find the correct nearest neighbors, likely caused by significant differences between day-time and night-time descriptors. Fig.~\ref{fig:results:aachen}(left) shows the cumulative distribution of position errors for the night-time queries and provides interesting insights: LocalSfM, despite knowing relevant reference images for each query, completely fails to localize about 20\% of all queries. This is caused by a lack of correct feature matches for these queries, due to failures of either the feature detector or the descriptor. DenseSfM skips feature detection and directly matches densely extracted CNN descriptors (which encode higher-level information than the gradient histograms used by RootSIFT). This enables DenseSfM to localize more images at a higher accuracy, resulting in the best performance on this dataset. Still, there is significant room for improvement, even in the coarse-precision regime (\emph{c.f.}\ Tab.~\ref{tab:results:cmu:comparison}). Also, extracting and matching dense descriptors is a time-consuming task. \begin{table*}[t!] \begin{center} \scriptsize{ \input{robotcar_table_normal} } \end{center} \vspace{-6pt} \caption{Evaluation on the \textbf{RobotCar Seasons} dataset. We report the percentage of queries localized within the three thresholds.} \label{tab:results:robotcar:comparison} \end{table*} \vspace{-3pt} \subsection{Evaluation on the RobotCar Seasons Dataset} \vspace{-3pt} \noindent The focus of the RobotCar Seasons dataset is to measure the impact of different seasons and illumination conditions on pose estimation accuracy in an urban environment. Tab.~\ref{tab:results:robotcar:comparison} shows that changing day-time conditions have only a small impact on pose estimation accuracy for all methods. The reason is that seasonal changes have little impact on the building facades that dominate most query images.
The exceptions are ``dawn'' and ``sun''. For both, we observed overexposed images caused by direct sunlight (\emph{c.f.}\ Fig.~\ref{fig:robotcar-projection-1}). Thus, fewer features can be found for Active Search and CSL, and the global image descriptors used by the image retrieval approaches are affected as well. On the Aachen Day-Night dataset, we observed that image retrieval-based methods (DenseVLAD and NetVLAD) consistently performed worse than structure-based methods (Active Search, CSL, LocalSfM, and DenseSfM). On the RobotCar dataset, NetVLAD and DenseVLAD essentially achieve the same coarse-precision performance as Active Search and CSL. This is caused by the lower variation in viewpoints, as the car follows the same road. Compared to Aachen, there is an even stronger drop in pose accuracy between day and night on the RobotCar dataset. All methods fail to localize a significant number of queries in both the high- and medium-precision regimes. Interestingly, DenseVLAD and NetVLAD outperform all other methods in the coarse-precision regime (\emph{c.f.}\ Fig.~\ref{fig:results:aachen}(right)). This shows that their global descriptors still encode distinctive information even when local feature matching fails. The better performance of all methods under ``night+rain'' compared to ``night'' comes from the autoexposure of the RobotCar's cameras: a longer exposure is used for the ``night'' condition, leading to significant motion blur. \begin{table}[t] \begin{center} \scriptsize{ \input{robotcar_table_multi} } \end{center} \vspace{-6pt} \caption{Using \textbf{multiple images} for pose estimation (Active Search+GC) on the \textbf{RobotCar Seasons} dataset.} \label{tab:results:robotcar:multicam} \end{table} \PAR{Multi-image queries.} The RobotCar is equipped with three synchronized cameras and captures sequences of images with each camera. Rather than querying with only a single image, we can thus also query with multiple photos. Tab.~\ref{tab:results:robotcar:multicam} shows the results obtained with OpenSeqSLAM (which uses temporal sequences of all images captured by the three cameras) and Active Search+GC. For the latter, we query with triplets of images taken at the same time, as well as with temporal sequences of triplets. For the triplets, we use the known extrinsic calibration between the three cameras mounted on the car. For the temporal sequences, we use relative poses obtained from the ground truth (GT) absolute poses. For readability, we only show the results summarized for day- and night-time conditions. Tab.~\ref{tab:results:robotcar:multicam} shows that Active Search+GC consistently outperforms single-image methods in terms of pose accuracy, as it is able to accumulate correct matches over multiple images. This enables Active Search+GC to succeed even if only a few matches are found for each individual image. Naturally, the largest gain is observed when using multiple images in a sequence. \begin{table}[t] \begin{center} \scriptsize{ \input{robotcar_table_prior} } \end{center} \vspace{-6pt} \caption{Using \textbf{location priors} to query only submodels rather than the full \textbf{RobotCar Seasons} dataset for night-time queries.} \label{tab:results:robotcar:priors} \end{table} \PAR{Location priors.} In all previous experiments, we considered the full RobotCar 3D model for localization. However, it is not uncommon in outdoor settings to have a rough prior on the location at which the query image was taken.
We simulate such a prior by only considering the sub-model relevant to a query rather than the full model. While we observe only a small improvement for day-time queries, localizing night-time queries benefits significantly from solving an easier matching problem (\emph{c.f.}\ Tab.~\ref{tab:results:robotcar:priors}). For completeness, we also report results for LocalSfM, which likewise considers only a small part of the model relevant to a query. Active Search+GC outperforms LocalSfM on this easier matching task when querying with sequences, because it does not rely on a single image to provide enough matches. One drawback of sequence-based localization is that the relative poses between the images in a sequence need to be known quite accurately. Tab.~\ref{tab:results:robotcar:priors} also reports results obtained when using our own multi-camera visual odometry (VO) system to compute the relative poses. While performing worse than with ground truth relative poses, this more realistic baseline still outperforms methods using individual images. The reasons for the performance drop are drift and collapsing trajectories due to degenerate configurations. \vspace{-3pt} \subsection{Evaluation on the CMU Seasons Dataset} \vspace{-3pt} \noindent Compared to the urban scenes of the other datasets, significant parts of the CMU Seasons dataset show suburban or park regions. Seasonal changes can drastically affect the appearance of such regions. In the following, we thus focus on these conditions (see the appendix for an evaluation of all conditions). For each query image, we only consider its relevant sub-model. Tab.~\ref{tab:results:cmu:comparison} evaluates the impact of changes in foliage and of different regions on pose accuracy. The reference condition for the CMU Seasons dataset does not contain foliage. Thus, other conditions in which foliage is also absent lead to the most accurate poses. Interestingly, DenseVLAD and NetVLAD achieve a better performance than Active Search and CSL in the medium- and coarse-precision regimes under the ``Foliage'' and ``Mixed Foliage'' conditions. In the coarse-precision regime, they even outperform LocalSfM. This again shows that global image-level descriptors can capture information lost by local features. We observe a significant drop in pose accuracy in both suburban and park regions. This is caused by the dominant presence of vegetation, leading to many locally similar (and thus globally confusing) features. LocalSfM still performs well, as it only considers a few reference images that are known to be relevant for a query image. Again, we notice that DenseVLAD and NetVLAD are able to coarsely localize more queries than the feature-based methods. Localizing sequences (Active Search+GC) again drastically improves pose estimation accuracy; a schematic of the underlying pose estimation loop is given below. Compared to the RobotCar Seasons dataset, where the sequences are rather short (about 20m at most), the sequences used for the CMU Seasons dataset completely cover their corresponding sub-models. In practical applications, shorter sequences are preferable to avoid problems caused by drift when estimating the relative poses in a sequence. Still, the results in Tab.~\ref{tab:results:cmu:comparison} show the potential of using multiple images rather than a single image for camera pose estimation.
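To make the generalized-camera pose estimation used by Active Search+GC more concrete, the following schematic shows its hypothesis-and-verify loop. This is a sketch under stated assumptions: \texttt{solve\_gen\_p3p} is a hypothetical stand-in for a minimal solver such as the one of \cite{Lee2015IJRR}, and \texttt{reproj\_err} for the per-match reprojection error; neither is an existing library call, and our actual implementation may differ in its details:
\begin{verbatim}
import random

def ransac_gc_pose(matches, solve_gen_p3p, reproj_err,
                   thresh_px=5.0, iters=1000):
    """Pose of a generalized camera from 2D-3D matches via RANSAC.

    matches: list of (camera_index, bearing_ray, point3d) tuples
    accumulated over all images of the triplet or sequence."""
    best_pose, best_inliers = None, []
    for _ in range(iters):
        sample = random.sample(matches, 3)  # minimal sample size
        for pose in solve_gen_p3p(sample):  # may return several poses
            inliers = [m for m in matches
                       if reproj_err(pose, m) < thresh_px]
            if len(inliers) > len(best_inliers):
                best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers  # refine on the inliers afterwards
\end{verbatim}
Because the matches of all images in a triplet or sequence enter a single loop, a pose hypothesis can be verified even when each individual image contributes only a handful of matches, which explains the behaviour observed in Tabs.~\ref{tab:results:robotcar:multicam} and \ref{tab:results:cmu:comparison}.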
\vspace{-3pt} \section{Conclusion \& Lessons Learned} \vspace{-3pt} \noindent In this paper, we have introduced three challenging new benchmark datasets for visual localization, allowing us, for the first time, to analyze the impact of changing conditions on the accuracy of 6DOF camera pose estimation. Our experiments clearly show that the long-term visual localization problem is far from solved. The extensive experiments performed in this paper lead to multiple interesting conclusions: (i) Structure-based methods such as Active Search and CSL are robust to most viewing conditions in urban environments. Yet, performance in the high-precision regime still needs to improve significantly. (ii) Localizing night-time images against a database built from day-time photos is a very challenging problem, even when a location prior is given. (iii)~Scenes with a significant amount of vegetation are challenging, even when a location prior is given. (iv) SfM, typically used to obtain ground truth for localization benchmarks, does not fully handle problems (ii) and (iii) due to limitations of existing local features. Dense CNN feature matching inside SfM improves pose estimation performance at high computational costs, but does not fully solve the problem. Novel (dense) features, \emph{e.g.}, based on scene semantics~\cite{Schoenberger2018CVPR}, seem to be required to solve these problems. Our datasets readily provide a benchmark for such features through the LocalSfM and DenseSfM pipelines. (v) Image-level descriptors such as DenseVLAD can succeed in scenarios where local feature matching fails. They can even provide coarse-level pose estimates in autonomous driving scenarios. Aiming to improve their pose accuracy, \emph{e.g.}, by denser view sampling via synthetic images~\cite{Torii2015CVPR} or by image-level approaches for relative pose estimation, is an interesting research direction. (vi) There is a clear benefit in using multiple images for pose estimation. Yet, there is little existing work on multi-image localization. Fully exploiting the availability of multiple images (rather than continuing to focus on single images) is thus another promising avenue for future research. {\small \PAR{Acknowledgements.} This work was partially supported by ERC grant LEAP No.\ 336845, the CIFAR Learning in Machines \& Brains program, EU-H2020 project LADIO~731970, the European Regional Development Fund under the project IMPACT (reg.\ no.\ CZ$.02.1.01/0.0/0.0/15\_003/0000468$), JSPS KAKENHI Grant Number 15H05313, EPSRC Programme Grant EP/M019918/1, the Swedish Research Council (grant no.\ 2016-04445), the Swedish Foundation for Strategic Research (Semantic Mapping and Visual Navigation for Smart Robots), and Vinnova / FFI (Perceptron, grant no.\ 2017-01942). }
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{intro} An important motivation for our study comes from elliptic inverse boundary value problems, such as the Calder\'on problem. Let us consider a bounded domain $\Omega\subset \mathbb{R}^N$, $N\geq 2$, which is regular enough, and let $\gamma$ be a positive bounded function, bounded away from zero, which corresponds to the background conductivity of a conducting body contained in $\Omega$. The aim of the inverse problem is to recover perturbations of the background conductivity, for example inhomogeneities, by performing suitable electrostatic measurements at the boundary of current and voltage type. Such a problem arises in several types of nondestructive evaluation of materials, where the aim is to detect the presence of flaws, as well as in medical imaging, where the aim is to detect the presence of tumors. Namely, if $\tilde{\gamma}$ is the perturbed conductivity, one usually prescribes the voltage $f$ on the boundary of $\Omega$ and measures the corresponding current, still on the boundary, that is, $\tilde{\gamma}\nabla \tilde{u}\cdot \nu$ on $\partial\Omega$, where $\nu$ is the outer normal on $\partial\Omega$ and $\tilde{u}$, the electrostatic potential in $\Omega$, is the solution to the Dirichlet boundary value problem \begin{equation}\label{dirpbmpert} \left\{\begin{array}{ll} \mathrm{div}(\tilde{\gamma}\nabla \tilde{u})=0&\text{in }\Omega\\ \tilde{u}=f &\text{on }\partial\Omega. \end{array} \right. \end{equation} By changing the Dirichlet datum $f$, one can perform two or more measurements. One often assumes that the perturbation is well contained inside $\Omega$, that is, $\tilde{\gamma}$ coincides with the background conductivity $\gamma$ in a known neighbourhood of $\partial\Omega$, namely outside $\tilde{\Omega}$, a known open set compactly contained in $\Omega$. Since \cite{Mand}, it has been clear that one source of instability for elliptic inverse boundary value problems is the interior decay of solutions. We consider the solution $u$ to the Dirichlet problem in the unperturbed body, that is, with $\tilde{\gamma}$ replaced by the background conductivity $\gamma$, namely \begin{equation}\label{dirpbmunpert} \left\{\begin{array}{ll} \mathrm{div}(\gamma\nabla u)=0&\text{in }\Omega\\ u=f &\text{on }\partial\Omega. \end{array} \right. \end{equation} The possibility of stably recovering information on the unknown perturbation, using the additional measurement depending on the Dirichlet datum $f$, is directly related to the decay of $u$, or $\nabla u$, in the interior of the domain $\Omega$, in particular in $\tilde{\Omega}$, the region where the perturbation may be present. It is therefore particularly important to establish decay properties of $u$ in $\tilde{\Omega}$, depending on the Dirichlet datum $f$ and on the distance of $\tilde{\Omega}$ from $\partial\Omega$, which is exactly the issue we address in this paper. In \cite{Mand}, it was assumed that the domain $\Omega$ is $B_1$, a ball of radius $1$, and that the background conductivity is homogeneous, $\gamma\equiv 1$. It was then shown that the exponential instability of the inverse problem of Calder\'on is due to the fact that solutions to \eqref{dirpbmunpert} with boundary values $f$ given by spherical harmonics decay in the interior exponentially with respect to the degree of the spherical harmonic itself. We note that a spherical harmonic is a Steklov eigenfunction for the Laplacian on $B_1$ with corresponding Steklov eigenvalue given by its degree.
We also note that the degree is exactly equal to the frequency as defined in Definition~\ref{frequencydefin}. The Weyl law on the asymptotic behaviour of Steklov eigenvalues is the other key ingredient. We recall that $\mu$ is a Steklov eigenvalue and $\phi|_{\partial\Omega}$ is its corresponding Steklov eigenfunction if $\phi$ is a nontrivial solution to \begin{equation}\label{Steklov} \left\{\begin{array}{ll} \mathrm{div}(\gamma\nabla \phi)=0&\text{in }\Omega\\ \gamma\nabla\phi\cdot\nu=\mu\phi &\text{on }\partial\Omega. \end{array} \right. \end{equation} We also recall that, for $f\in H^{1/2}(\partial\Omega)$, $f\neq 0$, we call its frequency the number $$\mathrm{frequency}(f)=\frac{|f|^2_{H^{1/2}(\partial\Omega)}}{\|f\|^2_{L^2(\partial\Omega)}}.$$ We refer to Definition~\ref{frequencydefin} for a precise statement; here we just note that, if $\phi|_{\partial\Omega}$ is the trace of a nontrivial solution to \eqref{Steklov}, then its frequency is essentially proportional to the Steklov eigenvalue $\mu$. The ideas of \cite{Mand} have been generalised to other elliptic boundary value and scattering inverse problems in \cite{DC-R1,DC-R2} and to the parabolic case in \cite{DC-R-V}, showing that exponential instability unfortunately holds in all these cases. Besides establishing the unstable nature of these problems, these results provide hints on the choice of optimal measurements, where optimality may be understood in the sense of distinguishability as defined in \cite{Isaac}, see also \cite{G-I-N}. As suggested in these papers, if one has at one's disposal a fixed, finite number $n$ of measurements, one should choose the first $n$ eigenfunctions of a suitable eigenvalue problem involving the perturbed conductivity $\tilde{\gamma}$ and the background one $\gamma$. Since the conductivity $\tilde{\gamma}$ is unknown, the arguments developed in \cite{Mand} suggest that the best choice is to employ the first $n$ spherical harmonics, at least when the domain is a ball and the background conductivity is constant. In the general case, it seems reasonable to assume that the correct replacement for spherical harmonics is given by Steklov eigenfunctions. Indeed, the interior decay of the solution corresponding to a Steklov eigenfunction is very fast with respect to the Steklov eigenvalue, at least in the smooth case. For example, in \cite{His-Lut} it is shown that when $\partial\Omega$ is $C^{\infty}$ and $\gamma$ is $C^{\infty}$ the decay is faster than any power of the Steklov eigenvalue. Moreover, in the real-analytic case, the decay is still of exponential type, as shown first for surfaces in \cite{P-S-T} and then for higher dimensional manifolds in \cite{Gal-T}. Consequently, the information carried by measurements corresponding to boundary data given by high order Steklov eigenfunctions rapidly degrades in the interior of the domain, and is thus of little help for the reconstruction of perturbations of the background conductivity far from the boundary. However, from these examples it seems that the worst case scenario is when the domain and the background conductivity are real-analytic, because in this case the interior decay of the solution corresponding to a Steklov eigenfunction is indeed of exponential type with respect to the Steklov eigenvalue, or when the domain and the background conductivity are smooth, say $C^{\infty}$, since the interior decay is still very fast in this case.
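As a standard illustration of this exponential decay (a classical computation in the model case, stated here only for orientation), take $\Omega=B_1\subset\mathbb{R}^N$, $\gamma\equiv 1$, and $f=Y_n$ a spherical harmonic of degree $n$. The solution to \eqref{dirpbmunpert} is $u(x)=\|x\|^nY_n(x/\|x\|)$, whose gradient is homogeneous of degree $n-1$, hence
\begin{equation*}
\int_{B_{1-d}}\|\nabla u\|^2=(1-d)^{2n+N-2}\int_{B_1}\|\nabla u\|^2\leq e^{-(2n+N-2)d}\int_{B_1}\|\nabla u\|^2\qquad\text{for any }0<d<1.
\end{equation*}
Since, as noted above, the degree of $Y_n$ coincides with its frequency, the energy beyond depth $d$ decays exponentially with respect to the frequency of the boundary datum.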
Here, instead, we are interested in understanding the interior decay when the domain and the coefficients are not particularly smooth, and also when $f$ is not a Steklov eigenfunction. In fact, on some occasions it might be very difficult to employ a Steklov eigenfunction, and we wish to show that the decay may actually be due just to the frequency of the boundary datum $f$, without the much stronger assumption that $f$ is a Steklov eigenfunction. For example, this might be significant for the choice of optimal measurements in a partial data scenario, that is, when data are assigned and collected only on a given portion of the boundary. We also wish to mention that, for the Calder\'on problem, the dependence of the distinguishability on the distance of $\tilde{\Omega}$ from the boundary of $\Omega$, rather than on the choice of boundary measurements, has been carefully analysed in \cite{Gar-Knu,A-S} in two dimensions and in \cite{Gar-Hyv} in higher dimensions. Since the dependence on such a distance is explicitly given, our decay estimate may also be of interest in this kind of analysis of the instability. We now describe the main estimates we are able to prove, Theorems~\ref{mainthm} and \ref{mainthmbis}. We assume that $\Omega$ is a $C^{1,1}$ domain and that $\gamma$ is Lipschitz continuous. We can also allow $\gamma$ to be a symmetric conductivity tensor, and not just a scalar conductivity, or the underlying metric in $\Omega$ to be not the Euclidean one but a Lipschitz Riemannian one. We call $\Phi$ the frequency of the Dirichlet boundary datum $f\in H^{1/2}(\partial\Omega)$, $f\neq 0$. Whenever $f$ has zero mean on $\partial\Omega$, we may use another notion of frequency, which we call the lower frequency and which is given by $$\mathrm{lowfrequency}(f)=\frac{\|f\|^2_{L^2(\partial\Omega)}}{\|f\|^2_{H^{-1/2}(\partial\Omega)}}.$$ We refer to Definition~\ref{frequencydefinbis} for a precise statement. Here we point out that, if we call $\Phi_1$ the lower frequency of $f$, then $\Phi_1\leq \Phi$. On the other hand, if $\phi|_{\partial\Omega}$ is the trace of a nontrivial solution to \eqref{Steklov} with $\mu>0$, then its lower frequency is also essentially proportional to the Steklov eigenvalue $\mu$. For $d>0$ small enough, we call $\Omega^d$ the set $$\Omega^d=\{x\in\Omega:\ \mathrm{dist}(x,\partial\Omega)>d\}.$$ The first result is the following. We can find two positive constants $C_1$ and $C_2$, depending on $\Omega$, the Riemannian metric on it, and the coefficient $\gamma$, such that, if $d \Phi\geq C_1$, then the function $u$ solving \eqref{dirpbmunpert} satisfies \begin{equation}\label{mainestintro} \int_{\Omega^d}\|\nabla u\|^2\leq C_2\frac{\int_{\Omega}\|\nabla u\|^2}{d\Phi}. \end{equation} We refer to Section~\ref{decaysec}, and in particular to Theorem~\ref{mainthm}, for the precise statement. If we are interested in the decay of $u$ instead of that of its gradient, when $f$ has zero mean on $\partial\Omega$ we obtain an analogous result, but we need to replace the frequency $\Phi$ with the lower frequency $\Phi_1$. Namely, we can find two positive constants $C_1$ and $C_2$, depending on $\Omega$, the Riemannian metric on it, and the coefficient $\gamma$, such that, if $d \Phi_1\geq C_1$, then the function $u$ solving \eqref{dirpbmunpert} satisfies \begin{equation}\label{mainestintrobis} \int_{\partial\Omega^d} u^2\, d\sigma\leq C_2\frac{\int_{\partial\Omega}u^2\, d\sigma}{d\Phi_1}.
\end{equation} We refer to Section~\ref{decaysec}, and in particular to Theorem~\ref{mainthmbis}, for the precise statement. As an easy consequence of \eqref{mainestintrobis}, under the same assumptions we obtain that, for two positive constants $C_1$ and $C_2$, depending on $\Omega$, the Riemannian metric on it, and the coefficient $\gamma$, if $d \Phi_1\geq C_1$, then the function $u$ solving \eqref{dirpbmunpert} satisfies \begin{equation}\label{mainestintroter} \int_{\Omega^d}\|\nabla u\|^2\leq C_2\frac{\int_{\Omega}\|\nabla u\|^2}{d^2\Phi\Phi_1}. \end{equation} See Corollary~\ref{maincor} for the precise statement. We conclude that, if $f=\phi|_{\partial\Omega}$ is the trace of a nontrivial solution to \eqref{Steklov} with $\mu>0$, then, possibly with different constants $C_1$ and $C_2$, whenever $d \mu\geq C_1$ the function $\phi$ solving \eqref{Steklov} satisfies \begin{equation}\label{mainestintroquater} \int_{\Omega^d}\|\nabla \phi\|^2\leq C_2\frac{\int_{\Omega}\|\nabla \phi\|^2}{d^2\mu^2}, \end{equation} see Remark~\ref{Steklovremark} for a precise statement. Let us briefly comment on the difference between these estimates. Assume that $f\in H^{1/2}(\partial\Omega)$, $f\neq 0$, has zero mean on $\partial\Omega$. Since $\Phi_1\leq \Phi$, the quantity $D$, the weighted Dirichlet energy of $u$ on $\Omega^d$ defined below, in general decays faster than $H$, the corresponding weighted boundary integral of $u^2$ on $\partial\Omega^d$. Actually, by Corollary~\ref{maincor}, as $\Phi_1$ grows, $D$ decays like $\Phi^{-1}\Phi_1^{-1}$, that is, at least like $\Phi_1^{-2}$, whereas $H$ decays like $\Phi_1^{-1}$. Moreover, if $f$ coincides with a Steklov eigenfunction with Steklov eigenvalue $\mu>0$, then, up to a constant, $\Phi$, $\Phi_1$ and $\mu$ are of the same order. Therefore, for Steklov eigenfunctions we obtain a decay of order $\mu^{-2}$, a result which is in accord with the estimate one can prove using the technique of \cite{His-Lut}. In fact, an indication of the optimality of our decay estimates comes from the analysis developed in \cite{His-Lut} when $f=\phi|_{\partial\Omega}$ is a Steklov eigenfunction with positive Steklov eigenvalue $\mu$. Following the idea of the proof of \cite[Theorem~1.1]{His-Lut}, it is evident that one can estimate $u(x)$, for any $x\in \Omega^d$, by a constant times $\mu^{-1}\|f\|_{H^{1/2}(\partial\Omega)}$, provided the Green's function $G_{\gamma}(x,\cdot)$ satisfies \begin{equation}\label{Green} \|\Lambda_{\gamma}(G_{\gamma}(x,\cdot))\|_{H^{1/2}(\partial\Omega)},\ \|\gamma\nabla G_{\gamma}(x,\cdot)\cdot\nu\|_{H^{1/2}(\partial\Omega)}\leq \tilde{C}, \end{equation} where $\Lambda_{\gamma}$ is the so-called Dirichlet-to-Neumann map. Roughly speaking, \eqref{Green} corresponds to an $H^2$-bound of $G_{\gamma}(x,\cdot)$ away from $x$, which is what one obtains assuming the conductivity $\gamma$ is Lipschitz continuous. Since $\|f\|_{H^{1/2}(\partial\Omega)}$ is of the order of $\sqrt{\mu}\|f\|_{L^2(\partial\Omega)}=\sqrt{\mu}\|u\|_{L^2(\partial\Omega)}$ and, as we already pointed out, the frequency and the lower frequency of $f$ are of the same order as $\mu$, one can obtain an estimate that is perfectly comparable with \eqref{mainestintrobis}. If one wishes to prove a decay of higher order, say $u(x)$ bounded by a constant times $\mu^{-2}\|f\|_{H^{1/2}(\partial\Omega)}$, by the same technique of \cite{His-Lut}, one should estimate the functions appearing in \eqref{Green} in the $H^{3/2}(\partial\Omega)$ norm instead of the $H^{1/2}(\partial\Omega)$ norm, which corresponds to an $H^3$-bound of $G_{\gamma}(x,\cdot)$ away from $x$.
Usually, Lipschitz regularity of $\gamma$ is not enough to infer $H^3$-bounds; something like $C^{1,1}$ regularity would be required instead. Therefore, under our weak regularity assumptions, our estimate \eqref{mainestintrobis} seems to be optimal even for Steklov eigenfunctions. Another indication of the optimality of our decay estimates comes from the analysis developed in \cite{BLS}. In \cite{BLS} the authors introduce the so-called \emph{penetration function} and study its properties for two-dimensional domains, in particular for the two-dimensional unit ball. They are particularly interested in low regularity cases, thus they allow discontinuous conductivity tensors. Their aim is to obtain estimates in homogenisation theory, but their results can easily be interpreted as distinguishability estimates with a finite number of boundary measurements for corresponding inverse boundary value problems. In particular, using their notation, if $V_n$ is the space of trigonometric polynomials of degree $n$ on $\partial B_1(0)\subset \mathbb{R}^2$, $d\in (0,1)$ is a constant, and $A=\gamma$ is a symmetric conductivity tensor which is Lipschitz continuous, we can show that the penetration function $\Xi(V_n,d)$ satisfies, for a suitable constant $C$, \begin{equation}\label{pfunction} \Xi(V_n,d)\leq C(dn)^{-1}. \end{equation} In fact, for any $f$ which is orthogonal to $V_n$ in $L^2(\partial\Omega)$, both its frequency $\Phi$ and its lower frequency $\Phi_1$ are at least $n+1$. Therefore \eqref{pfunction} follows directly from \eqref{mainestintroter}. Such a result considerably improves the estimate of \cite[Theorem~3.4]{BLS}, which is however valid for a wider class of conductivity tensors, including discontinuous ones. Moreover, they give evidence, by some explicit examples, that, when discontinuous conductivity tensors are allowed, a lower bound for the penetration function is of order $n^{-1/2}$. It would be interesting to match such a lower bound by an estimate like \eqref{mainestintro} when $\gamma$ is discontinuous, but such an estimate would require a completely different method from the one used here. Concerning the technique we developed to obtain our estimates, let us begin by considering \eqref{mainestintro}, where we use an ordinary differential equation argument that allows us to estimate the decay of $$D(d)=\int_{\Omega^d}\gamma\|\nabla u\|^2$$ when $d$ is positive and small enough. We closely follow the so-called frequency method, introduced in \cite{Gar-Lin1} to determine unique continuation properties of solutions to elliptic partial differential equations. In \cite{Gar-Lin1}, the local behaviour, near a point $x_0\in \Omega$, of a solution $u$ to $\mathrm{div}(\gamma\nabla u)=0$ in $\Omega$ was analysed, even in the case of a symmetric conductivity tensor $\gamma$. A key point of the method was to reduce, locally near $x_0$, the elliptic equation with a symmetric conductivity tensor to an equation in a special Riemannian manifold with a scalar conductivity. By a special Riemannian manifold we mean one whose metric can be written in a special form in terms of polar coordinates centred at $x_0$. Such a reduction is made possible by the technique developed in \cite{AKS}. Here we need to perform a similar construction; the only difference, and the main novelty, is that instead of a local modification near a point we consider a global one near the boundary of the domain.
Indeed, in order to develop our analysis, we need that $\partial\Omega^d$ depends on $d$ smoothly enough or, equivalently, that the distance function from the boundary is smooth enough, say $C^{1,1}$, in a neighbourhood of the boundary. By \cite{Del-Zol}, see Theorem~\ref{Del-Zolthm}, this is true in the Euclidean setting provided $\partial\Omega$ is $C^{1,1}$ as well. In the Riemannian setting a similar result is much harder to prove. On the other hand, by exploiting the technique of \cite{AKS} and suitably changing the metric near the boundary, we can reduce to the case where the distance from the boundary in the Riemannian metric is smooth enough, since it coincides with the distance from the boundary in the Euclidean metric in a neighbourhood of $\partial\Omega$. We believe that such a construction, besides being crucial for the proof of our decay estimates, is of independent interest and is one of the major achievements of the paper. The major part of the construction is contained in Proposition~\ref{viceversaprop} and Theorem~\ref{AKSmethod}, with one interesting application developed in Proposition~\ref{normalderivativelemma}. Our argument is based on the notion of frequency, which we essentially take from \cite{Gar-Lin1}, and which is given by $$N(d)=\frac{D(d)}{H(d)}\qquad\text{where }H(d)=\int_{\partial\Omega^d}\gamma u^2\, d\sigma.$$ We note that $N(0)$ is of the same order as the frequency of the boundary datum $f$. We need to compute the derivatives of $D$ and of $H$, a task we perform following the analogous computations of \cite{Gar-Lin1}. In particular, for $D'(d)$ we use the coarea formula and a suitable version of the Rellich identity, which is given in Lemma~\ref{Rellichid}. Instead, we compute $H'(d)$ by a straightforward application of Proposition~\ref{normalderivativelemma}. The proof of \eqref{mainestintrobis} follows lines analogous to those of \eqref{mainestintro}, by replacing $D$ with $H$ and $H$ with $$E(d)=\int_{\Omega^d}\gamma u^2.$$ However, there are some additional technical difficulties to be taken care of; see the proof of Theorem~\ref{mainthmbis} in Section~\ref{decaysec}. Moreover, the crucial link between the quotient $H(0)/E(0)$, which plays the role of $N(0)$, and the lower frequency $\Phi_1$ is provided by the estimate of Proposition~\ref{-1/2-2bound}. The plan of the paper is as follows. In Section~\ref{prelsec} we present the preliminary results that are needed for our analysis. In particular, we first discuss the regularity of domains and of the corresponding distance from the boundary, the main result here being Theorem~\ref{Del-Zolthm}, which is taken from \cite{Del-Zol}. We also give the precise definitions of the frequencies we use. Then we review the Riemannian setting and the Dirichlet and Neumann problems for elliptic equations in the Euclidean and in the Riemannian setting, pointing out what happens if one suitably changes the underlying metric, see Remarks~\ref{conformalchange} and \ref{anisotropy}. For instance, Remark~\ref{anisotropy} allows us to pass from a symmetric conductivity tensor in the Euclidean setting to a scalar conductivity in the Riemannian one. We also briefly discuss Steklov eigenvalues and eigenfunctions. In Section~\ref{distsec}, we investigate the distance function from the boundary in the Riemannian setting.
Here the crucial result is Proposition~\ref{viceversaprop} which, together with Theorem~\ref{AKSmethod} and Remark~\ref{conformalchange}, allows us to assume, without loss of generality, that the distance function from the boundary in the Riemannian case has the same regularity as in the Euclidean case. Another important technical result in this section is Proposition~\ref{normalderivativelemma}. Finally, in Section~\ref{decaysec}, we state and prove our main results, the decay estimates contained in Theorems~\ref{mainthm} and \ref{mainthmbis} and Corollary~\ref{maincor}. \subsubsection*{Acknowledgements} The authors are partly supported by GNAMPA, INdAM, through 2018 and 2019 projects. The authors wish to thank Eric Bonnetier for bringing reference \cite{BLS} to their attention. \section{Preliminaries}\label{prelsec} Throughout the paper the integer $N\geq 2$ will denote the space dimension. For any (column) vectors $v$, $w\in\mathbb{R}^N$, $\langle v,w\rangle = v^T w$ denotes the usual scalar product on $\mathbb{R}^N$. Here, and in the sequel, for any matrix $A$, $A^T$ denotes its transpose. For any $x=(x_1,\ldots,x_N)\in\mathbb{R}^N$, we denote $x=(x',x_N)\in\mathbb{R}^{N-1}\times \mathbb{R}$. We let $e_i$, $i=1,\ldots,N$, be the vectors of the canonical basis and we call $\pi'$ the projection onto the first $(N-1)$ components and $\pi_N$ the projection onto the last one, namely, for any $x\in\mathbb{R}^N$, $$\pi'(x)=x'=(x_1,\ldots,x_{N-1})\quad\text{and}\quad\pi_N(x)=x_N.$$ For any $s>0$ and any $x\in\mathbb{R}^N$, $B_s(x)$ denotes the open ball contained in $\mathbb{R}^N$ with radius $s$ and center $x$, whereas $B'_s(x')$ denotes the open ball contained in $\mathbb{R}^{N-1}$ with radius $s$ and center $x'$. Finally, for any $E\subset \mathbb{R}^N$, we denote $B_s(E)=\bigcup_{x\in E}B_s(x)$. For any Borel set $E\subset\mathbb{R}^N$ we let $|E|=\mathcal{L}^N(E)$. We call $\mathbb{M}^{N\times N}_{\mathrm{sym}}(\mathbb{R})$ the space of real-valued $N\times N$ symmetric matrices, and by $I_N$ we denote the $N\times N$ identity matrix. We recall that we omit the dependence of any constant on the space dimension $N$. \subsection{Regular domains and the distance from the boundary} \begin{defin} Let $\Omega\subset\mathbb{R}^N$ be a bounded open set. Let $k$ be a nonnegative integer and $0\leq\alpha\leq 1$. We say that $\Omega$ is of class $C^{k,\alpha}$ if for any $x\in\partial\Omega$ there exist a $C^{k,\alpha}$ function $\phi_x:\mathbb{R}^{N-1}\to\mathbb{R}$ and a neighbourhood $U_x$ of $x$ such that for any $y\in U_x$ we have, up to a rigid transformation depending on $x$, $$y=(y',y_N)\in\Omega\quad \text{if and only if}\quad y_N<\phi_x(y').$$ We also say that $\Omega$ is of class $C^{k,\alpha}$ with positive constants $r$ and $L$ if for any $x\in\partial\Omega$ we can choose $U_x=B_r(x)$ and $\phi_x$ such that $\|\phi_x\|_{C^{k,\alpha}(\mathbb{R}^{N-1})}\leq L$. \end{defin} \begin{oss} If $\Omega\subset\mathbb{R}^N$, a bounded open set, is of class $C^{k,\alpha}$, then there exist positive constants $r$ and $L$ such that $\Omega$ is of class $C^{k,\alpha}$ with constants $r$ and $L$, with the further condition, when $k\geq 1$, that for any $x\in\partial\Omega$ we have $\nabla\phi_x (x')=0$. \end{oss} We note that a bounded open set of class $C^{0,1}$ is said to be of \emph{Lipschitz class} and that typically one assumes at least that $k+\alpha\geq 1$. \begin{defin}\label{Eucldistfun} Let $\Omega\subset\mathbb{R}^N$ be a bounded open set.
For any $x\in \mathbb{R}^N$, its distance from the boundary of $\Omega$ is $$\mathrm{dist}(x,\partial\Omega)=\inf_{y\in\partial\Omega}\|x-y\|=\min_{y\in\partial\Omega}\|x-y\|.$$ We call $\varphi:\mathbb{R}^N\to\mathbb{R}$ the \emph{signed distance function} from the boundary of $\Omega$, defined as follows. For any $x\in \mathbb{R}^N$, $$\varphi(x)=\left\{\begin{array}{ll}\mathrm{dist}(x,\partial\Omega) &\text{if }x\in\overline{\Omega},\\ -\mathrm{dist}(x,\partial\Omega)&\text{otherwise}. \end{array}\right.$$ We call, for any $d\in\mathbb{R}$, $$\Omega^d=\{x\in\mathbb{R}^N:\ \varphi(x)>d\}\quad\text{and} \quad\partial\Omega^d=\{x\in\mathbb{R}^N:\ \varphi(x)=d\}.$$ Finally, for any $d>0$, we call $$U^d=\{x\in\overline{\Omega}:\ \varphi(x)<d\}.$$ \end{defin} The regularity of the signed distance function from the boundary has been thoroughly investigated in \cite{Del-Zol}. Here we are interested in particular in the case of bounded open sets of class $C^{1,1}$, which is treated in \cite[Theorem~5.7]{Del-Zol}. Namely, the following result holds true. \begin{teo}\label{Del-Zolthm} Let us fix positive constants $R$, $r$ and $L$. Let $\Omega\subset B_R(0)\subset\mathbb{R}^N$ be a bounded open set of class $C^{1,1}$ with constants $r$ and $L$. Then there exists $\tilde{d}_0>0$, depending on $r$ and $L$ only, such that, if we call $U=\{x\in \mathbb{R}^N:\ |\varphi(x)|<\tilde{d}_0\}$, for any $x\in U$ there exists a unique $y=P_{\partial\Omega}(x)\in \partial\Omega$ such that $$\|x-P_{\partial\Omega}(x)\|=\mathrm{dist}(x,\partial\Omega).$$ Moreover, $\varphi$ is differentiable everywhere in $U$ and we have \begin{equation}\label{varphiproper} (\nabla\varphi(x))^T=-\nu(P_{\partial\Omega}(x))\text{ for any }x\in U, \end{equation} where $\nu$ denotes the exterior normal to $\Omega$, which we assume to be a column vector. In particular, $$\|\nabla\varphi\|=1\text{ in }U.$$ Finally, we have that $P_{\partial\Omega}\in C^{0,1}(U)$, with $C^{0,1}$ norm bounded by $r$, $L$ and $R$ only, and, through \eqref{varphiproper}, we also have that $\varphi\in C^{1,1}(U)$, with $C^{1,1}$ norm bounded by $r$, $L$ and $R$ only. \end{teo} \proof{.} The result easily follows by the arguments of the proof of \cite[Theorem~5.7]{Del-Zol}.\hfill$\square$ \smallskip Let us note that, under the assumptions of Theorem~\ref{Del-Zolthm}, for any $0\leq |d|<\tilde{d}_0$, we have that $\Omega^d$ is a bounded open set of class $C^{1,1}$ and $\partial(\Omega^d)=\partial\Omega^d$. Moreover, for any $x\in \partial(\Omega^d)$, if $\nu(x)$ denotes the exterior normal to $\Omega^d$, then $$(\nabla\varphi(x))^T=-\nu(x)=-\nu(P_{\partial\Omega}(x)).$$ \begin{defin}\label{Omega_d} Let $\Omega\subset\mathbb{R}^N$ be a bounded open set. We say that $A=A(x)\in \mathbb{M}^{N\times N}_{\mathrm{sym}}(\mathbb{R})$, $x\in\Omega$, is a \emph{symmetric tensor} in $\Omega$ if $A\in L^{\infty}(\Omega,\mathbb{M}^{N\times N}_{\mathrm{sym}}(\mathbb{R}))$. We say that a symmetric tensor $A$ in $\Omega$ is \emph{Lipschitz} if $A\in C^{0,1}(\overline{\Omega},\mathbb{M}^{N\times N}_{\mathrm{sym}}(\mathbb{R}))$, and that a symmetric tensor $A$ in $\Omega$ is \emph{uniformly elliptic} with constant $\lambda$, $0<\lambda<1$, if $$\lambda\|\xi\|^2\leq \langle A(x)\xi,\xi \rangle\leq \lambda^{-1}\|\xi\|^2\quad\text{for almost any }x\in\Omega\text{ and any }\xi\in\mathbb{R}^N.$$ \end{defin} If $\Omega$ is of class $C^{1,1}$ and $A$ is a Lipschitz symmetric tensor, we can extend $A$ outside $\Omega$ keeping it Lipschitz and, when applicable, uniformly elliptic as well. Namely, we have the following.
\begin{prop}\label{extensionprop} Let us fix positive constants $R$, $r$ and $L$. Let $\Omega\subset B_R(0)\subset\mathbb{R}^N$ be a bounded open set of class $C^{1,1}$ with constants $r$ and $L$. Let $A$ be a Lipschitz symmetric tensor in $\Omega$. Then there exists a Lipschitz symmetric tensor $\tilde{A}$ in $\mathbb{R}^N$ such that $$\tilde{A}=A\text{ in }\Omega\quad\text{and}\quad \tilde{A}=I_N\text{ outside }B_{R+1}(0).$$ Moreover, the $C^{0,1}$ norm of $\tilde{A}$ on $\mathbb{R}^N$ depends on $r$, $L$, $R$ and the $C^{0,1}$ norm of $A$ on $\overline{\Omega}$. Finally, if $A$ is uniformly elliptic with constant $\lambda$, also $\tilde{A}$ is uniformly elliptic with the same constant $\lambda$. \end{prop} \proof{.} We sketch the idea of the construction. We pick $\tilde{d}_0$ and $U$ as in Theorem~\ref{Del-Zolthm} and we first extend $A$ in $\Omega\cup U$ as follows. We define, for any $x\in \Omega\cup U$, $$\tilde{A}(x)=\left\{\begin{array}{ll} A(x) &\text{if }x\in\overline{\Omega}\\ A(P_{\partial\Omega}(x))&\text{if }x\in U\backslash\overline{\Omega}. \end{array} \right. $$ Then we fix a cutoff function $\chi\in C^{\infty}(\mathbb{R})$ such that $\chi$ is increasing, $\chi(t)=0$ for any $t\leq -3\tilde{d}_0/4$ and $\chi(t)=1$ for any $t\geq 0$. We extend $\tilde{A}$ all over $\mathbb{R}^N$ as follows. We define, for any $x\in \mathbb{R}^N$, $$\tilde{A}(x)=\chi(\varphi(x))\tilde{A}(x)+(1-\chi(\varphi(x)))I_N.$$ It is not difficult to check, with the help of Theorem~\ref{Del-Zolthm}, that such an extension satisfies the required properties.\hfill$\square$ \smallskip \subsection{Riemannian manifolds} Let us consider the following definition of a Riemannian manifold $M$. \begin{defin}\label{manifold} Let $\Omega\subset \mathbb{R}^N$ be a bounded open set of class $C^{1,1}$. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$, $0<\lambda<1$. For any $x\in\overline{\Omega}$, we denote as usual by $g_{i,j}(x)$ the elements of $G(x)$ and by $g^{i,j}(x)$ the elements of $G^{-1}(x)$, the inverse matrix of $G(x)$. Finally, we set $g(x)=|\det (G(x))|$. We call $M$ the Riemannian manifold obtained by endowing $\overline{\Omega}$ with the Lipschitz Riemannian metric whose tensor is given at any $x\in\overline{\Omega}$ by $g_{i,j}(x)dx_i\otimes dx_j$. We finally say that $G$ is a scalar metric if $G=\theta I_N$ with $\theta \in C^{0,1}(\overline{\Omega})$, that is, $g_{i,j}=\theta\delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta. \end{defin} We recall the basic notation and properties of the Riemannian manifold $M$. At any point $x\in\overline{\Omega}$, given any two (column) vectors $v$ and $w$, we denote $$\langle v,w\rangle_M=\langle G(x)v,w \rangle$$ and, consequently, $$\|v\|_M=\sqrt{\langle v,v\rangle_M}=\sqrt{\langle G(x)v,v\rangle}.$$ Clearly we have $$ \sqrt{\lambda}\|v\|\leq\|v \|_M\leq \sqrt{\lambda^{-1}}\|v \|. $$ For any $u\in L^1(\Omega)$, we have $$\int_{\Omega}u(x)\, d_M(x)=\int_{\Omega}u(x)\sqrt{g(x)}\, dx.$$ If $h\in L^1(\partial\Omega)$, with respect to the surface measure $d\sigma$, that is, with respect to the $(N-1)$-dimensional Hausdorff measure, then $$\int_{\partial \Omega}h(x)\, d\sigma_M(x)=\int_{\partial\Omega}h(x)\frac{\sqrt{g(x)}}{\alpha(x)}\, d\sigma(x),$$ where, for any $x\in\partial\Omega$, $$\alpha(x)=\frac1{\sqrt{\langle G^{-1}(x)\nu(x),\nu(x)\rangle}},$$ $\nu(x)$ being the outer normal to the boundary. 
We call $\nu_M(x)=\alpha(x) G^{-1}(x)\nu(x)$, which is the outer normal to the boundary with respect to the Riemannian metric. In fact, $\|\nu_M(x)\|_M=1$ and $\langle\tau,\nu_M(x)\rangle_M=0$ for any vector $\tau$ which is tangent to $\partial\Omega$ at the point $x$. At almost every $x\in\Omega$, the intrinsic gradient of a function $u\in W^{1,1}(\Omega)$ is defined by $$\nabla_Mu(x)=\nabla u(x)G^{-1}(x)=g^{i,j}(x)\frac{\partial u}{\partial x_i}(x)e_j,$$ where we used the summation convention. Let us note that, for any (column) vector $v$, $$\nabla u(x) v=\langle (\nabla u(x))^T,v\rangle= \langle (\nabla_M u(x))^T,v\rangle_M.$$ Therefore, \begin{multline}\label{gradcomp} \|\nabla_M u(x)\|_M^2=\langle (\nabla_M u(x))^T,(\nabla_M u(x))^T\rangle_M\\= \langle (\nabla u(x))^T,(\nabla_M u(x))^T\rangle= \langle (\nabla u(x))^T,G^{-1}(x)(\nabla u(x))^T\rangle. \end{multline} Consequently, \begin{equation}\label{normcomparison} \sqrt{\lambda}\|\nabla u(x)\|\leq\|\nabla_M u(x)\|_M\leq \sqrt{\lambda^{-1}}\|\nabla u(x)\|. \end{equation} The intrinsic divergence of a vector field $X\in W^{1,1}(\Omega,\mathbb{R}^N)$ is defined, for almost every $x\in\Omega$, by $$\mathrm{div}_M X(x)=\frac1{\sqrt{g(x)}}\mathrm{div}(\sqrt{g}X)(x).$$ For $X\in W^{1,1}(\Omega,\mathbb{R}^N)$, we have $$\int_{\Omega}\mathrm{div}_M X(x)\, d_M(x)=\int_{\partial\Omega}\langle X(x),\nu_M(x)\rangle_M\, d\sigma_M(x).$$ Moreover, if $X\in W^{1,2}(\Omega,\mathbb{R}^N)$ and $\psi\in W^{1,2}(\Omega)$, we have that $$\mathrm{div}_M(X\psi)=\frac1{\sqrt{g}}\mathrm{div}(\sqrt{g}X)\psi+\nabla\psi X= \mathrm{div}_M(X)\psi+\langle (\nabla_M \psi(x))^T,X\rangle_M.$$ Finally, the following version of the coarea formula holds true. Let $\varphi\in C^{1}(\overline{\Omega})$ be such that $\nabla \varphi\neq 0$ everywhere. Then for any $u\in L^1(\Omega)$, we have $$\int_{\Omega}u(x)\, d_M(x)=\int_{\mathbb{R}}\left(\int_{\{x\in\Omega:\ \varphi(x)=t\}}\frac{u(x)}{\|\nabla_M\varphi(x)\|_M}\, d\sigma_M(x)\right)\, dt.$$ We call $\Gamma=\{\gamma:[0,1]\to \overline{\Omega}:\ \gamma\text{ is piecewise }C^1\}$. For any curve $\gamma\in \Gamma$, we denote its Euclidean length by $\mathrm{length}(\gamma)=\int_0^1\|\gamma'(t)\|\,dt$ and, analogously, its Riemannian length by $$\mathrm{length}_M(\gamma)=\int_0^1\|\gamma'(t)\|_M\,dt.$$ We have that $$\sqrt{\lambda}\,\mathrm{length}(\gamma) \leq \mathrm{length}_M(\gamma)\leq \sqrt{\lambda^{-1}}\,\mathrm{length}(\gamma).$$ For any $x$ and $y\in\overline{\Omega}$, we call $\Gamma(x,y)=\{\gamma\in\Gamma:\ \gamma(0)=x\text{ and }\gamma(1)=y\}$ and define $$d(x,y)=\inf_{\gamma\in \Gamma(x,y)}\, \mathrm{length}(\gamma)\quad\text{and}\quad d_M(x,y)=\inf_{\gamma\in \Gamma(x,y)}\, \mathrm{length}_M(\gamma).$$ Clearly $$\sqrt{\lambda}\, d(x,y) \leq d_M(x,y)\leq \sqrt{\lambda^{-1}}\, d(x,y),$$ whereas \begin{equation}\label{lipd} \|x-y\|\leq d(x,y)\leq C(\Omega)\|x-y\|, \end{equation} where $C(\Omega)$ is a constant depending on $\Omega$ only. If $\Omega$ satisfies the assumptions of Theorem~\ref{Del-Zolthm}, then $C(\Omega)$ depends on $r$, $L$ and $R$ only. We finally define the distance from the boundary in the Riemannian case, through the function $\varphi_M:\overline{\Omega}\to\mathbb{R}$ defined as follows.
For any $x\in\overline{\Omega}$, $$\varphi_M(x)=\mathrm{dist}_M(x,\partial\Omega)=\inf_{y\in\partial\Omega}d_M(x,y)=\min_{y\in\partial\Omega}d_M(x,y).$$ We observe that $\varphi$, the distance from the boundary in the Euclidean case that was defined in Definition~\ref{Eucldistfun}, satisfies $$\varphi(x)=\mathrm{dist}(x,\partial\Omega)=\inf_{y\in\partial\Omega}d(x,y)=\min_{y\in\partial\Omega}d(x,y)\quad\text{for any }x\in\overline{\Omega}$$ and, consequently, $$\sqrt{\lambda}\, \varphi(x) \leq \varphi_M(x)\leq \sqrt{\lambda^{-1}}\, \varphi(x)\quad\text{for any }x\in\overline{\Omega}.$$ As in the Euclidean case, we adopt the following notation. For any $d\geq 0$, we define $$\Omega_M^d=\{x\in\overline{\Omega}:\ \varphi_M(x)>d\}\quad\text{and}\quad \partial\Omega_M^d=\{x\in\overline{\Omega}:\ \varphi_M(x)=d\}.$$ Moreover, when $d>0$, we call $$U_M^d=\{x\in\overline{\Omega}:\ \varphi_M(x)<d\}.$$ We recall that Theorem~\ref{Del-Zolthm}, which easily follows from \cite[Theorem~5.7]{Del-Zol}, contains the regularity properties of $\varphi$, the (signed) distance function from the boundary in the Euclidean case. For the Riemannian metric, a corresponding regularity result for $\varphi_M$ is not easy to prove. We recall that fine regularity properties of the distance function from a general subset in a Riemannian manifold have been studied in \cite{Man-Men}. In the next Section~\ref{distsec}, we study the properties of the distance function from the boundary in the Riemannian case. \subsection{Definitions of frequencies of boundary data} Let $\Omega\subset\mathbb{R}^N$ be a bounded Lipschitz domain. By domain we mean, as usual, an open and connected set. We define the space of traces of $H^1(\Omega)$ functions on $\partial\Omega$ as $$H^{1/2}(\partial\Omega)=\{f=u|_{\partial\Omega}:\ u\in H^1(\Omega)\}.$$ We recall that $H^{1/2}(\partial\Omega)\subset L^2(\partial\Omega)$, with compact immersion. By Poincar\'e inequality, an equivalent norm for $H^{1/2}(\partial\Omega)$, which we always adopt for simplicity, is given by the following \begin{equation}\label{1/2norm} \|f\|^2_{H^{1/2}(\partial\Omega)}=\|f\|^2_{L^2(\partial\Omega)}+|f|^2_{H^{1/2}(\partial\Omega)}, \end{equation} where the seminorm is given by \begin{equation}\label{1/2seminorm} |f|^2_{H^{1/2}(\partial\Omega)}=\int_{\Omega}\|\nabla u_0(x)\|^2\, dx \end{equation} where $u_0\in H^1(\Omega)$ is the weak solution to the following Dirichlet boundary value problem for the Laplace equation \begin{equation}\label{Dirichlet0} \left\{\begin{array}{ll} \Delta u_0=0 & \text{in }\Omega\\ u_0=f & \text{on }\partial\Omega. \end{array}\right. \end{equation} \begin{defin}\label{frequencydefin} We call \emph{frequency} of a function $f\in H^{1/2}(\partial\Omega)$, with $f\neq 0$, the following quotient \begin{equation}\label{frequency} \mathrm{frequency}(f)=\frac{|f|^2_{H^{1/2}(\partial\Omega)}}{\|f\|^2_{L^2(\partial\Omega)}}\quad\text{for any }f\in H^{1/2}(\partial\Omega),\ f\neq 0. 
\end{equation} \end{defin} \smallskip We denote $L_{\ast}^2(\partial\Omega)=\{\psi\in L^2(\partial\Omega):\ \int_{\partial\Omega}\psi\, d\sigma=0\}$ and $$H_{\ast}^{1/2}(\partial\Omega)=\left\{f\in H^{1/2}(\partial\Omega):\ \int_{\partial\Omega}f\, d\sigma=0\right\}.$$ We call $H^{-1/2}(\partial\Omega)$ the dual to $H^{1/2}(\partial\Omega)$ and $$H_{\ast}^{-1/2}(\partial\Omega)=\{\eta\in H^{-1/2}(\partial\Omega):\ \langle \eta,1\rangle_{-1/2,1/2}=0\}.$$ By $\langle\cdot,\cdot\rangle_{-1/2,1/2}$ we denote the duality between $H^{-1/2}(\partial\Omega)$ and $H^{1/2}(\partial\Omega)$. By Poincar\'e inequality, we have that $$\|f\|_{H_{\ast}^{1/2}(\partial\Omega)}=|f|_{H^{1/2}(\partial\Omega)} \quad\text{for any }f\in H_{\ast}^{1/2}(\partial\Omega)$$ is an equivalent norm for $H_{\ast}^{1/2}(\partial\Omega)$ and, analogously, $$ \|\eta\|_{H_{\ast}^{-1/2}(\partial\Omega)}=\sup_{\|\psi\|_{H^{1/2}_{\ast}(\partial\Omega)}=1}\langle\eta,\psi\rangle_{-1/2,1/2} \quad \text{for any }\eta\in H_{\ast}^{-1/2}(\partial\Omega) $$ is an equivalent norm for $H_{\ast}^{-1/2}(\partial\Omega)$. We observe that any $\eta\in L^2(\partial\Omega)$ is considered as an element of $H^{-1/2}(\partial\Omega)$ by setting \begin{equation}\label{L2H-12} \langle \eta,\psi\rangle_{-1/2,1/2}=\int_{\partial\Omega}\eta\psi\, d\sigma\quad\text{for any }\psi\in H^{1/2}(\partial\Omega). \end{equation} Moreover, if $\eta\in L_{\ast}^2(\partial\Omega)$ then $\eta\in H_{\ast}^{-1/2}(\partial\Omega)$. It is important to note that here, and in the definitions of $L^2(\partial\Omega)$ and $L_{\ast}^2(\partial\Omega)$, we use the usual $(N-1)$-dimensional Hausdorff measure on $\partial\Omega$. In the sequel we adopt the same convention even if $\Omega$ is endowed with a Riemannian metric $G$ which is different from the Euclidean one. This simplifies the treatment of certain changes of variables for the Neumann problem or for the Steklov eigenvalue problem, see Remark~\ref{Neumannpointwiseremark}. \begin{defin}\label{frequencydefinbis} We call \emph{lower frequency} of a function $f\in H^{1/2}_{\ast}(\partial\Omega)$, with $f\neq 0$, the following quotient \begin{equation}\label{frequencybis} \mathrm{lowfrequency}(f)=\frac{\|f\|^2_{L^2(\partial\Omega)}}{\|f\|^2_{H^{-1/2}_{\ast}(\partial\Omega)}}\quad\text{for any }f\in H^{1/2}_{\ast}(\partial\Omega),\ f\neq 0. \end{equation} Here $$\|f\|_{H^{-1/2}_{\ast}(\partial\Omega)}=\sup_{\|\psi\|_{H^{1/2}_{\ast}(\partial\Omega)}=1}\int_{\partial\Omega}f\psi\, d\sigma =\sup_{|\psi|_{H^{1/2}(\partial\Omega)}=1}\int_{\partial\Omega}f\psi\, d\sigma .$$ \end{defin} \smallskip From this definition, we immediately infer that, for any $f\in H^{1/2}_{\ast}(\partial\Omega)$, with $f\neq 0$, we have $$\|f\|^4_{L^2(\partial\Omega)}\leq \|f\|^2_{H^{-1/2}_{\ast}(\partial\Omega)}\|f\|^2_{H^{1/2}_{\ast}(\partial\Omega)}= \|f\|^2_{H^{-1/2}_{\ast}(\partial\Omega)}|f|^2_{H^{1/2}(\partial\Omega)} $$ hence \begin{equation}\label{lowfrvsfr} \mathrm{lowfrequency}(f)\leq \mathrm{frequency}(f)\quad\text{for any }f\in H^{1/2}_{\ast}(\partial\Omega),\ f\neq 0. \end{equation} \subsection{Boundary value problems for elliptic equations} Let $\Omega\subset\mathbb{R}^N$ be a bounded Lipschitz domain. We consider Dirichlet and Neumann problems in $\Omega$ for elliptic equations in divergence form, in the Euclidean and in the Riemannian setting. Let $A=A(x)$ be a \emph{conductivity tensor} in $\Omega$, that is, $A$ is a symmetric tensor in $\Omega$ which is uniformly elliptic with some constant $\lambda_1$, $0<\lambda_1<1$. 
\subsection{Boundary value problems for elliptic equations}

Let $\Omega\subset\mathbb{R}^N$ be a bounded Lipschitz domain. We consider Dirichlet and Neumann problems in $\Omega$ for elliptic equations in divergence form, in the Euclidean and in the Riemannian setting. Let $A=A(x)$ be a \emph{conductivity tensor} in $\Omega$, that is, $A$ is a symmetric tensor in $\Omega$ which is uniformly elliptic with some constant $\lambda_1$, $0<\lambda_1<1$. If $A=\gamma I_N$, where $\gamma\in L^{\infty}(\Omega)$ satisfies $$\lambda_1 \leq \gamma(x)\leq \lambda_1^{-1}\quad\text{for a.e. }x\in\Omega,$$ we say that $A$ (or $\gamma$) is a \emph{scalar conductivity}. We say that a conductivity tensor $A$ is \emph{Lipschitz} if $A$ is a Lipschitz symmetric tensor. Analogously, $A$ (or $\gamma$) is a \emph{Lipschitz} scalar conductivity if $\gamma\in C^{0,1}(\overline{\Omega})$. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$, $0<\lambda<1$, and let $M$ be the corresponding Riemannian manifold on $\overline{\Omega}$ as in Definition~\ref{manifold}. In this subsection we adopt the following assumption. \begin{assum}\label{scalarassum} We assume that either $A$ is a scalar conductivity tensor, that is, $A=\gamma I_N$ with $\gamma\in L^{\infty}(\Omega)$, or $G$ is a scalar metric, that is, $G=\theta I_N$ with $\theta\in C^{0,1}(\overline{\Omega})$. \end{assum} For any $f\in H^{1/2}(\partial\Omega)$, let $u\in H^1(\Omega)$ be the weak solution to the Dirichlet boundary value problem \begin{equation}\label{Dirichlet} \left\{\begin{array}{ll} \mathrm{div}_M(A\nabla_M u)=0 & \text{in }\Omega\\ u=f & \text{on }\partial\Omega. \end{array}\right. \end{equation} We recall that $u\in H^1(\Omega)$ solves \eqref{Dirichlet} if $u=f$ on $\partial\Omega$ in the trace sense and $$\int_{\Omega}\langle A(x)(\nabla_M u(x))^T,(\nabla_M \psi(x))^T\rangle_M\, d_M(x)=0\quad\text{for any }\psi\in H^1_0(\Omega).$$ For the sake of simplicity, we sometimes drop the transpose in the sequel, considering, with a small abuse of notation, the gradient as a column vector. The following remark holds true. \begin{oss}\label{freqrem} Let $u$ and $u_0$ be the solutions to \eqref{Dirichlet} and \eqref{Dirichlet0}, respectively. Then there exists a constant $c_1$, $0<c_1<1$, depending on $\lambda$ and $\lambda_1$ only, such that \begin{equation}\label{frequencyvsintegral} c_1 \int_{\Omega}\|\nabla u_0(x)\|^2\, dx\leq \int_{\Omega}\langle A(x)\nabla_M u(x),\nabla_M u(x)\rangle_M\, d_M(x)\leq c_1^{-1} \int_{\Omega}\|\nabla u_0(x)\|^2\, dx. \end{equation} In fact, on the one hand, by the uniform ellipticity of $A$ and $G$ and by the Dirichlet principle for $u_0$, $$\int_{\Omega}\langle A(x)\nabla_M u(x),\nabla_M u(x)\rangle_M\, d_M(x)\geq c_1 \int_{\Omega}\|\nabla u(x)\|^2\, dx\geq c_1 \int_{\Omega}\|\nabla u_0(x)\|^2\, dx.$$ On the other hand, since $u$ minimizes the corresponding energy among $H^1(\Omega)$ functions with trace $f$ on $\partial\Omega$, we have \begin{multline*} \int_{\Omega}\langle A(x)\nabla_M u(x),\nabla_M u(x)\rangle_M\, d_M(x)\leq \int_{\Omega}\langle A(x)\nabla_M u_0(x),\nabla_M u_0(x)\rangle_M\, d_M(x)\\\leq c_1^{-1}\int_{\Omega}\|\nabla u_0(x)\|^2\, dx. \end{multline*} \end{oss} As a consequence of Remark~\ref{freqrem}, we can define an equivalent $H^{1/2}(\partial\Omega)$ norm and seminorm which are given by, for any $f\in H^{1/2}(\partial\Omega)$, \begin{equation} |f|^2_{H^{1/2}_A(\partial\Omega)}=\int_{\Omega}\langle A(x)\nabla_M u(x),\nabla_M u(x)\rangle_M\, d_M(x), \end{equation} where $u$ solves \eqref{Dirichlet}, and \begin{equation} \|f\|^2_{H^{1/2}_A(\partial\Omega)}=\|f\|^2_{L^2(\partial\Omega)}+|f|^2_{H^{1/2}_A(\partial\Omega)}. \end{equation} We can also define an equivalent $H^{-1/2}(\partial\Omega)$ norm given by, for any $\eta\in H^{-1/2}(\partial\Omega)$, $$\|\eta\|_{H^{-1/2}_A(\partial\Omega)}=\sup_{\|\psi\|_{H^{1/2}_A(\partial\Omega)}=1}\langle \eta,\psi\rangle_{-1/2,1/2}.$$ We note that here we drop the dependence on the metric $M$, although the seminorm, and thus the norms as well, clearly also depend on it.
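As a simple consistency check, which we sketch here for illustration only, in the Euclidean scalar case $G=I_N$ and $A=\gamma I_N$, with $\lambda_1\leq\gamma\leq\lambda_1^{-1}$ a.e.\ in $\Omega$, the argument of Remark~\ref{freqrem} shows that \eqref{frequencyvsintegral} holds with $c_1=\lambda_1$. In fact, $$\int_{\Omega}\gamma\|\nabla u\|^2\, dx\geq\lambda_1\int_{\Omega}\|\nabla u\|^2\, dx\geq \lambda_1\int_{\Omega}\|\nabla u_0\|^2\, dx$$ by the Dirichlet principle for $u_0$, while $$\int_{\Omega}\gamma\|\nabla u\|^2\, dx\leq \int_{\Omega}\gamma\|\nabla u_0\|^2\, dx\leq \lambda_1^{-1}\int_{\Omega}\|\nabla u_0\|^2\, dx$$ since $u$ minimizes the weighted energy. In this case the seminorms $|\cdot|_{H^{1/2}_A(\partial\Omega)}$ and $|\cdot|_{H^{1/2}(\partial\Omega)}$ are equivalent with constant $\lambda_1$.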
Analogously, $$\|f\|_{H_{\ast,A}^{1/2}(\partial\Omega)}=|f|_{H^{1/2}_A(\partial\Omega)} \quad\text{for any }f\in H_{\ast}^{1/2}(\partial\Omega)$$ is an equivalent norm for $H_{\ast}^{1/2}(\partial\Omega)$ and $$ \|\eta\|_{H_{\ast,A}^{-1/2}(\partial\Omega)}=\sup_{\|\psi\|_{H^{1/2}_{\ast,A}(\partial\Omega)}=1}\langle\eta,\psi\rangle_{-1/2,1/2} \quad\text{for any }\eta\in H_{\ast}^{-1/2}(\partial\Omega) $$ is an equivalent norm for $H_{\ast}^{-1/2}(\partial\Omega)$. For any $\eta\in H_{\ast}^{-1/2}(\partial\Omega)$, let $v\in H^1(\Omega)$ be the solution to the Neumann boundary value problem \begin{equation}\label{Neumann} \left\{\begin{array}{ll} \mathrm{div}_M(A\nabla_M v)=0 & \text{in }\Omega\\ \langle A\nabla_M v,\nu_M\rangle_M=\eta & \text{on }\partial\Omega\\ \int_{\partial\Omega}v\, d\sigma=0. & \end{array}\right. \end{equation} By a solution we mean $v\in H^1(\Omega)$ such that $v|_{\partial\Omega}\in H_{\ast}^{1/2}(\partial\Omega)$ and $$\int_{\Omega} \langle A\nabla_M v,\nabla_M \psi\rangle_M\, d_M=\langle \eta,\psi|_{\partial\Omega}\rangle_{-1/2,1/2}\quad\text{for any }\psi\in H^1(\Omega).$$ We also note that, for simplicity and by a slight abuse of notation, we denote $A v_{\nu_M}=\langle A\nabla_M v,\nu_M\rangle_M$. Such a notation is actually correct when $A=\gamma I_N$ is a scalar conductivity. In fact, in this case, $$A v_{\nu_M}=\gamma v_{\nu_M}$$ where $v_{\nu_M}$ is the (exterior) normal derivative of $v$ with respect to $\Omega$ which, in the Riemannian setting, is given by $$v_{\nu_M}=\langle(\nabla_Mv)^T,\nu_M\rangle_M=\nabla v\, \nu_M.$$ By the Poincar\'e inequality and the Lax-Milgram lemma, we have that there exists a unique solution both to \eqref{Dirichlet} and to \eqref{Neumann}. Moreover, there exists a constant $c_2$, $0<c_2<1$, depending on $\Omega$, $\lambda$ and $\lambda_1$ only, such that for any $f\in H^{1/2}(\partial\Omega)$ $$c_2\|f\|_{H^{1/2}(\partial\Omega)}\leq \|u\|_{H^1(\Omega)}\leq c_2^{-1}\|f\|_{H^{1/2}(\partial\Omega)}$$ and for any $\eta\in H_{\ast}^{-1/2}(\partial\Omega)$ $$c_2\|\eta\|_{H^{-1/2}(\partial\Omega)}\leq \|v\|_{H^1(\Omega)}\leq c_2^{-1}\|\eta\|_{H^{-1/2}(\partial\Omega)}.$$ If $\Omega\subset B_R(0)$ is Lipschitz with positive constants $r$ and $L$, the dependence of $c_2$ on $\Omega$ is just through the constants $r$, $L$ and $R$. Let $\Lambda:H^{1/2}(\partial\Omega)\to H_{\ast}^{-1/2}(\partial\Omega)$ be the linear operator such that $$\Lambda(f)=\langle A\nabla_M u,\nu_M\rangle_M\quad \text{for any }f\in H^{1/2}(\partial\Omega)$$ where $u$ solves \eqref{Dirichlet}. Here, we mean $$\langle \langle A\nabla_M u,\nu_M\rangle_M,\tilde{\psi}\rangle_{-1/2,1/2}=\int_{\Omega}\langle A\nabla_Mu,\nabla_M\psi\rangle_M\, d_M\quad\text{for any }\tilde{\psi}\in H^{1/2}(\partial\Omega),$$ where $\psi$ is any $H^1(\Omega)$ function such that $\psi|_{\partial\Omega}=\tilde{\psi}$. We infer that $\Lambda$ restricted to $H_{\ast}^{1/2}(\partial\Omega)$ is invertible and both $\Lambda$ and $\Lambda^{-1}:H_{\ast}^{-1/2}(\partial\Omega)\to H_{\ast}^{1/2}(\partial\Omega)$ are bounded operators with norms bounded by constants depending on $\Omega$, $\lambda$ and $\lambda_1$ only. As usual, we refer to $\Lambda$ as the Dirichlet-to-Neumann map and to $\Lambda^{-1}$ as the Neumann-to-Dirichlet map. We are interested in eigenvalues and eigenfunctions of the Dirichlet-to-Neumann map $\Lambda$, which coincide with the so-called Steklov eigenvalues and eigenfunctions.
Namely, we say that $\mu\in \mathbb{C}$ and $\phi\in L^2(\partial\Omega)$, with $\phi\neq 0$, are, respectively, a Steklov eigenvalue and a corresponding eigenfunction if there exists $w\in H^1(\Omega)$ such that $w=\phi$ on $\partial\Omega$ and $w$ satisfies \begin{equation}\label{Steklovequation} \left\{ \begin{array}{ll} \mathrm{div}_M(A\nabla_M w)= 0 & \text{in }\Omega\\ \langle A\nabla_M w,\nu_M\rangle_M =\mu w &\text{on }\partial\Omega, \end{array}\right. \end{equation} that is, $$\int_{\Omega} \langle A\nabla_M w,\nabla_M \psi\rangle_M\, d_M=\langle \mu w|_{\partial\Omega},\psi|_{\partial\Omega}\rangle_{-1/2,1/2}=\int_{\partial\Omega}\mu w\psi\, d\sigma \quad\text{for any }\psi\in H^1(\Omega).$$ In other words, $\phi$ satisfies $\Lambda(\phi)=\mu \phi$. Clearly \eqref{Steklovequation} is satisfied by $\mu=0$ and $w$ a constant function. It is well-known that the Steklov eigenvalues form an increasing sequence of real numbers $$0=\mu_0<\mu_1\leq \mu_2\leq \ldots \leq \mu_n\leq \ldots$$ such that $\lim_n\mu_n=+\infty$. For any $n\geq 0$, we can find a corresponding eigenfunction $\phi_n$, normalised in such a way that $\|\phi_n\|_{L^2(\partial\Omega)}=1$, such that $\{\phi_n\}_{n\geq 0}$ is an orthonormal basis of $L^2(\partial\Omega)$ and $\{\phi_n\}_{n\in\mathbb{N}}$ is an orthonormal basis of $L^2_{\ast}(\partial\Omega)$. Moreover, $\{\phi_n/\sqrt{1+\mu_n}\}_{n\geq 0}$ and $\{\phi_n/\sqrt{1+\mu_n}\}_{n\in\mathbb{N}}$ are orthonormal bases of $H^{1/2}(\partial\Omega)$ and $H^{1/2}_{\ast}(\partial\Omega)$, respectively, with respect to the $H^{1/2}_A(\partial\Omega)$ norm. Finally, we set $\psi_n=\phi_n/\sqrt{\mu_n}$, $n\in\mathbb{N}$, and we note that $\{\psi_n\}_{n\in\mathbb{N}}$ is an orthonormal basis of $H^{1/2}_{\ast}(\partial\Omega)$ with respect to the $H^{1/2}_{\ast,A}(\partial\Omega)$ norm. If $\phi\in H_{\ast}^{1/2}(\partial\Omega)$ is a Steklov eigenfunction with eigenvalue $\mu$, and $w$ is the corresponding solution to \eqref{Steklovequation}, then $$\mu= \frac{\int_{\partial\Omega}\mu\phi^2\, d\sigma}{\int_{\partial\Omega}\phi^2\, d\sigma}= \frac{\int_{\Omega}\langle A\nabla_Mw,\nabla_Mw\rangle_M\, d_M}{\int_{\partial\Omega}\phi^2\, d\sigma},$$ hence by Remark~\ref{freqrem} we have, with the same constant $c_1$, \begin{equation}\label{Stekvsfreq} c_1\,\mathrm{frequency}(\phi)\leq \mu \leq c_1^{-1}\,\mathrm{frequency}(\phi). \end{equation} An important property of Steklov eigenfunctions is that their frequency and lower frequency are of the same order. In fact, for $\mu>0$ we have $\phi\in H^{1/2}_{\ast}(\partial\Omega)$ and, normalising $\phi$ so that $\int_{\partial\Omega}\phi^2\, d\sigma=1$, $$c_1|\phi|^2_{H^{1/2}(\partial\Omega)}\leq\|\phi\|^2_{H^{1/2}_{\ast,A}(\partial\Omega)}=\mu\leq c_1^{-1}|\phi|^2_{H^{1/2}(\partial\Omega)},$$ therefore $$c_1\|\phi\|^2_{H^{-1/2}_{\ast}(\partial\Omega)}\leq\|\phi\|^2_{H^{-1/2}_{\ast,A}(\partial\Omega)}=\mu^{-1}\leq c_1^{-1}\|\phi\|^2_{H^{-1/2}_{\ast}(\partial\Omega)},$$ and, finally, \begin{equation}\label{Stekvslowfreq} c_1\,\mathrm{lowfrequency}(\phi)\leq \mu \leq c_1^{-1}\,\mathrm{lowfrequency}(\phi). \end{equation}
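In the model case considered above, namely $\Omega=B_1(0)\subset\mathbb{R}^2$ with $G=I_2$ and $A=I_2$, the Steklov eigenvalues and eigenfunctions are classical and completely explicit: $\mu_0=0$ and, for any $n\in\mathbb{N}$, $\mu=n$ is a Steklov eigenvalue of multiplicity $2$, with eigenfunctions $\cos(n\theta)$ and $\sin(n\theta)$ and corresponding solutions $w=r^n\cos(n\theta)$ and $w=r^n\sin(n\theta)$, since $\partial_r(r^n\cos(n\theta))|_{r=1}=n\cos(n\theta)$. As in this case $c_1=1$, the estimates \eqref{Stekvsfreq} and \eqref{Stekvslowfreq} reduce to the identities $\mathrm{frequency}(\phi)=\mathrm{lowfrequency}(\phi)=\mu$, in accordance with the computation sketched after \eqref{lowfrvsfr}.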
Although their proofs are elementary, and actually quite similar, the next two remarks are crucial. \begin{oss}\label{conformalchange} Let $A$ be a conductivity tensor in $\Omega$ which is uniformly elliptic with some constant $\lambda_1$, $0<\lambda_1<1$. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$, $0<\lambda<1$, and let $M$ be the corresponding Riemannian manifold on $\overline{\Omega}$. Let Assumption~\ref{scalarassum} be satisfied. Let us take $\eta_1\in C^{0,1}(\overline{\Omega})$ such that $\lambda_1\leq \eta_1\leq\lambda^{-1}_1$ in $\overline{\Omega}$, for some constant $\lambda_1$, $0<\lambda_1<1$. Let us define $\tilde{G}=\eta_1 G$ and let us consider the Riemannian manifold $\tilde{M}$ obtained by endowing $\overline{\Omega}$ with the Lipschitz Riemannian metric given by $\tilde{G}$. We define $$\tilde{A}=\eta_1^{(2-N)/2}A,$$ and we note that $\tilde{A}=A$ if $N=2$. Then, for any $\psi_1$, $\psi_2\in H^1(\Omega)$ we have $$\int_{\Omega}\langle A \nabla_M\psi_1,\nabla_M\psi_2\rangle_M\, d_M= \int_{\Omega}\langle \tilde{A} \nabla_{\tilde{M}}\psi_1,\nabla_{\tilde{M}}\psi_2\rangle_{\tilde{M}}\, d_{\tilde{M}}.$$ In fact, $$\langle A(\nabla_M\psi_1)^T,(\nabla_M\psi_2)^T\rangle_M\, d_M=\langle \sqrt{g}\,AG^{-1}(\nabla\psi_1)^T,(\nabla\psi_2)^T\rangle\, dx,$$ and the tensor $\sqrt{g}\,AG^{-1}$ is left unchanged when $G$ is replaced by $\eta_1G$ and $A$ by $\eta_1^{(2-N)/2}A$, since $\sqrt{\tilde{g}}=\eta_1^{N/2}\sqrt{g}$ and $\tilde{G}^{-1}=\eta_1^{-1}G^{-1}$. \end{oss} The next remark shows that, under Assumption~\ref{scalarassum} and if $A$ is Lipschitz, we can always assume that the conductivity tensor is a scalar conductivity, up to changing the Riemannian metric. For example, this applies when $A$ is a Lipschitz conductivity tensor and the metric is the Euclidean one. Namely, we have the following. \begin{oss}\label{anisotropy} Let $A$ be a Lipschitz conductivity tensor in $\Omega$ which is uniformly elliptic with some constant $\lambda_1$, $0<\lambda_1<1$. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$, $0<\lambda<1$, and let $M$ be the corresponding Riemannian manifold on $\overline{\Omega}$. Let Assumption~\ref{scalarassum} be satisfied. We call $A_1=\sqrt{g}A G^{-1}$, which is symmetric thanks to Assumption~\ref{scalarassum}, and $\gamma_1=(\det{A_1})^{1/N}$, so that $A_1=\gamma_1\hat{A}_1$ with $\det{\hat{A}_1}\equiv 1$. If $N>2$, we define $\tilde{A}\equiv I_N$ and $$\tilde{G}=(\det(A_1))^{1/(N-2)}A_1^{-1}.$$ If $N=2$, we define $\tilde{A}\equiv \gamma_1 I_N$ and $$\tilde{G}=\hat{A}_1^{-1}.$$ Let us consider the Riemannian manifold $\tilde{M}$ obtained by endowing $\overline{\Omega}$ with the Lipschitz Riemannian metric given by $\tilde{G}$. Then, for any $\psi_1$, $\psi_2\in H^1(\Omega)$ we have $$\int_{\Omega}\langle A \nabla_M\psi_1,\nabla_M\psi_2\rangle_M\, d_M= \int_{\Omega}\langle \tilde{A} \nabla_{\tilde{M}}\psi_1,\nabla_{\tilde{M}}\psi_2\rangle_{\tilde{M}}\, d_{\tilde{M}}.$$ In fact, in both cases a direct computation shows that $\sqrt{\tilde{g}}\,\tilde{A}\tilde{G}^{-1}=A_1$. \end{oss} In both the case of Remark~\ref{conformalchange} and that of Remark~\ref{anisotropy}, we infer the following consequences. Fix $f\in H^{1/2}(\partial\Omega)$ and let $u$ be the solution to \eqref{Dirichlet}. Then $u$ solves \begin{equation}\label{Dirichletmod} \left\{\begin{array}{ll} \mathrm{div}_{\tilde{M}}(\tilde{A}\nabla_{\tilde{M}} u)=0 & \text{in }\Omega\\ u=f & \text{on }\partial\Omega. \end{array}\right. \end{equation} Analogously, fix $\eta\in H_{\ast}^{-1/2}(\partial\Omega)$ and let $v$ be the solution to \eqref{Neumann}. Then $v$ solves \begin{equation}\label{Neumannmod} \left\{\begin{array}{ll} \mathrm{div}_{\tilde{M}}(\tilde{A}\nabla_{\tilde{M}} v)=0 & \text{in }\Omega\\ \langle \tilde{A}\nabla_{\tilde{M}} v,\nu_{\tilde{M}}\rangle_{\tilde{M}}=\eta & \text{on }\partial\Omega\\ \int_{\partial\Omega}v\, d\sigma=0. & \end{array}\right. \end{equation} Finally, if $w$ solves \eqref{Steklovequation} for a constant $\mu$, then $w$ solves \begin{equation}\label{Steklovmod} \left\{\begin{array}{ll} \mathrm{div}_{\tilde{M}}(\tilde{A}\nabla_{\tilde{M}} w)=0 & \text{in }\Omega\\ \langle \tilde{A}\nabla_{\tilde{M}} w,\nu_{\tilde{M}}\rangle_{\tilde{M}}=\mu w & \text{on }\partial\Omega. \end{array}\right.
\end{equation} We conclude this section by investigating the regularity of the solutions to \eqref{Dirichlet}, \eqref{Neumann} and \eqref{Steklovequation}. We need stronger assumptions on the domain $\Omega$ and the conductivity tensor $A$. Namely, we assume the following until the end of the section. Let us fix positive constants $R$, $r$, $L$, $C_0$, $C_1$, $\lambda$ and $\lambda_1$, with $0<\lambda<1$ and $0<\lambda_1<1$. We refer to these constants as the \emph{a priori data}. Let $\Omega\subset B_R(0)\subset\mathbb{R}^N$ be a bounded domain of class $C^{1,1}$ with constants $r$ and $L$. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$ and such that $\|G\|_{C^{0,1}(\overline{\Omega})}\leq C_0$. Let $A$ be a Lipschitz conductivity tensor in $\Omega$ which is uniformly elliptic with constant $\lambda_1$ and such that $\|A\|_{C^{0,1}(\overline{\Omega})}\leq C_1$. We suppose that Assumption~\ref{scalarassum} holds. We note that, without loss of generality, through Remark~\ref{anisotropy}, we could just assume that $A$ is a scalar conductivity. The first remark is that, by standard regularity estimates for elliptic equations, if $u$ is any weak solution to $\mathrm{div}_M(A\nabla_M u)=0$ in $\Omega$, then $u\in H^2_{\mathrm{loc}}(\Omega)$ and the equation is satisfied pointwise almost everywhere in $\Omega$. Here we are interested in the conditions that guarantee that our solutions actually belong to $H^2(\Omega)$. We adopt the standard definition of $H^{3/2}(\partial\Omega)$, see for example \cite{G}, and by $H_{\ast}^{3/2}(\partial\Omega)$ we denote the elements of $H^{3/2}(\partial\Omega)$ with zero mean on $\partial\Omega$. Let $u$ be the solution to \eqref{Dirichlet} with boundary datum $f\in H^{1/2}(\partial\Omega)$ and $v$ the solution to \eqref{Neumann} with boundary datum $\eta\in H^{-1/2}_{\ast}(\partial\Omega)$. The following regularity properties hold true. \begin{prop}\label{regprop} There exists a positive constant $c_3$, $0<c_3<1$, depending on the a priori data only, such that for any $f\in H^{3/2}(\partial\Omega)$ \begin{equation}\label{aineq} c_3\|f\|_{H^{3/2}(\partial\Omega)}\leq \|u\|_{H^2(\Omega)}\leq c_3^{-1}\|f\|_{H^{3/2}(\partial\Omega)} \end{equation} and for any $\eta\in H_{\ast}^{1/2}(\partial\Omega)$ \begin{equation}\label{bineq} c_3\|\eta\|_{H^{1/2}(\partial\Omega)}\leq \|v\|_{H^2(\Omega)}\leq c_3^{-1}\|\eta\|_{H^{1/2}(\partial\Omega)}. \end{equation} In \eqref{bineq}, we can replace $\|\eta\|_{H^{1/2}(\partial\Omega)}$ with $\|\eta\|_{H^{1/2}_{\ast}(\partial\Omega)}$, $\|\eta\|_{H^{1/2}_A(\partial\Omega)}$ or $\|\eta\|_{H^{1/2}_{\ast,A}(\partial\Omega)}$. As a consequence, $\Lambda$ is bounded between $H_{\ast}^{3/2}(\partial\Omega)$ and $H_{\ast}^{1/2}(\partial\Omega)$, with a bounded inverse, and their norms are bounded by constants depending on the a priori data only. \end{prop} Before sketching the proof of this standard regularity result, we state the following important remark. \begin{oss}\label{Neumannpointwiseremark} Let $v\in H^2(\Omega)$ be a solution to $\mathrm{div}_M(A\nabla_M v)=0$ in $\Omega$. Then $\nabla v\in H^1(\Omega)$, therefore $\nabla v$ is well-defined, in the trace sense, on $\partial\Omega$. It follows that $A v_{\nu_M}$ is well-defined, for instance in $L^2(\partial\Omega)$.
Moreover, using integration by parts, we conclude that for any $\psi\in H^1(\Omega)$ we have $$\int_{\Omega} \langle A\nabla_M v,\nabla_M \psi\rangle_M\, d_M= \int_{\partial\Omega}A v_{\nu_M}\psi\, d\sigma_M=\int_{\partial\Omega}\eta\psi\, d\sigma$$ where \begin{equation}\label{Neumannpointwise} \eta= \frac{\sqrt{g}}{\alpha}A v_{\nu_M}= \sqrt{g}\langle A \nabla_M v,\nu\rangle. \end{equation} Therefore, in the Riemannian setting, the Neumann condition $$A v_{\nu_M}=\eta\quad\text{on }\partial\Omega$$ is in general not valid in a pointwise or $L^2$ sense, even when both $Av_{\nu_M}$ and $\eta$ are well-defined as $L^2(\partial\Omega)$ functions. The correct pointwise or $L^2$ boundary condition is given in \eqref{Neumannpointwise}. \end{oss} \proof{ of Proposition~\textnormal{\ref{regprop}}.} This result is essentially proved in \cite{G}. Using for instance \cite[Theorem~1.5.1.2]{G} and \cite[Theorem~1.5.1.3]{G}, with the help of Remark~\ref{Neumannpointwiseremark}, we immediately infer that the left inequalities of \eqref{aineq} and \eqref{bineq} hold true. The right inequalities of \eqref{aineq} and \eqref{bineq} easily follow by \cite[Corollary~2.2.2.4]{G} and \cite[Corollary~2.2.2.6]{G}.\hfill$\square$ \smallskip An important consequence of Proposition~\ref{regprop} for Steklov eigenfunctions is the following. \begin{cor}\label{Stekest} Let $\phi\in H_{\ast}^{1/2}(\partial\Omega)$ be a Steklov eigenfunction with eigenvalue $\mu>0$ and let $w$ be the corresponding solution to \eqref{Steklovequation}. Then \begin{equation}\label{H2steklov} c_3^2\mu^2(1+c_1\mu)\|\phi\|^2_{L^2(\partial\Omega)}\leq \|w\|^2_{H^2(\Omega)}\leq c_3^{-2}\mu^2(1+c_1^{-1}\mu)\|\phi\|^2_{L^2(\partial\Omega)}, \end{equation} where $c_1$ is as in \eqref{Stekvsfreq} and $c_3$ is as in Proposition~\textnormal{\ref{regprop}}, thus they depend on the a priori data only. \end{cor} \proof{.} By \eqref{Stekvsfreq}, we have that $$c_1\mu\|\phi\|^2_{L^2(\partial\Omega)} \leq|\phi|^2_{H^{1/2}(\partial\Omega)}\leq c_1^{-1}\mu\|\phi\|^2_{L^2(\partial\Omega)}.$$ Therefore $$(1+c_1\mu)\|\phi\|^2_{L^2(\partial\Omega)}\leq \|\phi\|^2_{H^{1/2}(\partial\Omega)}\leq (1+c_1^{-1}\mu)\|\phi\|^2_{L^2(\partial\Omega)}.$$ Then the result follows by Proposition~\ref{regprop}, in particular by \eqref{bineq} with $\eta=\mu \phi$.\hfill$\square$ \smallskip Finally, we state and prove the following result. \begin{prop}\label{-1/2-2bound} There exists a constant $C_2$, depending on the a priori data only, such that for any $f\in H^{1/2}(\partial\Omega)$ we have \begin{equation}\label{-1/2-2boundest} \|u\|_{L^2(\Omega)}\leq C_2\|f\|_{H^{-1/2}(\partial\Omega)}, \end{equation} where $u$ is the solution to \eqref{Dirichlet}. \end{prop} \proof{.} Without loss of generality, we can restrict our attention to $f\in H_{\ast}^{1/2}(\partial\Omega)$ and we can replace the $H^{-1/2}(\partial\Omega)$ norm with the $H^{-1/2}_{\ast,A}(\partial\Omega)$ norm. 
Given $f\in H^{1/2}_{\ast}(\partial\Omega)$, we can find a sequence $\{\alpha_n\}_{n\in\mathbb{N}}$ of real numbers such that $$f=\sum_{n\in\mathbb{N}}\alpha_n\psi_n\quad\text{and}\quad\|f\|^2_{H^{1/2}_{\ast,A}(\partial\Omega)}=\sum_{n\in\mathbb{N}}\alpha_n^2.$$ Furthermore, it is easy to infer that $$\|f\|^2_{H^{-1/2}_{\ast,A}(\partial\Omega)}=\sum_{n\in\mathbb{N}}\frac{\alpha_n^2}{\mu_n^2}.$$ We have that $$\Lambda(f)=\sum_{n\in\mathbb{N}}\alpha_n\mu_n\psi_n,$$ therefore $$\|\Lambda(f)\|^2_{H^{1/2}_{\ast,A}(\partial\Omega)}=\sum_{n\in\mathbb{N}}\alpha_n^2\mu_n^2.$$ Since $u$ also solves the Neumann problem \eqref{Neumann} with boundary datum $\Lambda(f)$, by Proposition~\ref{regprop}, in particular by \eqref{bineq}, for any $f\in H_{\ast}^{1/2}(\partial\Omega)$ such that $\sum_{n\in\mathbb{N}}\alpha_n^2\mu_n^2<+\infty$ we have that $$c_3^2 \sum_{n\in\mathbb{N}}\alpha_n^2\mu_n^2\leq \|u\|^2_{H^2(\Omega)}\leq c_3^{-2} \sum_{n\in\mathbb{N}}\alpha_n^2\mu_n^2,$$ possibly for a different constant $0<c_3<1$ still depending on the a priori data only. Let us now consider a function $v\in H^2(\Omega)$ such that $h=v|_{\partial\Omega}\in H^{1/2}_{\ast}(\partial\Omega)$. In particular, $h=\sum_{n\in\mathbb{N}}\beta_n\psi_n$ for a suitable sequence $\{\beta_n\}_{n\in\mathbb{N}}$ of real numbers. We call $\tilde{v}$ the solution to \eqref{Dirichlet} with boundary datum given by $h$. Then, since $v-\tilde{v}\in H^1_0(\Omega)$ and $u$ solves \eqref{Dirichlet}, $$\int_{\Omega}\langle A\nabla_M u,\nabla_M v\rangle_M\, d_M= \int_{\Omega}\langle A\nabla_M u,\nabla_M \tilde{v}\rangle_M\, d_M= \sum_{n\in\mathbb{N}}\alpha_n\beta_n.$$ If we set, for any $n\in\mathbb{N}$, $\tilde{\beta}_n=\beta_n\mu_n$, then $$\sup_{\sum_{n}\tilde{\beta}_n^2\leq 1} \left(\sum_{n\in\mathbb{N}}\alpha_n\frac{\tilde{\beta}_n}{\mu_n}\right)= \sup_{\sum_{n}\tilde{\beta}_n^2\leq 1}\left(\sum_{n\in\mathbb{N}}\frac{\alpha_n}{\mu_n}\tilde{\beta}_n\right) = \left(\sum_{n\in\mathbb{N}}\frac{\alpha^2_n}{\mu^2_n}\right)^{1/2}. $$ In other words, for any $v\in H^2(\Omega)$ with $h=v|_{\partial\Omega}\in H^{1/2}_{\ast}(\partial\Omega)$ we have \begin{multline*} \left|\int_{\Omega}\langle A\nabla_M u,\nabla_M v\rangle_M\, d_M\right| \leq \|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\left(\sum_{n\in\mathbb{N}}\beta^2_n\mu^2_n\right)^{1/2} \leq c_3^{-1}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\|\tilde{v}\|_{H^2(\Omega)} \\\leq c_3^{-2}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\|h\|_{H^{3/2}(\partial\Omega)}\leq c_4^{-1}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\|v\|_{H^2(\Omega)}, \end{multline*} where $0<c_4<1$ is a constant still depending on the a priori data only, and where in the last inequality we used the boundedness of the trace operator from $H^2(\Omega)$ to $H^{3/2}(\partial\Omega)$. Now, for any $\varphi\in L^2(\Omega)$, let $w$ be the weak solution to \begin{equation}\label{Neumannmodified} \left\{\begin{array}{ll} \mathrm{div}_M(A\nabla_M w)=\varphi & \text{in }\Omega\\ \langle A\nabla_M w,\nu_M\rangle_M=c & \text{on }\partial\Omega\\ \int_{\partial\Omega}w\, d\sigma=0, & \end{array}\right. \end{equation} where the constant $c$ is such that $$\int_{\partial\Omega}c\, d\sigma=\int_{\Omega}\varphi(x)\, dx.$$ By a solution we mean $w\in H^1(\Omega)$ such that $w|_{\partial\Omega}\in H_{\ast}^{1/2}(\partial\Omega)$ and $$\int_{\Omega} \langle A\nabla_M w,\nabla_M \psi\rangle_M\, d_M=\int_{\partial\Omega}c\psi\, d\sigma-\int_{\Omega}\varphi(x)\psi(x)\, dx\quad\text{for any }\psi\in H^1(\Omega).$$ Still by standard regularity estimates, see for instance \cite[Chapter~2]{G}, we have that $$\|w\|_{H^2(\Omega)}\leq C_3\|\varphi\|_{L^2(\Omega)},$$ where $C_3$ is a constant depending on the a priori data only.
We conclude that, for any $\varphi\in L^2(\Omega)$, \begin{multline*} \left|\int_{\Omega}u(x)\varphi(x)\, dx\right|=\left|\int_{\Omega}\langle A\nabla_M u,\nabla_M w\rangle_M\, d_M\right|\\\leq c_4^{-1}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\|w\|_{H^2(\Omega)} \leq C_3c_4^{-1}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}\|\varphi\|_{L^2(\Omega)}, \end{multline*} where the first equality follows by taking $\psi=u$ in the weak formulation of \eqref{Neumannmodified} and by observing that $\int_{\partial\Omega}cu\, d\sigma=c\int_{\partial\Omega}f\, d\sigma=0$, since $f\in H^{1/2}_{\ast}(\partial\Omega)$. Therefore $$\|u\|_{L^2(\Omega)}\leq C_3c_4^{-1}\|f\|_{H^{-1/2}_{\ast,A}(\partial\Omega)}$$ and the proof is concluded.\hfill$\square$

\section{The distance function from the boundary}\label{distsec}

Let $M$ be a Riemannian manifold as in Definition~\ref{manifold}. We begin by investigating the consequences of assuming that $\varphi_M$ is smooth enough, namely we consider the following. \begin{assum}\label{regdistassum} For $M$, a Riemannian manifold as in Definition~\ref{manifold}, we assume that there exists $d_0>0$ such that $\varphi_M\in C^{1,1}(U_M^{d_0})$. \end{assum} The first consequence of Assumption~\ref{regdistassum} is the following. \begin{prop}\label{normofdistance} Under Assumption~\textnormal{\ref{regdistassum}}, we have \begin{equation}\label{varphiprop} \|\nabla_M\varphi_M\|_M=1\text{ in }U_M^{d_0}. \end{equation} \end{prop} \proof{.} We divide the proof into several steps. \smallskip \noindent \emph{First step}. We show that $\nabla\varphi_M$ is different from $0$ on $\partial\Omega$. In fact, for any $x\in\partial\Omega$ we have \begin{multline*} -\frac{\partial \varphi_M}{\partial \nu}(x)=\lim_{t\to 0^+}\frac{\varphi_M(x-t\nu(x))-\varphi_M(x)}{t}= \lim_{t\to 0^+}\frac{\varphi_M(x-t\nu(x))}{t}\\ \geq \sqrt{\lambda}\lim_{t\to 0^+}\frac{\varphi(x-t\nu(x))}{t} = \sqrt{\lambda}\lim_{t\to 0^+}\frac{\varphi(x-t\nu(x))-\varphi(x)}{t} =\sqrt{\lambda}>0. \end{multline*} In the last equality we used \eqref{varphiproper}. \smallskip \noindent \emph{Second step}. We prove that $\|\nabla_M\varphi_M\|_M\leq 1$ in $U_M^{d_0}$. This follows from the fact that $\varphi_M$ is Lipschitz with Lipschitz constant $1$ with respect to the distance $d_M$, that is, $$|\varphi_M(x)-\varphi_M(y)|\leq d_M(x,y)\quad\text{for any }x,y\in \overline{\Omega}.$$ Indeed, let $x\in U_M^{d_0}$ and let $\gamma:[0,1]\to\overline{\Omega}$ be a $C^1$ curve such that $\gamma(0)=x$ and $\gamma'(0)=v$, where $v$ is arbitrary if $x\not\in\partial\Omega$, while we take $v=-\nu(x)$ if $x\in\partial\Omega$. We have $$\frac{d}{dt}(\varphi_M\circ\gamma)(0)=\nabla\varphi_M(x)v=\langle(\nabla_M\varphi_M(x))^T,v\rangle_M.$$ On the other hand, \begin{multline*} \left|\frac{d}{dt}(\varphi_M\circ\gamma)(0)\right|=\lim_{t\to 0^+}\frac{|\varphi_M(\gamma(t))-\varphi_M(\gamma(0))|}{t}\\ \leq \lim_{t\to 0^+}\frac{d_M(\gamma(t),\gamma(0))}{t} \leq \lim_{t\to 0^+}\frac{\int_0^t\|\gamma'(s)\|_M\, ds}{t}=\|\gamma'(0)\|_M=\|v\|_M. \end{multline*} Thus, for any such $v$, we have $$\left|\langle(\nabla_M\varphi_M(x))^T,v\rangle_M\right|\leq \|v\|_M,$$ hence $\|\nabla_M\varphi_M(x)\|_M\leq 1$. \smallskip \noindent \emph{Third step}. By the first step and continuity, there exists $d_1$, $0<d_1\leq d_0$, such that we have $0<\|\nabla_M\varphi_M(x)\|_M\leq 1$ for any $x\in U_M^{d_1}$. We show that $\|\nabla_M\varphi_M(x)\|_M= 1$ for any $x\in U_M^{d_1}$. By contradiction, we assume that there exist $x_0\in U_M^{d_1}$, $r>0$ and $0<c<1$ such that $B_r(x_0)\subset U_M^{d_1}$ and $0<\|\nabla_M\varphi_M(y)\|_M\leq c$ for any $y\in B_r(x_0)$.
In particular, there exists $t_0>0$ such that $y_0=x_0+t_0(\nabla\varphi_M(x_0))^T\in B_r(x_0)$ and the following conditions are satisfied: $$ \varphi_M(y_0)>\varphi_M(x_0)\quad\text{and}\quad 2d_M(x_0,y_0)\leq d_M(y_0,y)\text{ for any }y\in U_M^{d_1}\backslash B_r(x_0). $$ Here the first condition holds for $t_0$ small enough since $\frac{d}{dt}\varphi_M\left(x_0+t(\nabla\varphi_M(x_0))^T\right)\big|_{t=0}=\|\nabla\varphi_M(x_0)\|^2>0$. We call $h=\varphi_M(y_0)-\varphi_M(x_0)$ and we obviously have $0<h\leq d_M(x_0,y_0)$. Finally, we fix $\varepsilon$ such that $$0<\varepsilon< \min\left(1,\frac{1-c}{c}\right) h\quad\text{and}\quad \varphi_M(y_0)+\varepsilon<d_1.$$ Let $\gamma\in \Gamma$ be such that $\gamma([0,1])\subset U^{d_1}_M$, $\gamma(0)=y_0$, $\gamma(1)\in\partial\Omega$ and $$\mathrm{length}_M(\gamma)\leq \varphi_M(y_0)+\varepsilon<d_1.$$ There must be $s_0$, $0<s_0\leq 1$, such that $\varphi_M(\gamma(s_0))=\varphi_M(x_0)$. Therefore, since $\mathrm{length}_M(\gamma([s_0,1]))\geq \varphi_M(\gamma(s_0))$, $$h=\varphi_M(\gamma(0))-\varphi_M(\gamma(s_0))\leq d_M(\gamma(0),\gamma(s_0))\leq \mathrm{length}_M(\gamma([0,s_0]))\leq h+\varepsilon.$$ Moreover, $\gamma([0,s_0])\subset B_r(x_0)$: otherwise $\gamma$ would reach some point $y\in U_M^{d_1}\backslash B_r(x_0)$ and we would have $$0<2h\leq2 d_M(x_0,y_0)\leq d_M(y_0,y)\leq \mathrm{length}_M(\gamma([0,s_0]))\leq h+\varepsilon<2h,$$ which is a contradiction. Therefore, \begin{multline*} h=\varphi_M(\gamma(0))-\varphi_M(\gamma(s_0))=\left|-\int_0^{s_0}\nabla\varphi_M(\gamma(t))\gamma'(t)\, dt\right|\\= \left|\int_0^{s_0}\langle(\nabla_M\varphi_M(\gamma(t)))^T,\gamma'(t)\rangle_M\, dt\right| \leq \int_0^{s_0}c\|\gamma'(t)\|_M\, dt\\ = c\, \mathrm{length}_M(\gamma([0,s_0]))\leq c(h+\varepsilon)<h, \end{multline*} which leads to a contradiction, thus $\|\nabla_M\varphi_M(x)\|_M= 1$ for any $x\in U_M^{d_1}$. \smallskip \noindent \emph{Fourth step}. Let $$d_2=\sup\{d:\ 0<d\leq d_0\text{ and }\|\nabla_M\varphi_M(x)\|_M= 1\text{ for any }x\in U_M^d\}.$$ By the third step we have $d_1\leq d_2$. If $d_2=d_0$ then the result is proved. Assume, by contradiction, that $d_2<d_0$. Then, by continuity, there exists $d$, $d_2<d<d_0$, such that $0<\|\nabla_M\varphi_M(x)\|_M\leq 1$ for any $x\in U_M^d$. By the same reasoning used in the third step, we conclude that $\|\nabla_M\varphi_M\|_M=1$ in $U_M^d$, which contradicts the definition of $d_2$.\hfill$\square$ \smallskip Under Assumption~\ref{regdistassum}, we have that, for any $0\leq d<d_0$, $\Omega_M^d$ is a $C^{1,1}$ open set and $\partial(\Omega_M^d)=\partial\Omega_M^d$. Let $\nu$ denote the exterior normal to $\Omega_M^d$ on $\partial\Omega_M^d$, and $\nu_M$ its corresponding version in the Riemannian setting. Then the following holds. \begin{prop}\label{exteriornorm} Under Assumption~\textnormal{\ref{regdistassum}}, for any $0\leq d<d_0$, we have \begin{equation}\label{varphipropRiem} (\nabla_M\varphi_M)^T=-\nu_M\text{ on }\partial\Omega_M^d. \end{equation} In particular this is true on $\partial\Omega$. \end{prop} \proof{.} It is clear that, for any $x\in\partial\Omega_M^d$, we have $(\nabla\varphi_M(x))^T=-a(x)\nu(x)$ for some positive factor $a(x)$ depending on $x$. By the definitions of $\nabla_M\varphi_M(x)$ and of $\nu_M(x)$, we easily conclude that $(\nabla_M\varphi_M(x))^T=-a_1(x)\nu_M(x)$ for some positive factor $a_1(x)$ depending on $x$. Since, by Proposition~\ref{normofdistance}, $\|\nabla_M\varphi_M(x)\|_M=\|\nu_M(x)\|_M=1$, the result immediately follows.\hfill$\square$ \smallskip \begin{oss}\label{regUd} Under Assumption~\ref{regdistassum}, if $\|\varphi_M\|_{C^{1,1}(U_M^{d_0})}\leq C_0$, then $U_M^{d_0}$ is a $C^{1,1}$ open set with constants $r_1$ and $L_1$ depending on $r$, $L$, $R$, $d_0$ and $C_0$ only. This result can be obtained by an approximation argument, namely by suitably approximating $\partial U_M^{d_0}\cap \Omega$ with $\partial\Omega^d_M$ as $d\to d_0^-$. \end{oss}
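We observe that \eqref{varphiprop} is the natural Riemannian counterpart of the classical eikonal equation $\|\nabla\varphi\|=1$ satisfied by the Euclidean distance function. As an elementary consistency check, which we sketch here for illustration only, let $G=c^2I_N$ for some constant $c>0$. Then $d_M=c\,d$, hence $\varphi_M=c\,\varphi$, and, in $U_M^{d_0}$, $(\nabla_M\varphi_M)^T=G^{-1}(\nabla\varphi_M)^T=c^{-1}(\nabla\varphi)^T$, so that $$\|\nabla_M\varphi_M\|^2_M=\langle G(\nabla_M\varphi_M)^T,(\nabla_M\varphi_M)^T\rangle=c^2\|c^{-1}(\nabla\varphi)^T\|^2=\|\nabla\varphi\|^2=1.$$ Analogously, in this case $\nu_M=c^{-1}\nu$, in accordance with \eqref{varphipropRiem}.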
The key point is the following complementary result. \begin{prop}\label{viceversaprop} Given $d_0>0$, let $f\in C^{1,1}(U_M^{d_0})$ be a nonnegative function such that $$\|\nabla_Mf\|_M=1\text{ in }U_M^{d_0}\quad\text{and}\quad f=0\text{ on }\partial\Omega.$$ Then $f=\varphi_M$ on $U_M^{d_0}$. Moreover, if $\|f\|_{C^{1,1}(U_M^{d_0})}\leq C_0$, we have that \begin{equation}\label{Lipschitznabla} \|\nabla_Mf\|_{C^{0,1}(U_M^{d_0},\mathbb{R}^N)}\leq C_1, \end{equation} with $C_1$ depending on $C_0$, $\lambda$ and the Lipschitz constant of the metric $G$ only. \end{prop} \proof{.} First of all, we note that, since $f=0$ on $\partial\Omega$ and $f\geq 0$ in $U_M^{d_0}$, for any $x\in\partial\Omega$ we have $(\nabla f(x))^T=-a(x)\nu(x)$ for some positive factor $a(x)$ depending on $x$, thus, reasoning as in the proof of Proposition~\ref{exteriornorm}, $(\nabla_M f)^T=-\nu_M$ on $\partial\Omega$. We can also easily conclude that $f>0$ on $U_M^{d_0}\backslash\partial\Omega$. Let $x\in U_M^{d_0}\backslash\partial\Omega$. Fix $y\in\partial\Omega$ and let $\gamma\in \Gamma(y,x)$. Without loss of generality we can assume that $\gamma([0,1])\subset U^{d_0}_M$. Then \begin{multline*} f(x)=f(x)-f(y)=\int_0^1\nabla f(\gamma(s))\gamma'(s)\, ds\\ = \int_0^1 \langle(\nabla_Mf(\gamma(s)))^T,\gamma'(s)\rangle_M\, ds\leq \int_0^1\|\gamma'(s)\|_M\, ds= \mathrm{length}_M(\gamma). \end{multline*} We can conclude that \begin{equation}\label{ina} f(x)\leq \varphi_M(x)\quad\text{for any }x\in U_M^{d_0}. \end{equation} Since $\nabla f$ is Lipschitz, by the definition of $\nabla_Mf$ and the properties of $G$, we immediately infer that $\nabla_M f$ is Lipschitz as well. Analogously, one can prove \eqref{Lipschitznabla}. For any $x\in U_M^{d_0}$, let $\gamma_x$ be the (maximal) solution to the Cauchy problem for the ordinary differential equation $$\left\{\begin{array}{l} \gamma_x'=(\nabla_M f(\gamma_x))^T, \\ \gamma_x(0)=x. \end{array}\right.$$ Since $\nabla_M f$ is Lipschitz, we have existence and uniqueness of a solution $\gamma_x$ for $t\in [0,T)$, for some suitable $T>0$ depending on $x$, also when $x\in\partial\Omega$. Moreover, for any $x\in U_M^{d_0}$, $\gamma_x(t)\in U_M^{d_0}\backslash\partial\Omega$ for any $0<t<T$. Let $x\in U_M^{d_0}$. For any $t_0$, $t_1\in\mathbb{R}$ such that $t_0<t_1$ and for which $\gamma_x$ is defined, let us call $z_0=\gamma_x(t_0)$ and $z_1=\gamma_x(t_1)$. Then we observe that \begin{multline*} f(z_1)-f(z_0)=\int_{t_0}^{t_1}\nabla f(\gamma_x(s))\gamma_x'(s)\, ds\\ = \int_{t_0}^{t_1} \langle(\nabla_Mf(\gamma_x(s)))^T,\gamma_x'(s)\rangle_M\, ds =\int_{t_0}^{t_1}\|\nabla_Mf(\gamma_x(s))\|^2_M\, ds= \mathrm{length}_M(\gamma_x([t_0,t_1]))=t_1-t_0, \end{multline*} therefore $d_M(z_0,z_1)\leq f(z_1)-f(z_0)$. In particular, if $z_0\in\partial\Omega$, then $$\varphi_M(z_1)\leq d_M(z_0,z_1)\leq f(z_1)-f(z_0)=f(z_1),$$ thus, by the previous inequality \eqref{ina}, we have $\varphi_M(z_1)=f(z_1)$. We claim the following result. Let $d_1$, $0\leq d_1<d_0$, be such that $f(x)=\varphi_M(x)$ for any $x\in U_M^{d_0}$ with $\varphi_M(x)\leq d_1$. Then there exists $d$ such that $d_1<d< d_0$ and $f(x)=\varphi_M(x)$ for any $x\in U_M^{d_0}$ with $\varphi_M(x)\leq d$. In order to prove the claim, let us begin with the following remark, where we assume that $d_1>0$. Let $x\in U_M^{d_0}$ be such that $\varphi_M(x)=f(x)=d_1$.
By the implicit function theorem, there exist a $C^1$ function $\phi_x:\mathbb{R}^{N-1}\to\mathbb{R}$ and an open neighbourhood $U_x$ of $x$ such that for any $y\in U_x$ we have, up to a rigid transformation depending on $x$, \begin{equation}\label{levelf} \left\{y\in U_x:\ f(y)\lesseqqgtr d_1\right\}=\left\{y=(y',y_N)\in U_x:\ y_N\lesseqqgtr\phi_x(y')\right\}. \end{equation} Without loss of generality, up to changing $U_x$, we can assume that $$U_x^-=\left\{y=(y',y_N)\in U_x:\ y_N<\phi_x(y')\right\}$$ is connected. We want to show that \eqref{levelf} holds true even if we replace $f$ with $\varphi_M$. By \eqref{ina}, it is clear that $$\left\{y\in U_x:\ \varphi_M(y)> d_1\right\}\supset\left\{y\in U_x:\ f(y)> d_1\right\}=\left\{y=(y',y_N)\in U_x:\ y_N>\phi_x(y')\right\}.$$ Moreover, by our assumption, $$\left\{y\in U_x:\ \varphi_M(y)\leqq d_1\right\}\subset\left\{y\in U_x:\ f(y)\leqq d_1\right\}=\left\{y=(y',y_N)\in U_x:\ y_N\leqq\phi_x(y')\right\}.$$ Since $\varphi_M$ cannot have interior local minimum points, there exists $y_1\in U_x$ such that $\varphi_M(y_1)< d_1$. Then $f(y_1)<d_1$ and $y_1\in U_x^-$. Assume by contradiction that there exists $y_2\in U_x$ such that $\varphi_M(y_2)>d_1\geq f(y_2)$. Actually, by continuity, we can always assume that $\varphi_M(y_2)>d_1> f(y_2)$, hence that $y_2\in U^-_x$ as well. We connect $y_1$ to $y_2$ with a smooth curve entirely contained in $U_x^-$. There must be a point $y$ along this curve at which $\varphi_M(y)=d_1$, thus we obtain a contradiction since $f(y)<d_1$. This remark allows us to show that there exists $\varepsilon>0$ such that $d_1+\varepsilon<d_0$ and $f(y)>d_1$ for any $y$ with $d_1<\varphi_M(y)<d_1+\varepsilon$. Assume by contradiction that there exists $y_0$ such that $d_1<f(y_0)<\varphi_M(y_0)\leq d_1+\varepsilon/4$. We note that $z_0=\gamma_{y_0}(d_1-f(y_0))$ is well defined. We have $f(z_0)=d_1$ and $d_M(z_0,y_0)\leq f(y_0)-d_1\leq \varepsilon/4$, therefore $\varphi_M(z_0)\leq d_1+\varepsilon/2$. By \eqref{ina}, $d_1\leq \varphi_M(z_0)$, but $\varphi_M(z_0)$ cannot be greater than $d_1$, otherwise $f(z_0)$ would be greater than $d_1$ as well. We conclude that $\varphi_M(z_0)=d_1$, therefore $\varphi_M(y_0)\leq \varphi_M(z_0)+d_M(z_0,y_0)\leq d_1+(f(y_0)-d_1)=f(y_0)$, which gives the contradiction and proves the claim. Let us conclude the proof by defining $$d_2=\sup\{d:\ 0<d< d_0\text{ and }f(x)=\varphi_M(x)\text{ for any }x\in U_M^{d_0}\text{ with }\varphi_M(x)\leq d\}.$$ If $d_2=d_0$ the proof is concluded. If, by contradiction, $d_2<d_0$, by continuity we have that $f(x)=\varphi_M(x)$ for any $x\in U_M^{d_0}$ with $\varphi_M(x)\leq d_2$ and the claim contradicts the definition of $d_2$.\hfill$\square$ \smallskip We point out the following important property. Under Assumption~\ref{regdistassum}, or equivalently under the assumptions of Proposition~\ref{viceversaprop}, for any $x\in U_M^{d_0}$, let $\gamma_x$ be the (maximal) solution to the Cauchy problem for the ordinary differential equation \begin{equation}\label{ode} \left\{\begin{array}{l} \gamma_x'=(\nabla_M \varphi_M(\gamma_x))^T, \\ \gamma_x(0)=x. \end{array}\right. \end{equation} Then $\gamma_x$ is defined on $[-\varphi_M(x),d_0-\varphi_M(x))$, with $\gamma_x(-\varphi_M(x))=y\in \partial\Omega$. In other words, for any $x\in U_M^{d_0}$ there exists $y\in\partial\Omega$ such that $x=\gamma_y(\varphi_M(x))$ and $$\varphi_M(x)=d_M(x,y)=\mathrm{length}_M(\gamma_y([0,\varphi_M(x)])).$$
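For instance, in the Euclidean case $G=I_N$ with $\Omega=B_1(0)$, so that $\varphi_M=\varphi=1-\|x\|$, the solution to \eqref{ode} is explicitly given, for $x\neq 0$, by $$\gamma_x(t)=\left(\|x\|-t\right)\frac{x}{\|x\|},$$ namely the curves $\gamma_x$ travel along the radii towards the centre at unit speed, and $y=\gamma_x(-\varphi(x))=x/\|x\|$ is the closest boundary point to $x$. In this model case, which we sketch for illustration only, the map $(y,d)\mapsto\gamma_y(d)=(1-d)y$, $y\in\partial\Omega$, $0\leq d<d_0$, is simply the standard parametrization of the collar $\{0\leq\varphi<d_0\}$ by normal coordinates.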
We can then state the following result. \begin{cor}\label{localcoordinate} Under Assumption~\textnormal{\ref{regdistassum}}, or equivalently under the assumptions of Proposition~\textnormal{\ref{viceversaprop}}, we can define a coordinate system for $U_M^{d_0}$ given by $T:\partial\Omega\times [0,d_0)\to U_M^{d_0}$ such that for any $(y,d)\in \partial\Omega\times [0,d_0)$ we have $T(y,d)=\gamma_y(d)$. We note that, for any $0\leq d<d_0$, we have $T(\partial\Omega\times\{d\})=\partial\Omega^d_M$. Moreover, if we assume that $\|\varphi_M\|_{C^{1,1}(U_M^{d_0})}\leq C_0$, then $T$ is bi-Lipschitz, that is, $T$ and its inverse $T^{-1}$ are Lipschitz, with Lipschitz constants bounded by a constant depending on $C_0$, $d_0$, $\lambda$, the Lipschitz constant of the metric $G$ and $C(\Omega)$ as in \eqref{lipd} only. \end{cor} \proof{.} The fact that $T$ is injective simply depends on the uniqueness of the solution to \eqref{ode}. We begin by showing that $T$ is Lipschitz, using an argument that is related to the continuous dependence of solutions to ordinary differential equations on the data. First of all, as for \eqref{Lipschitznabla}, we note that \begin{equation}\label{Lipschitznabla2} \|\nabla_M\varphi_M\|_{C^{0,1}(U_M^{d_0},\mathbb{R}^N)}\leq C_1, \end{equation} with $C_1$ depending on $C_0$, $\lambda$ and the Lipschitz constant of the metric $G$ only. For any $i=1,2$, let $x_i\in U_M^{d_0}$ and $t_i\in [-\varphi_M(x_i),d_0-\varphi_M(x_i))$. We wish to estimate $\|\gamma_{x_2}(t_2)-\gamma_{x_1}(t_1)\|$. Writing \eqref{ode} in the equivalent Volterra integral form, we have that $$\gamma_{x_2}(t_2)-\gamma_{x_1}(t_1)= \left(x_2+\int_0^{t_2}\nabla_M \varphi_M(\gamma_{x_2}(s))\, ds\right) -\left(x_1+\int_0^{t_1}\nabla_M \varphi_M(\gamma_{x_1}(s))\, ds\right). $$ We begin by considering the case $t_1=t_2$. Then \begin{multline}\label{uno} \|\gamma_{x_2}(t_1)-\gamma_{x_1}(t_1)\|\leq\|x_2-x_1\|+ \left|\int_0^{t_1}\| \nabla_M \varphi_M(\gamma_{x_2}(s))-\nabla_M \varphi_M(\gamma_{x_1}(s)) \|\, ds\right| \\\leq \|x_2-x_1\|+C_1\left|\int_0^{t_1}\| \gamma_{x_2}(s)-\gamma_{x_1}(s) \|\, ds\right|, \end{multline} where we used \eqref{Lipschitznabla2}. Then, by the Gronwall lemma, we have that \begin{equation}\label{due} \|\gamma_{x_2}(t_1)-\gamma_{x_1}(t_1)\|\leq e^{C_1d_0}\|x_2-x_1\|. \end{equation} Moreover, we infer that \begin{equation}\label{tre} \|(\gamma_{x_2}(t_1)-x_2)-(\gamma_{x_1}(t_1)-x_1)\|\leq C_1e^{C_1d_0}\|x_2-x_1\||t_1|, \end{equation} an inequality that will be crucial later on. We now turn to the general case. If $t_1\leq 0\leq t_2$, or $t_2\leq 0\leq t_1$, then \begin{multline}\label{Lipgammaa} \|\gamma_{x_2}(t_2)-\gamma_{x_1}(t_1)\|\leq \|\gamma_{x_2}(t_2)-x_2\|+\|x_2-x_1\|+\|x_1-\gamma_{x_1}(t_1)\|\\\leq \|x_2-x_1\|+\left|\int_0^{t_2}\|\nabla_M \varphi_M(\gamma_{x_2}(s))\|\, ds\right|+ \left|\int_0^{t_1}\|\nabla_M \varphi_M(\gamma_{x_1}(s))\|\, ds\right|\\\leq \|x_2-x_1\|+\sqrt{\lambda^{-1}}(|t_1|+|t_2|)=\|x_2-x_1\|+\sqrt{\lambda^{-1}}|t_2-t_1|, \end{multline} where we used \eqref{normcomparison} and the fact that $\|\nabla_M \varphi_M\|_M=1$. Otherwise, up to swapping $x_1$ with $x_2$, we have $0\leq t_1\leq t_2$ or $t_2\leq t_1\leq 0$, and then \begin{multline}\label{Lipgamma0} \|\gamma_{x_2}(t_2)-\gamma_{x_1}(t_1)\|\leq \|\gamma_{x_2}(t_2)-\gamma_{x_2}(t_1)\| +\|\gamma_{x_2}(t_1)-\gamma_{x_1}(t_1)\|\\\leq \left|\int_{t_1}^{t_2}\|\nabla_M \varphi_M(\gamma_{x_2}(s))\|\, ds\right|+ \|\gamma_{x_2}(t_1)-\gamma_{x_1}(t_1)\|\\\leq \sqrt{\lambda^{-1}} |t_2-t_1|+\|\gamma_{x_2}(t_1)-\gamma_{x_1}(t_1)\|.
\end{multline} By \eqref{due} and \eqref{Lipgamma0} we can conclude that \begin{equation}\label{Lipgamma} \|\gamma_{x_2}(t_2)-\gamma_{x_1}(t_1)\|\leq e^{C_1d_0}\|x_2-x_1\|+\sqrt{\lambda^{-1}} |t_2-t_1|. \end{equation} By \eqref{Lipgammaa} and \eqref{Lipgamma}, it is immediate to prove that $T$ is Lipschitz and that its Lipschitz constant is bounded by a constant depending on $C_0$, $d_0$, $\lambda$ and the Lipschitz constant of the metric $G$ only. Let us now pass to the properties of $T^{-1}$. For any $x\in U_M^{d_0}$, we have that $$T^{-1}(x)=(\gamma_{x}(-\varphi_M(x)),\varphi_M(x))\in\partial\Omega\times [0, d_0).$$ We recall that $\varphi_M$ is Lipschitz, with Lipschitz constant $1$, with respect to the distance $d_M$. Hence we can conclude the proof using again \eqref{Lipgamma}.\hfill$\square$ \smallskip The following technical proposition is a crucial ingredient for the proof of our main decay estimate and it may be of independent interest as well. \begin{prop}\label{normalderivativelemma} Under Assumption~\textnormal{\ref{regdistassum}}, or equivalently under the assumptions of Proposition~\textnormal{\ref{viceversaprop}}, let $\|\varphi_M\|_{C^{1,1}(U_M^{d_0})}\leq C_0$. Let $w\in W^{1,1}_{\mathrm{loc}}(\Omega)$ and let, for any $0<d<d_0$, $$S(d)=\int_{\partial\Omega^d_M}w(x)\, d\sigma_M(x).$$ We have that $S$ is absolutely continuous on any compact subinterval of $(0,d_0)$ and, for almost any $d$, $0<d<d_0$, \begin{multline}\label{derivative} S'(d)=-\int_{\partial\Omega^d_M}\nabla w(x)\nu_M(x)\, d\sigma_M(x)+A(d)\\= -\int_{\partial\Omega^d_M}\langle \nabla_M w(x),\nu_M(x)\rangle_M\, d\sigma_M(x)+A(d) \end{multline} where $$|A(d)|\leq C\int_{\partial\Omega^d_M}|w(x)|\, d\sigma_M(x)$$ for a constant $C$ depending on $C_0$, $d_0$, $\lambda$ and the Lipschitz constant of the metric $G$ only. In particular, if $w\geq 0$, then \begin{equation}\label{derivativeresto} |A(d)|\leq C S(d). \end{equation} \end{prop} \begin{oss}\label{W11remark} If $w\in W^{1,1}(\Omega)$, then we can define $$S(0)=\int_{\partial\Omega^0_M}w(x)\, d\sigma_M(x)=\int_{\partial\Omega}w(x)\, d\sigma_M(x),$$ and we have that $S$ is absolutely continuous on any compact subinterval of $[0,d_0)$. \end{oss} \proof{.} We just assume $w\in W^{1,1}(\Omega)$ as in Remark~\ref{W11remark}, since, when $w\in W^{1,1}_{\mathrm{loc}}(\Omega)$, the result easily follows by the arguments we present in the sequel. We begin by observing that, for any $s$, $0\leq s<d_0$, we have $$\int_{\partial\Omega^s_M}w(x)\, d\sigma_M(x)= \int_{\partial\Omega^s_M}w(x)h(x)\, d\sigma(x) $$ where $$h(x)=\sqrt{\langle G^{-1}(x)\nu(x),\nu(x)\rangle}\sqrt{g(x)}.$$ Moreover, for any $s_1$, $s_2\in [0,d_0)$, we call $T_{s_1,s_2}:\partial\Omega^{s_1}_M\to\partial\Omega^{s_2}_M$ the change of coordinates such that $$T_{s_1,s_2}(x)=\gamma_x(s_2-s_1).$$ By \eqref{due} and the fact that $T_{s_1,s_2}$ is invertible with $T^{-1}_{s_1,s_2}=T_{s_2,s_1}$, we deduce that $T_{s_1,s_2}$ is bi-Lipschitz, therefore $$\int_{\partial\Omega^{s_2}_M}w(z)\, d\sigma_M(z)= \int_{\partial\Omega^{s_1}_M}w(\gamma_x(s_2-s_1))h(\gamma_x(s_2-s_1))k(x)\, d\sigma(x) $$ where $k(x)$ can be computed as follows. For almost every $x\in \partial\Omega^{s_1}_M$, with respect to the $(N-1)$-dimensional Hausdorff measure, $T_{s_1,s_2}$ admits a tangential differential at $x$. 
Namely, for any orthonormal basis $v_1,\ldots,v_{N-1}$ of the tangent space to $\partial\Omega^{s_1}_M$ at $x$, there exists $$J_{\tau}(x)=J_{\tau}T_{s_1,s_2}(x)=\left[\frac{\partial T_{s_1,s_2}}{\partial v_1}(x)\cdots\frac{\partial T_{s_1,s_2}}{\partial v_{N-1}}(x)\right].$$ Then \begin{equation}\label{quattro} k(x)=\sqrt{\det \left((J_{\tau}(x))^TJ_{\tau}(x)\right)}. \end{equation} Let us call $\tilde{T}_{s_1,s_2}=T_{s_1,s_2}-Id$ and let, analogously, $\tilde{J}_{\tau}(x)=J_{\tau}\tilde{T}_{s_1,s_2}(x)$. By \eqref{tre}, we infer that for any $i=1,\ldots,N-1$, \begin{equation}\label{cinque} \left\|\frac{\partial \tilde{T}_{s_1,s_2}}{\partial v_i}(x) \right\|\leq C_1e^{C_1d_0}|s_2-s_1|. \end{equation} Therefore, for almost every $x\in \partial\Omega^{s_1}_M$, again with respect to the $(N-1)$-dimensional Hausdorff measure, we call $a(x,s_1,s_2)$ the number such that $$\frac{h(\gamma_x(s_2-s_1))}{h(x)}k(x)=1+a(x,s_1,s_2).$$ By using \eqref{quattro} and \eqref{cinque} to handle $k(x)$, it is not difficult to show that, for some constant $C_2$ depending on $C_0$, $d_0$, $\lambda$ and the Lipschitz constant of the metric $G$ only, \begin{equation}\label{sei} |a(x,s_1,s_2)|\leq C_2|s_2-s_1|\quad\text{for almost every }x\in \partial\Omega^{s_1}_M. \end{equation} Then, for almost every $x\in \partial\Omega^{s_1}_M$, or for almost every $z=\gamma_x(s_2-s_1)\in \partial\Omega^{s_2}_M$, \begin{multline*} w(\gamma_x(s_2-s_1))=w(x)+\int_{0}^{s_2-s_1}\nabla w(\gamma_x(s))\gamma_x'(s)\, ds\\= w(x)-\int_{0}^{s_2-s_1}\nabla w(\gamma_x(s))\nu_M(\gamma_x(s))\, ds= w(x)-\int_{0}^{s_2-s_1}\nabla w(\gamma_z(-s))\nu_M(\gamma_z(-s))\, ds. \end{multline*} We call $\Omega_{s_1,s_2}$ the following set $$\Omega_{s_1,s_2}=\left\{\begin{array}{ll} \Omega^{s_1}_M\backslash\Omega^{s_2}_M &\text{if }s_1\leq s_2\\ \\ \Omega^{s_2}_M\backslash\Omega^{s_1}_M &\text{if }s_2\leq s_1 \end{array} \right.$$ and we call $$ b(s_1,s_2)=\left\{\begin{array}{ll} 1 &\text{if }s_1< s_2\\ 0 &\text{if }s_1= s_2\\ -1 &\text{if }s_1> s_2.\end{array} \right.$$ Then, by Fubini theorem and the coarea formula, \begin{multline*} \int_{\partial\Omega^{s_2}_M}w(x)\, d\sigma_M(x) -\int_{\partial\Omega^{s_1}_M}w(x)\, d\sigma_M(x)\\= \int_{\partial\Omega^{s_1}_M}w(x)a(x,s_1,s_2)\, d\sigma_M(x)- \int_{\partial\Omega^{s_2}_M}\left(\int_{0}^{s_2-s_1}\nabla w(\gamma_z(-s))\nu_M(\gamma_z(-s))\, ds\right)\, d\sigma_M(z) \\= \int_{\partial\Omega^{s_1}_M}w(x)a(x,s_1,s_2)\, d\sigma_M(x)- \int_{s_1}^{s_2}\left(\int_{\partial\Omega^t_M}\nabla w(x)\nu_M(x)(1+a(x,t,s_2))\, d\sigma_M(x)\right)\, dt \\= \int_{\partial\Omega^{s_1}_M}w(x)a(x,s_1,s_2)\, d\sigma_M(x)-b(s_1,s_2) \int_{\Omega_{s_1,s_2}}\nabla w(x)\nu_M(x)(1+a(x,s,s_2))\, d_M(x)\\= \int_{\partial\Omega^{s_1}_M}w(x)a(x,s_1,s_2)\, d\sigma_M(x)\\-b(s_1,s_2) \int_{\Omega_{s_1,s_2}}\nabla w(x)\nu_M(x)\, d_M(x)-b(s_1,s_2) \int_{\Omega_{s_1,s_2}}\nabla w(x)\nu_M(x)a(x,s,s_2)\, d_M(x) \\=A(s_1,s_2)-B(s_1,s_2)-C(s_1,s_2) \end{multline*} where, for any $x\in \Omega_{s_1,s_2}$ we set $s=\varphi_M(x)$. First of all, we deduce that $$[0,d_0)\ni s\mapsto \int_{\partial\Omega^{s}_M}w(x)\, d\sigma_M(x)$$ is a continuous function. 
Again by the coarea formula, we have that the function \begin{multline*} [0,d_0)\ni s\mapsto B(d_0/2,s)=b(d_0/2,s)\int_{\Omega_{d_0/2,s}}\nabla w(x)\nu_M(x)\, d_M(x)\\= \int_{d_0/2}^{s}\left(\int_{\partial\Omega^t_M}\nabla w(x)\nu_M(x)\, d\sigma_M(x)\right)\, dt \end{multline*} is absolutely continuous, with respect to $s$, on any compact subinterval of $[0,d_0)$ and, for almost every $s_1\in (0,d_0)$, we have \begin{multline*} B'(d_0/2,s_1)=\lim_{s_2\to s_1}\frac{B(d_0/2,s_2)-B(d_0/2,s_1)}{s_2-s_1}\\= \lim_{s_2\to s_1}\frac{B(s_1,s_2)}{s_2-s_1} =\int_{\partial\Omega^{s_1}_M}\nabla w(x)\nu_M(x)\, d\sigma_M(x). \end{multline*} The function $$[0,d_0)\ni s\mapsto D(s)=\int_{\partial\Omega^{s}_M}w(x)\, d\sigma_M(x)+B(d_0/2,s) $$ is clearly Lipschitz continuous on any compact subinterval of $[0,d_0)$, therefore, for almost every $s_1\in (0,d_0)$, there exists $$ D'(s_1)=\lim_{s_2\to s_1}\frac{D(s_2)-D(s_1)}{s_2-s_1}= \lim_{s_2\to s_1}\frac{A(s_1,s_2)-C(s_1,s_2)}{s_2-s_1}. $$ It is easy to see that $$\frac{C(s_1,s_2)}{s_2-s_1}\to 0\quad\text{as }s_2\to s_1$$ and that $$\left|\frac{A(s_1,s_2)}{s_2-s_1}\right|\leq C_2\int_{\partial\Omega^{s_1}_M}|w(x)|\, d\sigma_M(x).$$ Therefore the proof can be easily concluded.\hfill$\square$ \smallskip Our aim is to modify our metric $G$ near the boundary of $\Omega$, by multiplying it by a scalar function $\eta$, in such a way that the new metric satisfies Assumption~\ref{regdistassum}. The construction is given in the next theorem. \begin{teo}\label{AKSmethod} Let us fix positive constants $R$, $r$ and $L$. Let $\Omega\subset\overline{B_R(0)}\subset\mathbb{R}^N$ be a bounded open set of class $C^{1,1}$ with constants $r$ and $L$. Let us consider $\tilde{d}_0>0$ as in Theorem~\textnormal{\ref{Del-Zolthm}} and $\varphi$ the distance to the boundary of $\Omega$ as in Definition~\textnormal{\ref{Eucldistfun}}. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$, $0<\lambda<1$, in $\Omega$ and such that $\|G\|_{C^{0,1}(\overline{\Omega})}\leq C$. Then there exist a constant $C_1>0$, depending on $r$, $L$, $R$, $\lambda$ and $C$ only, and a function $\eta\in C^{0,1}(\overline{\Omega})$, with $\lambda\leq\eta\leq\lambda^{-1}$ in $\overline{\Omega}$ and $\|\eta\|_{C^{0,1}(\overline{\Omega})}\leq C_1$, such that the following holds. Let us call $\tilde{G}=\eta G$ and $\tilde{M}$ the corresponding Riemannian manifold on $\overline{\Omega}$. Let $\varphi_{\tilde{M}}$ be the corresponding distance from the boundary and, for any $d\geq 0$, $U_{\tilde{M}}^d=\{x\in\overline{\Omega}:\ \varphi_{\tilde{M}}(x)<d\}$. Then we have that $U^{\tilde{d}_0/2}=U_{\tilde{M}}^{\tilde{d}_0/2}$ and \begin{equation}\label{crucialequality} \varphi_{\tilde{M}}=\varphi\quad\text{in }U_{\tilde{M}}^{\tilde{d}_0/2}. \end{equation} \end{teo} \proof{.} Let us define $\hat{\eta}:U^{\tilde{d}_0}\to\mathbb{R}$ such that $$\hat{\eta}=\|\nabla_M\varphi\|_M^2\text{ in }U^{\tilde{d}_0}.$$ By \eqref{normcomparison}, we obtain that $\lambda\leq \hat{\eta}\leq\lambda^{-1}$ in $U^{\tilde{d}_0}$, and we have that $$\hat{\eta}^{-1}\|\nabla_M\varphi\|_M^2=1\text{ in }U^{\tilde{d}_0}.$$ Then we fix a cutoff function $\chi\in C^{\infty}(\mathbb{R})$ such that $\chi$ is decreasing, $\chi(t)=1$ for any $t\leq \tilde{d}_0/2$ and $\chi(t)=0$ for any $t\geq 3\tilde{d}_0/4$. We define, for any $x\in\overline{\Omega}$, $$\eta(x)= \chi(\varphi(x))\hat{\eta}(x)+\bigl(1-\chi(\varphi(x))\bigr),$$ with the convention that the first summand vanishes outside $U^{\tilde{d}_0}$, and we observe that $\lambda\leq \eta\leq\lambda^{-1}$ in $\overline{\Omega}$.
Let $\tilde{G}=\eta G$. By construction of $\eta$ and by \eqref{gradcomp}, we have that $$\|\nabla_{\tilde{M}}\varphi\|_{\tilde{M}}=1\quad\text{in }U^{\tilde{d}_0/2}.$$ Therefore, applying Proposition~\ref{viceversaprop} with $f=\varphi$, we conclude that, at least in a neighbourhood of $\partial\Omega$, $\varphi_{\tilde{M}}=\varphi$. It is not difficult to show that such a neighbourhood is actually equal to $U^{\tilde{d}_0/2}$ and that it coincides with $U_{\tilde{M}}^{\tilde{d}_0/2}$ as well. It remains to show the Lipschitz regularity of $\eta$ and for this purpose it is enough to show that $\hat{\eta}$ is Lipschitz in $U^{\tilde{d}_0}$. Again by \eqref{gradcomp}, we infer that for any $x\in U^{\tilde{d}_0}$ $$\hat{\eta}(x)=\|\nabla_M\varphi(x)\|_M^2=\langle (\nabla \varphi(x))^T,G^{-1}(x)(\nabla \varphi(x))^T\rangle.$$ Then we can easily conclude by exploiting the Lipschitz regularity of $G$ and the fact that $\varphi\in C^{1,1}(U^{\tilde{d}_0})$, as proved in Theorem~\ref{Del-Zolthm}.\hfill$\square$ \smallskip We conclude that $\tilde{G}=\eta G$ constructed in Theorem~\ref{AKSmethod} is a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda_1=\lambda^2$ in $\Omega$ and such that $\|\tilde{G}\|_{C^{0,1}(\overline{\Omega})}\leq C_2$, with $C_2$ depending on $C$, $C_1$ and $\lambda$ only. Moreover, by Theorem~\ref{Del-Zolthm} and \eqref{crucialequality}, $\tilde{G}$ satisfies Assumption~\ref{regdistassum} with $d_0=\tilde{d}_0/2$.

\section{The decay estimate}\label{decaysec}

Let us fix positive constants $R$, $r$, $L$, $C_0$, $C_1$, $\lambda$ and $\lambda_1$, with $0<\lambda<1$ and $0<\lambda_1<1$. We refer to these constants as the \emph{a priori data}. Let $\Omega\subset\overline{B_R(0)}\subset\mathbb{R}^N$ be a bounded domain of class $C^{1,1}$ with constants $r$ and $L$. Let us consider $\tilde{d}_0>0$ as in Theorem~\textnormal{\ref{Del-Zolthm}} and $\varphi$ the distance to the boundary of $\Omega$ as in Definition~\textnormal{\ref{Eucldistfun}}. Let $G$ be a Lipschitz symmetric tensor in $\Omega$ which is uniformly elliptic with constant $\lambda$ and such that $\|G\|_{C^{0,1}(\overline{\Omega})}\leq C_0$. Let $A$ be a Lipschitz conductivity tensor in $\Omega$ which is uniformly elliptic with constant $\lambda_1$ and such that $\|A\|_{C^{0,1}(\overline{\Omega})}\leq C_1$. We further suppose that Assumption~\ref{scalarassum} holds. Let us fix $f\in H^{1/2}(\partial\Omega)$, with $f\neq 0$, and let us call $\Phi$ its frequency as in Definition~\ref{frequencydefin}. We assume that $\Phi>0$, that is, $f$ is not constant on $\partial\Omega$. Let $u\in H^1(\Omega)$ be the solution to \eqref{Dirichlet}. We recall that $u\in H^2_{\mathrm{loc}}(\Omega)$ and the equation is satisfied pointwise almost everywhere in $\Omega$. An important remark is that, without loss of generality, we can make the following assumptions. First, by Remark~\ref{anisotropy}, we can assume that \begin{equation} \label{isoassum} A=\gamma I_N\text{ with }\gamma\in C^{0,1}(\overline{\Omega}). \end{equation} Second, we can assume that $G$ satisfies Assumption~\ref{regdistassum} with some positive constant $d_0$. Under this assumption, we need to add $d_0$ and $\|\varphi_M\|_{C^{1,1}(U_M^{d_0})}$ to the a priori data. In particular, by Theorem~\ref{AKSmethod} and Remark~\ref{conformalchange}, we can assume that \begin{equation}\label{crucialequality2} d_0=\tilde{d}_0/2,\quad U^{d_0}=U_M^{d_0}\quad\text{and}\quad \varphi_M=\varphi\quad\text{in }U_M^{d_0}.
\end{equation} In this case, by Theorem~\ref{Del-Zolthm}, $d_0$ and $\|\varphi_M\|_{C^{1,1}(U_M^{d_0})}$ depend on $r$, $L$ and $R$ only. Before stating our decay estimates, we need to set some notation. For any $0\leq d<d_0$, let us define $$D(d)=\int_{\Omega^d_M}\gamma(x)\|\nabla_M u(x)\|_M^2\, d_M(x)\quad\text{and}\quad H(d)=\int_{\partial\Omega^d_M}\gamma(x) u^2(x)\, d\sigma_M(x).$$ We recall that, for any such $d$, $\partial\Omega^d_M=\partial(\Omega^d_M)$ and, if \eqref{crucialequality2} holds, $\Omega^d=\Omega^d_M$ and $\partial\Omega^d=\partial\Omega^d_M=\partial(\Omega^d)$. Moreover, by unique continuation, for example by \cite{Gar-Lin1} for $N\geq 3$, and the maximum principle, both $D(d)$ and $H(d)$ must be strictly positive for any $0\leq d<d_0$. We define the \emph{frequency function} $N$ as follows \begin{equation} N(d)=\frac{D(d)}{H(d)},\qquad 0\leq d<d_0. \end{equation} We note that, by Remark~\ref{freqrem}, there exists a constant $c_1$, $0<c_1<1$, depending on $\lambda$ and $\lambda_1$ only, such that \begin{equation} \lambda_1c_1\Phi\leq N(0)\leq (\lambda_1c_1)^{-1}\Phi \end{equation} where $\Phi$ is the frequency of the boundary datum $f$. For any $s\geq 0$ we define \begin{equation}\label{hfundef} h(s)=\left\{\begin{array}{ll}e^{-s} &\text{if }s\leq 1\\ (es)^{-1}&\text{if }s>1. \end{array}\right. \end{equation} We note that $h(0)=1$ and $h$ is a positive $C^1$ strictly decreasing function. \begin{teo}\label{mainthm} Let $f\in H^{1/2}(\partial\Omega)$, with $f\neq 0$, and let its frequency $\Phi$ be positive. Under the previous assumptions and notation, there exist two positive constants $C_2$ and $c_2$, depending on the a priori data only, such that, for any $d$, $0<d<d_0$, we have \begin{equation}\label{decayestimate} D(d)\leq e^{C_2d}D(0)h(c_2d\Phi). \end{equation} \end{teo} In the next theorem, we control the decay of the function itself, instead of that of its gradient. Namely, we assume that $f\in H^{1/2}_{\ast}(\partial\Omega)$, with $f\neq 0$, and that $\Phi_1$ is its lower frequency. We recall that $\Phi_1\leq \Phi$. \begin{teo}\label{mainthmbis} Let $f\in H^{1/2}_{\ast}(\partial\Omega)$, with $f\neq 0$, and let $\Phi_1$ be its lower frequency. Under the previous assumptions and notation, there exist two positive constants $C_3$ and $c_3$, depending on the a priori data only, such that, for any $d$, $0<d<d_0/2$, we have \begin{equation}\label{decayestimatebis} H(d)\leq e^{C_3d}H(0)h(c_3d\Phi_1). \end{equation} \end{teo} As a corollary, we obtain a higher order decay for $D$ with respect to the lower frequency. \begin{cor}\label{maincor} Let $f\in H^{1/2}_{\ast}(\partial\Omega)$, with $f\neq 0$, and let $\Phi_1$ be its lower frequency. Under the previous assumptions and notation, there exists a further absolute positive constant $C_4$ such that, for any $d$, $0<d<d_0/4$, we have \begin{multline}\label{decayestimateter} D(d)\leq \frac{C_4}{d}e^{3C_3d/2}H(0)h(c_3d\Phi_1/2)\\\leq C_4e^{3C_3d/2}D(0)\frac{h(c_3d\Phi_1/2)}{\lambda_1c_1d\Phi}\leq C_4e^{3C_3d/2}D(0)\frac{h(c_3d\Phi_1/2)}{\lambda_1c_1d\Phi_1}. \end{multline} \end{cor} \begin{oss}\label{Steklovremark} If $f=\phi$ where $\phi$ is a Steklov eigenfunction with Steklov eigenvalue $\mu>0$, that is, $u=w$ where $w$ is a solution to \eqref{Steklovequation}, then the results of Theorems~\ref{mainthm} and \ref{mainthmbis} and of Corollary~\ref{maincor} still hold, possibly with different constants still depending on the a priori data only, even if we replace both $\Phi$ and $\Phi_1$ with the Steklov eigenvalue $\mu$. \end{oss}
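Before giving the proofs, let us illustrate the kind of decay expressed by the previous estimates in the model case where everything is explicit. Let $\Omega=B_1(0)\subset\mathbb{R}^2$, $G=I_2$, $\gamma\equiv 1$ and, in polar coordinates, $f=\cos(n\theta)$, so that, by the computation following \eqref{lowfrvsfr}, $\Phi=\Phi_1=n$, $u=r^n\cos(n\theta)$ and $\Omega^d_M=\Omega^d=B_{1-d}(0)$. A direct computation gives $$D(d)=\pi n(1-d)^{2n}=D(0)(1-d)^{2n}\leq D(0)e^{-2nd}\leq D(0)h(2d\Phi)$$ and, analogously, $H(d)=H(0)(1-d)^{2n+1}$, with $H(0)=\pi$. Here we used that $(1-d)^{2n}=e^{2n\log(1-d)}\leq e^{-2nd}$ and that $e^{-s}\leq h(s)$ for any $s\geq 0$. Hence in this model case \eqref{decayestimate} holds with $C_2=0$ and $c_2=2$, and the exponential-type decay of \eqref{decayestimate} and \eqref{decayestimatebis} is essentially optimal.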
\end{oss}

\proof{ of Corollary~\textnormal{\ref{maincor}}.} We sketch the proof of the corollary. Let $0<d<d_0/4$. Then we have, by the coarea formula and \eqref{decayestimatebis}, \begin{equation}\label{bbb} \int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma u^2\, d_M =\int_{d/2}^{3d/2}H(t)\, dt \leq H(0) de^{3C_3d/2}h(c_3d\Phi_1/2). \end{equation} Then we apply a Caccioppoli inequality. Let $\chi\in C^{\infty}_0(\mathbb{R})$ be an even nonnegative function such that $\chi$ is nonincreasing on $[0,+\infty)$, $\chi=1$ on $[0,1/2]$ and $\chi=0$ on $[3/4,+\infty)$. We define the function $\eta_d$ as follows $$\eta_d(x)=\chi\left(2\frac{\varphi_M(x)-d}{d}\right)\quad\text{for any }x\in\Omega$$ and we note that $$\nabla_M\eta_d(x)=\frac{2}{d}\chi'\left(2\frac{\varphi_M(x)-d}{d}\right)\nabla_M\varphi_M(x) \quad\text{for any }x\in\Omega.$$ Therefore $$\|\nabla_M\eta_d(x)\|_M\leq \frac{C}{d}\quad\text{for any }x\in\Omega\backslash\Omega_M^{d_0/2},$$ where $C$ is an absolute constant. Then \begin{multline*} 0=\int_{\Omega}\gamma\langle \nabla_Mu,\nabla_M(u\eta_d^2)\rangle_M\, d_M\\= \int_{\Omega}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M+2 \int_{\Omega}\gamma\langle \nabla_Mu,\nabla_M\eta_d\rangle_Mu\eta_d\, d_M. \end{multline*} We obtain that \begin{multline*} \int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M= \int_{\Omega}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M\\= -2\int_{\Omega}\gamma\langle \nabla_Mu,\nabla_M\eta_d\rangle_Mu\eta_d\, d_M= -2\int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma\langle \nabla_Mu,\nabla_M\eta_d\rangle_Mu\eta_d\, d_M \\\leq \frac{2C}{d}\left(\int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M\right)^{1/2}\left(\int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma u^2\, d_M\right)^{1/2} \end{multline*} and we conclude that \begin{equation}\label{aaa} \int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M\leq \frac{4C^2}{d^2}\int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma u^2\, d_M. \end{equation} Since $$\int_{\Omega_M^{3d/4}\backslash \Omega_M^{5d/4}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\, d_M \leq \int_{\Omega_M^{d/2}\backslash\Omega_M^{3d/2}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\eta_d^2\, d_M, $$ by \eqref{aaa} and \eqref{bbb}, we infer that \begin{equation}\label{ccc} \int_{\Omega_M^{3d/4}\backslash \Omega_M^{5d/4}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\, d_M \leq H(0)\frac{4C^2}{d} e^{3C_3d/2}h(c_3d\Phi_1/2). \end{equation} Now we consider the function $u_d=u\eta_{d/4}$, where $\eta_{d/4}(x)=\chi\left(8\frac{\varphi_M(x)-d}{d}\right)$ denotes the analogous cutoff, still centred at $d$ but with width parameter $d/4$; in particular, $\eta_{d/4}=1$ in a neighbourhood of $\partial\Omega^d_M$, $\nabla_M\eta_{d/4}$ is supported in $\Omega_M^{3d/4}\backslash\Omega_M^{5d/4}$ and $\|\nabla_M\eta_{d/4}\|_M\leq 4C/d$. We easily prove that \begin{multline*} \int_{\Omega_M^d}\gamma\|\nabla_M u_d\|^2_M\, d_M\leq 2 \int_{\Omega_M^d}\gamma\left(\eta_{d/4}^2\|\nabla_M u\|^2_M+u^2\|\nabla_M\eta_{d/4}\|^2_M\right)\, d_M \\\leq 2\left(\int_{\Omega_M^{3d/4}\backslash \Omega_M^{5d/4}}\gamma\langle \nabla_Mu,\nabla_Mu\rangle_M\, d_M +\frac{16C^2}{d^2}\int_{\Omega_M^{3d/4}\backslash \Omega_M^{5d/4}}\gamma u^2\, d_M\right)\\\leq H(0)\frac{40C^2}{d} e^{3C_3d/2}h(c_3d\Phi_1/2). \end{multline*} We have that $w_d=u-u_d$ solves, in a weak sense, $$\left\{\begin{array}{ll} \mathrm{div}_M(\gamma\nabla_M w_d)=-\mathrm{div}_M(\gamma\nabla_M u_d)&\text{in }\Omega_M^d\\ w_d=0&\text{on }\partial\Omega_M^d, \end{array} \right.
$$ from which we deduce that $$\left(\int_{\Omega_M^d}\gamma\|\nabla_M w_d\|^2_M\, d_M\right)^{1/2}\leq\left(\int_{\Omega_M^d}\gamma\|\nabla_M u_d\|^2_M\, d_M\right)^{1/2},$$ hence $$D(d)\leq H(0)\frac{160C^2}{d} e^{3C_3d/2}h(c_3d\Phi_1/2),$$ and the proof of \eqref{decayestimateter} is concluded by taking $C_4=160C^2$.\hfill$\square$

\smallskip

The rest of the section is devoted to the proofs of Theorems~\ref{mainthm} and \ref{mainthmbis}. We also need the following notation. We note that, since $u\in H^2_{\mathrm{loc}}(\Omega)$, for any $d$ with $0<d<d_0$, $\nabla u$ is well-defined, in the trace sense, on $\partial\Omega^d_M$ and that $\nabla u\in L^2(\partial\Omega^d_M,\mathbb{R}^N)$. For any $d$ with $0<d<d_0$, and almost any $x\in\partial\Omega^d_M$, with respect to the $(N-1)$-dimensional Hausdorff measure, we call $u_{\nu_M}(x)$ the (exterior) normal derivative of $u$ at $x$ with respect to $\Omega^d_M$ in the Riemannian setting, which is given by $$u_{\nu_M}(x)=\langle(\nabla_Mu(x))^T,\nu_M(x)\rangle_M=\nabla u(x) \nu_M(x).$$ We note that, analogously, $u_{\nu_M}\in L^2(\partial\Omega^d_M)$ is well-defined, again in the trace sense. Moreover, using the equation and the divergence theorem, we have, for any $d$ with $0<d<d_0$, $$D(d)=\int_{\partial\Omega^d_M}\gamma(x)u(x) u_{\nu_M}(x)\, d\sigma_M(x).$$ Finally we call, for any $d$ with $0<d<d_0$, $$T(d)=\int_{\partial\Omega^d_M}\gamma(x)u^2_{\nu_M}(x)\, d\sigma_M(x)\quad\text{and}\quad F(d)=\frac{T(d)}{D(d)}.$$ We note that, by a simple application of the Cauchy-Schwarz inequality, we have $$F(d)\geq N(d)\qquad\text{for any }0<d<d_0.$$ Following essentially the arguments developed in \cite{Gar-Lin1}, we compute the derivatives, with respect to $d$, of $D$ and $H$. By the coarea formula and the properties of $\varphi_M$, we infer that $D$ is absolutely continuous on every compact subinterval contained in $[0,d_0)$ and that, for almost every $d\in (0, d_0)$, $$D'(d)=-\int_{\partial\Omega^d_M}\gamma(x)\|\nabla_M u(x)\|_M^2\, d\sigma_M(x).$$ Then we use the following lemma, which is a suitable version of the Rellich identity. \begin{lem}\label{Rellichid} Let $u\in H^2_{\mathrm{loc}}(\Omega)$, $\gamma\in C^{0,1}(\overline{\Omega})$ and $v=(v^1,\ldots,v^N)\in C^{0,1}(\overline{\Omega},\mathbb{R}^N)$. Then \begin{multline}\label{rellich} \mathrm{div}_M(\gamma\langle\nabla_M u,\nabla_M u\rangle_M v)+2\mathrm{div}_M(\gamma \nabla_M u)\langle\nabla_M u,v\rangle_M \\ =\gamma\langle\nabla_M u,\nabla_M u\rangle_M\mathrm{div}_M(v)+(\nabla\gamma v)\langle\nabla_M u,\nabla_M u\rangle_M+ \gamma(\nabla g^{l,j}v)u_lu_j\\+2\mathrm{div}_M(\gamma\langle\nabla_M u,v\rangle_M \nabla_M u)-2\gamma g^{l,j}u_lu_kv^k_j. \end{multline} \end{lem} \proof{.} It follows by straightforward computations. In fact, with the summation convention and using subscripts for partial derivatives, \begin{multline*} \mathrm{div}_M(\gamma\langle\nabla_M u,\nabla_M u\rangle_M v)= \gamma\langle\nabla_M u,\nabla_M u\rangle_M\mathrm{div}_M(v)+(\gamma g^{l,j}u_lu_j)_kv^k\\ =\gamma\langle\nabla_M u,\nabla_M u\rangle_M\mathrm{div}_M(v)+(\nabla\gamma v)(g^{l,j}u_lu_j)+ \gamma(\nabla g^{l,j}v)u_lu_j+\gamma g^{l,j}(u_{lk}u_j+u_lu_{jk})v^k.
\end{multline*} On the other hand, \begin{multline*} 2\mathrm{div}_M(\gamma\langle\nabla_M u,v\rangle_M \nabla_M u)= 2\mathrm{div}_M(\gamma \nabla_M u)\langle\nabla_M u,v\rangle_M+2\gamma \langle \nabla_M u,\nabla(\nabla uv)\rangle\\ =2\mathrm{div}_M(\gamma \nabla_M u)\langle\nabla_M u,v\rangle_M+2\gamma g^{l,j}u_l(u_kv^k)_j\\ =2\mathrm{div}_M(\gamma \nabla_M u)\langle\nabla_M u,v\rangle_M+2\gamma g^{l,j}u_lu_kv^k_j +2\gamma g^{l,j}u_lu_{kj}v^k. \end{multline*} Finally, $$\gamma g^{l,j}u_lu_{kj}v^k=\gamma g^{l,j}u_lu_{jk}v^k=\gamma g^{l,j}u_{lk}u_jv^k,$$ which follows by the symmetry of the Hessian matrix and by observing that $\gamma g^{l,j}u_{lk}u_jv^k=\gamma g^{j,l}u_{jk}u_lv^k=\gamma g^{l,j}u_lu_{jk}v^k$ since again by symmetry $g^{l,j}=g^{j,l}$. Putting the three previous equalities together, the lemma is proved.\hfill$\square$

\smallskip

We construct a Lipschitz function $v$ on $\overline{\Omega}$, with values in $\mathbb{R}^N$, coinciding with $\nu_M$ in $U_M^{d_0}$. By Remark~\ref{regUd} and Proposition~\ref{extensionprop}, or with a much simpler argument if \eqref{crucialequality2} holds, we can construct $v$ in such a way that $\|v\|_{C^{0,1}(\overline{\Omega},\mathbb{R}^N)}$ is bounded by a constant depending on the a priori data only. Then we apply the Rellich identity \eqref{rellich} in $\Omega^d_M$, with $0<d<d_0$, to our solution $u$ and such a function $v$. Namely, by the divergence theorem, \begin{multline*} \int_{\partial\Omega^d_M}\gamma\langle\nabla_M u,\nabla_M u\rangle_M\, d\sigma_M=\int_{\partial\Omega^d_M}\gamma\langle\nabla_M u,\nabla_M u\rangle_M\langle v,\nu_M\rangle_M\, d\sigma_M\\= \int_{\Omega^d_M}(\mathrm{div}_M(\gamma\langle\nabla_M u,\nabla_M u\rangle_M v)+2\mathrm{div}_M(\gamma \nabla_M u)\langle\nabla_M u,v\rangle_M)\, d_M\\= 2\int_{\Omega^d_M}\mathrm{div}_M(\gamma\langle\nabla_M u,v\rangle_M \nabla_M u)\, d_M+A_0(d)= 2\int_{\partial\Omega^d_M}\gamma u^2_{\nu_M}\, d\sigma_M+A_0(d) \end{multline*} where $$ A_0(d)=\int_{\Omega^d_M}\left(\langle\nabla_M u,\nabla_M u\rangle_M(\gamma\mathrm{div}_M(v)+\nabla\gamma v)+\gamma(\nabla g^{l,j}v)u_lu_j-2\gamma g^{l,j}u_lu_kv^k_j \right)\, d_M. $$ In other words, for almost every $d\in (0, d_0)$, $$D'(d)=-2T(d)-A_0(d).$$ Finally, it is not difficult to show that there exists a positive constant $C$, depending on the a priori data only, such that for any $d$ with $0<d<d_0$ we have $$|A_0(d)|\leq C D(d),$$ consequently, for almost every $d\in (0, d_0)$, \begin{equation}\label{Dprime} 2F(d)-C\leq-\frac{D'(d)}{D(d)}\leq 2F(d)+C. \end{equation} Now we turn to the computation of $H'$. We wish to prove a similar estimate, namely that there exists a positive constant $\tilde{C}$, depending on the a priori data only, such that for almost every $d\in (0, d_0)$, \begin{equation}\label{Hprime} 2N(d)-\tilde{C}\leq-\frac{H'(d)}{H(d)}\leq 2N(d)+\tilde{C}. \end{equation} Such a result directly follows by applying Proposition~\ref{normalderivativelemma} to $w=\gamma u^2$. In fact, we obtain that $H$ is absolutely continuous on every compact subinterval contained in $[0,d_0)$ and that, for almost any $d$, $0<d<d_0$, $$-H'(d)=2D(d)+\int_{\partial\Omega^d_M}(\nabla \gamma(x)\nu_M(x))u^2(x)\, d\sigma_M(x)+A(d)=2D(d)+A_1(d),$$ where $A_1(d)$ denotes the sum of the last two terms. Again, it is not difficult to show that there exists a positive constant $\tilde{C}$, depending on the a priori data only, such that for any $d$ with $0<d<d_0$ we have $$|A_1(d)|\leq \tilde{C} H(d),$$ consequently, for almost every $d\in (0, d_0)$, \eqref{Hprime} holds.
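Before giving the proofs, it is convenient to point out the elementary property of the function $h$ defined in \eqref{hfundef} on which they rely. For any $\alpha>0$, consider the auxiliary function $g(x)=xe^{-\alpha x}$, $x\in[0,1]$. Since $g'(x)=e^{-\alpha x}(1-\alpha x)$, we have $$\max_{x\in[0,1]}g(x)=\left\{\begin{array}{ll} g(1)=e^{-\alpha} &\text{if }\alpha\leq 1\\ g(1/\alpha)=(e\alpha)^{-1}&\text{if }\alpha>1, \end{array}\right.$$ that is, $\max_{x\in[0,1]}g(x)=h(\alpha)$. The same expressions also confirm that $h$ is of class $C^1$: at $s=1$ the two branches $e^{-s}$ and $(es)^{-1}$ attain the same value $e^{-1}$ and the same derivative $-e^{-1}$.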
We are now in a position to conclude the proof of Theorem~\ref{mainthm}.

\smallskip

\proof{ of Theorem~\textnormal{\ref{mainthm}}.} We use an ordinary differential equation argument, exploiting \eqref{Dprime} and \eqref{Hprime}. With $C$ as in \eqref{Dprime}, let us define, for $0\leq d<d_0$, $$\tilde{D}(d)=e^{-Cd}D(d).$$ Then $$-\frac{\tilde{D}'(d)}{\tilde{D}(d)}\geq 2F(d)=2(F(d)-N(d))+2N(d).$$ Therefore, for any $0<d<d_0$, $$\log(\tilde{D}(0))-\log(\tilde{D}(d))\geq 2\int_0^d(F(t)-N(t))\, dt+2\int_0^d N(t)\, dt.$$ We call $$G_0(d)=e^{-2\int_0^dF}\quad\text{and}\quad G(d)=e^{-2\int_0^d(F-N)}\quad\text{and}\quad G_1(d)=e^{-2\int_0^dN}$$ and we obtain that $$\tilde{D}(d)\leq G_0(d)\tilde{D}(0)=G(d)G_1(d)\tilde{D}(0),$$ therefore \begin{equation}\label{firsteq} D(d)\leq e^{Cd}G_0(d)D(0)=e^{Cd}G(d)G_1(d)D(0). \end{equation} Since $$\frac{N'(d)}{N(d)}=\frac{D'(d)}{D(d)}-\frac{H'(d)}{H(d)},$$ we infer that \begin{equation}\label{logNder} 2(F(d)-N(d))-\hat{C}\leq-\frac{N'(d)}{N(d)}\leq 2(F(d)-N(d))+\hat{C} \end{equation} where $\hat{C}=C+\tilde{C}$. Let us define, for $0\leq d<d_0$, $$\tilde{N}(d)=e^{-\hat{C}d}N(d).$$ Then, by \eqref{logNder}, we conclude that $$-\frac{\tilde{N}'(d)}{\tilde{N}(d)}\geq 2(F(d)-N(d))\geq 0.$$ In other words, $\tilde{N}$ is decreasing. We note that this is the crucial point in the argument of \cite{Gar-Lin1}. However, in our case, such a property is not enough, since, in order to estimate $G_1$, we need to control how fast $N$ can decrease. Still by \eqref{logNder}, we infer that \begin{equation}\label{secondeq} N(d)\geq e^{-\hat{C}d}G(d)N(0). \end{equation} We note that $G(0)=1$ and, since $F-N\geq 0$, $G$ is positive and decreasing with respect to $d$. We estimate $G_1(d)$ by using \eqref{secondeq} and the fact that $G(s)\geq G(d)$ for any $0<s<d$, obtaining that $$G_1(d)\leq e^{-2N(0)G(d)\int_0^de^{-\hat{C}s}\, ds}= e^{-b(d)N(0)G(d)},$$ where $$b(d)=\frac{2}{\hat{C}}\left(1-e^{-\hat{C}d}\right).$$ We consider the auxiliary function $g(x)=xe^{-\alpha x}$, $x\in[0,1]$, with $\alpha>0$, and note that $$\max_{x\in[0,1]}g(x)=h(\alpha),$$ thus we conclude that \begin{equation}\label{finaleq} D(d)\leq e^{Cd}D(0)h(b(d)N(0)). \end{equation} Since $\lambda_1c_1\Phi\leq N(0)$ and $2e^{-\hat{C}d_0}d\leq b(d)$ (note that $b(d)=2\int_0^de^{-\hat{C}s}\, ds\geq 2de^{-\hat{C}d}\geq 2de^{-\hat{C}d_0}$), the proof of \eqref{decayestimate} is concluded by setting $C_2=C$ and $c_2=2\lambda_1c_1e^{-\hat{C}d_0}$.\hfill$\square$

\smallskip

We note that, without any control on $F-N$, besides the fact that it is positive, using this technique it is in practice impossible to improve the estimate of Theorem~\ref{mainthm}. We now turn to the proof of Theorem~\ref{mainthmbis}. We need the following notation. For any $d\in [0,d_0)$ we define $$E(d)=\int_{\Omega^d_M}\gamma u^2\, d_M.$$ We note that $E$ is a strictly positive function which is absolutely continuous on any compact subinterval of $[0,d_0)$. Moreover, for almost any $d\in(0,d_0)$, we have $$E'(d)=-\int_{\partial\Omega^d_M}\gamma u^2\, d\sigma_M=-H(d).$$ We construct a Lipschitz function $v_1\in C^{0,1}(\overline{\Omega},\mathbb{R}^N)$ coinciding with $\nu_M$ in $U_M^{d_0/2}$ and such that $\|v_1\|_{C^{0,1}(\overline{\Omega},\mathbb{R}^N)}$ is bounded by a constant depending on the a priori data only and $$\|v_1\|_M\leq 1\quad\text{in }\overline{\Omega}.$$ Such a construction is fairly easy. We consider a $C^{\infty}$ function $\chi:\mathbb{R}\to\mathbb{R}$ such that $\chi$ is nondecreasing, $\chi=0$ on $(-\infty,3d_0/5]$ and $\chi=1$ on $[4d_0/5,+\infty)$.
We can define $v_1$ with the desired properties as follows $$v_1(x)=\chi(\varphi_M(x))\frac{e_1}{\sqrt{\langle G(x)e_1,e_1\rangle}}+(1-\chi(\varphi_M(x)))\nu_M(x)\quad\text{for any }x\in\overline{\Omega}.$$ Then we have, for any $d$, $0\leq d\leq d_0/2$, \begin{multline}\label{Sdef} H(d)= \int_{\partial\Omega^d_M}\gamma u^2\, d\sigma_M= \int_{\Omega^d_M}\mathrm{div}_M(\gamma u^2v_1)\, d_M\\= 2\int_{\Omega^d_M}\gamma u\langle(\nabla_M u)^T,v_1\rangle_M\, d_M+\int_{\Omega^d_M}\mathrm{div}_M(\gamma v_1)u^2\, d_M=2S(d)+A_2(d). \end{multline} It is not difficult to show that, for some constant $\tilde{C}_1$ depending on the a priori data only, we have \begin{equation}\label{Sdefbound} |A_2(d)|\leq\tilde{C}_1E(d). \end{equation} We now call, for any $d\in [0,d_0)$, $$K(d)=\frac{H(d)}{E(d)}\quad\text{and}\quad K_1(d)=\frac{H(d)}{\sqrt{E(d)}}.$$ For almost any $d$ with $0<d<d_0$, we have \begin{equation}\label{Eprime} -\frac{E'(d)}{E(d)}=K(d). \end{equation} Since $$\frac{K'(d)}{K(d)}=\frac{H'(d)}{H(d)}-\frac{E'(d)}{E(d)},$$ by \eqref{Hprime} and \eqref{Eprime} we infer that, for almost every $d\in (0,d_0)$, \begin{equation}\label{logKder} 2N(d)-K(d)-\tilde{C}\leq-\frac{K'(d)}{K(d)}\leq 2N(d)-K(d)+\tilde{C}. \end{equation} Analogously, since $$\frac{K_1'(d)}{K_1(d)}=\frac{H'(d)}{H(d)}-\frac{1}{2}\frac{E'(d)}{E(d)},$$ we obtain that, for almost every $d\in (0,d_0)$, \begin{equation}\label{logK1der} 2N(d)-\frac{K(d)}{2}-\tilde{C}\leq-\frac{K_1'(d)}{K_1(d)}\leq 2N(d)-\frac{K(d)}{2}+\tilde{C}. \end{equation} We are now in a position to conclude the proof of Theorem~\ref{mainthmbis}.

\smallskip

\proof{ of Theorem~\textnormal{\ref{mainthmbis}}.} In the sequel, we adopt the following normalisation, that is, we assume that \begin{equation}\label{Enorm} E(0)=1. \end{equation} It is immediate to show, with this assumption, that for any $d\in [0,d_0)$ we have $E(d)\leq 1$ (indeed $E'(d)=-H(d)<0$) and, consequently, \begin{equation}\label{KvsK1} K_1(d)\leq K(d). \end{equation} We now apply to the function $H$ a technique similar to the one used before to estimate $D$. Namely, with $\tilde{C}$ as in \eqref{Hprime}, let us define, for any $d\in [0,d_0)$, $$\tilde{H}(d)=e^{-\tilde{C}d}H(d).$$ Then, by \eqref{KvsK1}, $$-\frac{\tilde{H}'(d)}{\tilde{H}(d)}\geq 2N(d)\geq \left(2N(d)-\frac{K(d)}{2}\right)+\frac{K_1(d)}{2}.$$ The main difference with respect to the argument for $D$ is that, whereas it is immediate to show that $F\geq N$, it is not that evident that $2N\geq K/2$. The other difference with respect to the previous argument is that we need to use $K_1$ instead of $K$ itself. However, for any $d\in[0,d_0/2)$, using \eqref{Sdef} and calling $\tilde{A}_2(d)=A_2(d)/2$, \begin{multline*} 2N(d)-\frac{K(d)}{2}=2N(d)-\frac{S(d)}{E(d)}-\frac{\tilde{A}_2(d)}{E(d)}= \frac{D(d)}{S(d)+\tilde{A}_2(d)}-\frac{S(d)}{E(d)}-\frac{\tilde{A}_2(d)}{E(d)}\\ =2\frac{D(d)E(d)-S^2(d)-S(d)\tilde{A}_2(d)}{H(d)E(d)}-\frac{\tilde{A}_2(d)}{E(d)}= 2\frac{D(d)E(d)-S^2(d)}{H(d)E(d)}-\frac{S(d)A_2(d)}{H(d)E(d)}-\frac{\tilde{A}_2(d)}{E(d)}. \end{multline*} By the Cauchy-Schwarz inequality and since $\|v_1\|_M\leq 1$, the first term $$M(d)=2\frac{D(d)E(d)-S^2(d)}{H(d)E(d)}$$ is nonnegative, therefore, using \eqref{Sdef} and \eqref{Sdefbound}, we obtain that \begin{multline}\label{aneq} 2N(d)-\frac{K(d)}{2}= M(d)-\frac{A_2(d)}{E(d)}\left[1-\frac{1}{2}\frac{A_2(d)}{H(d)}\right]\\ = M(d)+\frac{A_2^2(d)}{2H(d)E(d)} -\frac{A_2(d)}{E(d)} =M_1(d)-\frac{A_2(d)}{E(d)} \end{multline} where \begin{equation}\label{M1defin} M_1(d)=M(d)+\frac{A_2^2(d)}{2H(d)E(d)}\geq 0\quad\text{for any }d\in (0,d_0/2).
\end{equation} We note that, by \eqref{Sdefbound}, for any $d$ with $0<d<d_0/2$ \begin{equation}\label{rightone} M_1(d)-\tilde{C}_1\leq 2N(d)-\frac{K(d)}{2}\leq M_1(d)+\tilde{C}_1, \end{equation} consequently, by \eqref{logK1der} and calling $\tilde{C}_2=\tilde{C}+\tilde{C}_1$, we have, for almost any $d\in (0,d_0/2)$, \begin{equation}\label{rightonebis} M_1(d)-\tilde{C}_2\leq -\frac{K_1'(d)}{K_1(d)}\leq M_1(d)+\tilde{C}_2. \end{equation} For any $d$ with $0<d<d_0/2$, we have $$\log(\tilde{H}(0))-\log(\tilde{H}(d))\geq \int_0^dM_1(t)\, dt+\frac{1}{2}\int_0^d K_1(t)\, dt-\tilde{C}_1d.$$ We call $$J_0(d)=e^{-2\int_0^dN}\quad\text{and}\quad J(d)=e^{-\int_0^dM_1}\quad\text{and}\quad J_1(d)=e^{-\int_0^d(K_1/2)}$$ and we obtain that $$\tilde{H}(d)\leq J_0(d)\tilde{H}(0)\leq e^{\tilde{C}_1d}J(d)J_1(d)\tilde{H}(0),$$ therefore, \begin{equation}\label{firsteqH} H(d)\leq e^{\tilde{C}d}J_0(d)H(0)\leq e^{\tilde{C}_2d}J(d)J_1(d)H(0). \end{equation} For any $d$ with $0\leq d<d_0/2$, by \eqref{rightonebis} and since $K_1(0)=K(0)$, \begin{equation}\label{secondeqbis} K_1(d)\geq e^{-\tilde{C}_2d}J(d)K(0). \end{equation} We note that $J(0)=1$ and, since $M_1\geq 0$, $J$ is positive and decreasing with respect to $d$. We estimate $J_1(d)$ by using \eqref{secondeqbis} and the fact that $J(s)\geq J(d)$ for any $0<s<d$, obtaining that $$J_1(d)\leq e^{-(K(0)/2)J(d)\int_0^de^{-\tilde{C}_2s}\, ds}= e^{-\tilde{b}(d)K(0)J(d)},$$ where $$\tilde{b}(d)=\frac{1}{2\tilde{C}_2}\left(1-e^{-\tilde{C}_2d}\right).$$ Arguing as in the proof of Theorem~\ref{mainthm}, we conclude that, setting $C_3=\tilde{C}_2$, for any $d$ with $0<d<d_0/2$ we have \begin{equation} H(d)\leq e^{C_3d}H(0)h(\tilde{b}(d)K(0)). \end{equation} In order to conclude the proof it is enough to show that, for some positive constant $c_3$ depending on the a priori data only, we have for any $d$ with $0<d<d_0/2$ $$\tilde{b}(d)K(0)\geq c_3d\Phi_1.$$ This is an immediate consequence of Proposition~\ref{-1/2-2bound}.\hfill$\square$
\section{Introduction}
Thermal convection is one of the fundamental mechanisms by which turbulent flows evolve. The temperature dependence of the fluid mass density and the resulting buoyancy forces drive a fluid motion that in turn advects the temperature, leading to a fully turbulent motion of the fluid \cite{Kadanoff2001,Ahlers2009,Chilla2012,Verma2018}. The simplest configuration in this specific class of turbulent flows is Rayleigh-B\'{e}nard convection (RBC) -- a fluid layer of height $H$ between two parallel rigid plates heated uniformly from below and cooled uniformly from above, such that a constant temperature difference $\Delta T=T_{\rm bot}-T_{\rm top}>0$ is sustained across the layer. Experimentally, RBC can be realized in a horizontally extended closed cylindrical or cuboid cell with thermally insulated sidewalls. In the past two decades, a large number of direct numerical simulations (DNS) of this configuration investigated various aspects of the large-scale structure formation, the longer-term dynamics and the dependencies on Rayleigh and Prandtl numbers, ${\rm Ra}$ and ${\rm Pr}$, as well as on the aspect ratio $\Gamma$ of the cells in detail \cite{Hartlep2003,Parodi2004,Hartlep2005,Hardenberg2008,Bailon2010,Emran2015,Pandey2018,Stevens2018,Fonda2019,Green2020,Krug2020,Berghout2021}. The number of studies of RBC in air at a Prandtl number of ${\rm Pr}=0.7$ by means of controlled laboratory experiments is much smaller, see e.g. refs. \cite{Deardorff1967,Fitzjarrald1976,Schmeling2014,Kaestner2018,Cierpka2019} for investigations in large-aspect-ratio setups. The three dimensionless control parameters of RBC experiments are defined as follows, \begin{equation} {\rm Ra}=\frac{g\alpha \Delta T H^3}{\nu\kappa}\,,\quad {\rm Pr}=\frac{\nu}{\kappa}\,,\quad \Gamma=\frac{L}{H}\,. \end{equation} Here, $g$ is the acceleration due to gravity, $\alpha$ the thermal expansion coefficient, $\nu$ the kinematic viscosity, $\kappa$ the thermal diffusivity, and $L$ the horizontal length scale. The dynamics and structure of the boundary layers of the velocity and temperature fields at the top and bottom of an RBC layer are essential for the amount of heat that can be carried from the bottom to the top in this configuration. In Valori and Schumacher \cite{Valori2021}, a dynamical connection between this boundary layer dynamics and the small-scale intermittent motion in the bulk, in particular the formation of high-amplitude dissipation events, was established. Thermal and kinetic energy dissipation fields probe the magnitude of gradients of the temperature and velocity fields, respectively. Their amplitudes are known to be largest at the smallest scales \cite{Schumacher2010,Scheel2013,Yeung2015,Buaria2020}. In detail, it was shown in \cite{Valori2021} how the formation of coherent plumes at the top and bottom and their subsequent collision or passing creates large temperature gradients in the bulk that cause the formation of localized shear layers \cite{Scheel2016}. These studies set one motivation for the present work. In the present work, we want to study the velocity derivatives in the bulk of an RBC configuration experimentally by means of stereoscopic particle image velocimetry (SPIV) in the horizontal midplane of the cell. This allows us to obtain all three velocity components $u_i({\bm x},t)$ in a horizontal measurement region $A=2.9 H\times 2.2 H$ and thus 7 out of the 9 components of the velocity gradient tensor field $M_{ij}({\bm x},t)=\partial u_i/\partial x_j$.
These are the in-plane components $\partial u_i/\partial x_j$ with $i=1,2,3$ and $j=1,2$, and $\partial u_3/\partial x_3=-(\partial u_1/\partial x_1+\partial u_2/\partial x_2)$ via the incompressibility condition of the flow. Thus the out-of-plane component of the vorticity field $\omega_3=\partial u_2/\partial x_1-\partial u_1/\partial x_2$ can be obtained and we have access to highly intermittent derivative fields in the bulk of the turbulent convection layer. We analyze the statistics of these 7 derivatives and probe their statistical convergence. Furthermore, the resulting probability density functions (PDFs) are found to agree with those of existing high-resolution DNS data from ref. \cite{Valori2021} in a similar setup and the same parameter range. The SPIV snapshot series contain a few extreme events in the form of intense vortex cores that sweep across the measurement plane $A$. In the second part of this work, we analyze their temporal growth and explore the capability of recurrent neural network (RNN) architectures to model time series of the out-of-plane vorticity component and thus to predict high-amplitude or extreme events \cite{Goodfellow2016}. More specifically, we apply the reservoir computing model (RCM) \cite{Jaeger2004,Pandey2020}, which has been shown to successfully predict the time evolution of nonlinear dynamical systems and turbulent flows. See for example refs. \cite{Lu2017,Srinivasan2019,Vlachas2020,Pandey2020a,Heyder2021,Farazmand2021} for applications to fluid dynamical problems. We show that an RCM with continually available sparse data is able to predict high-amplitude or extreme out-of-plane vorticity events, which can be quantified by a squared vorticity integrated over $A$ (see ref. \cite{Sapsis2021} for a recent review). The outline of the manuscript is as follows. In Sections IIA to IIC, details on the experiment and the machine learning algorithm are provided. Section IIIA discusses the results with respect to the vorticity and velocity derivative statistics. Furthermore, we investigate the dynamics of a particular high-vorticity event tracked in the experiment in Sec. IIIB. Section IIIC finally provides the results of the application of the RNN to predict the time evolution and the non-Gaussian PDFs with the extended tails. We conclude the work with a summary and an outlook in Section IV. For convenience, we will also switch from the notation $u_1$, $u_2$, $u_3$, and $\omega_3$ for velocity and vorticity field components to $u_x$, $u_y$, $u_z$, and $\omega_z$ in the following sections, with coordinate $z$ parallel to the imposed temperature gradient between top and bottom. \section{Methods} \subsection{Rayleigh-B\'{e}nard convection cell for pressurized air} The experimental set-up is a large-aspect-ratio Rayleigh-B\'{e}nard convection (RBC) cell of size $W\times W\times H$, where the horizontal dimension $W$ is ten times the vertical distance $H=3$ cm between the two plates (see figure \ref{fig:RBC-setup}). The aspect ratio of the cell is therefore $\Gamma=10$. The bottom plate of the cell is made of two glass plates coated on one side with a thin layer of Indium Tin Oxide. It has the important characteristic of being transparent and at the same time uniformly heatable by the Joule effect. The latter arises from the electrical current that goes through the oxide layer (for further details see \cite{Kaestner2018}). Each plate has a measured light transmission coefficient of about 68$\%$ \cite{Kaestner2018}.
This coating was manufactured at the Fraunhofer Institute for Organic Electronics in Dresden (Germany). The top plate of the RBC cell is made of Aluminium with an internal cooling circuit where water flows at a controlled temperature fed by a thermostat. The side walls are made of 4 mm thick polycarbonate that allows optical access for the laser light. The surface temperatures at both horizontal walls were measured by four thermoresistances PT100 (class B) for each plate, which were located 20 mm away from the side walls. The RBC cell is inserted into the high-pressure facility, the Scaled Convective Airflow Laboratory Experiment (SCALEX). This facility consists of a high-pressure vessel with 35 observation windows, allowing for optical access from the outside. The pressure within the vessel can be regulated from 10-100 mbars to 8 bars in steps of about 100 mbars. The working pressure was measured with a Cerabar PMC131 sensor from Endress + Hauser AG. Further details on the device are for example summarized in ref. \cite{Kaestner2018}. \begin{figure} \includegraphics[width=0.47\textwidth]{Figures/PIVsetup.pdf} \caption{Relative position of cameras, RBC cell and measurement section. (a) Top view, (b) side view. The heated bottom plate of the RBC cell and the cooled top plate are indicated by a red and a blue line, respectively. The measurement section $A$ is represented in green in panels (a) and (b).\label{fig:RBC-setup}} \end{figure} \subsection{Particle image velocimetry measurements} Stereoscopic Particle Image Velocimetry experiments are performed in a horizontal layer at mid-height between the top and bottom plates of the cell. The position of the measurement plane is shown in green in figure \ref{fig:RBC-setup}. As already discussed, the measurement region covers a horizontal area of $A\approx 2.9 H\times 2.2 H$. In order to conduct stereo PIV measurements in a horizontal plane within the cell, two cameras are located below the cell and take images through the transparent bottom plate. A mirror is placed in the optical path as represented in figure \ref{fig:RBC-setup}. A stereo angle of about $50^\circ$ is used. The cameras are sCMOS cameras from LaVision GmbH with a digital resolution of $2560 \times 2160$ pixels and a pixel pitch of $6.5\,\mu$m. These cameras have particularly thin connector cables made of optical fibres that allow them to be placed inside the high-pressure vessel. These cables are inserted into high-pressure feed-through systems from Spectite$\textsuperscript{\tiny{\textregistered}}$ that are built into a flange of the vessel. The images are recorded at a frame rate of 10 Hz with an interframe time of 7 ms or 10 ms, depending on the strength of the convective flow (or Rayleigh number ${\rm Ra}$) studied. Each camera is equipped with a Zeiss Milvus 2/100M objective lens under the Scheimpflug condition. The focal length of the lenses is 100 mm and the aperture stops used are 5.6 and 8 for the cameras in forward and backward scattering, respectively. Both cameras and lenses are placed inside the vessel at a working pressure of up to 4.5 bars. The field of view of the measurements is 8.8 cm $\times$ 6.6 cm. Therefore the image magnification factor is $M \sim 0.2$. The laser light sheet was created with a double-pulse laser (Quantel Q-smart Twins 850) with a pulse energy of about 175 mJ for a 2 mm thick light sheet. A spherical and a cylindrical lens were used to generate the light sheet.
\begin{table} \begin{ruledtabular} \begin{tabular}{ccccccc} ${\rm Ra}$ & $T_{\rm top} [^{\circ}$C$]$ & $T_{\rm bot} [^{\circ}$C$]$ &$\Delta T$ [K] & p [bars] & $U_{\rm ff}$ [mm/s] & $T_{\rm ff}$ [s]\\ \hline $1.7\times10^4$ & $22.3$ & $28.9$ & $6.6$ &1 & $80.7$ & $0.4$\\ $2.1\times10^4$ & $21.2$ & $29.2$ & $8$ &1 & $80.7$ & $0.4$\\ $1.1\times10^5$ & $21.2$ & $28.5$ & $7.3$ & $2.47$ & $85$ & $0.35$\\ $2.9\times10^5$ & $22.9$ & $28.4$ & $5.5 $ & $4.5$ & $74.3$ & $0.4$\\ $5.1\times10^5$ & $20.1$ & $29.9$ & $9.8$ & $4.5$ & $98.8$ & $0.3$\\ \end{tabular} \end{ruledtabular} \caption{Summary of the experimental conditions used in the convection measurements. The first column reports the Rayleigh numbers, the second and the third one the values of the top ($T_{\rm top}$) and bottom ($T_{\rm bot}$) temperatures, respectively, the fourth one the temperature difference between the bottom and the top walls ($\Delta T$), the fifth one the working pressure. The sixth and the seventh columns are for the characteristic free-fall velocity and free-fall time of the flow, respectively.\label{tab:expProgramme}} \end{table} The depth of field of the measurements $\delta_z$ is given by \begin{equation} \delta_z =4\left(1+\frac{1}{M}\right)^2f_\#^2\lambda\,, \label{eq:depth-of-field} \end{equation} where $\lambda$ is the wavelength of the laser. Here, $\lambda = 532$ nm is used for the green laser. For the camera in forward scattering with the smallest f-stop $f_\#$, the depth is $\delta_z = 2.4$ mm, which is larger than the laser sheet thickness and therefore ensures good focusing conditions of the illuminated particles. The geometrical calibration is made on the measurement plane with the three-dimensional target 204-15 from LaVision, where the distance between two dots is 15 mm, and the dot diameter and the level separation are both 3 mm. Additionally, a stereoscopic self-calibration is made and iteratively repeated with a final mean residual displacement below 1 pixel. This final value cannot be further reduced because of the presence of optical distortions due to the temperature gradient \cite{Valori2018,Valori2019}. \begin{table} \begin{ruledtabular} \begin{tabular}{cccc} ${\rm Ra}^{\rm DNS}$ & $\eta_K$ [mm] & ${\rm Ra}^{\rm PIV}$ & $\Delta x^{\rm PIV}$ [mm]\\ \hline $1.5\times10^4$ & $2.4$ & $1.7\times10^4$ & 4\\ $2\times10^4$ & $ 2.2$ & $2.1\times10^4$ & 4\\ $1\times10^5$ & $1.3$ & $1.1\times10^5$ &4\\ $2\times10^5$ & $1.0 $ & $2.9\times10^5$ &4\\ $5\times10^5$ & $0.7$ & $5.1\times10^5$ &3.5\\ \end{tabular} \end{ruledtabular} \caption{Kolmogorov length scale from the direct numerical simulations (DNS) and spatial resolution of the PIV measurements computed without overlap for each of the five Rayleigh numbers studied. The simulations were conducted in a domain of $\Gamma=8$ with periodic boundary conditions at the sidewalls at Pr $=1$ by a spectral element method, see ref. \cite{Valori2021} for details. \label{tab:DNS}} \end{table} The seeding in the PIV measurements is established by droplets of di(2-ethylhexyl) sebacate (DEHS) with an average diameter of 0.9 $\mu$m generated by a vaporizer from PIVTECH GmbH that is placed inside the pressure vessel. The characteristic velocity of the flow is estimated as the free-fall velocity $U_{\rm ff} = \sqrt{\alpha g \Delta T H}$, see also table \ref{tab:expProgramme} for the values of $U_{\rm ff}$ and $\Delta T$.
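To illustrate these characteristic scales, the following minimal Python sketch recomputes $U_{\rm ff}$ and $T_{\rm ff}$ for the first row of table \ref{tab:expProgramme}; it is not part of the measurement chain, and the ideal-gas estimate $\alpha\approx 1/T_{\rm mean}$ for air is our assumption.
\begin{verbatim}
import math

# Minimal sketch: free-fall scales for the first row of the table of
# experimental conditions; alpha = 1/T_mean (ideal gas) is assumed for air.
g = 9.81                    # gravitational acceleration [m/s^2]
H = 0.03                    # cell height [m]
T_top, T_bot = 22.3, 28.9   # plate temperatures [deg C]
dT = T_bot - T_top          # temperature difference [K]
alpha = 1.0 / (0.5 * (T_top + T_bot) + 273.15)  # expansion coeff. [1/K]

U_ff = math.sqrt(alpha * g * dT * H)  # free-fall velocity [m/s]
T_ff = H / U_ff                       # free-fall time [s]
print(f"U_ff = {1e3 * U_ff:.1f} mm/s, T_ff = {T_ff:.2f} s")
# -> U_ff = 80.6 mm/s, T_ff = 0.37 s, consistent with the tabulated
#    80.7 mm/s and 0.4 s
\end{verbatim}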
The sedimentation velocity of the particles used as tracers is given by \begin{equation}\label{eq:sedimentation} u_S = \frac{d_p^2(\rho_p-\rho_f)g}{18\mu}, \end{equation} where $\rho_f$ and $\mu=\rho_f\nu$ are, respectively, the density and the dynamic viscosity of the fluid at the mean temperature and pressure of the cell, and $\rho_p$ is the mass density of the particles. The sedimentation velocity of the particles used is negligible, as shown by the ratio $u_{S}/U_{\rm ff}$, which is about $4\cdot 10^{-6}$ or smaller for all flow conditions studied here. The particles faithfully follow the flow, as indicated by the values of the Stokes number ${\rm St}$ defined in eq. \eqref{eq:Stokes}, which expresses the ratio between the characteristic response time of a particle of diameter $d_{p}$ and mass density ${\rho_{p}}$ suspended in the flow and the characteristic time of the flow. Values of the Stokes number in the present study are always smaller than 0.01, as required for good tracer particles \cite{PIVbookJerry,PIVbookDLR}. \begin{equation}\label{eq:Stokes} {\rm St} = \frac{\rho_{p}d_{p}^{2}U_{\rm ff}}{18\mu H} \end{equation} The PIV images are acquired and processed with the software DaVis 10 from LaVision. The only preprocessing that is applied to the raw images is a time-averaged image subtraction in order to reduce the noise. A two-pass cross-correlation algorithm with decreasing interrogation window size is used. The size of the final pass is 128 $\times$ 128 pixels for all measurements except for the largest Rayleigh number, where the size is 96 $\times$ 96 pixels, which leads to a spatial resolution of about 4 mm and 3.5 mm, respectively (see table \ref{tab:DNS}). Here, we also list the Kolmogorov length $\eta_K$ of the DNS (at the comparable Rayleigh numbers), which is given in dimensionless units by \begin{equation} \eta_K=\left(\frac{\rm Pr}{\rm Ra}\right)^{3/8} \langle\epsilon\rangle_{V,t}^{-1/4}\,, \end{equation} where $\langle\epsilon\rangle_{V,t}$ is the combined volume-time average of the kinetic energy dissipation rate field \cite{Scheel2013}. This value is multiplied by the actual cell height $H$ to get a length in physical units as shown in the table. The values of the Rayleigh numbers of the experiments range from ${\rm Ra} = 1.7\times10^4$ to ${\rm Ra} = 5.1\times10^5$ and are listed in table \ref{tab:expProgramme}. The working fluid is air with ${\rm Pr} = 0.7$ for all cases. Rayleigh numbers ${\rm Ra} \ge 1.1\times10^5$ are obtained by putting the vessel under pressure. In table \ref{tab:expProgramme} the corresponding values of the working pressure of each experiment are shown together with the values of the temperatures at the bottom ($T_{\rm bot}$) and top ($T_{\rm top}$) walls of the cell, in addition to the resulting difference $\Delta T$. The last two columns of the table indicate the characteristic free-fall velocity $U_{\rm ff}$ and the resulting free-fall time $T_{\rm ff} = H/U_{\rm ff}$ of the flow. Values of the fluid properties are taken from the NIST RefProp database version 9.1 \cite{NIST}. \begin{figure*} \includegraphics[width=0.75\textwidth]{Figures/figure_sketch.pdf} \caption{Sketch of the reservoir computing model (RCM) and arrangement of continually available data during the prediction phase of the machine learning algorithm. (a) The three building blocks of the RCM are the input layer, the reservoir (a random network of neurons), and the output layer. (b) Sketch of a part of the data grid $A$ obtained in the stereoscopic particle image velocimetry (PIV) measurement.
This figure illustrates the $5\times 5$ scenario: one data point with continually available experimental vorticity data (in red) is surrounded by 24 grid points for which the trained RCM predicts the time evolution of the vorticity component autonomously. The whole measurement area $A$ is covered sparsely with continually available measurements in this way. The uniform PIV resolution is also indicated. \label{fig:sketch}} \end{figure*} \begin{figure*} \includegraphics[width=0.75\textwidth]{Figures/figure_convergence.pdf} \caption{Statistical convergence of the higher-order velocity derivative statistics in the experiments. The vorticity component $\omega_z$ (a,g) and the derivatives $\partial u_x/\partial x$ (b,h), $\partial u_y/\partial x$ (c,i), $\partial u_x/\partial y$ (d,j), $\partial u_y/\partial y$ (e,k), as well as $\partial u_z/\partial z$ (f,l) are shown. All derivatives are normalized by their corresponding root-mean-square values. The moment order $n$ is indicated in the legend above the panels. Panels (a--f) are for ${\rm Ra}=2.9\times 10^5$, panels (g--l) for ${\rm Ra}=5.1\times 10^5$.} \label{convergence} \end{figure*} \subsection{Reservoir computing model} In the following, we briefly review the reservoir computing model (RCM) \cite{Jaeger2004}. The reservoir is a random, recurrently and sparsely connected network. The model consists of the input layer, the reservoir, and the output layer, as shown in figure \ref{fig:sketch}(a). The input layer takes the training data in the form of discrete time series ${\bm \Omega}(t)=(\omega_z^{(1)}(t),\dots ,\omega_z^{(N_{\rm PIV})}(t))$ with $N_{\rm PIV}$ the number of grid points of the PIV measurement region $A$. The vorticity data are converted at each instant into a reservoir state vector ${\bm r}(t)\in \mathbb{R}^N$ with a number of reservoir nodes $N\gg N_{\rm PIV}$. This is done by a random weight matrix $W^{({\rm in})} \in \mathbb{R}^{N\times N_{\rm PIV}}$ which is determined at the beginning of the training and left unchanged, \begin{equation} {\bm r}(t) = W^{({\rm in})} {\bm \Omega}(t)\,. \label{RC0} \end{equation} The reservoir is described by a symmetric adjacency matrix $W^r \in \mathbb{R}^{N\times N}$, also determined initially and left unchanged. Typically, an ensemble of different realizations of the reservoir matrix is considered in the training process. Two important parameters of $W^r$ are the reservoir density $D$ of active nodes and the spectral radius $\rho(W^r)$, which is set by the largest absolute value of the eigenvalues. The reservoir nodes are updated by a simple nonlinear dynamical system which comprises the short-term memory of the network, \begin{align} {\bm r}(t+\Delta t) &= (1-\alpha){\bm r}(t) \nonumber\\ &+\alpha \tanh\left[ W^r {\bm r}(t) + W^{({\rm in})} {\bm \Omega}(t)\right]\,, \label{RC1} \end{align} where the nonlinearity enters in the form of a hyperbolic tangent activation function. The leakage rate is $0<\alpha<1$ and the spectral radius of the reservoir is typically taken as $\rho(W^r)\lesssim 1$. The last of the three building blocks of the RCM is the output weight matrix $W^{({\rm out})} \in \mathbb{R}^{N_{\rm PIV}\times N}$ which maps the updated reservoir vector back to the vorticity field, \begin{equation} \hat{\bm \Omega}(t+\Delta t) = W^{({\rm out})} {\bm r}(t+\Delta t)\,. \label{RC2} \end{equation} The iteration of \eqref{RC1} is repeated for all snapshots of the PIV training data and the sequence of corresponding reservoir states is saved.
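As an illustration of the model equations, the following minimal Python sketch implements the input lift \eqref{RC0}, the reservoir update \eqref{RC1}, the harvesting of the reservoir states, the closed-form ridge-regression readout that minimizes the cost function discussed in the next paragraph, and the autonomous prediction step \eqref{RC4}; all array sizes, parameter values and the synthetic input data are placeholders and do not correspond to the trained models of table \ref{tab:hyper}.
\begin{verbatim}
import numpy as np

# Minimal reservoir computing sketch; sizes and values are placeholders.
rng = np.random.default_rng(0)
N_piv, N = 100, 1000            # PIV grid points, reservoir nodes (N >> N_piv)
alpha, rho, D = 0.5, 0.95, 0.2  # leakage rate, spectral radius, density

W_in = rng.uniform(-0.5, 0.5, (N, N_piv))  # fixed random input matrix
W_r = rng.uniform(-0.5, 0.5, (N, N)) * (rng.random((N, N)) < D)
W_r = 0.5 * (W_r + W_r.T)                  # symmetric adjacency matrix
W_r *= rho / np.max(np.abs(np.linalg.eigvalsh(W_r)))  # set spectral radius

def step(r, omega):
    # leaky-tanh reservoir update driven by the vorticity snapshot omega
    return (1.0 - alpha) * r + alpha * np.tanh(W_r @ r + W_in @ omega)

Omega = rng.standard_normal((5000, N_piv))  # placeholder training series
r = W_in @ Omega[0]                         # initial lift of the input
R = np.empty((len(Omega), N))               # harvested reservoir states
for k in range(len(Omega)):
    r = step(r, Omega[k])
    R[k] = r

# closed-form ridge-regression readout, pairing r(k dt) with Omega(k dt)
gamma = 0.1
W_out = (Omega.T @ R) @ np.linalg.inv(R.T @ R + gamma * np.eye(N))

def predict_step(r):
    # autonomous prediction: the readout replaces the measured input
    return (1.0 - alpha) * r + alpha * np.tanh(W_r @ r + W_in @ (W_out @ r))
\end{verbatim}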
In contrast to most other neural networks, the training of the RCM is performed with respect to the output layer only. The optimized output weight matrix, $W^{({\rm out})\ast}$, is obtained by a minimization of a regularized quadratic cost function $C$. The regularization term is added to $C$ to tackle the over-fitting problem \cite{Goodfellow2016}. This cost function is given by \begin{align} C \left[W^{({\rm out})} \right] &=\sum_{k=1}^{N_{\rm train}} \Big| W^{({\rm out})} {\bm r}(k\Delta t)-{\bm \Omega}(k\Delta t)\Big|^2\nonumber\\ &+\gamma\, \mbox{Tr}\left(W^{({\rm out})} W^{T({\rm out})}\right)\,. \label{RC3} \end{align} To summarize, the hyperparameters of the RCM training process are the number of nodes $N$, the reservoir density $D$, the spectral radius $\rho(W^r)$, the leakage rate $\alpha$, and the ridge regression parameter $\gamma>0$ of the regularization term of the cost function $C$. These parameters span a 5-dimensional space and have to be tuned, e.g., by a grid search, such that the optimal output matrix $W^{({\rm out})\ast}$ results and the mean squared error between prediction and test data becomes minimal. \begin{figure*} \includegraphics[width=0.7\textwidth]{Figures/figure_PDFs.pdf} \caption{Comparison of the probability density functions of the out-of-plane vorticity $\omega_z$ (a) and the five partial derivatives, which are $\partial u_x/\partial x$ in (b), $\partial u_y/\partial x$ in (c), $\partial u_x/\partial y$ in (d), $\partial u_y/\partial y$ in (e), and $\partial u_z/\partial z$ in panel (f). The PIV experiments are at ${\rm Ra} = 5.1\times10^5$, the DNS at ${\rm Ra} = 5\times10^5$. All quantities are normalized by their corresponding root-mean-square values. \label{fig:PDF}} \end{figure*} In the subsequent prediction mode, the RCM predicts the vertical vorticity component field in the measurement region, which can then be compared with original unseen test data. The reservoir dynamics in prediction mode is given by \begin{align} {\bm r}(t+\Delta t) & = (1-\alpha){\bm r}(t) \nonumber\\ & +\alpha \tanh\left[W^r {\bm r}(t) + W^{({\rm in})} W^{({\rm out})\ast} {\bm r}(t)\right]\,. \label{RC4} \end{align} More details can be found in refs. \cite{Pandey2020,Heyder2021}. The dynamics of eq. \eqref{RC4} is used to generate synthetic time series of the intermittent vorticity component at the grid points of the measurement region $A$. \begin{figure} \includegraphics[width=0.45\textwidth]{Figures/figure_PDFsOmega.pdf} \caption{Probability density functions (PDFs) of the out-of-plane vorticity field component obtained from different stereoscopic particle image velocimetry measurements for the Rayleigh numbers which are indicated in the legend. The gray dashed line displays the Gaussian case for reference.} \label{allPDF} \end{figure} \section{Results} \subsection{Velocity derivative statistics} The long total measurement acquisition time of 2500 $T_{\rm ff}$ allows a good convergence of the statistics, as displayed in figure \ref{convergence}. In these plots, we report results for the out-of-plane vorticity and selected individual velocity derivative components, all normalized by their corresponding root-mean-square (rms) values. We therefore define \begin{equation} \chi:=\frac{\omega_z}{\sqrt{\langle \omega_z^2 \rangle_{A,t}}} \quad\mbox{or}\quad \chi:=\frac{\partial u_i/\partial x_j}{\sqrt{\langle (\partial u_i/\partial x_j)^2 \rangle_{A,t}}}\,, \end{equation} with $i,j=1,2,3$. The denominators in both definitions are the rms values of the corresponding quantities.
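As an illustration of this normalization and of the moments discussed next, a minimal Python sketch (our own, not the evaluation code of the measurements; array shapes and the grid spacing are placeholders) of the accessible derivatives, the rms normalization and the histogram-based moments reads:
\begin{verbatim}
import numpy as np

# Minimal sketch: accessible derivatives, rms normalization and moments.
rng = np.random.default_rng(1)
ny, nx, dx = 45, 60, 4e-3   # placeholder grid size and PIV resolution [m]
ux, uy, uz = rng.standard_normal((3, ny, nx))  # placeholder velocity fields

dux_dy, dux_dx = np.gradient(ux, dx)  # in-plane derivatives (axis 0 is y)
duy_dy, duy_dx = np.gradient(uy, dx)
duz_dy, duz_dx = np.gradient(uz, dx)
duz_dz = -(dux_dx + duy_dy)           # incompressibility
omega_z = duy_dx - dux_dy             # out-of-plane vorticity

chi = omega_z / np.sqrt(np.mean(omega_z**2))  # rms-normalized field

# discretized PDF p(chi) and normalized moments M_n
p, edges = np.histogram(chi, bins=101, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
M = {n: np.sum(centers**n * p * np.diff(edges)) for n in (2, 4, 6)}
print(M)  # Gaussian reference values: M_2 = 1, M_4 = 3, M_6 = 15
\end{verbatim}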
The statistical convergence of the $n$-th order normalized moment follows from plots of $\chi^n p(\chi)$ versus $\chi$. The area below these curves then corresponds to the $n$-th order moment $M_n$ which is given by \begin{equation} M_n(\chi):=\int_{-\infty}^{\infty} \chi^n p(\chi) \,\mbox{d}\chi\,. \end{equation} These moments can be evaluated by a discretized approximation of this integral. The statistical convergence of the moments $M_2$, $M_4$, and $M_6$ is shown in figure \ref{convergence} for the two largest experimental Rayleigh numbers ${\rm Ra} = 2.9\times10^5$ and ${\rm Ra}=5.1\times10^5$. A converged velocity derivative statistics implies that the tails for the largest $\chi$--values tend to decay to zero, which seems to be the case for the components shown. Note that the $y$--axes are displayed in logarithmic units in the figure. We have also verified that the two velocity gradient tensor components which are not shown in the figure satisfy the statistical convergence criteria as well. It can be confirmed that the PIV measurements yield sufficiently well-resolved velocity gradients in the range of accessible Rayleigh numbers. Figure \ref{fig:PDF} reports a direct comparison between the PDFs of 5 components of the velocity gradient tensor $M_{ij}$ and the out-of-plane vorticity component $\omega_z$ from the SPIV measurements and the DNS data from \cite{Valori2021}. One can see that the experimental results are in very good agreement with the simulation data all the way to the far tails. Again, the PDFs of the 2 components not shown there, $\partial u_z/\partial x$ and $\partial u_z/\partial y$, are qualitatively and quantitatively similar to the shown data. The PDFs of the out-of-plane vorticity for all SPIV series listed in table \ref{tab:expProgramme} are shown in figure \ref{allPDF}. One can observe that the tails of the PDFs become wider as the Rayleigh number grows, which is an indication of a transition from Gaussian to non-Gaussian intermittent velocity derivative statistics, as discussed for RBC in refs. \cite{Schumacher2014,Schumacher2018,Valori2021}. We can thus conclude that this transition is also detectable in the bulk of controlled laboratory experiments at moderate Rayleigh numbers. This allows us to run long-term measurements of velocity derivative statistics, which is challenging in simulations, where the numerical effort grows with $\Gamma^2$. \subsection{Extreme event of out-of-plane vorticity} An example of an extreme event of $\omega_z$ from the PIV measurements at Rayleigh number ${\rm Ra} = 5.1 \times 10^5$ is shown in figure \ref{fig_Extr}. We count an extreme event whenever the vorticity magnitude exceeds $10\omega_{z,{\rm rms}}$. In the time interval at the highest Rayleigh number, we were able to record two of these events. It can be expected that their frequency and excess magnitude increase when the Rayleigh number grows. We have to leave this point for future work. Panel (a) of the figure shows a prominent vortex core on the right hand side of $A$. This vortex is the result of a horizontal shear in combination with an upward motion through the observation plane. From the corresponding velocity field, whose components are represented in panels (b--d) of this figure, one can suspect that a plume collision is responsible for the generation of the extreme vorticity event, as has so far been observed only in DNS data records \cite{Valori2021}.
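For completeness, the threshold criterion for extreme events stated above can be written compactly; the following lines are an illustrative sketch with a placeholder field array, not the original detection code.
\begin{verbatim}
import numpy as np

# Minimal sketch of the criterion |omega_z| >= 10 * rms per snapshot.
rng = np.random.default_rng(2)
omega_z = rng.standard_normal((10000, 45, 60))  # (snapshots, ny, nx)

rms = np.sqrt(np.mean(omega_z**2))
peaks = np.abs(omega_z).max(axis=(1, 2))      # strongest |omega_z| per snapshot
events = np.flatnonzero(peaks >= 10.0 * rms)  # indices of extreme events
print(f"{events.size} extreme events in {len(omega_z)} snapshots")
# Gaussian noise yields none; the experiment recorded two such events.
\end{verbatim}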
Lu and Doering \cite{Lu2008} showed that the temporal growth of the enstrophy, which is given by \begin{equation} \bar E(t) = \int_V \omega_i^2 dV \quad\mbox{with}\quad \omega_i({\bm x},t)=\varepsilon_{ijk} \frac{\partial u_k({\bm x},t)}{\partial x_j}\,, \end{equation} in homogeneous isotropic turbulence in a triply periodic box of volume $V$, is rigorously bounded by \begin{equation} \frac{d\bar E(t)}{dt} \le \frac{27 c^3}{16\nu} \bar E(t)^3\,, \end{equation} with $c=\sqrt{2/\pi}$. It was found that axially symmetric, colliding vortex rings maximize the enstrophy growth and that interacting Burgers vortices cause a growth $d\bar E/dt \sim \bar E(t)^{7/4}$. Even though we cannot expect that the same bound holds for a turbulent convection flow, we can probe the growth of the out-of-plane squared vorticity in the measurement region $A$. In figure \ref{growth}, we therefore plot the temporal growth of the vorticity component, $d\omega^2_z/dt$, versus the squared vorticity, $\omega_z^2$, as a scatter plot for all grid points which are centered around ${\bm x}_{\ast}\in A$, the point where $\omega_z^2$ yields an extreme event at time $t=t_{\ast}$. In both panels the time series (which extend over $10^4$ data points) are plotted for $5\times 5$ grid points. The data right before $t=t_{\ast}$ are replotted in blue in both panels. In addition, the growth laws with the exponents 3 and 7/4 are also indicated by solid lines. It is seen that for the highest-amplitude extreme event the growth to the maximum is close to the 7/4--scaling, which suggests that a vortex stretching process generates this event. Further details cannot be provided on the basis of the measurements due to the missing velocity derivatives. \begin{figure*} \includegraphics[width=0.7\textwidth]{Figures/figure_event.pdf} \caption{Visualisation of an extreme event of $\omega_z$ from the SPIV measurements at ${\rm Ra} = 5.1 \times 10^5$. Filled contours of $\omega_z$ in (a), $u_x$ in (b), $u_y$ in (c), and $u_z$ in (d) are shown. Color bars of the vorticity and velocity components are given in units of $t_{\rm ff}^{-1}$ and $U_{\rm ff}$, respectively. Arrows show the same in-plane velocity vector field in all four panels.\label{fig_Extr}} \end{figure*} \begin{figure} \includegraphics[width=0.45\textwidth]{Figures/figure_extremegrowth.pdf} \caption{Scatter plots of the time series, taken at the 25 grid points around the extreme event position ${\bm x}_{\ast}$ for ${\rm Ra} = 5.1 \times 10^5$. Panels (a) and (b) correspond to the two events at $t=853.25 t_{\rm ff}$ and $t=599 t_{\rm ff}$, respectively, for which $|\omega_z|\ge 10 \omega_{z,{\rm rms}}$. The squared out-of-plane vorticity growth is plotted versus the squared out-of-plane vorticity. The final data points with $t<t_{\ast}$ are replotted in blue. Also indicated are the scaling exponents that follow from \cite{Lu2008}.\label{growth}} \end{figure} \begin{figure*} \includegraphics[width=0.73\textwidth]{Figures/figure_rcm1.pdf} \caption{Prediction of the temporal evolution and statistics of the out-of-plane vorticity component $\omega_z$ by reservoir computing models. The Rayleigh number is ${\rm Ra}=1.7\times 10^4$. (a) Time series prediction example at one grid point in $A$. Integer $n$ corresponds to time $n\Delta t$ with $\Delta t= 0.25 t_{\rm ff}$. (b) Zoom into the first steps of the prediction phase.
Panel (c) shows the probability density function obtained from a prediction of correspondingly 8 grid points around each continually available data point (scenario $3\times 3$) and panel (d) from 48 grid points (scenario $7\times 7$).\label{fig:rcm1}} \end{figure*} \begin{figure*} \includegraphics[width=0.73\textwidth]{Figures/figure_rcm2.pdf} \caption{Same as in figure \ref{fig:rcm1} for ${\rm Ra}=2.9\times 10^5$ with $\Delta t=0.25 t_{\rm ff}$.\label{fig:rcm2}} \end{figure*} \subsection{Reservoir computing model for dynamical evolution of vorticity} Finally, we apply a reservoir computing model (RCM) to predict the dynamical evolution of time series of $\omega_z$ taken at specific points in the measurement region $A$. The corresponding procedure is visualized in figure \ref{fig:sketch}(b) and follows the one of Lu et al. \cite{Lu2017} used to predict the dynamics of the one-dimensional Kuramoto-Sivashinsky equation: for the prediction phase, we provide time series of the measurements sampled on a very coarse uniform grid that covers the measurement region $A$. This implies that the prediction in the present case is obtained with the help of continually available, partial observations. The RCM is then trained to generate vorticity time series at the remaining and surrounding grid points of the measurement region $A$. The procedure allows us to reconstruct the dynamics of the vorticity component in $A$. In table \ref{tab:hyper}, we summarize the optimal hyperparameters of the RCM that are chosen after the training procedure for the time series prediction. All time series consist of $10^4$ PIV snapshots. The first $N_{\rm T}=5000$ snapshots were used to train the RCM model; the remaining $N_{\rm P}=5000$ data snapshots serve as test data. Furthermore, we list in the table the mean squared errors (MSE) for the training and prediction phases. These errors are given by \begin{equation} {\rm MSE}_{\rm T,P}= \frac{1}{N_{\rm T,P}} \sum_{n=1}^{N_{\rm T,P}} \frac{1}{N_{\rm PIV}} \sum_{k=1}^{N_{\rm PIV}} (\Omega_{k}(n)-\hat{\Omega}_k(n))^2 \,, \end{equation} where ${\bm \Omega}(n)$ is the ground truth and $\hat{\bm \Omega}(n)$ the RCM prediction. \begin{table} \begin{ruledtabular} \begin{tabular}{lcccccc} & \multicolumn{3}{c}{${\rm Ra}=1.7\times 10^4$} & \multicolumn{3}{c}{${\rm Ra}=2.9\times 10^5$} \\ & $3\times 3$ & $5\times 5$ & $7\times 7$ & $3\times 3$ & $5\times 5$ & $7\times 7$\\ \hline $N$ & 2936 & 2526 & 2816 &1522 & 2109 & 2469\\ $\alpha$ & 0.79 & 0.12 & 0.19 & 0.45 & 0.58 & 0.38\\ $\rho(W^r)$ & 0.99 & 0.99 & 0.96 & 0.99 & 0.96 & 0.98\\ $\gamma$ & 0.27 & 0.13 & 0.09 & 0.01 & 0.04 & 0.12\\ $D$ & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2\\ ${\rm MSE}_{\rm T}$ & 0.028 & 0.027 & 0.048 & 0.075 & 0.204 & 0.29\\ ${\rm MSE}_{\rm P}$ & 0.040 & 0.22 & 0.32 & 0.25 & 0.49 & 0.79\\ \end{tabular} \end{ruledtabular} \caption{Summary of the optimal hyperparameters. Here, each run was trained individually. Furthermore, the mean squared error (MSE) of the training (T) and prediction (P) phases is given. The optimal parameters were searched in the ranges $N\in [1000,3000]$, $\rho(W_r)\in [0.9,1]$, $\gamma\in[0.01, 1]$, and $\alpha\in [0.1,1]$. For each of 10 different random configurations of the reservoir matrix $W^r$, 100 different random combinations of the hyperparameters within the given intervals were tested.
\label{tab:hyper}} \end{table} Three different scenarios, namely learning the vorticity time series on $3\times 3$, $5\times 5$, and $7\times 7$ grid points around continually provided data points, have been investigated. We have conducted this analysis for the smallest and one of the largest Rayleigh numbers of our data record, ${\rm Ra}=1.7\times 10^4$ and $2.9\times 10^5$. The results are summarized correspondingly in figures \ref{fig:rcm1} and \ref{fig:rcm2}. Panels (a) and (b) in each of the two figures display an example of the prediction of a time series taken at a specific position in $A$. We see that in both cases the time dependence is approximated fairly well by the reservoir computing model. Both figures are obtained for a $5\times 5$ reconstruction scenario. Panel (b) magnifies the initial time steps $n$ of the prediction phase. Panels (c) and (d) of both figures display the PDFs that result for the normalized vorticity component from the RCM in comparison to the experimental test data. While the velocity derivative statistics for the smallest Rayleigh number is still very close to a Gaussian distribution, it has crossed over to the non-Gaussian intermittent regime for the higher Rayleigh number. This becomes visible in the extended tails, which are also reproduced well by our machine learning algorithm. The PDFs in both panels are shown for two scenarios, $3\times 3$ and $7\times 7$, in each case. Note that the latter scenario implies that less input is provided for the machine learning algorithm during the prediction phase: only one out of every 49 grid points provides a partial observation. We can see again that the statistics in both cases is in fair agreement with the experimental results. The RCM with continually available sparse data is thus able to predict the statistical properties of a highly intermittent out-of-plane vorticity component. \begin{figure*} \includegraphics[width=0.8\textwidth]{Figures/figure_extreme.png} \caption{Dynamical sequence of an extreme vorticity event (a--d) and its prediction by the reservoir computing model algorithm with the $3\times 3$ scenario (e--h). The extreme event of $\omega_z$ is at time $t=t_{\ast}$. The time interval between two snapshots is $\Delta t=0.25 t_{\rm ff}$. The Rayleigh number is ${\rm Ra}=2.9\times 10^5$.} \label{fig:extreme} \end{figure*} \begin{figure} \includegraphics[width=0.48\textwidth]{Figures/figure_enstrophy.pdf} \caption{Time series of the out-of-plane enstrophy $E(n)$ for two Rayleigh numbers, (a) ${\rm Ra}=1.7\times 10^4$ and (b) ${\rm Ra}=2.9\times 10^5$, which is calculated by eq. \eqref{enst}. Both plots compare the measurement data with the reservoir computing predictions which have been obtained either by the $3\times 3$, the $5\times 5$, or the $7\times 7$ scenario, as indicated in the legend of panel (a). Note that the argument $n$ stands for time $t=n\Delta t$.} \label{fig:enstrophy} \end{figure} In figure \ref{fig:extreme}, we demonstrate the capability of the trained recurrent neural network to predict an extreme vorticity event. A typical event, which is detected at a time $t=t_{\ast}$ (which translates into snapshot number $n=n_{\ast}$), is shown here. We therefore compare a sequence of PIV snapshots with the corresponding model predictions. The RCM results have been composed of $3\times 3$ predictions. The panels have been subsequently smoothed by a $6\times 6$ averaging procedure. It can be seen that the figures at the corresponding times agree well with each other.
We can thus conclude from this visual inspection that extreme or high-amplitude vorticity events are predictable by the specific recurrent machine learning algorithm. We also note here that for a prediction without partial observations, i.e., for a fully autonomous RCM prediction, the time series at the grid points of the measurement section $A$ start to deviate after a few time steps $n$ for the highest Rayleigh numbers of our data record. In figure \ref{fig:enstrophy}, we display the out-of-plane enstrophy $E(t)$, i.e., the squared out-of-plane vorticity integrated over the observation plane $A$. The quantity is given by \begin{equation} E(t):=\int_A \frac{\omega_z^2(t)}{2} \mbox{d}A\,, \label{enst} \end{equation} and displayed for two Rayleigh numbers. The snapshot number of the global vorticity component maximum is denoted as $n_{\ast}$. The quantitative comparison of the three predictions with the experimental test data demonstrates that all three scenarios follow the ground truth data fairly well. As expected, the deviations increase with increasing spatial sparsity of the available partial observations. \section{Summary and outlook} Our present work was motivated by two major scientific objectives: (1) the detailed analysis of the intermittent statistics of velocity derivatives in the bulk of a turbulent Rayleigh-B\'{e}nard convection flow in air by means of stereoscopic particle image velocimetry measurements, including the monitoring of high-amplitude events of the out-of-plane vorticity component, and (2) the machine-learning-assisted reconstruction of the dynamical evolution and statistics of the small-scale velocity derivatives, including the prediction of extreme or high-amplitude events. To this end, the moderate Rayleigh numbers were varied over an order of magnitude, in a range for which the statistics of the spatial velocity derivatives (and thus of the vorticity components) goes over from Gaussian to non-Gaussian, as discussed in our recent direct numerical simulations \cite{Valori2021}. This transition in the derivative statistics was detected by both the experiments and the subsequent machine learning algorithm, which is based on a recurrent neural network. As in most laboratory experiments, and in contrast to direct numerical simulations, the three velocity components of the turbulent convection flow were detectable in a horizontal section only and not fully resolved in the whole volume. Exceptions are high-resolution experiments, which are practically almost as expensive in their postprocessing as fully resolved direct numerical simulations \cite{Schanz2016}. SPIV allows us to reconstruct 7 out of the 9 components of the velocity gradient tensor $M_{ij}$ together with the out-of-plane vorticity component $\omega_z$. In many situations, such as field measurements, the time series data are taken at sparsely distributed locations. These practical constraints suggest the application of (recurrent) machine learning algorithms. They can process sequential data, make predictions on the dynamics, and thus add missing dynamical information on the turbulence fields. In the second part of this work, we had exactly such a proof-of-concept in mind when investigating the prediction capabilities of the applied reservoir computing model, particularly for extreme or high-amplitude vorticity events. The studies can be extended in several directions.
One direction would be a combination with temperature measurements close to the boundary layer, such that strong plume detachments can serve as precursors for extreme dissipation or vorticity events in the bulk of the convection layer. This idea has been developed in ref. \cite{Valori2021} on the basis of fully resolved DNS. A further direction consists of the design of generative algorithms that produce time series of the missing two velocity derivative components, complemented by the existing statistical symmetries in the flow at hand \cite{Sondak2019,Karniadakis2021}. In this way, machine-learning-assisted measurements of the kinetic energy and thermal dissipation rates would be possible without the use of tomographic techniques. Studies in this direction are currently underway and will be reported elsewhere. \acknowledgements The authors would like to thank Alexander Thieme for the support with the experiments.\\ The work of VV was partly supported by Priority Programme DFG-SPP 1881 on Turbulent Superstructures of the Deutsche Forschungsgemeinschaft (DFG). Her work is currently supported by the Marie Curie Fellowship of the European Union with project number 101024531. The work of RK is supported by project SCHU 1410/30-1 of the DFG. Training and prediction of the machine learning algorithm were carried out on up to 8 CPUs of the compute cluster Makalu at Technische Universit\"at Ilmenau.\\ \bibliographystyle{unsrt}
\section{Introduction} \subsection{Monotone systems} We will be interested in Markov processes, both in discrete and continuous time, that take values in the space $\{0,1\}^{{\mathbb Z}^d}$ of configurations $x=(x(i))_{i\in{\mathbb Z}^d}$ of zeros and ones on the $d$-dimensional integer lattice ${\mathbb Z}^d$. By definition, a map $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ is \emph{local} if $\phi$ depends only on finitely many coordinates, i.e., there exists a finite set $\Delta\subset{\mathbb Z}^d$ and a function $\phi':\{0,1\}^\Delta\to\{0,1\}$ such that $\phi\big((x(i))_{i\in{\mathbb Z}^d}\big)=\phi'\big((x(i))_{i\in\Delta}\big)$ for each $x\in\{0,1\}^{{\mathbb Z}^d}$. We say that $\phi$ is \emph{monotone} if $x\leq y$ (coordinatewise) implies $\phi(x)\leq\phi(y)$. We say that $\phi$ is \emph{monotonic} if it is both local and monotone. The discrete time Markov chains $(X_n)_{n\geq 0}$ taking values in $\{0,1\}^{{\mathbb Z}^d}$ that we will be interested in are uniquely characterised by a finite collection $\phi_1,\ldots,\phi_m$ of monotonic maps and a probability distribution $p_1,\ldots,p_m$ on $\{1,\ldots,m\}$. They evolve in such a way that independently for each $n\geq 0$ and $i\in{\mathbb Z}^d$, \begin{equation} X_{n+1}(i)=\phi_k(\theta_iX_n)\quad\mbox{with probability }p_k\quad(1\leq k\leq m), \end{equation} where for each $i\in{\mathbb Z}^d$, we define a translation operator $\theta_i:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}^{{\mathbb Z}^d}$ by $(\theta_ix)(j):=x(i+j)$ $(i,j\in{\mathbb Z}^d)$. We call such a Markov chain $(X_n)_{n\geq 0}$ a \emph{monotone random cellular automaton}. The continuous time Markov processes $(X_t)_{t\geq 0}$ taking values in $\{0,1\}^{{\mathbb Z}^d}$ that we will be interested in are similarly characterised by a finite collection $\phi_1,\ldots,\phi_m$ of monotonic maps and a collection of nonnegative rates $r_1,\ldots,r_m$. They evolve in such a way that independently for each $i\in{\mathbb Z}^d$, \begin{equation}\label{traj} X_t(i)\mbox{ is replaced by }\phi_k(\theta_iX_t)\mbox{ at the times of a Poisson process with rate }r_k \end{equation} $(1\leq k\leq m)$. We call such a Markov process a \emph{monotone interacting particle system}. Well-known results \cite[Thm~I.3.9]{Lig85} show that such processes are well-defined. They are usually constructed so that $t\mapsto X_t(i)$ is piecewise constant and right-continuous at its jump times. Let $\P^x$ denote the law of the discrete time process started in $X_0=x$ and let $\underline 0$ and $\underline 1$ denote the configurations that are constantly zero or one, respectively. Well-known results imply that there exist invariant laws $\underline\nu$ and $\overline\nu$, called the \emph{lower} and \emph{upper invariant law}, such that \begin{equation}\label{upconv} \P^{\underline 0}[X_n\in\,\cdot\,]\Asto{n}\underline\nu \quad\mbox{and}\quad \P^{\underline 1}[X_n\in\,\cdot\,]\Asto{n}\overline\nu, \end{equation} where $\Rightarrow$ denotes weak convergence of probability laws on $\{0,1\}^{{\mathbb Z}^d}$ with respect to the product topology.
Each invariant law $\nu$ of $(X_n)_{n\geq 0}$ satisfies $\underline\nu\leq\nu\leq\overline\nu$ in the stochastic order, and one has $\underline\nu=\overline\nu$ if and only if $\underline\rho=\overline\rho$, where \begin{equation} \underline\rho:=\lim_{n\to\infty}\P^{\underline 0}[X_n(i)=1]=\int\underline\nu(\mathrm{d} x)x(i) \quad\mbox{and}\quad \overline\rho:=\lim_{n\to\infty}\P^{\underline 1}[X_n(i)=1]=\int\overline\nu(\mathrm{d} x)x(i) \end{equation} denote the intensities of the lower and upper invariant laws. Completely analogous statements hold in the continuous-time setting \cite[Thm~III.2.3]{Lig85}. We will be interested in methods to derive lower bounds on $\overline\rho$. It will be convenient to give names to some special monotonic functions. We start with the constant monotonic functions \begin{equation}\label{phiconst} \phi^0(x):=0\quad\mbox{and}\quad\phi^1(x):=1\qquad\big(x\in\{0,1\}^{{\mathbb Z}^d}\big). \end{equation} Apart from these constant functions, all other monotonic functions have the property that $\phi(\underline 0)=0$ and $\phi(\underline 1)=1$, and therefore monotone systems that do not use the function $\phi^0$ (resp.\ $\phi^1$) have the constant configuration $\underline 1$ (resp.\ $\underline 0$) as a fixed point of their evolution. We will discuss whether this fixed point is stable when the original system is perturbed by applying $\phi^0$ (resp.\ $\phi^1$) with a small probability or rate. The next monotonic function of interest is the ``identity map'' \begin{equation}\label{phiid} \phi^{\rm id}(x):=x(0)\qquad\big(x\in\{0,1\}^{{\mathbb Z}^d}\big). \end{equation} Monotone systems that only use $\phi^{\rm id}$ do not evolve at all, of course. We can think of the continuous-time interacting particle systems as limits of discrete-time cellular automata where time is measured in steps of some small size $\varepsilon$, the maps $\phi_1,\ldots,\phi_m$ are applied with probabilities $\varepsilon r_1,\ldots,\varepsilon r_m$, and with the remaining probability, the identity map $\phi^{\rm id}$ is applied. For concreteness, to have some examples at hand, we consider three further, nontrivial examples of monotonic functions. For simplicity, we restrict ourselves to two dimensions. We will be interested in the functions \be\begin{array}{r@{\,}c@{\,}l}\label{phiNEC} \displaystyle\phi^{\rm NEC}(x)&:=&\displaystyle{\tt round}\big((x(0,0)+x(0,1)+x(1,0))/3\big),\\[5pt] \displaystyle\phi^{\rm NN}(x)&:=&\displaystyle{\tt round}\big((x(0,0)+x(0,1)+x(1,0)+x(0,-1)+x(-1,0))/5\big),\\[5pt] \displaystyle\phi^{\rm coop}(x)&:=&\displaystyle x(0,0)\vee\big(x(0,1)\wedge x(1,0)\big), \end{array}\ee where ${\tt round}$ denotes the function that rounds off a real number to the nearest integer. The function $\phi^{\rm NEC}$ is known as \emph{North-East-Center voting} or \emph{NEC voting}, for short, and also as \emph{Toom's rule}. In analogy to $\phi^{\rm NEC}$, we let $\phi^{\rm NWC},\phi^{\rm SWC},\phi^{\rm SEC}$ denote maps that describe North-West-Center voting, South-West-Center voting, and South-East-Center voting, respectively, defined in the obvious way. We will call the map $\phi^{\rm NN}$ from (\ref{phiNEC}) \emph{Nearest Neighbour voting} or \emph{NN voting}, for short. Another name found in the literature is the \emph{symmetric majority rule}. Figure~\ref{fig:updens} shows numerical data for random perturbations of the cellular automata defined by $\phi^{\rm NEC}$ and $\phi^{\rm NN}$.
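Data of this kind are straightforward to generate. The following minimal Python sketch implements one update step of a monotone random cellular automaton on a finite torus (a finite-volume stand-in for $\{0,1\}^{{\mathbb Z}^2}$), with $\phi^{\rm NEC}$ and the constant map $\phi^0$ as example local rules. It is a sketch only: the grid size, the identification of ``north'' and ``east'' with particular array directions, and all names are our choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def step(x, phis, probs):
    # One step of a monotone random cellular automaton on an n-by-m
    # torus: independently at each site, phi_k is applied with
    # probability p_k.
    n, m = x.shape
    y = np.empty_like(x)
    for i in range(n):
        for j in range(m):
            k = rng.choice(len(phis), p=probs)
            y[i, j] = phis[k](x, i, j)
    return y

def phi_nec(x, i, j):
    # North-East-Center majority voting (Toom's rule) with periodic
    # boundary conditions; in our convention "north" is row i-1 and
    # "east" is column j+1.
    n, m = x.shape
    s = x[i, j] + x[(i - 1) % n, j] + x[i, (j + 1) % m]
    return 1 if s >= 2 else 0

def phi_zero(x, i, j):
    # The constant map phi^0.
    return 0

# Example: Toom's rule perturbed by phi^0 with probability p = 0.05.
x = rng.integers(0, 2, size=(50, 50))
x = step(x, [phi_zero, phi_nec], [0.05, 0.95])
\end{verbatim}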
Both $\phi^{\rm NEC}$ and $\phi^{\rm NN}$ have obvious generalisations to higher dimensions, but we will not need these. We call $\phi^{\rm coop}$ the \emph{cooperative branching rule}. It is also known as the \emph{sexual reproduction rule} because of the interpretation that when $\phi^{\rm coop}$ is applied at a site $(i_1,i_2)$, two parents at $(i_1+1,i_2)$ and $(i_1,i_2+1)$ produce offspring at $(i_1,i_2)$, provided the parents' sites are both occupied and $(i_1,i_2)$ is vacant. \subsection{Toom's stability theorem}\label{S:stabil} Recall the definition of the constant monotonic map $\phi^0$ in (\ref{phiconst}). In what follows, we fix a monotonic map $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ that is not constantly zero or one. For each $p\in[0,1]$, we let $(X^p_k)_{k\geq 0}$ denote the monotone random cellular automaton defined by the monotonic functions $\phi^0$ and $\phi$ that are applied with probabilities $p$ and $1-p$, respectively. We let $\overline\rho(p)$ denote the density of the upper invariant law as a function of $p$. Since $\phi$ is not constant, $\underline 1$ is a fixed point of the deterministic system $(X^0_k)_{k\geq 0}$, and hence $\overline\rho(0)=1$. We say that $(X_k)_{k\geq 0}=(X^0_k)_{k\geq 0}$ is stable if $\overline\rho(p)\to 1$ as $p\to 0$. Furthermore, we say that $\phi$ is an \emph{eroder} if for each initial state $X^0_0$ that contains only finitely many zeros, one has $X^0_n=\underline 1$ for some $n\in{\mathbb N}$. We quote the following result from \cite[Thm~5]{Too80}. \begin{quote} \textbf{Toom's stability theorem} $(X_k)_{k\geq 0}$ is stable if and only if $\phi$ is an eroder. \end{quote} In words, this says that the all-one fixed point is stable under small random perturbations if and only if $\phi$ is an eroder. For general local maps that need not be monotone, it is known that there exists no algorithm to decide whether a given map is an eroder, even in one dimension \cite{Pet87}. By contrast, for monotonic maps, there exists a simple criterion to check whether a given map is an eroder. Each monotonic map $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ can uniquely be written as \begin{equation}\label{Aphi} \phi(x)=\bigvee_{A\in{\cal A}(\phi)}\bigwedge_{i\in A}x(i), \end{equation} where ${\cal A}(\phi)$ is a finite collection of finite subsets of ${\mathbb Z}^d$ that have the interpretation that their indicator functions $1_A$ $(A\in{\cal A}(\phi))$ are the minimal configurations on which $\phi$ gives the outcome~1. In particular, ${\cal A}(\phi^0)=\emptyset$ and ${\cal A}(\phi^1)=\{\emptyset\}$, where in (\ref{Aphi}) we use the convention that the supremum (resp.\ infimum) over an empty set is 0 (resp.\ 1). We let ${\rm Conv}(A)$ denote the convex hull of a set $A$, viewed as a subset of ${\mathbb R}^d$. Then \cite[Thm~6]{Too80}, with simplifications due to \cite[Thm~1]{Pon13}, says that a monotonic map $\phi$ that is not constantly zero or one is an eroder if and only if \begin{equation}\label{erosion} \bigcap_{A\in{\cal A}(\phi)}{\rm Conv}(A)=\emptyset. \end{equation} We note that by Helly's theorem \cite[Corollary~21.3.2]{Roc70}, if (\ref{erosion}) holds, then there exists a subset ${\cal A}'\subset{\cal A}(\phi)$ of cardinality at most $d+1$ such that $\bigcap_{A\in{\cal A}'}{\rm Conv}(A)=\emptyset$. Using (\ref{erosion}), it is straightforward to check that the maps $\phi^{\rm NEC}$ and $\phi^{\rm coop}$, defined in (\ref{phiNEC}), are eroders. On the other hand, one can easily check that $\phi^{\rm NN}$ is not an eroder. 
Indeed, if $(X^0_n)_{n\geq 0}$ is started in an initial state with a zero on the sites $(0,0),(0,1),(1,0),(1,1)$ and ones everywhere else, then the deterministic system remains in this state forever. \begin{figure}[htb] \begin{center} \inputtikz{updens} \caption{Density $\overline\rho$ of the upper invariant law of two monotone cellular automata as a function of the parameters, shown on a scale from 0 (white) to 1 (black). On the left: a version of Toom's model that applies the maps $\phi^0$, $\phi^1$, and $\phi^{\rm NEC}$ with probabilities $p$, $r$, and $1-p-r$, respectively. On the right: the monotone random cellular automaton that applies the maps $\phi^0$, $\phi^1$, and $\phi^{\rm NN}$ with probabilities $p$, $r$, and $1-p-r$, respectively. Contrary to $\phi^{\rm NEC}$, the map $\phi^{\rm NN}$ is not an eroder. By the symmetry between the 0's and the 1's, in both models, the density $\underline\rho$ of the lower invariant law equals $1-\overline\rho$. Due to metastability effects, the area where the upper invariant law differs from the lower invariant law appears too large in these numerical data. For Toom's model with $r=0$, the data shown above suggest a first order phase transition at $p_{\rm c}\approx 0.057$, but based on numerical data for edge speeds we believe the true value is $p_{\rm c}\approx 0.053$. We conjecture that the model on the right has a unique invariant law everywhere except on the diagonal $p=r$ for $p$ sufficiently small.} \label{fig:updens} \end{center} \end{figure} \subsection{Main results} While Toom's stability theorem is an impressive result, it is important to realise its limitations. As Toom already remarked \cite[Section~V]{Too80}, his theorem does not apply to monotone cellular automata whose local state space is not $\{0,1\}$, but $\{0,1,2\}$, for example. Also, his theorem only applies in discrete time and only to random perturbations of cellular automata defined by a single non-constant monotonic map $\phi$. The most difficult part in the proof of Toom's stability theorem is showing that if $\phi$ is an eroder, then $\overline\rho(p)\to 1$ as $p\to 0$. To give a lower bound on $\overline\rho(p)$ for small values of $p$, Toom uses a Peierls contour argument. The main result of our article is extending this Peierls argument to monotone cellular automata whose definition involves, apart from the constant monotonic map $\phi^0$, several non-constant monotonic maps $\phi_1,\ldots,\phi_m$. We are especially interested in the case when one of these maps is the identity map $\phi^{\rm id}$ and in the closely related problem of giving lower bounds on $\overline\rho(p)$ for monotone interacting particle systems, which evolve in continuous time. Another result of our work is obtaining explicit lower bounds for $\overline\rho(p)$ for concrete models, which has rarely been attempted before. In particular, we extend Toom's definition of a contour to monotone cellular automata that apply several non-constant monotonic maps and to monotone interacting particle systems. We show that $X_n(i)=0$ for some $i\in{\mathbb Z}^d$ (or equivalently $X_t(i)=0$ in continuous time) implies the presence of a Toom contour ``rooted at'' $(i,n)$ (or $(i,t)$, respectively), which in turn can be used to obtain lower bounds for $\overline\rho(p)$ via a Peierls argument. Our main results are contained in Theorems~\ref{T:contour},~\ref{T:strongpres} and~\ref{T:contcontour}.
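The contrast between an eroder and a non-eroder is easy to watch in simulation. The following minimal sketch, reusing the finite-torus conventions of the sketch in the previous subsection, checks that the deterministic $\phi^{\rm NEC}$ automaton removes a $2\times 2$ block of zeros in three steps, while the $\phi^{\rm NN}$ automaton leaves it untouched; the north/east conventions are again ours.

\begin{verbatim}
import numpy as np

def step_det(x, phi):
    # Apply the monotonic map phi simultaneously at every site.
    n, m = x.shape
    return np.array([[phi(x, i, j) for j in range(m)] for i in range(n)])

def phi_nec(x, i, j):
    n, m = x.shape
    s = x[i, j] + x[(i - 1) % n, j] + x[i, (j + 1) % m]
    return 1 if s >= 2 else 0     # NEC majority voting

def phi_nn(x, i, j):
    n, m = x.shape
    s = (x[i, j] + x[(i - 1) % n, j] + x[(i + 1) % n, j]
         + x[i, (j - 1) % m] + x[i, (j + 1) % m])
    return 1 if s >= 3 else 0     # nearest-neighbour majority voting

x = np.ones((20, 20), dtype=int)
x[9:11, 9:11] = 0                 # a 2x2 block of zeros

y = x.copy()
for _ in range(3):
    y = step_det(y, phi_nec)
print(np.all(y == 1))             # True: phi^NEC erodes the block

z = step_det(x, phi_nn)
print(np.array_equal(z, x))       # True: phi^NN leaves it invariant
\end{verbatim}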
At this point, rather than formally stating these results, which would require delving into technical details, we state the explicit bounds we obtain as a result of our construction. Our extension of Toom's result allows us to establish or improve explicit lower bounds for $\overline\rho(p)$ for concrete models. First we consider Toom's set-up, that is, monotone random cellular automata that apply the maps $\phi^0$ and $\phi$ with probabilities $p$ and $1-p$, respectively, where $\phi$ is an eroder. An easy coupling argument shows that the intensity $\overline\rho(p)$ of the upper invariant law is a nonincreasing function of $p$, so we can define a \emph{critical parameter} \begin{equation} p_{\rm c}:=\sup\{p: \overline\rho(p)>0\}\in[0,1]. \end{equation} Since $\phi$ is an eroder, Toom's stability theorem tells us that $p_{\rm c}>0$. We show how to derive explicit lower bounds on $p_{\rm c}$ for any choice of the eroder $\phi$, and do this for two concrete examples. We first take for $\phi$ the map $\phi^{\rm NEC}$ and obtain the bound $p_{\rm c}\geq 3^{-21}$, which does not compare well to the estimated value $p_{\rm c}\approx 0.053$ coming from numerical simulations. Nevertheless, this is probably the best rigorous bound currently available. Then we take for $\phi$ the map $\phi^{\rm coop}$ and, improving on Toom's method, we get the bound $p_{\rm c}\geq 1/64$. This is also some way off the estimated value $p_{\rm c}\approx 0.105$ coming from numerical simulations. Then we consider the monotone random cellular automaton on ${\mathbb Z}^d$ that applies the maps $\phi^0,\phi^{\rm id}$, and $\phi^{\rm coop}$ with probabilities $p,q,r$, respectively, with $q=1-p-r$. For each $p,r\geq 0$ such that $p+r\leq 1$, let $\overline\rho(p,r)$ denote the intensity of the upper invariant law of the process with parameters $p,1-p-r,r$. Arguing as before, it is easy to see that for each $0\leq r<1$ we can define a critical parameter \begin{equation} p_{\rm c}(r):=\sup\{p: \overline\rho(p, r)>0\}\in[0,1-r]. \end{equation} By carefully examining the structure of Toom contours for this model, we prove the bound $p_{\rm c}(r)>0.00624\,r$. Finally, we consider the interacting particle system on ${\mathbb Z}^2$ that applies the monotonic maps $\phi^0$ and $\phi^{\rm coop}$ with rates $1$ and $\lambda$, respectively. This model was introduced by Durrett~\cite{Dur86} as the \emph{sexual contact process}, and we can think of it as the limit of the previous discrete-time cellular automata. For each $\lambda>0$ we let $\overline\rho(\lambda)$ denote the intensity of the upper invariant law of the process with parameters $1, \lambda$. Again, we define a critical parameter \begin{equation} \lambda_{\rm c}:=\inf\{\lambda\geq 0:\overline\rho(\lambda)>0\}\in(0,\infty). \end{equation} Numerical simulations suggest the value $\lambda_{\rm c}\approx 12.4$; we show the upper bound $\lambda_{\rm c}\leq 161.1985$. Durrett claimed a proof that $\lambda_{\rm c}\leq 110$, a bound he himself described as ridiculous, challenging the reader to do better. We have not quite managed to beat his bound, though we are not far off. The proofs of all results in \cite{Dur86} are claimed to be contained in a forthcoming paper with Lawrence Gray \cite{DG85} that has never appeared. In \cite{Gra99}, Gray referred to these proofs as ``unpublished'' and in \cite{BD17}, Durrett cites the paper as an ``unpublished manuscript''.
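For orientation, crude versions of such numerical estimates are easy to produce. The sketch below estimates $\overline\rho(p)$ for the randomly perturbed Toom model by running the chain from the all-one configuration on a finite torus; it is a vectorised variant of the step function shown in the Introduction. System size, run length, and seed are arbitrary choices, and, as the caption of Figure~\ref{fig:updens} warns, metastability makes such estimates unreliable near $p_{\rm c}$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def rho_bar_estimate(p, n_steps=500, size=128):
    # Density of ones after n_steps steps, started from all ones.
    # Each site applies phi^0 with probability p and phi^NEC otherwise.
    x = np.ones((size, size), dtype=int)
    for _ in range(n_steps):
        nec = ((x + np.roll(x, 1, axis=0)          # north neighbour
                  + np.roll(x, -1, axis=1)) >= 2)  # east neighbour
        deaths = rng.random(x.shape) < p
        x = np.where(deaths, 0, nec.astype(int))
    return x.mean()

for p in (0.02, 0.05, 0.08):
    print(p, rho_bar_estimate(p))
\end{verbatim}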
Although for monotone cellular automata that apply several non-constant monotonic maps and for monotone interacting particle systems our methods do not seem sufficient to obtain bounds on the critical value in general, we believe that our examples illustrate how one can approach the problem for a concrete model. \subsection{Discussion} The cellular automaton defined by the NEC voting map $\phi^{\rm NEC}$ is nowadays known as \emph{Toom's model}. In line with Stigler's law of eponymy, Toom's model was not invented by Toom, but by Vasilyev, Petrovskaya, and Pyatetski-Shapiro, who simulated random perturbations of this and other models on a computer \cite{VPP69}. The function $p\mapsto\overline\rho(p)$ appears to be continuous except for a jump at $p_{\rm c}$ (see Figure~\ref{fig:updens}). Toom, having heard of \cite{VPP69} during a seminar, proved in \cite{Too74} that there exist random cellular automata on ${\mathbb Z}^d$ with at least $d$ different invariant laws. Although Toom's model is not explicitly mentioned in the paper, his proof method can be applied to prove that $p_{\rm c}>0$ for his model. In \cite{Too80}, Toom improved his methods and proved his celebrated stability theorem. His paper is quite hard to read. One of the reasons is that Toom tries to be as general as possible. For example, he allows for cellular automata that look back more than one step in time, which severely complicates the statement of conditions like (\ref{erosion}). He also allows for noise that is not i.i.d.\ and cellular automata that are not monotone, even though all his results in the general case can easily be obtained by comparison with the i.i.d.\ monotone case. Toom's Peierls argument in the original paper is quite hard to understand. A more accessible account of Toom's original argument (with pictures!) in the special case of Toom's model can be found in the appendix of \cite{LMS90}.\footnote{Unfortunately, their Figure~6 contains a small mistake, in the form of an arrow that should not be there.} Although in principle, Toom's Peierls argument can be used to derive explicit bounds on $p_{\rm c}$, Toom did not attempt to do so, no doubt in the belief that more powerful methods would be developed in due time. Bramson and Gray \cite{BG91} have given an alternative proof of Toom's stability theorem that relies on comparison with continuum models (which describe unions of convex sets in ${\mathbb R}^d$ evolving in continuous time) and renormalisation-style block arguments. They manage to relax Toom's conditions somewhat, but the proof is very heavy and any explicit bounds derived using this method would presumably be very weak. Gray \cite{Gra99} proved a stability theorem for monotone interacting particle systems. The proofs use ideas from \cite{Too80} and \cite{BG91} and do not lend themselves well to the derivation of explicit bounds. Gray also derived necessary and sufficient conditions for a monotonic map to be an eroder \cite[Thm~18.2.1]{Gra99}, apparently overlooking the fact that Toom had already proved the much simpler condition (\ref{erosion}). Motivated by abstract problems in computer science, a number of authors have given alternative proofs of Toom's stability theorem in a more restrictive setting \cite{GR88,BS88,Gac95,Gac21}.
Their main interest is in a three-dimensional system which evolves in two steps: letting $e_1,e_2,e_3$ denote the basis vectors in ${\mathbb Z}^3$, they first replace $X_n(i)$ by \[ X'_n(i):={\tt round}\big((X_n(i)+X_n(i+e_1)+X_n(i+e_2))/3\big), \] and then set \[ X_{n+1}(i):={\tt round}\big((X'_n(i)+X'_n(i+e_3)+X'_n(i-e_3))/3\big). \] They prove explicit bounds for finite systems, although for values of $p$ that are extremely close to zero.\footnote{In particular, \cite{Gac95} needs $p<2^{-21}3^{-8}$.} The proofs of \cite{GR88} do not use Toom's Peierls argument but rely on different methods. Their bounds were improved in \cite{BS88}. Still better bounds can be found in the unpublished note \cite{Gac95}. The proofs in the latter manuscript are very similar to Toom's argument, with some crucial improvements at the end that are hard to follow due to missing definitions. This version of the argument seems to have inspired the incomplete note by John Preskill \cite{Pre07}, who links it to the interesting idea of counting ``minimal explanations''. His definition of a ``minimal explanation'' is a bit stronger than the definition we will adopt in Subsection~\ref{S:finexpl} below, but sometimes, such as in the picture in Figure~\ref{fig:minexpl} on the right, the two definitions coincide. Figure~\ref{fig:minexpl} shows that the relation between Toom contours and minimal explanations is not as straightforward as suggested in \cite{Gac95,Pre07}. We have not found a good way to control the number of minimal explanations with a given number of defective sites and we do not know how to derive the lower bounds on the density of the upper invariant law stated in \cite{Gac95,Pre07}. Hwa-Nien Chen \cite{Che92,Che94}, who was a PhD student of Lawrence Gray, studied the stability of several variants of Toom's model under perturbations of the initial state and the birth rate. The proofs of two of his four theorems depend on results that he cites from the never-published paper \cite{DG85}. Ponselet \cite{Pon13} gave an excellent account of the existing literature and, together with her supervisor, proved exponential decay of correlations for the upper invariant law of a large class of randomly perturbed monotone cellular automata \cite{MP11}. There exists duality theory for general monotone interacting particle systems \cite{Gra86,SS18}. The basic idea is that the state at the origin at time zero is a monotone function of the state at time $-t$, and this monotone function evolves in a Markovian way as a function of~$t$. Durrett \cite{Dur86} mentions this dual process as an important ingredient of the proofs of the forthcoming paper \cite{DG85} and it is also closely related to the minimal explanations of Preskill \cite{Pre07}. A good understanding of this dual process could potentially help solve many open problems in the area, but its behaviour is already quite complicated in the mean-field case \cite{MSS20}. \subsection{Outline} The paper is organized as follows. We define Toom contours and give an outline of the main idea of the Peierls argument in Subsection~\ref{S:Peierls}. In Subsection~\ref{S:erod} we prove Toom's stability theorem. In Subsection~\ref{S:twochar} we introduce a stronger notion of Toom contours, which allows us to improve bounds for certain models. We then present two explicit bounds in Toom's set-up in Subsection~\ref{S:explic}.
In Subsection~\ref{S:intrins} we consider monotone random cellular automata that apply several non-constant monotonic maps and in Subsection~\ref{S:contfirst} we discuss continuous time results and bounds. The rest of the paper is devoted to proofs and technical arguments. The results stated in Subsection~\ref{S:Peierls} are proved in Section~\ref{S:contour}. Section~\ref{S:bounds} contains all the proofs of the results stated in Subsections~\ref{S:erod},~\ref{S:twochar} and~\ref{S:explic}. The results of Subsection~\ref{S:intrins} are proved in Section~\ref{S:intbd}. Section~\ref{S:cont} gives the precise definitions and results together with their proofs in the continuous-time setting. Finally, the relation between Toom contours and minimal explanations in the sense of John Preskill \cite{Pre07} is discussed in Section~\ref{S:expla}, where we also discuss the open problem of counting minimal explanations. \section{Setting and definitions}\label{S:Toomcontours} \subsection{Toom's Peierls argument}\label{S:Peierls} In this subsection, we derive a lower bound on the intensity of the upper invariant law for a class of monotone random cellular automata. We use a Peierls argument based on a special type of contours that we will call \emph{Toom contours}. In their essence, these are the contours used in \cite{Too80}, though on the face of it our definitions will look a bit different from those of \cite{Too80}. This pertains especially to the ``sources'' and ``sinks'' defined below, which are absent from Toom's formulation and which, we think, help elucidate the argument. We start by defining a special sort of directed graphs, which we will call \emph{Toom graphs} (see Figure~\ref{fig:Toomgraph}). After that we first give an outline of the main idea of the Peierls argument and then provide the details. \subsubsection*{Toom graphs} Recall that a directed graph is a pair $(V,\vec E)$ where $V$ is a set whose elements are called vertices and $\vec E$ is a subset of $V\times V$ whose elements are called directed edges. For each directed edge $(v,w)\in\vec E$, we call $v$ the starting vertex and $w$ the endvertex of $(v,w)$. We let \begin{equation} \vec E_{\rm in}(v):=\big\{(u,v')\in\vec E:v'=v\big\} \quad\mbox{and}\quad \vec E_{\rm out}(v):=\big\{(v',w)\in\vec E:v'=v\big\} \end{equation} denote the sets of directed edges entering and leaving a given vertex $v\in V$, respectively. We will need to generalise the concept of a directed graph by allowing directed edges to have a \emph{type} in some finite set $\{1,\ldots,\sigma\}$, with the possibility that several edges of different types connect the same two vertices. To that aim, we define a \emph{directed graph with $\sigma$ types of edges} to be a pair $(V,{\cal E})$, where ${\cal E}=(\vec E_1,\ldots,\vec E_\sigma)$ is a sequence of subsets of $V\times V$. We interpret $\vec E_s$ as the set of directed edges of type $s$. \begin{figure}[t] \begin{center} \inputtikz{Toomgraph} \caption{Example of a Toom graph with three charges. Sources and sinks are indicated with solid dots and internal vertices are indicated with open dots.
Note the isolated vertex in the lower right corner, which is a source and a sink at the same time.} \label{fig:Toomgraph} \end{center} \end{figure} \begin{defi}\label{def:toomgraph} A \emph{Toom graph} with $\sigma\geq 2$ \emph{charges} is a directed graph with $\sigma$ types of edges $(V,{\cal E})=(V,\vec E_1,\ldots,\vec E_\sigma)$ such that each vertex $v\in V$ satisfies one of the following four conditions: \begin{enumerate} \item $|\vec E_{s,{\rm in}}(v)|=0=|\vec E_{s,{\rm out}}(v)|$ for all $1\leq s\leq\sigma$, \item $|\vec E_{s,{\rm in}}(v)|=0$ and $|\vec E_{s,{\rm out}}(v)|=1$ for all $1\leq s\leq\sigma$, \item $|\vec E_{s,{\rm in}}(v)|=1$ and $|\vec E_{s,{\rm out}}(v)|=0$ for all $1\leq s\leq\sigma$, \item there exists an $s\in\{1,\ldots,\sigma\}$ such that $|\vec E_{s,{\rm in}}(v)|=1=|\vec E_{s,{\rm out}}(v)|$\\ and $|\vec E_{l,{\rm in}}(v)|=0=|\vec E_{l,{\rm out}}(v)|$ for each $l\neq s$. \end{enumerate} \end{defi} See Figure~\ref{fig:Toomgraph} for a picture of a Toom graph with three charges. We set \be\begin{array}{r@{\,}c@{\,}l}\label{eq:sourcesinkint} \displaystyle V_\circ&:=&\displaystyle\big\{v\in V: |\vec E_{s,{\rm in}}(v)|=0\ \; \forall 1\leq s\leq\sigma\big\},\\[5pt] \displaystyle V_\ast&:=&\displaystyle\big\{v\in V: |\vec E_{s,{\rm out}}(v)|=0\ \; \forall 1\leq s\leq\sigma\big\},\\[5pt] \displaystyle V_s&:=&\displaystyle\big\{v\in V: |\vec E_{s,{\rm in}}(v)|=1=|\vec E_{s,{\rm out}}(v)|\big\}\qquad(1\leq s\leq\sigma). \end{array}\ee Vertices in $V_\circ,V_\ast$, and $V_s$ are called \emph{sources}, \emph{sinks}, and \emph{internal vertices} with \emph{charge} $s$, respectively. Vertices in $V_\circ\cap V_\ast$ are called \emph{isolated vertices}. Informally, we can imagine that at each source there emerge $\sigma$ charges, one of each type, that then travel via internal vertices of the corresponding charge through the graph until they arrive at a sink, in such a way that at each sink there converge precisely $\sigma$ charges, one of each type. It is clear from this description that $|V_\circ|=|V_\ast|$, i.e., the number of sources equals the number of sinks. We let $\vec E:=\bigcup_{s=1}^\sigma\vec E_s$ denote the union of all directed edge sets and we let $E:=\big\{\{v,w\}:(v,w)\in\vec E\big\}$ denote the corresponding set of undirected edges. We say that a Toom graph $(V,{\cal E})$ is \emph{connected} if the associated undirected graph $(V,E)$ is connected. \subsubsection*{Toom contours} Our next aim is to define \emph{Toom contours}, which are connected Toom graphs that are embedded in space-time ${\mathbb Z}^{d+1}$ in a special way. Let $(V,{\cal E})=(V,\vec E_1,\ldots,\vec E_\sigma)$ be a Toom graph with $\sigma$ charges. Recall that $\vec E=\bigcup_{s=1}^\sigma\vec E_s$. \begin{defi}\label{def:embedding} An \emph{embedding} of $(V,{\cal E})$ is a map \begin{equation}\label{psi} V\ni v\mapsto\psi(v)=\big(\vec\psi(v),\psi_{d+1}(v)\big)\in{\mathbb Z}^d\times{\mathbb Z} \end{equation} that has the following properties: \begin{enumerate} \item $\displaystyle\psi_{d+1}(w)=\psi_{d+1}(v)-1$ for all $(v,w)\in\vec E$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1\in V_\ast$ and $v_2\in V$ with $v_1\neq v_2$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1,v_2\in V_s$ with $v_1\neq v_2$ $(1\leq s\leq\sigma)$. \end{enumerate} \end{defi} We interpret~$\vec\psi(v)$ and~$\psi_{d+1}(v)$ as the space and time coordinates of~$\psi(v)$ respectively. Condition (i) says that directed edges $(v,w)$ of the Toom graph $(V,\vec E)$ point in the direction of decreasing time. 
Condition~(ii) says that sinks do not overlap with other vertices and condition~(iii) says that internal vertices do not overlap with other internal vertices of the same charge. See Figure~\ref{fig:minexpl} for an example of an embedding of a Toom graph. Not every Toom graph can be embedded. Indeed, it is easy to see that if $(V,{\cal E})$ has an embedding in the sense defined above, then \begin{equation} |\vec E_1|=\cdots=|\vec E_\sigma|, \end{equation} i.e., there is an equal number of edges of each charge. The Toom graph of Figure~\ref{fig:Toomgraph} can be embedded, but if we changed the number of internal vertices on one of the paths from a source to a sink, then the resulting graph would still be a Toom graph, but it would not be possible to embed it. \begin{figure}[htb] \begin{center} \inputtikz{minexpl} \caption{On the left: a Toom graph with two charges. Middle: embedding of the Toom graph on the left, with time running downwards. The connected component containing the root $v_\circ$ forms a Toom contour rooted at the origin $(0,0,0)$. On the right: a minimal explanation for a monotone cellular automaton $\Phi$ that applies the maps $\phi^0$ and $\phi^{\rm coop}$ with probabilities $p$ and $1-p$, respectively. The origin has the value zero because the sites marked with a star are defective. This explanation is minimal in the sense that removing any of the defective sites results in the origin having the value one. The Toom contour in the middle picture is present in $\Phi$. In particular, the sinks of the Toom contour coincide with some, though not all, of the defective sites of the minimal explanation.} \label{fig:minexpl} \end{center} \end{figure} \begin{defi}\label{def:toomcontour} A \emph{Toom contour} is a quadruple $(V,{\cal E},v_\circ,\psi)$, where $(V,{\cal E})$ is a connected Toom graph, $v_\circ\in V_\circ$ is a specially designated source, and $\psi$ is an embedding of $(V,{\cal E})$ that has the additional property that: \begin{enumerate}\addtocounter{enumi}{3} \item $\psi_{d+1}(v_\circ)>t$ for all $(i,t)\in\psi(V)\backslash\{\psi(v_\circ)\}$, \end{enumerate} where $\psi(V):=\{\psi(v):v\in V\}$ denotes the image of $V$ under $\psi$. \end{defi} We call $v_\circ$ the \emph{root} of the Toom contour and we say that the Toom contour $(V,{\cal E},v_\circ,\psi)$ is \emph{rooted} at the space-time point $\psi(v_\circ)\in{\mathbb Z}^{d+1}$. See Figure~\ref{fig:minexpl} for an example of a Toom contour with two charges. For any Toom contour $(V,{\cal E},v_\circ,\psi)$, we write \be\begin{array}{l}\label{Ecirc} \vec E^\ast:=\bigcup_{s=1}^\sigma\vec E^\ast_s \quad\mbox{with}\quad \vec E^\ast_s:=\big\{(v,w)\in\vec E_s:v\in V_s\cup\{v_\circ\}\big\} \quad(1\leq s\leq\sigma),\\[5pt] \vec E^\circ:=\bigcup_{s=1}^\sigma\vec E^\circ_s \quad\mbox{with}\quad \vec E^\circ_s:=\big\{(v,w)\in\vec E_s:v\in V_\circ\backslash\{v_\circ\}\big\} \quad(1\leq s\leq\sigma), \end{array}\ee i.e., $\vec E^\ast$ is the set of directed edges that have an internal vertex or the root as their starting vertex, and $\vec E^\circ$ is the set of all other directed edges, which start at sources other than the root. The special role played by the root will become important in the next subsection, when we define what it means for a Toom contour to be present in a collection of i.i.d.\ monotonic maps.
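As a sanity check, the defining conditions of a Toom graph (Definition~\ref{def:toomgraph}) can be verified mechanically. The following minimal Python sketch classifies the vertices of a candidate graph as sources, sinks, internal vertices, or isolated vertices, rejecting any graph that violates the definition; the representation of a graph as one edge collection per charge is our choice.

\begin{verbatim}
from collections import Counter

def classify(vertices, edges_by_charge):
    # edges_by_charge: one collection of directed edges (v, w)
    # per charge s = 1, ..., sigma.
    sigma = len(edges_by_charge)
    indeg = [Counter() for _ in range(sigma)]
    outdeg = [Counter() for _ in range(sigma)]
    for s, edges in enumerate(edges_by_charge):
        for v, w in edges:
            outdeg[s][v] += 1
            indeg[s][w] += 1
    sources, sinks, internal = set(), set(), {}
    for v in vertices:
        ins = [indeg[s][v] for s in range(sigma)]
        outs = [outdeg[s][v] for s in range(sigma)]
        if ins == [0] * sigma and outs == [0] * sigma:
            sources.add(v)                 # isolated: source and sink at once
            sinks.add(v)
        elif ins == [0] * sigma and outs == [1] * sigma:
            sources.add(v)                 # one outgoing edge of each charge
        elif ins == [1] * sigma and outs == [0] * sigma:
            sinks.add(v)                   # one incoming edge of each charge
        elif sum(ins) == 1 and ins == outs:
            internal[v] = ins.index(1) + 1 # internal vertex with its charge
        else:
            raise ValueError(f"vertex {v} violates the Toom graph conditions")
    return sources, sinks, internal

# Tiny example with sigma = 2: a source u, internal vertices v (charge 1)
# and z (charge 2), and a sink w; note |E_1| = |E_2| = 2.
E1 = [("u", "v"), ("v", "w")]              # charge-1 edges
E2 = [("u", "z"), ("z", "w")]              # charge-2 edges
print(classify(["u", "v", "w", "z"], [E1, E2]))
\end{verbatim}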
If $(V,{\cal E},v_\circ,\psi)$ is a Toom contour, then we let \be\begin{array}{c} \displaystyle\psi(V_\ast):=\big\{\psi(v):v\in V_\ast\big\},\quad \psi(\vec E^\ast_s):=\big\{\big(\psi(v),\psi(w)\big):(v,w)\in \vec E^\ast_s\big\},\\[5pt] \psi(\vec E^\circ_s):=\big\{\big(\psi(v),\psi(w)\big):(v,w)\in \vec E^\circ_s\big\} \qquad(1\leq s\leq\sigma), \end{array}\ee denote the images under $\psi$ of the set of sinks $V_\ast$ and the sets of directed edges $\vec E^\ast_s$ and $\vec E^\circ_s$, respectively. We call two Toom contours $(V,{\cal E},v_\circ,\psi)$ and $(V',{\cal E}',v'_\circ,\psi')$ \emph{equivalent} if \begin{equation}\label{equiv} \psi(v_\circ)=\psi'(v_\circ),\quad \psi(V_\ast)=\psi'(V'_\ast),\quad \psi(\vec E^\ast_s)=\psi'(\vec{E'}^\ast_s),\quad \psi(\vec E^\circ_s)=\psi'(\vec{E'}^\circ_s). \end{equation} \subsubsection*{The main idea of the construction} We will be interested in monotone random cellular automata that are defined by a probability distribution $p_0,\ldots,p_m$ and monotonic maps $\phi_0,\ldots,\phi_m$, of which $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant. This generalises Toom's set-up, which corresponds to the case $m=1$. We fix an i.i.d.\ collection $\Phi=(\Phi_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ of monotonic maps such that $\P[\Phi_{(i,t)}=\phi_k]=p_k$ $(0\leq k\leq m)$. A space-time point $(i,t)$ with $\Phi_{(i,t)}=\phi^0$ is called a \emph{defective} site. In Lemmas \ref{L:maxtraj} and \ref{L:maxup} below, we show that $\Phi$ almost surely determines a stationary process $(\overline X_t)_{t\in{\mathbb Z}}$ that at each time $t$ is distributed according to the upper invariant law $\overline\nu$. Our aim is to give an upper bound on the probability that $\overline X_0(0)=0$, which then translates into a lower bound on the intensity $\overline\rho$ of the upper invariant law. To achieve this, we first describe a special way to draw a Toom graph inside space-time ${\mathbb Z}^{d+1}$. Such an embedding of a Toom graph in space-time is then called a \emph{Toom contour}. Since our argument requires looking backwards in time, it will be convenient to adopt the convention that in all our pictures (such as Figure~\ref{fig:minexpl}), time runs downwards. Next, we define when a Toom contour is \emph{present} in the random collection of maps $\Phi$. Theorem~\ref{T:contour} then states that the event $\overline X_0(0)=0$ implies the presence of a Toom contour in $\Phi$. This allows us to bound the probability that $\overline X_0(0)=0$ from above by the expected number of Toom contours that are present in $\Phi$. In later subsections, we will then discuss conditions under which this expectation can be controlled and derive explicit bounds. Before we state the remaining definitions, which are mildly complicated, we explain the main idea of the construction. We will define presence of Toom contours in such a way that the space-time point $(0,0)$ is a source and all the sinks correspond to defective sites where the map $\phi^0$ is applied. Let $M_n$ denote the number of Toom contours that have $(0,0)$ as a source and that have $n$ sinks. One would like to show that if the map $\phi^0$ is applied with a sufficiently small probability $p$, then the expression $\sum_{n=1}^\infty M_np^n$ is small. This will not be true, however, unless one imposes additional conditions on the contours. In fact, it is rather difficult to control the number of contours with a given number of sinks.
It is much easier to count contours with a given number of edges. Letting $N_n$ denote the number of contours with $n$ edges (rather than sinks), it is not hard to show that $N_n$ grows at most exponentially as a function of $n$. To complete the argument, therefore, it suffices to impose additional conditions on the contours that bound the number of edges in terms of the number of sinks. If at a certain space-time point $(i,t)$, the stationary process satisfies $\overline X_t(i)=0$, and the map $\Phi_{(i,t)}$ that is applied there is $\phi_k$, then for each set $A\in{\cal A}(\phi_k)$ (with ${\cal A}(\phi_k)$ defined in (\ref{Aphi})), at least one of the sites $j\in A$ must have the property that $\overline X_{t-1}(i+j)=0$. We will use this to steer edges in a certain direction, in such a way that different charges tend to move away from each other, \emph{except for edges that originate in a source}. Since in the end, edges of all charges must meet at each sink, this will allow us to bound the total number of edges in terms of the ``bad'' edges that originate in a source. Equivalently, this allows us to bound the total number of edges in terms of the number of sources, which is the same as the number of sinks. This is the main idea of the argument. We now continue to give the precise definitions. \subsubsection*{The contour argument} Having defined the right sort of contours, we now come to the core of the argument: the fact that $\overline X_0(0)=0$ implies the existence of a Toom contour with certain properties. We first need a special construction of the stationary process $(\overline X_t)_{t\in{\mathbb Z}}$. We let $\{0,1\}^{{\mathbb Z}^{d+1}}$ denote the space of all space-time configurations $x=(x_t(i))_{(i,t)\in{\mathbb Z}^{d+1}}$. For $x\in\{0,1\}^{{\mathbb Z}^{d+1}}$ and $t\in{\mathbb Z}$, we define $x_t\in\{0,1\}^{{\mathbb Z}^d}$ by $x_t:=(x_t(i))_{i\in{\mathbb Z}^d}$. We will call a collection ${\bm{\phh}}=(\varphi_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ of monotonic maps from $\{0,1\}^{{\mathbb Z}^d}$ to $\{0,1\}$ a \emph{monotonic flow}. By definition, a \emph{trajectory} of ${\bm{\phh}}$ is a space-time configuration $x$ such that \begin{equation} x_t(i)=\varphi_{(i,t)}(\theta_ix_{t-1})\qquad\big((i,t)\in{\mathbb Z}^{d+1}\big). \end{equation} We need the following two simple lemmas. \begin{lemma}[Minimal and maximal trajectories] Let\label{L:maxtraj} ${\bm{\phh}}$ be a monotonic flow. Then there exist trajectories $\underline x$ and $\overline x$ that are uniquely characterised by the property that each trajectory $x$ of ${\bm{\phh}}$ satisfies $\underline x\leq x\leq\overline x$ (pointwise). \end{lemma} \begin{lemma}[The lower and upper invariant laws] Let\label{L:maxup} $\phi_0,\ldots,\phi_m$ be monotonic functions, let $p_0,\ldots,p_m$ be a probability distribution, and let $\underline\nu$ and $\overline\nu$ denote the lower and upper invariant laws of the corresponding monotone random cellular automaton. Let $\Phi=\big(\Phi_{(i,t)}\big)_{(i,t)\in{\mathbb Z}^{d+1}}$ be an i.i.d.\ collection of monotonic maps such that $\P[\Phi_{(i,t)}=\phi_k]=p_k$ $(0\leq k\leq m)$, and let $\underline X$ and $\overline X$ be the minimal and maximal trajectories of $\Phi$. Then for each $t\in{\mathbb Z}$, the random variables $\underline X_t$ and $\overline X_t$ are distributed according to the laws $\underline\nu$ and $\overline\nu$, respectively.
\end{lemma} From now on, we fix a monotonic flow ${\bm{\phh}}$ that takes values in $\{\phi_0,\ldots,\phi_m\}$, of which $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant. Recall that ${\cal A}(\phi_k)$, defined in (\ref{Aphi}), corresponds to the set of minimal configurations on which $\phi_k$ gives the outcome~1. We fix an integer $\sigma\geq 2$ and for each $1\leq k\leq m$ and $1\leq s\leq\sigma$, we choose a set \begin{equation}\label{As} A_s(\phi_k)\in{\cal A}(\phi_k). \end{equation} Informally, the aim of these sets is to steer edges of different charges away from each other. In later subsections, when we derive bounds for concrete models, we will make an explicit choice for $\sigma$ and sets $A_s(\phi_k)$. For the moment, we allow these to be arbitrary. The integer $\sigma$ corresponds to the number of charges. The definition of what it means for a contour to be present will depend on the choice of the sets in (\ref{As}). As a concrete example, consider the case $m=1$ and $\phi_1=\phi^{\rm coop}$, the cooperative branching map defined in (\ref{phiNEC}). The set ${\cal A}(\phi^{\rm coop})$ from (\ref{Aphi}) is given by ${\cal A}(\phi^{\rm coop})=\{A_1,A_2\}$ with $A_1:=\{(0,0)\}$ and $A_2:=\{(0,1),(1,0)\}$. Using (\ref{erosion}) we see that $\phi^{\rm coop}$ is an eroder. In this concrete example, we will set $\sigma:=2$ and for the sets $A_s(\phi_1)$ $(s=1,2)$ of (\ref{As}) we choose the sets $A_1,A_2$ we have just defined. \begin{defi}\label{D:present} A Toom contour $(V,{\cal E},v_\circ,\psi)$ with $\sigma$ charges is \emph{present} in the monotonic flow ${\bm{\phh}}$ if: \begin{enumerate} \item $\displaystyle\varphi_{\psi(v)}=\phi^0$ for all $\displaystyle v\in V_\ast$, \item $\displaystyle\varphi_{\psi(v)}\in\{\phi_1,\ldots,\phi_m\}$ for all $\displaystyle v\in V\backslash V_\ast$, \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_s(\varphi_{\psi(v)})$ for all $(v,w)\in\vec E^\ast_s$ $(1\leq s\leq\sigma$), \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in\bigcup_{s=1}^\sigma A_s(\varphi_{\psi(v)})$ for all $(v,w)\in\vec E^\circ$, \end{enumerate} where $\vec E^\ast_s$ and $\vec E^\circ$ are defined in (\ref{Ecirc}) and $\vec\psi(v)$, defined in (\ref{psi}), denotes the spatial coordinates of the space-time point $\psi(v)$. \end{defi} Note that the definition of what it means for a contour to be present depends on the choice of the sets $A_s(\phi_k)$ in (\ref{As}). Conditions (i) and (ii) say that sinks of $(V,{\cal E})$ are mapped to defective space-time points, where the constant map $\phi^0$ is applied, and all other vertices are mapped to space-time points where one of the non-constant maps $\phi_1,\ldots,\phi_m$ is applied. Together with our earlier definition of an embedding, condition~(iii) says that if $(v,w)$ is an edge with charge $s$ that comes out of the root or an internal vertex, then $(v,w)$ is mapped to a pair of space-time points of the form $\big((i,t),(i+j,t-1)\big)$ with $j\in A_s(\varphi_{\psi(v)})$. Condition~(iv) is similar, except that if $v$ is a source different from the root, then we only require that $j\in\bigcup_{s=1}^\sigma A_s(\varphi_{\psi(v)})$. It is clear from this definition that if $(V,{\cal E},v_\circ,\psi)$ and $(V',{\cal E}',v'_\circ,\psi')$ are equivalent Toom contours, then $(V,{\cal E},v_\circ,\psi)$ is present in ${\bm{\phh}}$ if and only if the same is true for $(V',{\cal E}',v'_\circ,\psi')$. 
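The sets ${\cal A}(\phi)$ from (\ref{Aphi}), from which the sets $A_s(\phi_k)$ in (\ref{As}) are chosen, can be computed by brute force for any monotonic map with a small neighbourhood. The following minimal Python sketch does this by enumerating subsets in order of increasing size; applied to $\phi^{\rm coop}$ it returns exactly the sets $A_1$ and $A_2$ of our running example. All names are ours.

\begin{verbatim}
from itertools import combinations

def minimal_one_sets(neighbourhood, phi):
    # Enumerate the minimal subsets A of the neighbourhood such that
    # phi applied to the indicator of A yields 1.  Since phi is
    # monotone, these minimal sets determine phi completely.
    found = []
    for size in range(len(neighbourhood) + 1):
        for A in map(frozenset, combinations(neighbourhood, size)):
            if phi(A) == 1 and not any(B <= A for B in found):
                found.append(A)
    return found

# The cooperative branching map: x(0,0) or (x(0,1) and x(1,0)).
def phi_coop(A):
    return int((0, 0) in A or {(0, 1), (1, 0)} <= A)

print(minimal_one_sets([(0, 0), (0, 1), (1, 0)], phi_coop))
# -> the sets A_1 = {(0,0)} and A_2 = {(0,1),(1,0)} from the example above
\end{verbatim}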
For our example of the monotone cellular automaton with $\phi_1=\phi^{\rm coop}$, Definition~\ref{D:present} is illustrated in Figure~\ref{fig:minexpl}. Directed edges of charge 1 and 2 are indicated in red and blue, respectively. Because of our choice $A_2(\phi_1):=\{(0,1),(1,0)\}$, blue edges that start at internal vertices or the root point in directions where one of the spatial coordinates increases by one. Likewise, since $A_1(\phi_1):=\{(0,0)\}$, red edges that start at internal vertices or the root point straight up, i.e., in the direction of decreasing time. Sinks of the Toom contour correspond to defective sites, as indicated in Figure~\ref{fig:minexpl} on the right. In view of Lemma~\ref{L:maxup}, the following crucial theorem links the upper invariant law to Toom contours. \begin{theorem}[Presence of a Toom contour] Let\label{T:contour} ${\bm{\phh}}$ be a monotonic flow on $\{0,1\}^{{\mathbb Z}^d}$ that takes values in $\{\phi_0,\ldots,\phi_m\}$, where $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant. Let $\overline x$ denote the maximal trajectory of ${\bm{\phh}}$. Let $\sigma\geq 2$ be an integer and for each $1\leq s\leq\sigma$ and $1\leq k\leq m$, let $A_s(\phi_k)\in{\cal A}(\phi_k)$ be fixed. If $\overline x_0(0)=0$, then, with respect to the given choice of $\sigma$ and the sets $A_s(\phi_k)$, a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ is present in ${\bm{\phh}}$. \end{theorem} We note that the converse of Theorem~\ref{T:contour} does not hold, i.e., the presence in ${\bm{\phh}}$ of a Toom contour $(V,{\cal E},v_\circ,\psi)$ that is rooted at $(0,0)$ does not imply that $\overline X_0(0)=0$. This can be seen from Figure~\ref{fig:minexpl}. In this example, if there were no other defective sites apart from the sinks of the Toom contour, then the origin would have the value one. This is a difference from the Peierls arguments used in percolation theory, where the presence of a contour is a necessary and sufficient condition for the absence of percolation. Let ${\cal T}_0$ denote the set of Toom contours rooted at $(0,0)$ (up to equivalence). We formally denote a Toom contour by $T=(V,{\cal E},v_\circ,\psi)$. Let $\Phi=(\Phi_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ be an i.i.d.\ collection of monotonic maps taking values in $\{\phi_0,\ldots,\phi_m\}$. Then Theorem~\ref{T:contour} implies the Peierls bound: \begin{equation}\label{Pei} 1-\overline\rho=\P[\overline X_0(0)=0] \leq\sum_{T\in{\cal T}_0}\P\big[T\mbox{ is present in }\Phi\big]. \end{equation} In Subsection~\ref{S:erod} below, we will show how (\ref{Pei}) can be used to prove the most difficult part of Toom's stability theorem, namely, that the upper invariant law of eroders is stable under small random perturbations. \subsubsection*{Toom contours with two charges} Although Theorem~\ref{T:contour} is sufficient to prove stability of eroders, when deriving explicit bounds, it is often useful to have stronger versions of Theorem~\ref{T:contour} at one's disposal that establish the presence of Toom contours with certain additional properties that restrict the sum on the right-hand side in (\ref{Pei}) and hence lead to improved bounds. Here we formulate one such result that holds specifically for Toom contours with two charges. As before, we let ${\bm{\phh}}$ be a monotonic flow taking values in $\{\phi_0,\ldots,\phi_m\}$, of which $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant.
We set $\sigma=2$ and choose sets $A_s(\phi_k)\in{\cal A}(\phi_k)$ $(1\leq k\leq m,\ 1\leq s\leq 2)$ as in (\ref{As}). \begin{defi}\label{D:strongpres} A Toom contour $(V,{\cal E},v_\circ,\psi)$ with $2$ charges is \emph{strongly present} in the monotonic flow ${\bm{\phh}}$ if in addition to conditions (i)--(iv) of Definition~\ref{D:present}, for each $v\in V_\circ\backslash\{v_\circ\}$ and $w_1,w_2\in V$ with $(v,w_s)\in\vec E_{s,{\rm out}}(v)$ $(s=1,2)$, one has: \begin{enumerate}\addtocounter{enumi}{4} \item $\vec\psi(w_1)-\vec\psi(v)\in A_2(\varphi_{\psi(v)})$ and $\vec\psi(w_2)-\vec\psi(v)\in A_1(\varphi_{\psi(v)})$, \item $\vec\psi(w_1)\neq\vec\psi(w_2)$. \end{enumerate} \end{defi} Condition~(v) can informally be described by saying that charged edges pointing out of any source other than the root must always point in the ``wrong'' direction, compared to charged edges pointing out of an internal vertex or the root. Note that for the Toom contour in Figure~\ref{fig:minexpl}, this is indeed the case. With this definition, we can strengthen Theorem~\ref{T:contour} as follows. \begin{theorem}[Strong presence of a Toom contour] If\label{T:strongpres} $\sigma=2$, then the Toom contour $(V,{\cal E},v_\circ,\psi)$ from Theorem~\ref{T:contour} can be chosen such that it is strongly present in ${\bm{\phh}}$. \end{theorem} Our proof of Theorem~\ref{T:strongpres} follows quite a different strategy from the proof of Theorem~\ref{T:contour}. We do not know to what extent Theorem~\ref{T:strongpres} can be generalised to Toom contours with three or more charges. In the following subsections, we will show how the results of the present subsection can be applied in concrete situations. In Subsection~\ref{S:erod}, we show how Theorem~\ref{T:contour} can be used to prove stability of eroders, which is the difficult implication in Toom's stability theorem. In Subsection~\ref{S:twochar}, building on the results of Subsection~\ref{S:erod}, we show how for Toom contours with two charges, the bounds can be improved by applying Theorem~\ref{T:strongpres} instead of Theorem~\ref{T:contour}. In Subsection~\ref{S:explic}, we derive explicit bounds for two concrete eroders. In Subsection~\ref{S:intrins}, we leave the setting of Toom's stability theorem and discuss monotone random cellular automata whose definition involves more than one non-constant monotonic map. In Subsection~\ref{S:contbounds} we derive bounds for monotone interacting particle systems in continuous time. \subsection{Stability of eroders}\label{S:erod} In this subsection, we restrict ourselves to the special set-up of Toom's stability theorem. We fix a non-constant monotonic map $\phi$ that is an eroder and let $\Phi^p=(\Phi^p_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ be an i.i.d.\ collection of monotonic maps that assume the values $\phi^0$ and $\phi$ with probabilities $p$ and $1-p$, respectively. We let $(\overline X^p_t)_{t\in{\mathbb Z}}$ denote the maximal trajectory of $\Phi^p$ and let $\overline\rho(p):=\P[\overline X^p_0(0)=1]$ denote the intensity of the upper invariant law. We will show how the Peierls bound (\ref{Pei}) can be used to prove that $\overline\rho(p)\to 1$ as $p\to 0$, which is the most difficult part of Toom's stability theorem. To do this, we first need an equivalent formulation of the eroder property~\eqref{erosion}.
By definition, a \emph{polar function} is a linear function ${\mathbb R}^d\ni z\mapsto L(z)=(L_1(z),\ldots,L_\sigma(z))\in{\mathbb R}^\sigma$ such that \begin{equation}\label{polar} \sum_{s=1}^\sigma L_s(z)=0\qquad(z\in{\mathbb R}^d). \end{equation} We call $\sigma\geq 2$ the \emph{dimension} of $L$. The following lemma is adapted from \cite[Lemma~12]{Pon13}, with the basic idea going back to \cite{Too80}. Recall the definition of ${\cal A}(\phi)$ in (\ref{Aphi}). \begin{lemma}[Erosion criterion] A\label{L:erode} non-constant monotonic function $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ is an eroder if and only if there exists a polar function $L$ of dimension $\sigma\geq 2$ such that \begin{equation}\label{erode} \sum_{s=1}^\sigma\sup_{A\in{\cal A}(\phi)}\inf_{i\in A}L_s(i)>0. \end{equation} If $\phi$ is an eroder, then $L$ can moreover be chosen so that its dimension $\sigma$ is at most $d+1$. \end{lemma} To understand why the condition (\ref{erode}) implies that $\phi$ is an eroder, for $1\leq s\leq\sigma$, let \begin{equation} \delta_s:=\sup_{A\in{\cal A}(\phi)}\inf_{i\in A}L_s(i)\quad\mbox{and}\quad r_s(x):=\sup\big\{L_s(i):i\in{\mathbb Z}^d,\ x(i)=0\big\} \qquad\big(x\in\{0,1\}^{{\mathbb Z}^d}\big), \end{equation} with $r_s(\underline 1):=-\infty$, and let $(X^0_k)_{k\geq 0}$ denote the deterministic cellular automaton that applies the map $\phi$ in each space-time point, started in an arbitrary initial state. In the proof of Lemma~\ref{L:Lerod} below, we will show that \begin{equation}\label{edgespeed} r_s(X^0_n)\leq r_s(X^0_0)-\delta_sn\qquad(n\geq 0). \end{equation} This says that $\delta_s$ has the interpretation of an \emph{edge speed} in the direction defined by the linear function $L_s$. If $x$ is a configuration containing finitely many zeros, then we define the \emph{extent} of $x$ by \begin{equation} {\rm ext}(x):=\sum_{s=1}^\sigma r_s(x). \end{equation} Then ${\rm ext}(\underline 1)=-\infty$, while on the other hand, by the defining property (\ref{polar}) of a polar function, ${\rm ext}(x)\geq 0$ for each $x$ that contains at least one zero. Now (\ref{edgespeed}) implies that if $X^0_0$ contains finitely many zeros, then \begin{equation} {\rm ext}(X^0_n)\leq{\rm ext}(X^0_0)-n\delta \quad\mbox{with}\quad \delta:=\sum_{s=1}^\sigma\delta_s. \end{equation} It follows that $X^0_n=\underline 1$ for all $n$ such that ${\rm ext}(X^0_0)-n\delta<0$. Since $\delta>0$ by (\ref{erode}), we conclude that $\phi$ is an eroder. We use Lemma~\ref{L:erode} and the polar functions to choose the number of charges $\sigma$ and to make a choice for the sets $A_s(\phi)\in{\cal A}(\phi)$ $(1\leq s\leq\sigma)$ as in (\ref{As}) when defining Toom contours. For a given choice of a polar function $L$ and sets $A_s(\phi)$, let us set \begin{equation}\label{Bphi} B(\phi):=\bigcup_{s=1}^\sigma A_s(\phi), \end{equation} and define \begin{equation}\begin{array}{r@{\,}c@{\,}lcr@{\,}c@{\,}ll}\label{epsR} \displaystyle\varepsilon&:=&\displaystyle\sum_{s=1}^\sigma\varepsilon_s&\quad\mbox{with}\quad& \displaystyle\varepsilon_s&:=&\displaystyle\inf_{i\in A_s(\phi)}L_s(i)\qquad&\displaystyle(1\leq s\leq\sigma),\\[5pt] \displaystyle R&:=&\displaystyle\sum_{s=1}^\sigma R_s&\quad\mbox{with}\quad& \displaystyle R_s&:=&\displaystyle-\inf_{i\in B(\phi)}L_s(i)\qquad&\displaystyle(1\leq s\leq\sigma). \end{array}\ee Then Lemma~\ref{L:erode} tells us that since $\phi$ is an eroder, we can choose the polar function $L$ and sets $A_s(\phi)$ in such a way that $\varepsilon>0$, which we assume from now on. 
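The constants in (\ref{epsR}) are elementary to evaluate once $L$ and the sets $A_s(\phi)$ are fixed. The following minimal Python sketch computes $\varepsilon$ and $R$ for arbitrary choices; run on the cooperative branching example treated next, it reproduces the values $\varepsilon=R=1$. The function and variable names are ours.

\begin{verbatim}
def eps_R(L, A):
    # L: list of the sigma linear maps making up the polar function;
    # A: list of the sigma chosen sets A_s(phi) of lattice points.
    sigma = len(L)
    B = set().union(*A)                      # B(phi), the union of the A_s
    eps = sum(min(L[s](i) for i in A[s]) for s in range(sigma))
    R = sum(-min(L[s](i) for i in B) for s in range(sigma))
    return eps, R

# Cooperative branching: sigma = 2, A_1 = {(0,0)}, A_2 = {(0,1),(1,0)},
# with polar function L_1(z) = -z_1 - z_2 and L_2 = -L_1.
L = [lambda z: -z[0] - z[1], lambda z: z[0] + z[1]]
A = [{(0, 0)}, {(0, 1), (1, 0)}]
print(eps_R(L, A))                           # (1, 1): eps = 1 and R = 1
\end{verbatim}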
Recall that in the example where $\phi=\phi^{\rm coop}$, we earlier made the choices $\sigma:=2$, $A_1(\phi):=\{(0,0)\}$, and $A_2(\phi):=\{(0,1),(1,0)\}$. We will now also choose a polar function by setting \begin{equation}\label{Lcoop} L_1(z_1,z_2):=-z_1-z_2\quad\mbox{and}\quad L_2:=-L_1\qquad\big((z_1,z_2)\in{\mathbb R}^2\big). \end{equation} One can check that for this choice of $L$ the constants from~\eqref{epsR} are given by \begin{equation}\label{coopepsR} \varepsilon=1\quad\mbox{and}\quad R=1. \end{equation} Returning to the setting where $\phi$ is a general eroder, we let ${\cal T}_0$ denote the set of Toom contours rooted at $(0,0)$ (up to equivalence). Since we apply only one non-constant monotonic map, conditions (iii) and (iv) of Definition~\ref{D:present} of what it means for a contour to be present in $\Phi^p$ do not involve any randomness, i.e., these conditions now simplify to the deterministic conditions: \begin{itemize} \item[{\rm(iii)'}] $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_s(\phi)$ for all $(v,w)\in\vec E^\ast_s$ $(1\leq s\leq\sigma$), \item[{\rm(iv)'}] $\displaystyle\vec\psi(w)-\vec\psi(v)\in B(\phi)$ for all $(v,w)\in\vec E^\circ$. \end{itemize} \begin{defi} We\label{D:Tac} let ${\cal T}'_0$ denote the set of Toom contours rooted at $(0,0)$ (up to equivalence) that satisfy conditions (iii)' and (iv)'. \end{defi} For each $T=(V,{\cal E},v_\circ,\psi)\in{\cal T}'_0$, let \begin{equation}\label{eq:nstartne} n_\ast(T):=|V_\circ|=|V_\ast|\quad\mbox{and}\quad n_{\rm e}(T):=|\vec E_1|=\cdots=|\vec E_\sigma| \end{equation} denote its number of sources (or, equivalently, sinks) and its number of directed edges of each charge, respectively. As already explained informally, the central idea of Toom contours is that differently charged edges move away from each other except for edges starting at a source, which allows us to bound the number $n_{\rm e}(T)$ of edges in terms of the number $n_\ast(T)$ of sources (or equivalently sinks). We now make this informal idea precise. It is at this point of the argument that the eroder property is used in the form of Lemma~\ref{L:erode}, which allowed us to choose the sets $A_s(\phi)$ and the polar function $L$ such that the constant $\varepsilon$ from (\ref{epsR}) is positive. We also need the following simple lemma.\footnote{Lemmas \ref{L:zerosum} and \ref{L:edgebnd} are similar to \cite[Lemmas 1 and 2]{Too80}. The main difference is that in Toom's construction, the number of incoming edges of each charge equals the number of outgoing edges of that charge at all vertices of the contour, i.e., there are no sources and sinks.} \begin{lemma}[Zero sum property] Let\label{L:zerosum} $(V,{\cal E})$ be a Toom graph with $\sigma$ charges, let $\psi:V\to{\mathbb Z}^{d+1}$ be an embedding of $(V,{\cal E})$, and let $L:{\mathbb R}^d\to{\mathbb R}^\sigma$ be a polar function with dimension $\sigma$. Then \begin{equation}\label{zerosum} \sum_{s=1}^\sigma\sum_{(v,w)\in\vec E_s}\big(L_s(\vec\psi(w))-L_s(\vec\psi(v))\big)=0. \end{equation} \end{lemma} \begin{Proof} We can rewrite the sum in (\ref{zerosum}) as \begin{equation} \sum_{v\in V}\Big\{\sum_{s=1}^\sigma\sum_{(u,v)\in\vec E_{s,{\rm in}}(v)}L_s(\vec\psi(v)) -\sum_{s=1}^\sigma\sum_{(v,w)\in\vec E_{s,{\rm out}}(v)}L_s(\vec\psi(v))\Big\}. \end{equation} At internal vertices, the term inside the brackets is zero because the number of incoming edges of each charge equals the number of outgoing edges of that charge.
At the sources and sinks, the term inside the brackets is zero by the defining property (\ref{polar}) of a polar function, since there is precisely one outgoing (resp.\ incoming) edge of each charge. \end{Proof} As a consequence of Lemma~\ref{L:zerosum}, we can estimate $n_{\rm e}(T)$ from above in terms of $n_\ast(T)$. \begin{lemma}[Upper bound on the number of edges] Let\label{L:edgebnd} $\varepsilon$ and $R$ be defined in~\eqref{epsR}. Then each $T\in{\cal T}'_0$ satisfies $n_{\rm e}(T)\leq(1+R/\varepsilon)\big(n_\ast(T)-1\big)$. \end{lemma} \begin{Proof} Since $|\vec E^\circ_s|=n_\ast(T)-1$ and $|\vec E^\ast_s|=n_{\rm e}(T)-n_\ast(T)+1$ $(1\leq s\leq\sigma)$, Lemma~\ref{L:zerosum} and rules (iii)' and (iv)' imply that \be\begin{array}{r@{\,}c@{\,}l} \displaystyle 0 &=&\displaystyle\sum_{s=1}^\sigma\Big(\sum_{(v,w)\in \vec E^\ast_s} \big(L_s(\vec\psi(w))-L_s(\vec\psi(v))\big) +\sum_{(v,w)\in\vec E^\circ_s} \big(L_s(\vec\psi(w))-L_s(\vec\psi(v))\big)\Big)\\[5pt] &\geq&\displaystyle\sum_{s=1}^\sigma\big[\big(n_{\rm e}(T)-n_\ast(T)+1\big)\varepsilon_s-\big(n_\ast(T)-1\big)R_s\big] =\varepsilon n_{\rm e}(T)-(\varepsilon+R)\big(n_\ast(T)-1\big), \end{array}\ee where we have used that $L_s(\vec\psi(w))-L_s(\vec\psi(v))=L_s\big(\vec\psi(w)-\vec\psi(v)\big)$ by the linearity of $L_s$. \end{Proof} By condition~(ii) of Definition~\ref{def:embedding} of an embedding, sinks of a Toom contour do not overlap. By condition~(i) of Definition~\ref{D:present} of what it means for a Toom contour to be present, each sink corresponds to a space-time point $(i,t)$ that is defective, meaning that $\Phi_{(i,t)}=\phi^0$, which happens with probability $p$, independently for all space-time points. By Lemma~\ref{L:edgebnd}, we can then estimate the right-hand side of (\ref{Pei}) from above by \begin{equation}\begin{array}{l}\label{Peierls} \displaystyle\sum_{T\in{\cal T}_0}\P\big[T\mbox{ is present in }\Phi\big] \leq\sum_{T\in{\cal T}'_0}p^{n_\ast(T)}=p\sum_{T\in{\cal T}'_0}p^{n_\ast(T)-1}\\[5pt] \displaystyle\quad\leq p\sum_{T\in{\cal T}'_0}p^{n_{\rm e}(T)/(1+R/\varepsilon)} =p\sum_{n=0}^\infty N_n p^{n/(1+R/\varepsilon)}, \end{array}\ee where \begin{equation}\label{eq:Nn} N_n:=\big|\{T\in{\cal T}'_0:n_{\rm e}(T)=n\}\big|\qquad(n\geq 0) \end{equation} denotes the number of (nonequivalent) contours in ${\cal T}'_0$ that have $n$ edges of each charge. The following lemma gives a rough upper bound on $N_n$. Recall the definition of $B(\phi)$ in (\ref{Bphi}). \begin{lemma}[Exponential bound] Let\label{L:expbd} $M:=\big|B(\phi)\big|$ and let $\tau:=\lceil\ffrac{1}{2}\sigma\rceil$ denote $\ffrac{1}{2}\sigma$ rounded up to the next integer. Then \begin{equation} N_n\leq n^{\tau-1}(\tau+1)^{2\tau n}M^{\sigma n}\qquad(n\geq 1). \end{equation} \end{lemma} Combining (\ref{Peierls}) and Lemma~\ref{L:expbd}, we see that the right-hand side of (\ref{Pei}) is finite for $p$ sufficiently small and hence (by dominated convergence) tends to zero as $p\to 0$. This proves that $\overline\rho(p)\to 1$ as $p\to 0$, which is the most difficult part of Toom's stability theorem. \subsection{Contours with two charges}\label{S:twochar} For Toom contours with two charges, the bounds derived in the previous subsection can be improved by using Theorem~\ref{T:strongpres} instead of Theorem~\ref{T:contour}. 
To make this precise, for Toom contours with two charges, we define a subset ${\cal T}''_0$ of the set of contours ${\cal T}'_0$ from Definition~\ref{D:Tac} as follows: \begin{defi} For\label{D:Tacc} Toom contours with $\sigma=2$ charges, we let ${\cal T}''_0$ denote the set of Toom contours rooted at $(0,0)$ (up to equivalence) that satisfy: \begin{itemize} \item[{\rm(iii)'}] $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_s(\phi)$ for all $(v,w)\in\vec E^\ast_s$ $(1\leq s\leq 2$), \item[{\rm(iv)''}] $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_{3-s}(\phi)$ for all $(v,w)\in\vec E^\circ_s$ $(1\leq s\leq 2$), \item[{\rm(v)''}] $\displaystyle\vec\psi(w_1)\neq\vec\psi(w_2)$ for all $v\in V_\circ\backslash\{v_\circ\}$ and $w_1,w_2\in V$ with $(v,w_s)\in\vec E_{s,{\rm out}}(v)$ $(s=1,2)$. \end{itemize} \end{defi} Note that condition (iii)' above is the same condition as (iii)' of Definition~\ref{D:Tac}. Condition (iv)'' strengthens condition (iv)' of Definition~\ref{D:Tac}. Conditions (iv)'' and (v)'' correspond to conditions (v) and (vi) of Definition~\ref{D:strongpres}, which in our present set-up do not involve any randomness. We will need analogues of Lemmas~\ref{L:edgebnd} and~\ref{L:expbd} with ${\cal T}'_0$ replaced by ${\cal T}''_0$. We define \begin{equation}\label{eq:Rprime} R'':=\sum_{s=1}^\sigma R''_s\qquad\mbox{with}\quad R''_1:=-\inf_{i\in A_2(\phi)}L_1(i)\quad\mbox{and}\quad R''_2:=-\inf_{i\in A_1(\phi)}L_2(i). \end{equation} The following lemma is similar to Lemma~\ref{L:edgebnd}. \begin{lemma}[Upper bound on the number of edges for $\sigma=2$] Let\label{L:edgebndcycle} $\varepsilon$ and $R''$ be defined in~\eqref{epsR} and~\eqref{eq:Rprime}. Then each $T\in{\cal T}''_0$ satisfies $n_{\rm e}(T)\leq(1+R''/\varepsilon)\big(n_\ast(T)-1\big)$. \end{lemma} \begin{Proof} The proof is the same as that of Lemma~\ref{L:edgebnd}, with the only difference that condition (iv)'' of Definition~\ref{D:Tacc} allows us to use $R''_s$ instead of $R_s$ $(s=1,2)$ as upper bounds. \end{Proof} Similarly to (\ref{eq:Nn}), we let \begin{equation} N''_n:=\big|\{T\in{\cal T}''_0:n_{\rm e}(T)=n\}\big|\qquad(n\geq 0) \end{equation} denote the number of (nonequivalent) contours in ${\cal T}''_0$ that have $n$ edges of each charge. Then Theorem~\ref{T:strongpres} implies the Peierls bound: \begin{equation}\label{Peierlscycle} 1-\overline\rho(p)\leq \sum_{T\in{\cal T}_0}\P\big[T\mbox{ is strongly present in }\Phi\big] \leq \sum_{T\in{\cal T}''_0}p^{n_\ast(T)} \leq p\sum_{n=0}^\infty N''_n p^{n/(1+R''/\varepsilon)}. \end{equation} The following lemma is similar to Lemma~\ref{L:expbd}. \begin{lemma}[Exponential bound for $\sigma=2$] Let\label{L:expbdcycle} $M_s:=\big|A_s(\phi)\big|$ $(s=1,2)$. Then \begin{equation} N''_n\leq\ffrac{1}{2}(4M_1M_2)^{n} \qquad(n\geq 1). \end{equation} \end{lemma} \subsection{Some explicit bounds}\label{S:explic} We continue to work in the set-up of the previous subsections, i.e., we consider monotone random cellular automata that apply the maps $\phi^0$ and $\phi$ with probabilities $p$ and $1-p$, respectively, where $\phi$ is an eroder. An easy coupling argument shows that the intensity $\overline\rho(p)$ of the upper invariant law is a nonincreasing function of $p$, so there exists a unique $p_{\rm c}\in[0,1]$ such that $\overline\rho(p)>0$ for $p<p_{\rm c}$ and $\overline\rho(p)=0$ for $p>p_{\rm c}$. Since $\phi$ is an eroder, Toom's stability theorem tells us that $p_{\rm c}>0$.
In this subsection, we derive explicit lower bounds on $p_{\rm c}$ for two concrete choices of the eroder $\phi$. If one wants to use (\ref{Pei}) to show that $\overline\rho>0$, then one must show that the right-hand side of (\ref{Pei}) is less than one. In practice, when deriving explicit bounds, it is often easier to show that a certain sum is finite than to show that it is less than one. We will prove a generalisation of Theorems \ref{T:contour} and \ref{T:strongpres} that can in many cases be used to show that if a certain sum is finite, then $\overline\rho>0$. In the set-up of Theorem~\ref{T:contour}, we choose $j_s\in A_s(\phi_1)$ $(1\leq s\leq\sigma)$. We fix an integer $r\geq 0$ and we let ${\bm{\phh}}^{(r)}$ denote the modified monotonic flow defined by \begin{equation}\label{eq:modifiedbooleanmaps} \varphi^{(r)}_{(i,t)}:=\left\{\begin{array}{ll} \phi_1\quad&\mbox{if }-r<t\leq 0,\\[5pt] \varphi_{(i,t)}&\mbox{otherwise.} \end{array}\right. \end{equation} Below, we let $\overline x^{(r)}$ denote the maximal trajectory of the modified monotonic flow ${\bm{\phh}}^{(r)}$. As before, we let ${\rm Conv}(A)$ denote the convex hull of a set $A$. \begin{proposition}[Presence of a large contour] In\label{P:Peifin} the set-up of Theorem~\ref{T:contour}, on the event that $\overline x^{(r)}_{-r}(i)=0$ for all $i\in{\rm Conv}(\{rj_1,\ldots,rj_\sigma\})$, there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in ${\bm{\phh}}^{(r)}$ such that $\psi_{d+1}(v)\leq-r$ for all $v\in V_\ast$ and $\psi_{d+1}(v)\leq 1-r$ for all $v\in V_\circ\backslash\{v_\circ\}$. If $\sigma=2$, then such a Toom contour is strongly present in ${\bm{\phh}}^{(r)}$. \end{proposition} As a simple consequence of this proposition, we obtain the following lemma. \begin{lemma}[Finiteness of the Peierls sum] If\label{L:Peifin} $\displaystyle\sum_{T\in{\cal T}'_0}p^{n_\ast(T)}<\infty$, then $\overline\rho(p)>0$. If $\sigma=2$,\\[-15pt] \noindent then similarly $\displaystyle\sum_{T\in{\cal T}''_0}p^{n_\ast(T)}<\infty$ implies $\overline\rho(p)>0$. \end{lemma} We prove Proposition~\ref{P:Peifin} and Lemma~\ref{L:Peifin} in Section~\ref{S:finP}. \vspace{0.2cm} \noindent \textbf{Cooperative branching} Generalizing the definition in (\ref{phiNEC}), for each dimension $d\geq 1$, we define a monotonic map $\phi^{{\rm coop},d}:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ by \begin{equation}\label{phicoopddim} \phi^{{\rm coop},d}(x):=\displaystyle x(0)\vee\big(x(e_1)\wedge\dots\wedge x(e_d)\big), \end{equation} where 0 is the origin and $e_i$ denotes the $i$th unit vector in ${\mathbb Z}^d$. In particular, in dimension $d=2$, this is the cooperative branching rule $\phi^{\rm coop}$ defined in (\ref{phiNEC}). We choose $\sigma:=2$, $A_1(\phi):=\{0\}$, and $A_2(\phi):=\{e_1,\dots,e_d\}$, and as our polar function $L$ we choose \begin{equation} L_1(z_1,\dots, z_d):=-\sum_{i=1}^dz_i\quad\mbox{and}\quad L_2(z_1,\dots, z_d):=\sum_{i=1}^dz_i, \end{equation} which has the result that the constants from~\eqref{epsR} and~\eqref{eq:Rprime} are given by $\varepsilon=1$, $R=1$ and $R''=1$. Arguing as in (\ref{Peierls}), using Lemmas \ref{L:edgebnd} and \ref{L:expbd} with $M=d+1$, $\sigma=2$ and $\tau=1$, we obtain the Peierls bound: \begin{equation} \sum_{T\in{\cal T}_0}\P\big[T\mbox{ is present in }\Phi\big] \leq\sum_{T\in{\cal T}'_0}p^{n_\ast(T)} \leq p\sum_{n=0}^\infty 2^{2n}(d+1)^{2n}p^{n/2}. \end{equation} This is finite when $4(d+1)^2p^{1/2}<1$, so using Lemma~\ref{L:Peifin} we obtain the bound $p_{\rm c}(d)\geq 16^{-1}(d+1)^{-4}$.
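Since bounds of this type all reduce to an (almost) geometric series, the small computation behind them can be packaged once and for all. The sketch below (an informal aid, with hypothetical function names) returns the value of $p$ below which the series $p\sum_n n^{\tau-1}\big((\tau+1)^{2\tau}M^\sigma\big)^np^{n/(1+R/\varepsilon)}$ from (\ref{Peierls}) and Lemma~\ref{L:expbd} converges; note that the polynomial factor $n^{\tau-1}$ does not affect the radius of convergence. It reproduces the bound just derived, as well as the constant $3^{-21}$ obtained for Toom's model below.

\begin{verbatim}
from math import ceil

def peierls_threshold(M, sigma, R_over_eps, tau=None):
    # The series converges iff (tau+1)^(2 tau) * M^sigma * p^(1/(1+R/eps)) < 1,
    # i.e. iff p < ((tau+1)^(2 tau) * M^sigma)^(-(1 + R/eps)).
    if tau is None:
        tau = ceil(sigma / 2)
    growth = (tau + 1) ** (2 * tau) * M ** sigma
    return growth ** (-(1 + R_over_eps))

print(peierls_threshold(M=3, sigma=2, R_over_eps=1))  # coop, d=2: 1/1296
print(peierls_threshold(M=3, sigma=3, R_over_eps=2))  # Toom's model: 3^(-21)
\end{verbatim}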
This bound can be improved by using Theorem~\ref{T:strongpres} and its consequences. Applying Lemmas \ref{L:edgebndcycle} and \ref{L:expbdcycle} with $M_1=1$, $M_2=d$, we obtain the Peierls bound: \begin{equation} \sum_{T\in{\cal T}_0}\P\big[T\mbox{ is strongly present in }\Phi\big] \leq\sum_{T\in{\cal T}''_0}p^{n_\ast(T)} \leq \frac p 2 \sum_{n=0}^\infty 4^n d^n p^{n/2}. \end{equation} This is finite when $4d p^{1/2}<1$, so using Lemma~\ref{L:Peifin} we obtain the bound \begin{equation} p_{\rm c}(d)\geq \frac 1 {16 d^2}. \end{equation} In particular, in two dimensions this yields $p_{\rm c}(2)\geq 1/64$. This is still some way off the estimated value $p_{\rm c}(2)\approx 0.105$ coming from numerical simulations but considerably better than the bound obtained from Lemmas \ref{L:edgebnd} and \ref{L:expbd}.\medskip \noindent \textbf{Toom's model} We take for $\phi$ the map $\phi^{\rm NEC}$. Then the set ${\cal A}(\phi)$ from (\ref{Aphi}) is given by ${\cal A}(\phi)=\{A_1,A_2,A_3\}$ with $A_1:=\{(0,0),(0,1)\}$, $A_2:=\{(0,0),(1,0)\}$, and $A_3:=\{(0,1),(1,0)\}$. Using (\ref{erosion}) we see that $\phi^{\rm NEC}$ is an eroder. We set $\sigma:=3$ and for the sets $A_s(\phi^{\rm NEC})$, $s=1,2,3$, of (\ref{As}) we choose the sets $A_1,A_2,A_3$ we have just defined. We define a polar function $L$ with dimension $\sigma=3$ by \begin{equation}\label{eq:Toompolar} L_1(z_1,z_2):=-z_1,\quad L_2(z_1,z_2):=-z_2,\quad L_3(z_1,z_2):=z_1+z_2, \end{equation} $\big((z_1,z_2)\in{\mathbb R}^2\big)$. One can check that for this choice of $L$ and the sets $A_s(\phi^{\rm NEC})$ $(1\leq s\leq 3)$, the constants from~\eqref{epsR} are given by \begin{equation} \varepsilon=1\quad\mbox{and}\quad R=2. \end{equation} Using Lemma~\ref{L:expbd} with $M=3$, $\sigma=3$, and $\tau=2$, we can estimate the Peierls sum in (\ref{Peierls}) from above by \begin{equation} p\sum_{n=0}^\infty n3^{4n}3^{3n}p^{n/3}. \end{equation} This is finite when $3^7p^{1/3}<1$, so using Lemma~\ref{L:Peifin} we obtain the bound \begin{equation} p_{\rm c}\geq 3^{-21}, \end{equation} which does not compare well to the estimated value $p_{\rm c}\approx 0.053$ coming from numerical simulations. Nevertheless, this is probably the best rigorous bound currently available. \subsection{Cellular automata with intrinsic randomness}\label{S:intrins} In this subsection we will be interested in monotone random cellular automata whose definition involves more than one non-constant monotonic map. We fix a dimension $d\geq 1$, a collection $\phi_1,\ldots,\phi_m$ of non-constant monotonic maps $\phi_k:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$, and a probability distribution $p_1,\ldots,p_m$. Let $(X_k)_{k\geq 0}$ denote the monotone random cellular automaton that applies the maps $\phi_1,\ldots,\phi_m$ with probabilities $p_1,\ldots,p_m$ and let $\phi_0:=\phi^0$ be the constant map that always gives the outcome zero. By definition, a \emph{$\delta$-perturbation} of $(X_k)_{k\geq 0}$ is a monotone random cellular automaton $(X'_k)_{k\geq 0}$ that applies the maps $\phi_0,\ldots,\phi_m$ with probabilities $p'_0,\ldots,p'_m$ that satisfy $p'_0\leq\delta$ and $p'_k\leq p_k$ for all $k=1,\ldots,m$. We say that $(X_k)_{k\geq 0}$ is \emph{stable} if for each $\varepsilon>0$, there exists a $\delta>0$ such that the density $\overline\rho'$ of the upper invariant law of any $\delta$-perturbation of $(X_k)_{k\geq 0}$ satisfies $\overline\rho'\geq 1-\varepsilon$.
Note that in the special case that $m=1$, which corresponds to the set-up of Toom's stability theorem, these definitions coincide with our earlier definition. For deterministic monotone cellular automata, which in our set-up correspond to the case $m=1$, we have seen in Lemma~\ref{L:erode} and formula (\ref{edgespeed}) that the eroder property can equivalently be formulated in terms of edge speeds. For a random monotone cellular automaton $(X_k)_{k\geq 0}$, the intuition is similar, but it is not entirely clear how to define edge speeds in the random setting and it can be more difficult to determine whether $(X_k)_{k\geq 0}$ is an eroder. Fix a polar function $L$ of dimension $\sigma\geq 2$ and let \begin{equation}\label{eq:speeds} \varepsilon^k_s:=\sup_{A\in{\cal A}(\phi_k)}\inf_{i\in A}L_s(i)\qquad(1\leq k\leq m,\ 1\leq s\leq\sigma) \end{equation} denote the edge speed in the direction defined by the linear function $L_s$ of the deterministic automaton that only applies the map $\phi_k$. If \begin{equation}\label{unispeed} \sum_{s=1}^\sigma\varepsilon_s>0\quad\mbox{with}\quad\varepsilon_s:=\inf_{1\leq k\leq m}\varepsilon^k_s, \end{equation} then (\ref{edgespeed}) remains valid almost surely. In such a situation, it is not very hard to adapt the arguments of Subsection~\ref{S:erod} to see that $(X_k)_{k\geq 0}$ is stable. The condition (\ref{unispeed}) is, however, very restrictive and excludes many interesting cases. In particular, it excludes the case when one of the maps $\phi_1,\ldots,\phi_m$ is the identity map $\phi^{\rm id}$, which, as explained below (\ref{phiid}), is relevant in view of treating continuous-time interacting particle systems. Indeed, observe that, if $\phi_k=\phi^{\rm id}$, then $\varepsilon_s^k=0$ for each polar function $L$ of dimension $\sigma$ and each $1\leq s\leq \sigma$, implying $\sum_{s=1}^\sigma \varepsilon_s\leq 0$. The following example, which is an adaptation of \cite[Example~18.3.5]{Gra99}, shows that in such situations it can be much more subtle to determine whether a random monotone cellular automaton is stable. Fix an integer $n\geq 1$ and let $\phi_1:\{0,1\}^{{\mathbb Z}^2}\to\{0,1\}$ be the monotonic map defined as in (\ref{Aphi}) by the set of minimal configurations \begin{equation} {\cal A}(\phi_1):=\big\{\{(-1,0),(0,0)\},\{(-2,0),(0,0)\},\{(m,k):-3\leq m\leq-2,\ |k|\leq n\}\big\}. \end{equation} Using (\ref{erosion}), it is straightforward to check that $\phi_1$ is an eroder. Now consider the random monotone cellular automaton $(X_k)_{k\geq 0}$ that applies the maps $\phi_1$ and $\phi^{\rm id}$ with probabilities $p$ and $1-p$, respectively, for some $0\leq p\leq 1$. We claim that if $p<1$, then for $n$ sufficiently large, $(X_k)_{k\geq 0}$ is not stable. To see this, fix $l\geq 2$ and consider an initial state such that $X_0(i)=0$ for $i\in\{0,\ldots,l\}\times\{0,\ldots,n\}$ and $X_0(i)=1$ otherwise. Set \begin{equation} \alpha_k:=\inf_{0\leq i_2\leq n}\inf\{i_1:X_k(i_1,i_2)=0\}\quad\mbox{and}\quad \beta^j_k:=\sup\{i_1:X_k(i_1,j)=0\}\quad(0\leq j\leq n). \end{equation} As long as at each height $0\leq j\leq n$, there are at least two sites of type 0, the right edge processes $(\beta^j_k)_{k\geq 0}$ with $0\leq j\leq n$ behave as independent random walks that make one step to the right with probability $p$. Therefore, the right edge of the zeros moves with speed $p$ to the right. In each time step, all sites in $\{\alpha_k,\alpha_k+1\}\times\{0,\ldots,n\}$ that are of type $0$ switch to type 1 with probability $p$.
When $p=1$, the effect of this is that the left edge of the zeros moves with speed two to the right and eventually catches up with the right edge, which explains why $\phi_1$ is an eroder. However, when $p<1$, the left edge can move to the right only once all sites in $\{\alpha_k\}\times\{0,\ldots,n\}$ have switched to type 1. For $n$ large enough, this slows down the speed of the left edge with the result that in $(X_k)_{k\geq 0}$ the initial set of zeros will never disappear. It is not difficult to prove that this implies that $(X_k)_{k\geq 0}$ is not stable. To see a second example that demonstrates the complications that can arise when we replace deterministic monotone cellular automata by random ones, recall the maps $\phi^{\rm NEC}$, $\phi^{\rm NWC}$, $\phi^{\rm SWC}$, and $\phi^{\rm SEC}$ defined in and below (\ref{phiNEC}). For the map $\phi^{\rm NEC}$, the edge speeds in the directions defined by the linear functions $L_1$ and $L_2$ from (\ref{eq:Toompolar}) are zero but the edge speed corresponding to $L_3$ is not, which we used in Subsection~\ref{S:explic} to prove that the deterministic monotone cellular automaton that always applies the map $\phi^{\rm NEC}$ is stable. By contrast, for the cellular automaton that applies the maps $\phi^{\rm NEC}$, $\phi^{\rm NWC}$, $\phi^{\rm SWC}$, and $\phi^{\rm SEC}$ with equal probabilities, by symmetry in space and since these maps treat the types 0 and 1 symmetrically, the edge speed in each direction is zero. As a result, we conjecture that, although each map applied by this random monotone cellular automaton is an eroder, it is not stable. In spite of these complications, Toom contours can sometimes be used to prove stability of random monotone cellular automata, even in situations where the simplifying assumption (\ref{unispeed}) does not hold. In these cases we cannot rely on the use of polar functions; instead, we have to carefully examine the structure of the contour to be able to bound the number of contours in terms of the number of defective sites. Furthermore, one can generally take $\sigma:=\bigvee_{k=1}^m|{\cal A}(\phi_k)|$. We will demonstrate this on a cellular automaton that combines the cooperative branching map defined in (\ref{phicoopddim}) with the identity map.\medskip \noindent \textbf{Cooperative branching with identity map} We consider the monotone random cellular automaton on ${\mathbb Z}^d$ that applies the maps $\phi^0,\phi^{\rm id}$, and $\phi^{{\rm coop},d}$ with probabilities $p,q,r$, respectively, where $q=1-p-r$. For each $p,r\geq 0$ such that $p+r\leq 1$, let $\overline\rho(p,r)$ denote the intensity of the upper invariant law of the process with parameters $p,1-p-r,r$. A simple coupling argument shows that for fixed $0\leq r<1$, the function $p\mapsto\overline\rho(p,r)$ is nonincreasing on $[0,1-r]$, so for each $0\leq r<1$, there exists a $p_{\rm c}(r)\in[0,1-r]$ such that $\overline\rho(p,r)>0$ for $0\leq p<p_{\rm c}(r)$ and $\overline\rho(p,r)=0$ for $p_{\rm c}(r)<p\leq 1-r$. We will derive a lower bound on $p_{\rm c}(r)$. Recall that setting $p:=\varepsilon$ and $r:=\lambda\varepsilon$, rescaling time by a factor $\varepsilon$, and sending $\varepsilon\to 0$ corresponds to taking the continuous-time limit, where in the limiting interacting particle system the maps $\phi^0$ and $\phi^{{\rm coop},d}$ are applied with rates 1 and $\lambda$, respectively. For this reason, we are especially interested in the asymptotics of $p_{\rm c}(r)$ when $r$ is small.
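As with the purely two-map automaton, the model just described is easy to simulate before any rigorous bound is derived. The sketch below (informal; grid size and parameter values are hypothetical) extends the earlier cooperative branching simulation with the identity map, applying $\phi^0$, $\phi^{\rm id}$, and $\phi^{{\rm coop},d}$ with probabilities $p$, $q=1-p-r$, and $r$ in dimension $d=2$.

\begin{verbatim}
import numpy as np

def step(x, u, p, r):
    # u: i.i.d. uniforms per site; u < p applies phi^0, p <= u < p + r
    # applies phi^coop, and otherwise the site keeps its value (phi^id).
    coop = x | (np.roll(x, -1, axis=0) & np.roll(x, -1, axis=1))
    return np.where(u < p, 0, np.where(u < p + r, coop, x)).astype(np.uint8)

def rho(p, r, size=150, steps=400, seed=1):
    # Density of ones after `steps` updates, started from all ones.
    rng = np.random.default_rng(seed)
    x = np.ones((size, size), dtype=np.uint8)
    for _ in range(steps):
        x = step(x, rng.random((size, size)), p, r)
    return x.mean()

# Small p and r: the regime p = eps, r = lambda * eps that approximates
# the continuous-time sexual contact process discussed further below.
print(rho(0.002, 0.2))
\end{verbatim}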
In line with notation introduced in Subsection~\ref{S:explic}, we define $A_1:=\{0\}$ and $A_2:=\{e_1,\dots, e_d\}$. We have \begin{equation} {\cal A}(\phi^{\rm id})=\big\{A_1\big\}\quad\mbox{and}\quad {\cal A}(\phi^{\rm coop, d})=\big\{A_1,A_2\big\}, \end{equation} thus we set $\sigma:=|{\cal A}(\phi^{\rm id})|\vee|{\cal A}(\phi^{\rm coop, d})|=2$, and for the sets $A_s(\phi_k)$ in (\ref{As}) we make the choices \begin{equation}\begin{array}{ll}\label{coopA12} \displaystyle A_1(\phi^{\rm id}):=A_1,\quad& A_2(\phi^{\rm id}):=A_1,\\[5pt] \displaystyle A_1(\phi^{\rm coop,d}):=A_1,\quad& A_2(\phi^{\rm coop, d}):=A_2. \end{array}\ee Let $\Phi=(\Phi_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ be an i.i.d.\ collection of monotonic maps so that $\P[\Phi_{(i,t)}=\phi^0]=p$, $\P[\Phi_{(i,t)}=\phi^{\rm id}]=q$, and $\P[\Phi_{(i,t)}=\phi^{\rm coop, d}]=r$. We let ${\cal T}_0$ denote the set of Toom contours $(V, \mathcal E, 0, \psi)$ rooted at the origin with respect to the given choice of $\sigma$ and the sets $A_s(\phi_k)$ in~\eqref{coopA12}. Theorem~\ref{T:strongpres} then implies the Peierls bound \begin{equation}\label{strPei} 1-\overline\rho \leq \sum_{T\in{\cal T}_0}\P\big[T\mbox{ is strongly present in }\Phi\big]. \end{equation} In Section~\ref{S:intbd}, we give an upper bound on this expression by carefully examining the structure of Toom contours for this model. We will prove the following lower bound on $p_{\rm c}(r)$ for each $r\in[0,1)$: \[p_{\rm c}(r)>\big(\sqrt{(d+0.5)^2+1/(16d)}-d-0.5\big)r.\] In particular for $d=2$ we obtain the bound $p_{\rm c}(r)> 0.00624\, r$. \subsection{Continuous time}\label{S:contfirst} In this subsection, we consider monotone interacting particle systems of the type described in (\ref{traj}). We briefly recall the set-up described there. We are given a finite collection $\phi_1,\ldots,\phi_m$ of non-constant monotonic maps $\phi_k:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ and a collection of nonnegative rates $r_1,\ldots,r_m$, and we are interested in interacting particle systems $(X_t)_{t\geq 0}$ taking values in $\{0,1\}^{{\mathbb Z}^d}$ that evolve in such a way that independently for each $i\in{\mathbb Z}^d$, \begin{equation} X_t(i)\mbox{ is replaced by }\phi_k(\theta_iX_t)\mbox{ at the times of a Poisson process with rate }r_k \end{equation} $(1\leq k\leq m)$. Without loss of generality we can assume that $\phi_k\neq\phi^{\rm id}$ for all $1\leq k\leq m$. For each $r\geq 0$, let $(X^r_t)_{t\geq 0}$ denote the perturbed monotone interacting particle system that, apart from the non-constant monotonic maps $\phi_1,\ldots,\phi_m$, which are applied with rates $r_1,\ldots,r_m$, also applies the constant monotonic map $\phi_0:=\phi^0$ with rate $r_0:=r$. We let $\overline\rho(r)$ denote the density of its upper invariant law. We say that the unperturbed interacting particle system $(X_t)_{t\geq 0}$ is \emph{stable} if $\overline\rho(r)\to 1$ as $r\to 0$. Gray \cite[Theorem~18.3.1]{Gra99} has given (mutually non-exclusive) sufficient conditions on the edge speeds for a monotone interacting particle system to be either stable or unstable. Furthermore, in \cite[Examples~18.3.5 and~18.3.6]{Gra99} he has shown that $(X_t)_{t\geq 0}$ may fail to be stable even when $m=1$ and the map $\phi_1$ is an eroder in the sense of (\ref{erosion}), and that, conversely, in such a situation $(X_t)_{t\geq 0}$ may be stable even when $\phi_1$ is not an eroder.
The reason for this is that we can think of interacting particle systems as continuous-time limits of cellular automata that apply the identity map $\phi^{\rm id}$ most of the time, and, as we have seen in the previous subsection, combining an eroder $\phi_1$ with the identity map $\phi^{\rm id}$ can change the stability of a cellular automaton in subtle ways. However, for a certain type of interacting particle system, called the generalized contact process, Gray's conditions on the edge speeds turn out to be necessary and sufficient for the stability of $(X_t)_{t\geq 0}$. We now briefly describe this argument, as it is not present in~\cite{Gra99}. Recall that $\mathcal A(\phi_k)$ defined in~\eqref{Aphi} denotes the set of minimal configurations on which $\phi_k$ gives the outcome 1. We say that a monotone interacting particle system that applies the non-constant monotonic maps $\phi_1,\ldots,\phi_m$ is a \emph{generalized contact process} if $\{0\}\in \mathcal A(\phi_k)$ for each $1\leq k\leq m$. The perturbed system $(X^r_t)_{t\geq 0}$ can then be seen as a model for the spread of epidemics: vertices represent individuals that can be healthy (state 0) or infected (state 1). Each healthy vertex can get infected if a certain set of vertices in its neighbourhood is entirely infected, and each infected vertex can recover at rate $r$ independently of the state of the other vertices. For a monotone interacting particle system that applies the non-constant monotonic maps $\phi_1,\ldots,\phi_m$, Gray defines the \emph{Toom operator} $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ as the map \begin{equation}\label{eq:Toomoperator} \phi(x):=\big(1-x(0)\big)\bigvee_{k=1}^m \phi_k(x) + x(0)\bigwedge_{k=1}^m \phi_k(x) \qquad \big(x\in \{0,1\}^{{\mathbb Z}^d}\big). \end{equation} That is, $\phi$ flips the state of the origin if at least one of the maps $\phi_1, \dots, \phi_m$ would flip its state in configuration $x$. As each $\phi_k$ is monotonic, it is easy to see that $\phi$ is monotonic as well. Recall from~\eqref{epsR} that for each fixed polar function $L$ of dimension $\sigma$ we defined \begin{equation} \varepsilon:=\sum_{s=1}^\sigma \varepsilon_s, \qquad \varepsilon_s:=\inf_{i\in A_s(\phi)}L_s(i) \quad(1\leq s\leq \sigma). \end{equation} For a Toom operator $\phi$ with $\{0\}\in \mathcal A(\phi)$ we have $\varepsilon_s\geq 0$ for each $s$. In this case, Gray's condition for stability simplifies as follows. A monotone interacting particle system with Toom operator $\phi$ satisfying $\{0\}\in \mathcal A(\phi)$ is stable if and only if there exists a polar function $L$ for which $\varepsilon>0$. It is easy to see that finding such a polar function is equivalent to finding a set $A\in{\cal A}(\phi)$ which is entirely contained in an open halfspace in ${\mathbb Z}^d$. As $\{0\}\in \mathcal A(\phi)$, this is further equivalent to $\bigcap_{A\in{\cal A}(\phi)}{\rm Conv}(A)=\emptyset$, which is the eroder condition in~\eqref{erosion}. Let $(X_t)_{t\geq 0}$ be a generalized contact process. As $\{0\}\in \mathcal A(\phi_k)$ for each $1\leq k\leq m$, we clearly have $\{0\}\in \mathcal A(\phi)$ for the corresponding Toom operator $\phi$ in~\eqref{eq:Toomoperator}. Thus in this case we can formulate Gray's theorem \cite[Theorem~18.3.1]{Gra99} as follows. \begin{quote} The generalized contact process $(X_t)_{t\geq 0}$ is stable if and only if the corresponding Toom operator $\phi$ is an eroder.
\end{quote} While Gray's results can be used to show stability of certain models, his ideas do not lend themselves well to the derivation of explicit bounds. It is with this goal in mind that we have extended Toom's framework to continuous time. Toom contours in continuous time are defined similarly to the discrete-time setting and can be thought of as the limit of the latter. Since this is very similar to what we have already seen in Subsection~\ref{S:Peierls}, we do not give the precise definitions in the continuous-time setting here but refer to Section~\ref{S:cont} instead. We will demonstrate how Toom contours can be used to give bounds on the critical parameters of some monotone interacting particle systems. As mentioned in the previous subsection, in our methods we cannot rely on the use of polar functions. Again, one can generally take $\sigma:=\bigvee_{k=1}^m|{\cal A}(\phi_k)|$. \medskip \noindent \textbf{Sexual contact process on $\mathbb Z^d \; (d\geq 1)$} We consider the interacting particle system on ${\mathbb Z}^d$ that applies the monotonic maps $\phi^0$ and $\phi^{{\rm coop},d}$ defined in (\ref{phiconst}) and (\ref{phicoopddim}) with rates $1$ and $\lambda$, respectively. We let $\overline\rho(\lambda)$ denote the intensity of the upper invariant law as a function of $\lambda$ and we define the critical parameter as $\lambda_{\rm c}:=\inf\{\lambda\geq 0:\overline\rho(\lambda)>0\}$. In line with notation introduced in Subsection~\ref{S:explic}, we define $A_1:=\{0\}$ and $A_2:=\{e_1,\dots, e_d\}$. We have \begin{equation} {\cal A}(\phi^{\rm coop, d})=\big\{A_1,A_2\big\}, \end{equation} thus we set $\sigma:=|{\cal A}(\phi^{\rm coop, d})|=2$, and for the sets $A_s(\phi_k)$ in (\ref{As}) we make the choices \begin{equation} A_1(\phi^{\rm coop,d}):=A_1,\quad A_2(\phi^{\rm coop, d}):=A_2. \end{equation} In Section~\ref{S:cont} we will show that $X_t(i)=0$ implies the presence of a continuous Toom contour rooted at $(i, t)$ with respect to the given choice of $\sigma$ and sets $A_s(\phi^{\rm coop,d})$, and use these contours to carry out a Peierls argument similar to the one in the discrete-time case. In one dimension, this process is called the one-sided contact process, and our computation yields the bound \begin{equation} \lambda_{\rm c}(1)\leq 49.3242\dots . \end{equation} There are already better estimates in the literature: in \cite{TIK97} the authors prove the bound $\lambda_{\rm c}(1)\leq 3.882$ and give the numerical estimate $\lambda_{\rm c}(1)\approx 3.306$. In two dimensions this is the sexual contact process defined in~\cite{Dur86}, and we prove the bound \begin{equation}\label{coopbd} \lambda_{\rm c}(2)\leq 161.1985\dots . \end{equation} In \cite{Dur86} Durrett claimed a proof that $\lambda_{\rm c}(2)\leq 110$, while numerical simulations suggest the value $\lambda_{\rm c}(2)\approx 12.4$.\medskip \section{Toom contours}\label{S:contour} \subsubsection*{Outline} In this section, we develop the basic abstract theory of Toom contours. In particular, we prove all results stated in Subsection~\ref{S:Peierls}. In Subsection~\ref{S:max}, we prove the preparatory Lemmas \ref{L:maxtraj} and \ref{L:maxup}. Theorems \ref{T:contour} and \ref{T:strongpres} about the (strong) presence of Toom contours are proved in Subsections \ref{S:constr} and \ref{S:Tcycles}, respectively. In Subsection~\ref{S:fork}, we briefly discuss ``forks'', which played a prominent role in Toom's \cite{Too80} original formulation of Toom contours and which can be used to prove a somewhat stronger version of Theorem~\ref{T:contour}.
\subsection{The maximal trajectory}\label{S:max} In this subsection we prove Lemmas \ref{L:maxtraj} and \ref{L:maxup}.\medskip \begin{Proof}[of Lemma~\ref{L:maxtraj}] By symmetry, it suffices to show that there exists a trajectory $\overline x$ that is uniquely characterised by the property that each trajectory $x$ of ${\bm{\phh}}$ satisfies $x\leq\overline x$. For each $s\in{\mathbb Z}$, we inductively define a function $x^s:{\mathbb Z}^d\times\{s,s+1,\ldots\}\to\{0,1\}$ by \begin{equation}\label{maxs} x^s_s(i):=1\quad(i\in{\mathbb Z}^d)\quad\mbox{and}\quad x^s_t(i)=\varphi_{(i,t)}(\theta_ix^s_{t-1})\qquad\big(i\in{\mathbb Z}^d,\ s<t\big). \end{equation} Then $x^{s-1}_s(i)\leq 1=x^s_s(i)$ and hence by induction $x^{s-1}_t(i)\leq x^s_t(i)$ for all $s\leq t$, which implies that the pointwise limit \begin{equation}\label{maxconv} \overline x_t(i):=\lim_{s\to-\infty}x^s_t(i)\qquad\big((i,t)\in{\mathbb Z}^{d+1}\big) \end{equation} exists. It is easy to see that $\overline x$ is a trajectory. If $x$ is any other trajectory, then $x_s(i)\leq 1=x^s_s(i)$ and hence by induction $x_t(i)\leq x^s_t(i)$ for all $s\leq t$, which implies that $x\leq\overline x$. Thus, $\overline x$ is the maximal trajectory, and such a trajectory is obviously unique. \end{Proof} \begin{Proof}[of Lemma~\ref{L:maxup}] By symmetry, it suffices to prove the claim for the upper invariant law. We recall that two probability measures $\nu_1,\nu_2$ on $\{0,1\}^{{\mathbb Z}^d}$ are stochastically ordered, which we denote by $\nu_1\leq\nu_2$, if and only if random variables $X_1,X_2$ with laws $\nu_1,\nu_2$ can be coupled such that $X_1\leq X_2$. The law $\mu$ of $\overline X_t$ clearly does not depend on $t$ and hence is an invariant law. The proof of Lemma~\ref{L:maxtraj} shows that $\P^{\overline 1}[X_t\in\,\cdot\,]\Rightarrow\mu$ as $t\to\infty$, as claimed in (\ref{upconv}). Alternatively, $\mu$ is uniquely characterised by the fact that it is maximal with respect to the stochastic order, i.e., if $\nu$ is an arbitrary invariant law, then $\nu\leq\mu$. Indeed, if $\nu$ is an invariant law, then for each $s\in{\mathbb Z}$, we can inductively define a stationary process $(X^s_t)_{t\geq s}$ by \begin{equation} X^s_t(i)=\varphi_{(i,t)}(\theta_iX^s_{t-1})\qquad\big(i\in{\mathbb Z}^d,\ s<t\big), \end{equation} where $X^s_s$ has the law $\nu$ and is independent of $\Phi$. Since $\nu$ is an invariant law, the laws of the processes $X^s$ are consistent in the sense of Kolmogorov's extension theorem and therefore we can almost surely construct a trajectory $X$ of $\Phi$ such that $X_t$ has the law $\nu$ and is independent of $(\Phi_{(i,s)})_{i\in{\mathbb Z}^d,\ t<s}$ for each $t\in{\mathbb Z}$. By Lemma~\ref{L:maxtraj}, $X\leq\overline X$ a.s.\ and hence $\nu\leq\mu$ in the stochastic order. We conclude that as claimed, $\mu=\overline\nu$, the upper invariant law. \end{Proof} \subsection{Explanation graphs} In this subsection we start preparing for the proof of Theorem~\ref{T:contour}. We fix a monotonic flow ${\bm{\phh}}$ on $\{0,1\}^{{\mathbb Z}^d}$ that takes values in $\{\phi_0,\ldots,\phi_m\}$, where $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant. We also fix an integer $\sigma\geq 2$ and for each $1\leq s\leq\sigma$ and $1\leq k\leq m$, we fix $A_s(\phi_k)\in{\cal A}(\phi_k)$.
Letting $\overline x$ denote the maximal trajectory of ${\bm{\phh}}$, our aim is to prove that almost surely on the event that $\overline x_0(0)=0$, there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in ${\bm{\phh}}$. As a first step towards this aim, in the present subsection, we will show that the event that $\overline x_0(0)=0$ almost surely implies the presence of a simpler structure, which we will call an \emph{explanation graph}. Recall from Subsection~\ref{S:Peierls} that a directed graph with $\sigma$ types of edges is a pair $(U,{\cal H})$, where ${\cal H}=(\vec H_1,\ldots,\vec H_\sigma)$ is a sequence of subsets of $U\times U$. We interpret $\vec H_s$ as the set of directed edges of type $s$. For such a directed graph with $\sigma$ types of edges, we let $\vec H_{s,{\rm in}}(u)$ and $\vec H_{s,{\rm out}}(u)$ denote the sets of directed edges of type $s$ that end and start at a vertex $u\in U$, respectively. We also use the notation $\vec H:=\bigcup_{s=1}^\sigma\vec H_s$. Then $(U,\vec H)$ is a directed graph in the usual sense of the word. The following two definitions introduce the concepts we will be interested in. Although they look a bit complicated at first sight, in the proof of Lemma~\ref{L:explan} we will see that they arise naturally in the problem we are interested in. Further motivation for these definitions is provided in Section~\ref{S:expla} below, where it is shown that explanation graphs naturally arise from an even more elementary concept, which we will call a \emph{minimal explanation}. \begin{defi}\label{def:finiteexpl} An \emph{explanation graph} for $(0,0)$ is a directed graph with $\sigma$ types of edges $(U,{\cal H})$ with $U\subset{\mathbb Z}^{d+1}$ for which there exists a subset $U_\ast\subset U$ such that the following properties hold: \begin{enumerate} \item each element of $\vec H$ is of the form $\big((j,t),(i,t-1)\big)$ for some $i,j\in{\mathbb Z}^d$ and $t\in{\mathbb Z}$, \item $(0,0)\in U\subset{\mathbb Z}^{d+1}$ and $t<0$ for all $(i,t)\in U\backslash\{(0,0)\}$, \item for each $(i,t)\in U\backslash\{(0,0)\}$, there exists a $(j,t+1)\in U$ such that $\big((j,t+1),(i,t)\big)\in\vec H$, \item if $u\in U_\ast$, then $\vec H_{s,{\rm out}}(u)=\emptyset$ for all $1\leq s\leq\sigma$, \item if $u\in U\backslash U_\ast$, then $\big|\vec H_{s,{\rm out}}(u)\big|=1$ for all $1\leq s\leq\sigma$. \end{enumerate} \end{defi} Note that $U_\ast$ is uniquely determined by $(U,{\cal H})$. We call $U_\ast$ the set of \emph{sinks} of the explanation graph $(U,{\cal H})$. \begin{defi}\label{def:finexpres} An explanation graph $(U,{\cal H})$ is \emph{present} in ${\bm{\phh}}$ if: \begin{enumerate} \item $\overline x_t(i)=0$ for all $(i,t)\in U$, \item $U_\ast=\big\{u\in U:\varphi_u=\phi^0\big\}$, \item $j-i\in A_s(\varphi_{(i,t)})$ for all $\big((i,t),(j,t-1)\big)\in\vec H_s$ $(1\leq s\leq\sigma)$. \end{enumerate} \end{defi} \begin{lemma}[Presence of an explanation graph] The\label{L:explan} maximal trajectory $\overline x$ of a monotonic flow ${\bm{\phh}}$ satisfies $\overline x_0(0)=0$ if and only if there is an explanation graph $(U,{\cal H})$ for $(0,0)$ present in ${\bm{\phh}}$. \end{lemma} \begin{Proof} By condition~(i) of Definition~\ref{def:finexpres}, the presence of an explanation graph clearly implies $\overline x_0(0)=0$. To prove the converse implication, let $x^r:{\mathbb Z}^d\times\{r,r+1,\ldots\}\to\{0,1\}$ be defined as in (\ref{maxs}).
We have seen in the proof of Lemma~\ref{L:maxtraj} that $x^r_t(i)$ decreases to $\overline x_t(i)$ as $r\to-\infty$. Therefore, since $\overline x_0(0)=0$, there must be an $r<0$ such that $x^r_0(0)=0$. We fix such an $r$ from now on. We will inductively construct an explanation graph for $(0,0)$ with the desired properties. At each point in our construction, $(U,{\cal H})$ will be an explanation graph for $(0,0)$ such that: \begin{itemize} \item[{\rm(i)}] $x^r_t(i)=0$ for all $(i,t)\in U$, \item[{\rm(ii)}'] $\varphi_{(i,t)}\neq\phi^0$ for all $(i,t)\in U\backslash U_\ast$, \item[{\rm(iii)}] $j-i\in A_s(\varphi_{(i,t)})$ for all $\big((i,t),(j,t-1)\big)\in\vec H_s$ $(1\leq s\leq\sigma)$. \end{itemize} The induction stops as soon as: \begin{itemize} \item[{\rm(ii)}] $U_\ast=\big\{u\in U:\varphi_u=\phi^0\big\}$. \end{itemize} We start with $U=\{(0,0)\}$ and $\vec H_s=\emptyset$ for all $1\leq s\leq\sigma$. In each step of the construction, we select a vertex $(i,t)\in U_\ast$ such that $\varphi_{(i,t)}\neq\phi^0$. Since $x^r_t(i)=0$ and $A_s(\varphi_{(i,t)})\in{\cal A}(\varphi_{(i,t)})$ as defined in (\ref{Aphi}), for each $1\leq s\leq\sigma$ we can choose $j_s\in A_s(\varphi_{(i,t)})$ such that $x^r_{t-1}(i+j_s)=0$. We now replace $U$ by $U\cup\{(i+j_s,t-1):1\leq s\leq\sigma\}$ and we replace $\vec H_s$ by $\vec H_s\cup\big\{\big((i,t),(i+j_s,t-1)\big)\big\}$ $(1\leq s\leq\sigma)$, and the induction step is complete. At each step in our construction, $r<t\leq 0$ for all $(i,t)\in U$, since at time $r$ one has $x^r_r(i)=1$ for all $i\in{\mathbb Z}^d$. Since $U$ can contain at most $\sigma^{-t}$ elements with time coordinate $t$, we see that the inductive construction ends after a finite number of steps. It is straightforward to check that the resulting graph is an explanation graph in the sense of Definition~\ref{def:finiteexpl}. \end{Proof} \subsection{Toom matchings}\label{S:match} In this subsection, we continue our preparations for the proof of Theorem~\ref{T:contour}. Most of the proof of Theorem~\ref{T:contour} will consist, informally speaking, of showing that to each explanation graph, it is possible to add a suitable set of sources, such that the sources and sinks together define a Toom contour. It follows from the definition of an explanation graph that for each $w\in U$ and $1\leq s\leq\sigma$, there exist a unique $n\geq 0$ and $w_0,\ldots,w_n$ such that \begin{enumerate} \item $w_0=w$ and $(w_{i-1},w_i)\in\vec H_s$ for all $0<i\leq n$, \item $w_n\in U_\ast$ and $w_i\in U\backslash U_\ast$ for all $0\leq i<n$. \end{enumerate} In other words, this says that starting at each $w\in U$, there is a unique directed path that uses only directed edges from $\vec H_s$ and that ends at some vertex $w_n\in U_\ast$. We will use the following notation: \begin{equation}\left.\begin{array}{r@{\,}c@{\,}l}\label{Ww} \displaystyle P_s(w)&:=&\displaystyle\big\{w_0,\ldots,w_n\big\},\\[5pt] \displaystyle\pi_s(w)&:=&\displaystyle w_n, \end{array}\quad\right\}\quad(w\in U,\ 1\leq s\leq\sigma). \end{equation} Then $P_s(w)$ is the path we have just described and $\pi_s(w)\in U_\ast$ is its endpoint. By definition, we will use the word \emph{polar} to describe any sequence $(a_1,\ldots,a_\sigma)$ such that $a_s\in U$ for all $1\leq s\leq\sigma$ and the points $a_1=(i_1,t),\ldots,a_\sigma=(i_\sigma,t)$ all have the same time coordinate. We call $t$ the \emph{time} of the polar.
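The path and endpoint maps in (\ref{Ww}) are computationally trivial, which can be helpful when experimenting with small explanation graphs by hand or by machine. In the sketch below (an informal aid; the dictionary encoding is a hypothetical choice), an explanation graph is encoded, for each type $s$, by a map sending every non-sink vertex to its unique type-$s$ out-neighbour.

\begin{verbatim}
def P(Hs, w):
    # P_s(w) from (Ww): follow the unique type-s edges starting at w until
    # a sink is reached; Hs maps each non-sink vertex to its unique type-s
    # out-neighbour, and sinks are absent from Hs.
    path = [w]
    while path[-1] in Hs:
        path.append(Hs[path[-1]])
    return path

def pi(Hs, w):
    # pi_s(w): the sink in U_* at which the path P_s(w) ends.
    return P(Hs, w)[-1]
\end{verbatim}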
\begin{defi}\label{def:toommatching} A \emph{Toom matching} for an explanation graph $(U,{\cal H})$ with $N:=|U_\ast|$ sinks is an $N\times\sigma$ matrix \begin{equation} \big(a_{i,s}\big)_{1\leq i\leq N,\ 1\leq s\leq\sigma} \end{equation} such that \begin{enumerate} \item $(a_{i,1},\ldots,a_{i,\sigma})$ is a polar for each $1\leq i\leq N$, \item $\pi_s:\{a_{1,s},\ldots,a_{N,s}\}\to U_\ast$ is a bijection for each $1\leq s\leq\sigma$. \end{enumerate} \end{defi} We will be interested in polars that have the additional property that all their elements lie ``close together'' in a certain sense. By definition, a \emph{point polar} is a polar $(a_1,\ldots,a_\sigma)$ such that $a_1=\cdots=a_\sigma$. We say that a polar $(a_1,\ldots,a_\sigma)$ is \emph{tight} if it is either a point polar, or there exists a $v\in U$ such that $(v,a_s)\in\vec H$ for all $1\leq s\leq\sigma$, where we recall that $\vec H:=\bigcup_{s=1}^\sigma\vec H_s$. The following proposition is the main result of this subsection. \begin{proposition}[Toom matchings] Let\label{P:match} $(U,{\cal H})$ be an explanation graph for $(0,0)$ with $N:=|U_\ast|$ sinks. Then there exists a Toom matching for $(U,{\cal H})$ such that in addition to the properties (i) and (ii) above, \begin{enumerate}\addtocounter{enumi}{2} \item $a_{1,1}=\cdots=a_{1,\sigma}=(0,0)$, \item $(a_{i,1},\ldots,a_{i,\sigma})$ is a tight polar for each $1\leq i\leq N$. \end{enumerate} \end{proposition} In the next subsection, we will derive Theorem~\ref{T:contour} from Proposition~\ref{P:match}. It is instructive to jump a bit ahead and already explain the main idea of the construction. Let $(a_{i,s})_{1\leq i\leq N,\ 1\leq s\leq\sigma}$ be the Toom matching from Proposition~\ref{P:match}. For each $i$ and $s$, we connect the vertices of the path $P_s(a_{i,s})$ defined in (\ref{Ww}) with directed edges of type $s$. By property~(ii) of a Toom matching, this has the consequence that each sink $u\in U_\ast$ of the explanation graph is the endvertex of precisely $\sigma$ edges, one of each type. Each point polar gives rise to a source where $\sigma$ charges emerge, one of each type, that then travel through the explanation graph until they arrive at a sink. For each polar $(a_{i,1},\ldots,a_{i,\sigma})$ that is not a point polar, we choose $v_i\in U$ such that $(v_i,a_{i,s})\in\vec H$ for all $1\leq s\leq\sigma$, and for each $1\leq s\leq\sigma$ we connect $v_i$ and $a_{i,s}$ with a directed edge of type $s$. These extra points $v_i$ then act as additional sources and, as will be proved in detail in the next subsection, our collection of directed edges now forms a Toom graph that is embedded in ${\mathbb Z}^{d+1}$, and the connected component of this Toom graph containing the origin forms a Toom contour that is present in ${\bm{\phh}}$. This is illustrated in Figure~\ref{fig:minexpl}. The picture on the right shows an explanation graph $(U,{\cal H})$, or rather the associated directed graph $(U,\vec H)$, with sinks indicated with a star. The embedded Toom graph in the middle picture of Figure~\ref{fig:minexpl} originates from a Toom matching of this explanation graph. The proof of Proposition~\ref{P:match} takes up the remainder of this subsection. The proof is quite complicated and will be split over several lemmas. We fix an explanation graph $(U,{\cal H})$ for $(0,0)$ with $N:=|U_\ast|$ sinks. 
Because of our habit of drawing time downwards in pictures, it will be convenient to define a function $h:U\to{\mathbb N}$ by \begin{equation}\label{eq:height} h(i,t):=-t\qquad\big((i,t)\in U\big). \end{equation} We call $h(w)$ the \emph{height} of a vertex $w\in U$. For $u,v\in U$, we write $u\leadsto_{\vec H}v$ when there exist $u_0,\ldots,u_n\in U$ with $n\geq 0$, $u_0=u$, $u_n=v$, and $(u_{k-1},u_k)\in\vec H$ for all $0<k\leq n$. By definition, for $w_1,w_2\in U$, we write $w_1\approx w_2$ if $h(w_1)=h(w_2)$ and there exists a $w_3\in U$ such that $w_i\leadsto_{\vec H}w_3$ for $i=1,2$. Moreover, for $v,w\in U$, we write $v\sim w$ if there exist $m\geq 0$ and $v=v_0,\ldots,v_m=w$ such that $v_{i-1}\approx v_i$ for $1\leq i\leq m$. Then $\sim$ is an equivalence relation. In fact, if we view $U$ as a graph in which two vertices $v,w$ are adjacent if $v\approx w$, then the equivalence classes of $\sim$ are just the connected components of this graph. We let ${\cal C}$ denote the set of all (nonempty) equivalence classes. It is easy to see that the origin $(0,0)$ and the sinks form equivalence classes of their own. With this in mind, we set ${\cal C}_\ast:=\big\{\{w\}:w\in U_\ast\big\}$. Each $C\in{\cal C}$ has a height $h(C)$ such that $h(v)=h(C)$ for all $v\in C$. For $C_1,C_2\in{\cal C}$, we write $C_1\to C_2$ if there exists a $(v_1,v_2)\in\vec H$ such that $v_i\in C_i$ $(i=1,2)$. Note that this implies that $h(C_2)=h(C_1)+1$. The following lemma says that ${\cal C}$ has the structure of a directed tree with the sinks as its leaves. \begin{lemma}[Tree of equivalence classes] For\label{L:Ctree} each $C\in{\cal C}$ with $C\neq\{(0,0)\}$, there exists a unique $C'\in{\cal C}$ such that $C'\to C$. Moreover, for each $C\in{\cal C}\backslash{\cal C}_\ast$, there exists at least one $C''\in{\cal C}$ such that $C\to C''$. Also, $C\in{\cal C}\backslash{\cal C}_\ast$ implies $C\cap U_\ast=\emptyset$. \end{lemma} \begin{Proof} Since the sinks form equivalence classes of their own, $C\in{\cal C}\backslash{\cal C}_\ast$ implies $C\cap U_\ast=\emptyset$. If $C\in{\cal C}\backslash{\cal C}_\ast$, then condition~(v) in Definition~\ref{def:finiteexpl} of an explanation graph implies the existence of a $C''\in{\cal C}$ such that $C\to C''$. Similarly, if $C\in{\cal C}$ and $C\neq\{(0,0)\}$, then the existence of a $C'\in{\cal C}$ such that $C'\to C$ follows from condition~(iii) in Definition~\ref{def:finiteexpl}. It remains to show that $C'$ is unique. Assume that, to the contrary, there exist $w,w'\in C$ and $(v,w),(v',w')\in\vec H$ so that $v$ and $v'$ do not belong to the same equivalence class. Since $w$ and $w'$ lie in the same equivalence class $C$, there exist $w_0,\ldots,w_m\in C$ with $w=w_0$, $w_m=w'$, and $w_{i-1}\approx w_i$ for all $0<i\leq m$. Using condition~(iii) in Definition~\ref{def:finiteexpl}, we can find $v_0,\ldots,v_m\in U$ such that $(v_i,w_i)\in\vec H$ $(0\leq i\leq m)$. In particular we can choose $v_0=v$ and $v_m=v'$. Since $v$ and $v'$ do not belong to the same equivalence class, there must exist a $0<i\leq m$ such that $v_{i-1}$ and $v_i$ do not belong to the same equivalence class. Since $w_{i-1}\approx w_i$, there exists a $u\in U$ such that $w_{i-1}\leadsto_{\vec H}u$ and $w_i\leadsto_{\vec H}u$. But then also $v_{i-1}\leadsto_{\vec H}u$ and $v_i\leadsto_{\vec H}u$, which contradicts the fact that $v_{i-1}$ and $v_i$ do not belong to the same equivalence class.
\end{Proof} For $C,C'\in{\cal C}$, we describe the relation $C\to C'$ in words by saying that $C'$ is a direct descendant of $C$. We let ${\cal D}_C:=\{C'\in{\cal C}:C\to C'\}$ denote the set of all direct descendants of $C$. We will view ${\cal D}_C$ as an undirected graph with set of edges \begin{equation} {\cal E}_C:=\big\{\{C_1,C_2\}:\exists v\in C,\ w_1\in C_1,\ w_2\in C_2\mbox{ s.t.\ } (v,w_i)\in\vec H\ \forall i=1,2\big\}. \end{equation} The fact that this definition is reminiscent of the definition of a tight polar is no coincidence and will become important in Lemma~\ref{L:tiso} below. We first prove the following lemma. \begin{lemma}[Structure of the set of direct descendants] For\label{L:DiC} each $C\in{\cal C}\backslash{\cal C}_\ast$, the graph $({\cal D}_C,{\cal E}_C)$ is connected. \end{lemma} \begin{Proof} Let ${\cal D}_1,{\cal D}_2$ be nonempty disjoint subsets of ${\cal D}_C$ such that ${\cal D}_1\cup{\cal D}_2={\cal D}_C$ and let \begin{equation} D_i:=\big\{v\in C:\exists C'\in{\cal D}_i\mbox{ and }w\in C'\mbox{ s.t.\ }(v,w)\in\vec H\big\} \qquad(i=1,2). \end{equation} To show that $({\cal D}_C,{\cal E}_C)$ is connected, we need to show that $D_1\cap D_2\neq\emptyset$ for all choices of ${\cal D}_1,{\cal D}_2$. By Lemma~\ref{L:Ctree}, $C\cap U_\ast=\emptyset$ and hence for each $v\in C$ there exists a $w\in U$ such that $(v,w)\in\vec H$. Therefore, since ${\cal D}_C$ contains all direct descendants of $C$, we have $D_1\cup D_2=C$. Since ${\cal D}_1$ and ${\cal D}_2$ are nonempty, so are $D_1$ and $D_2$. Assume that $D_1\cap D_2=\emptyset$. Then, since $C$ is an equivalence class, there must exist $v_i\in D_i$ $(i=1,2)$ such that $v_1\approx v_2$, i.e., \begin{equation}\label{comdesc} \{w\in U:v_1\leadsto_{\vec H}w\}\cap\{w\in U:v_2\leadsto_{\vec H}w\}\neq\emptyset. \end{equation} However, for $i=1,2$, the set $\{w\in U:v_i\leadsto_{\vec H}w\}$ is entirely contained in the equivalence classes in ${\cal D}_i$ and their descendants. Since by Lemma~\ref{L:Ctree}, ${\cal C}$ has the structure of a tree, this contradicts (\ref{comdesc}). \end{Proof} We can now make the connection to the definition of tight polars. We say that a polar $(a_1,\ldots,a_\sigma)$ lies inside a set $D\subset U$ if $a_s\in D$ for all $1\leq s\leq\sigma$. \begin{lemma}[Tight polars] Let\label{L:tiso} $C\in{\cal C}\backslash{\cal C}_\ast$, let $M:=|{\cal D}_C|$ be the number of its direct descendants, and let $D_C:=\bigcup_{C'\in{\cal D}_C}C'$ be the union of all $C'\in{\cal D}_C$. Let $(a_{1,1},\ldots,a_{1,\sigma})$ be a polar inside $D_C$. Then, given that~$M\geq 2$, it is possible to choose tight polars $(a_{i,1},\ldots,a_{i,\sigma})$ $(2\leq i\leq M)$ inside $D_C$ such that: \begin{equation}\label{tiso} \mbox{For each $C'\in{\cal D}_C$ and $1\leq s\leq\sigma$, there is a unique $1\leq i\leq M$ such that $a_{i,s}\in C'$.} \end{equation} \end{lemma} \begin{Proof} By Lemma~\ref{L:DiC}, the graph ${\cal D}_C$ is connected in the sense defined there. To prove the claim of Lemma~\ref{L:tiso}, we will prove a slightly more general claim. Let ${\cal D}'_C$ be a connected subgraph of ${\cal D}_C$ with $M'$ elements, let $D'_C:=\bigcup_{C'\in{\cal D}'_C}C'$, and let $(a_{1,1},\ldots,a_{1,\sigma})$ be a polar inside $D'_C$. Then we claim that it is possible to choose tight polars $(a_{i,1},\ldots,a_{i,\sigma})$ $(2\leq i\leq M')$ inside $D'_C$ such that (\ref{tiso}) holds with ${\cal D}_C$ and $M$ replaced by ${\cal D}'_C$ and $M'$ respectively. We will prove the claim by induction on $M'$.
The claim is trivial for $M'=1$. We will now prove the claim for general $M'\geq 2$ assuming it proved for $M'-1$. Since ${\cal D}'_C$ is connected, we can find some $C'\in{\cal D}'_C$ so that ${\cal D}'_C\backslash\{C'\}$ is still connected. If none of the vertices $a_{1,1},\ldots,a_{1,\sigma}$ lies inside $C'$, then we can add a point polar inside $C'$, use the induction hypothesis, and we are done. Likewise, if all of the vertices $a_{1,1},\ldots,a_{1,\sigma}$ lie inside $C'$, then we can add a point polar inside $D'_C\backslash C'$, use the induction hypothesis, and we are done. We are left with the case that some, but not all of the vertices $a_{1,1},\ldots,a_{1,\sigma}$ lie inside $C'$. Without loss of generality, we assume that $a_{1,1},\ldots,a_{1,m}\in C'$ and $a_{1,m+1},\ldots,a_{1,\sigma}\in D'_C\backslash C'$. Since ${\cal D}'_C$ is connected in the sense of Lemma~\ref{L:DiC}, we can find a $v\in C$ and $w_1\in C'$, $w_2\in D'_C\backslash C'$ such that $(v,w_i)\in\vec H$ $(i=1,2)$. Setting $a_{2,1}=\cdots=a_{2,m}:=w_2$ and $a_{2,m+1}=\cdots=a_{2,\sigma}:=w_1$ then defines a tight polar such that: \begin{itemize} \item For each $1\leq s\leq\sigma$, there is a unique $i\in\{1,2\}$ such that $a_{i,s}\in C'$. \item For each $1\leq s\leq\sigma$, there is a unique $i\in\{1,2\}$ such that $a_{i,s}\in D'_C\backslash C'$. \end{itemize} In particular, the elements of $(a_{i,s})_{i\in\{1,2\},\ 1\leq s\leq\sigma}$ with $a_{i,s}\in D'_C\backslash C'$ form a polar in $D'_C\backslash C'$, so we can again use the induction hypothesis to complete the argument. \end{Proof} \begin{Proof}[of Proposition~\ref{P:match}] We will use an inductive construction. Let $L:=\max\{h(w):w\in U\}$. For each $0\leq l\leq L$, we set $U_{\leq l}:=\{w\in U:h(w)\leq l\}$ and ${\cal C}_l:=\{C\in{\cal C}:h(C)=l\}$. We will inductively construct an increasing sequence of integers $1=N_0\leq N_1\leq\cdots\leq N_L$ and for each $0\leq l\leq L$, we will construct an $N_l\times \sigma$ matrix $\big(a_{i,s}(l)\big)_{1\leq i\leq N_l,\ 1\leq s\leq\sigma}$ such that $a_{i,s}(l)\in U_{\leq l}$ for all $1\leq i\leq N_l$ and $1\leq s\leq\sigma$. Our construction will be consistent in the sense that \be a_{i,s}(l+1)=a_{i,s}(l)\quad\forall 1\leq i\leq N_l,\ 1\leq s\leq\sigma,\ 0\leq l<L, \end{equation} that is, at each step of the induction we add rows to the matrix we have constructed so far. In view of this, we can unambiguously drop the dependence on $l$ from our notation. We will choose the matrices \begin{equation}\label{indmat} \big(a_{i,s}\big)_{1\leq i\leq N_l,\ 1\leq s\leq\sigma} \end{equation} in such a way that for each $0\leq l\leq L$: \begin{enumerate} \item $a_{1,1}=\cdots=a_{1,\sigma}=(0,0)$, \item $(a_{i,1},\ldots,a_{i,\sigma})$ is a tight polar for each $2\leq i\leq N_l$, \item For all $C\in{\cal C}_l$ and $1\leq s\leq\sigma$, there is a unique $1\leq i\leq N_l$ such that $P_s(a_{i,s})\cap C\neq\emptyset$, \end{enumerate} where $P_s(a_{i,s})$ is defined as in (\ref{Ww}). We claim that setting $N:=N_L$ then yields a Toom matching with the additional properties described in the proposition.
Property~(i) of Definition~\ref{def:toommatching} of a Toom matching and the additional properties (iii) and (iv) from Proposition~\ref{P:match} follow trivially from conditions (i) and (ii) of our inductive construction, so it remains to check property~(ii) of Definition~\ref{def:toommatching}, which can be reformulated by saying that for each $w\in U_\ast$ and $1\leq s\leq\sigma$, there exists a unique $1\leq i\leq N$ such that $w\in P_s(a_{i,s})$. Since $\{w\}\in{\cal C}$ for each $w\in U_\ast$ (vertices in $U_\ast$ form an equivalence class of their own), this follows from condition~(iii) of our inductive construction. We start the induction with $N_0=1$ and $a_{1,1}=\cdots=a_{1,\sigma}=(0,0)$. Since $(0,0)$ is the only vertex in $U$ with height zero, this obviously satisfies the induction hypotheses (i)--(iii). Now assume that (i)--(iii) are satisfied for some $0\leq l<L$. We need to define $N_{l+1}$ and choose polars $(a_{i,1},\ldots,a_{i,\sigma})$ with $N_l<i\leq N_{l+1}$ so that (i)--(iii) are satisfied for $l+1$. We note that by Lemma~\ref{L:Ctree}, each $C'\in{\cal C}_{l+1}$ is the direct descendant of a unique $C\in{\cal C}_l\backslash{\cal C}_\ast$. By the induction hypothesis~(iii), for each $C\in{\cal C}_l\backslash{\cal C}_\ast$ and $1\leq s\leq\sigma$, there exists a unique $1\leq i_s\leq N_l$ such that $P_s(a_{i_s,s})\cap C\neq\emptyset$. Let ${\cal D}_C:=\{C'\in{\cal C}:C\to C'\}$ denote the set of all direct descendants of $C$ and let $D_C:=\bigcup{\cal D}_C$ denote the union of its elements. Then setting $\{b_s\}:=P_s(a_{i_s,s})\cap D_C$ $(1\leq s\leq\sigma)$ defines a polar $(b_1,\ldots,b_\sigma)$ inside $D_C$. Applying Lemma~\ref{L:tiso} to this polar, we can add tight polars to our matrix in (\ref{indmat}) so that condition (iii) becomes satisfied for all $C'\in{\cal D}_C$. Doing this for all $C\in{\cal C}_l\backslash{\cal C}_\ast$, using the tree structure of ${\cal C}$ (Lemma~\ref{L:Ctree}), we see that we can satisfy the induction hypotheses (i)--(iii) for $l+1$. \end{Proof} \subsection{Construction of Toom contours}\label{S:constr} In this subsection, we prove Theorem~\ref{T:contour}. With Proposition~\ref{P:match} proved, most of the work is already done. We will prove a slightly more precise statement. Below $\psi(V)$ and $\psi(V_\ast)$ denote the images of $V$ and $V_\ast$ under $\psi$, and $\psi(\vec E_s):=\big\{\big(\psi(v),\psi(w)\big):(v,w)\in \vec E_s\big\}$. Theorem~\ref{T:contour} is an immediate consequence of Lemma~\ref{L:explan} and the following theorem. \begin{theorem}[Presence of a Toom contour] Under\label{T:contex} the assumptions of Theorem~\ref{T:contour}, whenever there is an explanation graph $(U,{\cal H})$ for $(0,0)$ present in ${\bm{\phh}}$, there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in ${\bm{\phh}}$ with the additional properties that $\psi(V)\subset U$, $\psi(V_\ast)\subset U_\ast$, and $\psi(\vec E_s)\subset\vec H_s$ for all $1\leq s\leq\sigma$. \end{theorem} \begin{Proof} The main idea of the proof has already been explained below Proposition~\ref{P:match}. We now fill in the details. Let $(U,{\cal H})$ be an explanation graph for $(0,0)$ that is present in ${\bm{\phh}}$. Let $N:=|U_\ast|$ be the number of sinks. By Proposition~\ref{P:match}, there exists a Toom matching $\big(a_{i,s}\big)_{1\leq i\leq N,\ 1\leq s\leq\sigma}$ for $(U,{\cal H})$ such that $a_{1,1}=\cdots=a_{1,\sigma}=(0,0)$, and $(a_{i,1},\ldots,a_{i,\sigma})$ is a tight polar for each $1\leq i\leq N$.
Recall from (\ref{Ww}) that $P_s(w)$ denotes the unique directed path starting at $w$ that uses only directed edges from $\vec H_s$ and that ends at some vertex in $U_\ast$. For each $1\leq i\leq N$ such that $(a_{i,1},\ldots,a_{i,\sigma})$ is a point polar, and for each $1\leq s\leq\sigma$, we will use the notation \begin{equation}\label{point} P_s(a_{i,s})=\big\{a^0_{i,s},\ldots,a^{m(i,s)}_{i,s}\big\}, \end{equation} with $(a^{l-1}_{i,s},a^l_{i,s})\in\vec H_s$ for all $0<l\leq m(i,s)$. For each $1\leq i\leq N$ such that $(a_{i,1},\ldots,a_{i,\sigma})$ is not a point polar, by the definition of a tight polar, we can choose $v_i\in U$ such that $(v_i,a_{i,s})\in\vec H$ for all $1\leq s\leq\sigma$. In this case, we will use the notation \begin{equation}\label{npoint} \{v_i\}\cup P_s(a_{i,s})=\big\{a^0_{i,s},\ldots,a^{m(i,s)}_{i,s}\big\}, \end{equation} where $(a^0_{i,s},a^1_{i,s})\in\vec H$ and $(a^{l-1}_{i,s},a^l_{i,s})\in\vec H_s$ for all $1<l\leq m(i,s)$. We can now construct a Toom graph $(V,{\cal E})$ with a specially designated source $v_\circ$ as follows. We set \begin{equation} w(i,s,l):=\left\{\begin{array}{ll} i\quad&\mbox{if }l=0<m(i,s),\\[5pt] (i,s,l)\quad&\mbox{if }0<l<m(i,s),\\[5pt] a^{m(i,s)}_{i,s}\quad&\mbox{if }l=m(i,s). \end{array}\right.\qquad(1\leq i\leq N,\ 1\leq s\leq\sigma), \end{equation} and \be\begin{array}{r@{\,}c@{\,}l}\label{WF} \displaystyle V&:=&\big\{w(i,s,l):1\leq i\leq N,\ 1\leq s\leq\sigma,\ 0\leq l\leq m(i,s)\big\},\\[5pt] \displaystyle\vec E_s&:=&\displaystyle\big\{\big(w(i,s,l-1),w(i,s,l)\big): 1\leq i\leq N,\ 0<l\leq m(i,s)\big\}\quad(1\leq s\leq\sigma),\\[5pt] \displaystyle v_\circ&:=&\displaystyle w(1,1,0)=\cdots=w(1,\sigma,0). \end{array}\ee It is straightforward to check that $(V,{\cal E})$ is a Toom graph with sets of sources, internal vertices, and sinks given by \be\begin{array}{r@{\,}c@{\,}l} \displaystyle V_\circ&=&\displaystyle\big\{i:1\leq i\leq N,\ m(i,s)>0\big\} \cup\big\{a^0_{i,s}:m(i,s)=0\big\},\\[5pt] \displaystyle V_s&=&\displaystyle\big\{(i,s,l): 1\leq i\leq N,\ 0<l<m(i,s)\big\}\qquad(1\leq s\leq\sigma),\\[5pt] \displaystyle V_\ast&=&\displaystyle\big\{a^{m(i,s)}_{i,s}:1\leq i\leq N,\ 1\leq s\leq\sigma\big\}=U_\ast. \end{array}\ee Note that the vertices of the form $a^0_{i,s}$ with $m(i,s)=0$ are the isolated vertices, which are both a source and a sink. We now claim that setting \begin{equation} \psi\big(w(i,s,l)\big):=a^l_{i,s}\qquad(1\leq i\leq N,\ 1\leq s\leq\sigma,\ 0\leq l\leq m(i,s)) \end{equation} defines an embedding of $(V,{\cal E})$. We first need to check that this is well defined in the sense that the right-hand side is really a function of $w(i,s,l)$ only. Indeed, when $l=0<m(i,s)$, we have $w(i,s,l)=i$ and $a^0_{i,1}=\cdots=a^0_{i,\sigma}$ by the way $a^0_{i,s}$ has been defined in (\ref{point}) and (\ref{npoint}). For $0<l<m(i,s)$, we have $w(i,s,l)=(i,s,l)$, and finally, for $l=m(i,s)$, we have $w(i,s,l)=a^l_{i,s}$. We next check that $\psi$ is an embedding, i.e., \begin{enumerate} \item $\displaystyle\psi_{d+1}(w)=\psi_{d+1}(v)-1$ for all $(v,w)\in\vec E$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1\in V_\ast$ and $v_2\in V$ with $v_1\neq v_2$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1,v_2\in V_s$ with $v_1\neq v_2$ $(1\leq s\leq\sigma)$. \end{enumerate} Property~(i) is clear from the fact that $\vec E\subset\vec H$ and Definition~\ref{def:finiteexpl} of an explanation graph. Property~(ii) follows from the fact that $\psi(V_\ast)=U_\ast$ and $\psi(V\backslash V_\ast)\subset U\backslash U_\ast$.
Property~(iii), finally, follows from the observation that \begin{equation} P_s(a_{i,s})\cap P_s(a_{j,s})=\emptyset \quad\forall 1\leq s\leq\sigma,\ 1\leq i,j\leq N,\ i\neq j. \end{equation} Indeed, $P_s(a_{i,s})\cap P_s(a_{j,s})\neq\emptyset$ would imply that $\pi_s(a_{i,s})=\pi_s(a_{j,s})$, as in the explanation graph there is a unique directed path of each type from every vertex that ends at some~$w\in U_\ast$, which contradicts the definition of a Toom matching. Since moreover $\psi(v_\circ)=(0,0)$ and property~(ii) of Definition~\ref{def:finiteexpl} implies that $t<0$ for all $(i,t)\in\psi(V)\backslash\{(0,0)\}$, we see that the quadruple $(V,{\cal E},v_\circ,\psi)$ satisfies all the defining properties of a Toom contour (see Definition~\ref{def:toomcontour}), except that the Toom graph $(V,{\cal E})$ may fail to be connected. To fix this, we restrict ourselves to the connected component of $(V,E)$ that contains the root $v_\circ$. To complete the proof, we must show that $(V,{\cal E},v_\circ,\psi)$ is present in ${\bm{\phh}}$, i.e., \begin{enumerate} \item $\displaystyle\varphi_{\psi(v)}=\phi^0$ for all $\displaystyle v\in V_\ast$, \item $\displaystyle\varphi_{\psi(v)}\in\{\phi_1,\ldots,\phi_m\}$ for all $\displaystyle v\in V\backslash V_\ast$, \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_s(\varphi_{\psi(v)})$ for all $(v,w)\in\vec E^\ast_s$ $(1\leq s\leq\sigma)$, \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in\bigcup_{s=1}^\sigma A_s(\varphi_{\psi(v)})$ for all $(v,w)\in\vec E^\circ$. \end{enumerate} We will show that these properties already hold for the original quadruple $(V,{\cal E},v_\circ,\psi)$, without the need to restrict to the connected component of $(V,E)$ that contains the root. Since the explanation graph $(U,{\cal H})$ is present in ${\bm{\phh}}$, we have $U_\ast=\{u\in U:\varphi_u=\phi^0\}$. Since $\psi(V_\ast)=U_\ast$, this implies properties (i) and (ii). The fact that the explanation graph $(U,{\cal H})$ is present in ${\bm{\phh}}$ moreover means that $j-i\in A_s(\varphi_{(i,t)})$ for all $\big((i,t),(j,t-1)\big)\in\vec H_s$ $(1\leq s\leq\sigma)$. Since $(a^0_{i,s},a^1_{i,s})\in\vec H$ and $(a^{l-1}_{i,s},a^l_{i,s})\in\vec H_s$ for all $1<l\leq m(i,s)$ $(1\leq i\leq N,\ 1\leq s\leq\sigma)$, this implies properties (iii) and (iv). \end{Proof} \subsection{Construction of Toom contours with two charges}\label{S:Tcycles} In this subsection, we prove Theorem~\ref{T:strongpres}. As in the previous subsection, we will construct the Toom contour ``inside'' an explanation graph. Theorem~\ref{T:strongpres} is an immediate consequence of Lemma~\ref{L:explan} and the following theorem. \begin{theorem}[Strong presence of a Toom contour] If\label{T:strex} $\sigma=2$, then Theorem~\ref{T:contex} can be strengthened in the sense that the Toom contour $(V,{\cal E},v_\circ,\psi)$ is strongly present in ${\bm{\phh}}$. \end{theorem} Although it is a strengthening of Theorem~\ref{T:contex}, our proof of Theorem~\ref{T:strex} will be completely different. In particular, we will not make use of the Toom matchings of Subsection~\ref{S:match}. Instead, we will exploit the fact that if we reverse the direction of edges of one of the charges, then a Toom contour with two charges becomes a directed cycle. This allows us to give a proof of Theorem~\ref{T:strex} based on the method of ``loop erasion'' (as explained below), which seems difficult to generalise to Toom contours with three or more charges.
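The following small Python sketch (ours, purely illustrative and not part of the formal development) makes the reversal idea concrete: given the two edge lists of a two-charge Toom graph, it forms the edge set obtained by reversing the charge-$2$ edges and checks the in-/out-degree balance that lets the edges be decomposed into directed cycles.

\begin{verbatim}
from collections import Counter

def reverse_second_charge(E1, E2):
    # Keep the charge-1 edges and reverse the charge-2 edges.  In the
    # resulting edge list every vertex has equal in- and out-degree:
    # a source loses one outgoing edge and gains one incoming edge, a
    # sink the other way around, and internal vertices stay balanced.
    return E1 + [(w, v) for (v, w) in E2]

def is_degree_balanced(edges):
    outdeg = Counter(v for v, _ in edges)
    indeg = Counter(w for _, w in edges)
    return all(outdeg[v] == indeg[v] for v in set(outdeg) | set(indeg))

# Toy contour: source 'a', sink 'b', one edge of each charge.
assert is_degree_balanced(reverse_second_charge([("a", "b")], [("a", "b")]))
\end{verbatim}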
Let $n\geq 0$ be an even integer and let $V:=\{0,\ldots,n-1\}$, equipped with addition modulo $n$. Let $\psi:V\to{\mathbb Z}^{d+1}$ be a function such that \begin{equation}\label{updo} \big|\psi_{d+1}(k)-\psi_{d+1}(k-1)\big|=1\qquad(1\leq k\leq n). \end{equation} We write $\psi(k)=\big(\vec\psi(k),\psi_{d+1}(k)\big)$ $(k\in V)$ and for $n\geq 2$ we define: \be\begin{array}{r@{\,}c@{\,}l}\label{eq:v} \displaystyle V_1&:=&\displaystyle\big\{k\in V: \psi_{d+1}(k-1)>\psi_{d+1}(k)>\psi_{d+1}(k+1)\big\},\\[5pt] \displaystyle V_2&:=&\displaystyle\big\{k\in V: \psi_{d+1}(k-1)<\psi_{d+1}(k)<\psi_{d+1}(k+1)\big\},\\[5pt] \displaystyle V_\ast&:=&\displaystyle\big\{k\in V: \psi_{d+1}(k-1)>\psi_{d+1}(k)<\psi_{d+1}(k+1)\big\},\\[5pt] \displaystyle V_\circ&:=&\displaystyle\big\{k\in V: \psi_{d+1}(k-1)<\psi_{d+1}(k)>\psi_{d+1}(k+1)\big\}. \end{array}\ee In the trivial case that $n=0$, we set $V_1=V_2:=\emptyset$ and $V_\circ=V_\ast:=\{0\}$. \begin{defi}\label{def:toomcycle} Let $V$ be as above. A \emph{Toom cycle} is a function $\psi:V\to{\mathbb Z}^{d+1}$ such that: \begin{enumerate} \item $\psi$ satisfies (\ref{updo}), \item $\psi(k_1)\neq\psi(k_2)$ for each $k_1\in V_\ast$ and $k_2\in V$ with $k_1\neq k_2$, \item $\psi(k_1)\neq\psi(k_2)$ for each $k_1,k_2\in V_s$ with $k_1\neq k_2$ $(1\leq s\leq\sigma)$, \item $t<\psi_{d+1}(0)$ for all $(i,t)\in\psi(V)\backslash\{\psi(0)\}$, \end{enumerate} where $V_1,V_2,V_\ast$, and $V_\circ$ are defined as in (\ref{eq:v}). \end{defi} If $\psi:V\to{\mathbb Z}^{d+1}$ is a Toom cycle of length $n\geq 2$, then we set: \be\begin{array}{r@{\,}c@{\,}l} \displaystyle\vec E_1&:=&\displaystyle\big\{(k,k+1): \psi_{d+1}(k)>\psi_{d+1}(k+1),\ k\in V\big\},\\[5pt] \displaystyle\lvec E_2&:=&\displaystyle\big\{(k,k+1): \psi_{d+1}(k)<\psi_{d+1}(k+1),\ k\in V\big\},\\[5pt] \displaystyle\vec E_2&:=&\displaystyle\big\{(k,l):(l,k)\in\lvec E_2\big\}, \end{array}\ee where as before we calculate modulo $n$. If $n=0$, then $\vec E_1=\vec E_2:=\emptyset$. We let $(V,{\cal E}):=(V,\vec E_1,\vec E_2)$ denote the corresponding directed graph with two types of directed edges. The following simple observation makes precise our earlier claim that if we reverse the direction of edges of one of the charges, then a Toom contour with two charges becomes a directed cycle. \begin{lemma}[Toom cycles] If\label{L:cyccon} $\psi:V\to{\mathbb Z}^{d+1}$ is a Toom cycle, then $(V,{\cal E},0,\psi)$ is a Toom contour with root $0$, set of sources $V_\circ$, set of sinks $V_\ast$, and sets of internal vertices of charge $s$ given by $V_s$ $(s=1,2)$. Moreover, every Toom contour with two charges is equivalent to a Toom contour of this form. \end{lemma} \begin{Proof} Immediate from the definitions. \end{Proof} \begin{Proof}[of Theorem~\ref{T:strex}] We will first show that Theorem~\ref{T:contex} can be strengthened in the sense that the Toom contour $(V,{\cal E},v_\circ,\psi)$ also satisfies condition~(v) of Definition~\ref{D:strongpres}. As in Theorem~\ref{T:contex}, let $(U,{\cal H})$ be an explanation graph for $(0,0)$ that is present in ${\bm{\phh}}$. We let $\lvec H_s:=\{(k,l):(l,k)\in\vec H_s\}$ denote the directed edges we get by reversing the direction of all edges in $\vec H_s$ $(s=1,2)$. We will use an inductive construction. At each point in our construction, $(V,{\cal E},0,\psi)$ will be a Toom contour rooted at $(0,0)$ that is obtained from a Toom cycle $\psi:V\to{\mathbb Z}^{d+1}$ as in Lemma~\ref{L:cyccon}, and $T:=\inf\{\psi_{d+1}(k):k\in V\}$ is the earliest time coordinate visited by the contour. 
At each point in our construction, it will be true that: \begin{itemize} \item[{\rm(i)'}] $\displaystyle\varphi_{\psi(k)}=\phi^0$ for all $k\in V_\ast$ with $T+1<\psi_{d+1}(k)$, \item[{\rm(ii)}] $\displaystyle\varphi_{\psi(v)}\in\{\phi_1,\ldots,\phi_m\}$ for all $v\in V\backslash V_\ast$, \item[{\rm(iiia)}] $\displaystyle\big(\psi(k),\psi(k+1)\big)\in\vec H_1$ for each $(k,k+1)\in\vec E_1$ with $k\in V_1\cup\{0\}$, \item[{\rm(iiib)}] $\displaystyle\big(\psi(k-1),\psi(k)\big)\in\lvec H_2$ for each $(k-1,k)\in\lvec E_2$ with $k\in V_2\cup\{0\}$, \item[{\rm(iva)}] $\displaystyle\big(\psi(k),\psi(k+1)\big)\in\vec H_2$ for each $(k,k+1)\in\vec E_1$ with $k\in V_\circ\backslash\{0\}$, \item[{\rm(ivb)}] $\displaystyle\big(\psi(k-1),\psi(k)\big)\in\lvec H_1$ for each $(k-1,k)\in\lvec E_2$ with $k\in V_\circ\backslash\{0\}$, \item[{\rm(vi)}] $\psi(k-1)\neq\psi(k+1)$ for each $k\in V_\circ\backslash\{0\}$. \end{itemize} We observe that condition~(i)' is a weaker version of condition~(i) of Definition~\ref{D:present}. Conditions (ii), (iiia), and (iiib) correspond to conditions (ii) and (iii) of Definition~\ref{D:present}. Conditions (iva) and (ivb) are a stronger version of condition~(iv) of Definition~\ref{D:present}, which also implies condition~(v) of Definition~\ref{D:strongpres}. Finally, condition (vi) corresponds to condition~(vi) of Definition~\ref{D:strongpres}. Our inductive construction will end as soon as condition~(i) of Definition~\ref{D:present} is fully satisfied, i.e., when: \begin{itemize} \item[{\rm(i)}] $\displaystyle\varphi_{\psi(k)}=\phi^0$ for all $k\in V_\ast$. \end{itemize} \begin{figure}[htb] \begin{center} \inputtikz{looperas} \caption{The process of exploration and loop erasion.} \label{fig:looperas} \end{center} \end{figure} We start the induction with the trivial Toom cycle defined by $V:=\{0\}$ and $\psi(0)=(0,0)$. We identify a Toom cycle $\psi:\{0,\ldots,n-1\}\to{\mathbb Z}^{d+1}$ with the word $\psi(0)\cdots\psi(n-1)$. In each step of the induction, as long as (i) is not yet satisfied, we modify our Toom cycle according to the following two steps, which are illustrated in Figure~\ref{fig:looperas}. \begin{itemize} \item[\rm I.] \emph{Exploration.} We pick $k\in V_\ast$ such that $\displaystyle\varphi_{\psi(k)}\neq\phi^0$ and $\psi_{d+1}(k)=T+1$, or, if such a $k$ does not exist, with $\psi_{d+1}(k)=T$. We define $w_s$ by $\vec H_{s,{\rm out}}(\psi(k))=(\psi(k),w_s)$ $(s=1,2)$. In the word $\psi(0)\cdots\psi(n-1)$, in the place of $\psi(k)$, we insert the word $\psi(k)w_1\psi(k)w_2\psi(k)$. \item[\rm II.] \emph{Loop erasion.} If, as a result of the exploration, there are $k_1,k_2\in V_\ast$ with $k_1<k_2$ such that $\psi(k_1)=\psi(k_2)$, then we remove the subword $\psi(k_1)\cdots\psi(k_2)$ from the word $\psi(0)\cdots\psi(n-1)$ and in its place insert $\psi(k_1)$. We repeat this step until $\psi(k_1)\neq\psi(k_2)$ for all $k_1,k_2\in V_\ast$ with $k_1\neq k_2$. \end{itemize} The effect of the exploration step is that one sink is replaced by a source and two internal vertices, one of each charge, and then two new sinks are created (see Figure~\ref{fig:looperas}). These new sinks are created at height $-T$ or $-T+1$ and hence can overlap with each other or with other preexisting sinks, but not with sources or internal vertices. If the exploration step has created overlapping sinks or the two new internal vertices overlap, then these are removed in the loop erasion step.
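To make step~II concrete, the following minimal Python sketch (ours, purely illustrative and not part of the proof) stores the cycle simply as the list of space-time points $\psi(0),\ldots,\psi(n-1)$, detects sinks as the cyclic local minima of the time coordinate as in (\ref{eq:v}), and cuts out the subword between two sinks carrying the same point.

\begin{verbatim}
def erase_loops(word):
    # word is a list of space-time points (tuples, time coordinate last).
    def sinks(w):
        n = len(w)
        return [k for k in range(n)
                if w[(k - 1) % n][-1] > w[k][-1] < w[(k + 1) % n][-1]]
    changed = True
    while changed:
        changed = False
        seen = {}
        for k in sinks(word):
            if word[k] in seen:
                k1 = seen[word[k]]
                # remove psi(k1)...psi(k-1); since word[k] equals
                # word[k1], one copy of the repeated sink is kept
                word = word[:k1] + word[k:]
                changed = True
                break
            seen[word[k]] = k
    return word
\end{verbatim}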
After the removal of a loop, all remaining vertices are of the same type (sink, source, or internal vertex of a given charge) as before. Using these observations, it is easy to check that: \begin{itemize} \item[(C)] After exploration and loop erasion, the modified word $\psi$ is again a Toom cycle rooted at $(0,0)$ (see Definition~\ref{def:toomcycle}) and the induction hypotheses (i)', (ii), (iiia), (iiib), (iva), (ivb) and (vi) remain true. \end{itemize} Let $\Delta:=\{\psi(k):k\in V_\ast,\ \varphi_{\psi(k)}\neq\phi^0\}$. In each step of the induction, we remove one element from $\Delta$ with a given time coordinate, say $t$, and possibly add one or two new elements to $\Delta$ with time coordinates $t-1$. Since the explanation graph is finite, this cannot go on forever, so the induction terminates after a finite number of steps. This completes the proof that Theorem~\ref{T:contex} can be strengthened in the sense that the Toom contour $(V,{\cal E},v_\circ,\psi)$ also satisfies condition~(v) of Definition~\ref{D:strongpres}. \end{Proof} \subsection{Forks}\label{S:fork} We recall that for Toom contours with two charges, Theorem~\ref{T:strongpres} strengthened Theorem~\ref{T:contour} by showing the presence of a Toom contour with certain additional properties. As we have seen in Subsection~\ref{S:explic}, such additional properties reduce the number of Toom contours one has to consider and hence lead to sharper Peierls bounds. In the present subsection, we will prove a similar (but weaker) strengthened version of Theorem~\ref{T:contour} that holds for an arbitrary number of charges. Let $(V,{\cal E},v_\circ,\psi)$ be a Toom contour. By definition, a \emph{fork} is a source $v\in V_\circ$ such that: \begin{equation} \big|\{\psi(w):(v,w)\in\vec E\}\big|=2. \end{equation} As we will show in a moment, the proof of Theorem~\ref{T:contex} actually yields the following somewhat stronger statement. In the original formulation of Toom \cite{Too80}, his contours contain no sources, but they contain objects that Toom calls forks and that effectively coincide with our usage of this term. For Toom, the fact that the number of sinks equals the number of forks plus one was part of his definition of a contour. In our formulation, this is a consequence of the fact that the number of sources equals the number of sinks. \begin{theorem}[Toom contour with forks only] Theorem~\ref{T:contex}\label{T:fork} can be strengthened in the sense that all sources $v\in V\backslash\{v_\circ\}$ are forks. \end{theorem} \begin{Proof} Let us say that $v\in V_\circ$ is a \emph{point source} if $|\{\psi(w):(v,w)\in\vec E\}|=1$. We first show that Theorem~\ref{T:contex} can be strengthened in the sense that all sources $v\in V\backslash\{v_\circ\}$ are forks or point sources. Indeed, this is a direct consequence of the fact that the tight polars $(a_{i,1},\ldots,a_{i,\sigma})$ $(2\leq i\leq M)$ constructed in the proof of Lemma~\ref{L:tiso} are either point polars or have the property that the set $\{a_{i,s}:1\leq s\leq\sigma\}$ has precisely two elements. The latter give rise to forks while the former give rise to point sources or isolated vertices. Since a Toom contour is connected, sources other than the root can never be isolated vertices. This shows that Theorem~\ref{T:contex} can be strengthened in the sense that all sources $v\in V\backslash\{v_\circ\}$ are forks or point sources.
Now if some $v\in V_\circ\backslash\{v_\circ\}$ is a point source, then we can simplify the Toom contour by removing this source from the contour and joining all elements of $\{w:(v,w)\in\vec E\}$ into a new source, which is embedded at the space-time point $z\in{\mathbb Z}^{d+1}$ defined by $\{z\}:=\{\psi(w):(v,w)\in\vec E\}$. Repeating this process until it is no longer possible to do so, we arrive at a Toom contour $(V,{\cal E},v_\circ,\psi)$ with the additional property that all sources $v\in V\backslash\{v_\circ\}$ are forks. \end{Proof} \section{Bounds for eroders}\label{S:bounds} \subsection*{Outline} In this section, we apply the abstract theory developed in the previous section to concrete models. In Subsection~\ref{S:erosion}, we discuss the erosion criteria (\ref{erosion}) and (\ref{erode}). In particular, we prove Lemma~\ref{L:erode} and show that (\ref{erode}) implies that $\phi$ is an eroder. In Subsection~\ref{S:expbd}, we prove Lemmas \ref{L:expbd} and \ref{L:expbdcycle}, which give an exponential upper bound on the number of Toom contours and Toom cycles with a given number of edges. In Subsection~\ref{S:finP}, we prove Lemma~\ref{L:Peifin}, which shows that for eroders, finiteness of the Peierls sum is sufficient to conclude that $\overline\rho(p)>0$. At this point, we have proved all ingredients needed for the proof of Toom's stability theorem described in Subsection~\ref{S:erod} and also for the explicit bounds for concrete eroders stated in Subsection~\ref{S:explic}. \subsection{Eroders}\label{S:erosion} In this subsection, we prove Lemma~\ref{L:erode}. Our proof depends on the equivalence of (\ref{erosion}) and the eroder property, which is proved in \cite[Thm~1]{Pon13}. In Lemma~\ref{L:Lerod}, we give an alternative direct proof that (\ref{erode}) implies that $\phi$ is an eroder. Although we do not really need this alternative proof, we have included it since it is short and instructive. In particular, it links the eroder property to edge speeds, which we otherwise do not discuss but which are an important motivating idea behind the definition of Toom contours.\medskip \begin{Proof}[of Lemma~\ref{L:erode}] In \cite[Lemma~12]{Pon13} it is shown\footnote{Since Ponselet discusses stability of the all-zero fixed point while we discuss stability of the all-one fixed point, in \cite{Pon13}, the roles of zeros and ones are reversed compared to our conventions.} that (\ref{erosion}) is equivalent to the existence of a polar function $L$ of dimension $2\leq\sigma\leq d+1$ and constants $\varepsilon_1,\ldots,\varepsilon_\sigma$ such that $\sum_{s=1}^\sigma \varepsilon_s>0$ and for each $1\leq s\leq\sigma$, there exists an $A_s\in{\cal A}(\phi)$ such that $\varepsilon_s-L_s(i)\leq 0$ for all $i\in A_s$. It follows that \begin{equation} \sum_{s=1}^\sigma\sup_{A\in{\cal A}(\phi)}\inf_{i\in A}L_s(i) \geq\sum_{s=1}^\sigma\inf_{i\in A_s}L_s(i)\geq\sum_{s=1}^\sigma \varepsilon_s>0, \end{equation} which shows that (\ref{erode}) holds. Assume, conversely, that (\ref{erode}) holds. Since ${\cal A}(\phi)$ is finite, for each $1\leq s\leq\sigma$ we can choose $A_s(\phi)\in{\cal A}(\phi)$ such that \begin{equation}\label{epss} \varepsilon_s:=\inf_{i\in A_s(\phi)}L_s(i)=\sup_{A\in{\cal A}(\phi)}\inf_{i\in A}L_s(i). \end{equation} Then (\ref{erode}) says that $\sum_{s=1}^\sigma\varepsilon_s>0$. Let $H_s:=\{z\in{\mathbb R}^d:L_s(z)\geq\varepsilon_s\}$.
By the definition of a polar function, $\sum_{s=1}^\sigma L_s(z)=0$ for each $z\in{\mathbb R}^d$, and hence the condition $\sum_{s=1}^\sigma\varepsilon_s>0$ implies that for each $z\in{\mathbb R}^d$, there exists a $1\leq s\leq\sigma$ such that $L_s(z)<\varepsilon_s$. In other words, this says that $\bigcap_{s=1}^\sigma H_s=\emptyset$. For each $1\leq s\leq\sigma$, the set $A_s(\phi)$ is contained in the half-space $H_s$ and hence the same is true for ${\rm Conv}(A_s(\phi))$, so we conclude that \begin{equation} \bigcap_{s=1}^\sigma{\rm Conv}\big(A_s(\phi)\big)=\emptyset, \end{equation} from which (\ref{erosion}) follows. \end{Proof} \begin{lemma}[The eroder property] If\label{L:Lerod} a non-constant monotonic function $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$ satisfies (\ref{erode}), then $\phi$ is an eroder. \end{lemma} \begin{Proof} Most of the argument has already been given below Lemma~\ref{L:erode}. It only remains to prove (\ref{edgespeed}). It suffices to prove the claim for $n=1$; the general claim then follows by induction. Assume that $i\in{\mathbb Z}^d$ satisfies $L_s(i)>r_s(X^0_0)-\delta_s$. We need to show that $X^0_1(i)=1$ for all such $i$. By the definition of $\delta_s$, we can choose $A\in{\cal A}(\phi)$ such that $\inf_{j\in A}L_s(j)=\delta_s$. It follows that $L_s(i+j)>r_s(X^0_0)$ for all $j\in A$ and hence $X^0_0(i+j)=1$ for all $j\in A$, which implies $X^0_1(i)=1$ by (\ref{Aphi}). \end{Proof} \subsection{Exponential bounds on the number of contours}\label{S:expbd} In this subsection, we prove Lemmas~\ref{L:expbd} and~\ref{L:expbdcycle}.\medskip \begin{Proof}[of Lemma~\ref{L:expbd}] We first consider the case that the number of charges $\sigma$ is even. Let $T=(V,{\cal E},v_\circ,\psi)\in{\cal T}'_0$. Recall that $(V,{\cal E})$ is a directed graph with $\sigma$ types of edges, which are called charges. In $(V,{\cal E})$, all edges point in the direction from the sources to the sinks. We modify $(V,{\cal E})$ by reversing the direction of edges of the charges $\ffrac{1}{2}\sigma+1,\ldots,\sigma$. Let $(V,{\cal E}')$ denote the modified graph. In $(V,{\cal E}')$, the number of incoming edges at each vertex equals the number of outgoing edges. Since moreover the undirected graph $(V,E)$ is connected, it is not hard to see\footnote{This is a simple variation of the ``Bridges of K\"onigsberg'' problem that was solved by Euler.} that it is possible to walk through the directed graph $(V,{\cal E}')$ starting from the root using an edge of charge $1$, in such a way that each directed edge of ${\cal E}'$ is traversed exactly once. Let $m:=\sigma n_{\rm e}(T)$ denote the total number of edges of $(V,{\cal E}')$ and for $0<k\leq m$, let $(v_{k-1},v_k)\in\vec E'_{s_k}$ denote the $k$-th step of the walk, which has charge $s_k$. Let $\delta_k:=\vec\psi(v_k)-\vec\psi(v_{k-1})$ denote the spatial increment of the $k$-th step. Note that the temporal increment is determined by the charge $s_k$ of the $k$-th step. Let $k_0,\ldots,k_{\sigma/2}$ denote the times when the walk visits the root $v_\circ$. We claim that in order to specify $(V,{\cal E},v_\circ,\psi)$ uniquely up to equivalence, in the sense defined in (\ref{equiv}), it suffices to know the sequences \begin{equation} (s_1,\ldots,s_m),\quad(\delta_1,\ldots,\delta_m),\quad\mbox{and}\quad(k_0,\ldots,k_{\sigma/2}). \end{equation} Indeed, the sinks and sources correspond to changes in the temporal direction of the walk, which can be read off from the charges.
Although the images under $\psi$ of sources may overlap, we can identify which edges connect to the root, and since we also know the increment of $\psi(v_k)$ in each step, all objects in (\ref{equiv}) can be identified. The first charge $s_1$ is $1$ and after that, in each step, we have the choice to either continue with the same charge or choose one of the other $\ffrac{1}{2}\sigma$ available charges. This means that there are no more than $(\ffrac{1}{2}\sigma+1)^{m-1}$ possible ways to specify the charges $(s_1,\ldots,s_m)$. Setting $M:=\big|\bigcup_{s=1}^\sigma A_s(\phi)\big|$, we see that there are no more than $M^m$ possible ways to specify the spatial increments $(\delta_1,\ldots,\delta_m)$. Since $k_0=0$ and $k_{\sigma/2}=m$, we can roughly estimate the number of ways to specify the visits to the root from above by $n^{\sigma/2-1}$. Recalling that $m=\sigma n_{\rm e}(T)$, this yields the bound \begin{equation} N_n\leq n^{\sigma/2-1}(\ffrac{1}{2}\sigma+1)^{\sigma n-1}M^{\sigma n}. \end{equation} This completes the proof when $\sigma$ is even. When $\sigma$ is odd, we modify $(V,{\cal E})$ by doubling all edges of charge $\sigma$, i.e., we define $(V,{\cal F})$ with \begin{equation} {\cal F}=(\vec F_1,\ldots,\vec F_{\sigma+1}):=(\vec E_1,\ldots,\vec E_\sigma,\vec E_\sigma), \end{equation} and next we modify $(V,{\cal F})$ by reversing the direction of all edges of the charges $\lceil\ffrac{1}{2}\sigma\rceil+1,\ldots,\sigma+1$. We can define a walk in the resulting graph $(V,{\cal F}')$ as before and record the charges and spatial increments for each step, as well as the visits to the root. In fact, in order to specify $(V,{\cal E},v_\circ,\psi)$ uniquely up to equivalence, we do not have to distinguish the charges $\sigma$ and $\sigma+1$. Recall that edges of the charges $\sigma$ and $\sigma+1$ result from doubling the edges of charge $\sigma$ and hence always come in pairs, connecting the same vertices. Since sinks do not overlap and since internal vertices of a given charge do not overlap, and since we traverse edges of the charges $\sigma$ and $\sigma+1$ in the direction from the sinks towards the sources, whenever we are about to traverse an edge that belongs to a pair of edges of the charges $\sigma$ and $\sigma+1$, we know whether we have already traversed the other edge of the pair. In view of this, for each pair, we only have to specify the spatial displacement at the first time that we traverse an edge of the pair. Using these considerations, we arrive at the bound \begin{equation} N_n\leq n^{\lceil\sigma/2\rceil-1}(\lceil\ffrac{1}{2}\sigma\rceil+1)^{(\sigma+1)n-1}M^{\sigma n}. \end{equation} \end{Proof} \begin{Proof}[of Lemma~\ref{L:expbdcycle}] The proof goes along the same lines as that of Lemma~\ref{L:expbd} for the case that $\sigma$ is even. Observe that for $\sigma=2$, the walk visits the root $0$ twice: $k_0=0$ and $k_1=m$. Thus $(k_0, k_1)$ is deterministic, and we only need to specify the sequences \begin{equation} (s_1,\ldots,s_m),\quad(\delta_1,\ldots,\delta_m). \end{equation} The first charge $s_1$ is $1$ and after that, in each step, we have the choice to either continue with the same charge or choose charge 2. This means that there are no more than $2^{m-1}$ possible ways to specify the charges $(s_1,\ldots,s_m)$. Once we have done that, by condition~(v) of Definition~\ref{D:strongpres} of what it means for a cycle to be strongly present, we know for each $0<k\leq m$ whether the spatial increment $\delta_k$ is in $A_1(\phi)$ or $A_2(\phi)$.
Setting $M_s:=\big|A_s(\phi)\big|$ $(s=1,2)$, using the fact that $|\vec E_1|=|\vec E_2|=n_{\rm e}(T)=m/2$, we see that there are no more than $M_1^{m/2} \cdot M_2^{m/2}$ possible ways to specify $(\delta_1,\ldots,\delta_m)$. This yields the bound \begin{equation} N_n\leq 2^{2n-1}M_1^{n} \cdot M_2^{n}. \end{equation} \end{Proof} \subsection{Finiteness of the Peierls sum}\label{S:finP} In this subsection, we prove Proposition~\ref{P:Peifin} about the presence of a large contour. As a direct consequence of this proposition, we obtain Lemma~\ref{L:Peifin}, which says that for an eroder, finiteness of the Peierls sum in (\ref{Peierls}) suffices to conclude that the intensity of the upper invariant law is positive. We also prove a stronger version of Proposition~\ref{P:Peifin}, where we show the strong presence of a Toom contour in which all sources are forks. \medskip \begin{Proof}[of Proposition~\ref{P:Peifin}] Recall the definition of the modified collection of monotonic maps ${\bm{\phh}}^{(r)}$ in~\eqref{eq:modifiedbooleanmaps}. Let $\overline x^{(r)}$ denote the maximal trajectory of ${\bm{\phh}}^{(r)}$. For each integer $q\geq 0$, let $C_q:={\rm Conv}(\{qj_1,\ldots,qj_\sigma\})$. Then \begin{equation} C_{q+1}=\big\{i+j_s:i\in C_q,\ 1\leq s\leq\sigma\big\}\qquad(q\geq 0). \end{equation} Using this, it is easy to see by induction that our assumption that $\overline x^{(r)}_{-r}(i)=0$ for all $i\in C_r$ implies that $\overline x^{(r)}_{-q}(i)=0$ for all $i\in C_q$ and $0\leq q\leq r$. In particular, this holds for $q=0$, so $\overline x^{(r)}_0(0)=0$. Using this, it is straightforward to adapt the proof of Lemma~\ref{L:explan} and show that there is an explanation graph $(U,{\cal H})$ for $(0,0)$ present in ${\bm{\phh}}^{(r)}$, which has the additional properties: \begin{itemize} \item $\big\{i\in{\mathbb Z}^d:(i,-q)\in U\big\}=C_q$ $(0\leq q\leq r)$, \item $\big((i,-q),(i+j_s,-q-1)\big)\in\vec H_s$ $(0\leq q<r,\ i\in C_q)$. \end{itemize} In particular, these properties imply that \begin{itemize} \item $t\leq-r$ for all $(i,t)\in U_\ast$. \end{itemize} Theorem~\ref{T:contex} tells us that there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in ${\bm{\phh}}^{(r)}$ with the additional properties that $\psi(V)\subset U$, $\psi(V_\ast)\subset U_\ast$, and $\psi(\vec E_s)\subset\vec H_s$ for all $1\leq s\leq\sigma$. This immediately implies that $\psi_{d+1}(v)\leq-r$ for all $v\in V_\ast$. To see that the Toom contour can be chosen such that moreover $\psi_{d+1}(v)\leq 1-r$ for all $v\in V_\circ\backslash\{v_\circ\}$, we have to look into the proof of Theorem~\ref{T:contex}. In Subsection~\ref{S:match}, we defined an equivalence relation $\approx$ on the set of vertices $U$ of an explanation graph $(U,{\cal H})$. In Lemma~\ref{L:Ctree}, we showed that the set of all equivalence classes has the structure of a directed tree. If we draw time downwards, then the root of this tree lies below. In the proof of Proposition~\ref{P:match}, we constructed a Toom matching for $(U,{\cal H})$ with the property that, except for the root, all other polars lie at a level above the last level where the tree still consisted of a single equivalence class. Finally, in the proof of Theorem~\ref{T:contex}, we used these polars to construct sources that lie at most one level below the corresponding polar.
The upshot of all of this is that in order to show that $\psi_{d+1}(v)\leq 1-r$ for all $v\in V_\circ\backslash\{v_\circ\}$, it suffices to show that the set of vertices $\{(i,t)\in U:t=1-r\}$ forms a single equivalence class as defined in Subsection~\ref{S:match}. To see that this indeed is the case, write each point of $C_{r-1}$ as $i=(i_1,\dots,i_\sigma)$, by which we mean that $i=\sum_{s=1}^\sigma i_sj_s$ for nonnegative integers $i_1,\ldots,i_\sigma$ with $\sum_{s=1}^\sigma i_s=r-1$, and call two points $i=(i_1,\dots,i_\sigma),j=(j_1,\dots,j_\sigma)\in C_{r-1}$ neighbours if there exist $1\leq s_1,s_2\leq\sigma$ with $s_1\neq s_2$ such that $i_{s_1}=j_{s_1}-1$, $i_{s_2}=j_{s_2}+1$, and $i_s=j_s$ for all $s\in\{1,\ldots,\sigma\}\backslash\{s_1,s_2\}$. Define $k\in C_r$ by $k_{s_1}=j_{s_1}$, $k_{s_2}=j_{s_2}+1$, and $k_s=j_s$ for all other $s$. Then $\big((i,1-r),(k,-r)\big)\in\vec H$ and $\big((j,1-r),(k,-r)\big)\in\vec H$, which proves that $(i,1-r)\approx(j,1-r)$. Since any two points in $C_{r-1}$ are connected by a path that in each step moves from a point to a neighbouring point, this shows that $\{(i,t)\in U:t=1-r\}$ forms a single equivalence class. To complete the proof, we need to show that if $\sigma=2$, then we can construct the Toom contour so that in addition it is strongly present in ${\bm{\phh}}^{(r)}$. We use the same explanation graph $(U,{\cal H})$ for $(0,0)$ with properties (i)--(iii) as above. Theorem~\ref{T:strex} now tells us that there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ strongly present in ${\bm{\phh}}^{(r)}$ with the additional properties that $\psi(V)\subset U$, $\psi(V_\ast)\subset U_\ast$, and $\psi(\vec E_s)\subset\vec H_s$ for all $1\leq s\leq\sigma$. This again immediately implies that $\psi_{d+1}(v)\leq-r$ for all $v\in V_\ast$, so again it remains to show that the Toom contour can be chosen such that moreover $\psi_{d+1}(v)\leq 1-r$ for all $v\in V_\circ\backslash\{v_\circ\}$. \begin{figure}[htb] \begin{center} \inputtikz{large} \caption{The Toom cycle $\psi$ described in the proof of Proposition~\ref{P:Peifin}.} \label{fig:large} \end{center} \end{figure} To see that this is the case, we have to look into the proof of Theorem~\ref{T:strex}. Instead of starting the inductive construction with the trivial Toom cycle of length zero, we claim that it is possible to start with a Toom cycle $\psi$ of length $4r$ for which all sources except the root have the time coordinate $1-r$ and all sinks have the time coordinate $-r$. Since the process of exploration and loop erasion will then only create new sources with time coordinate $-r$ or lower, the claim follows. A Toom cycle $\psi$ with the described properties is drawn in Figure~\ref{fig:large}. More formally, this cycle has the following description. Starting from $(0,0)$, it first visits the points $(kj_1,-k)$ with $k=1,\ldots,r$. Next, it alternately visits the points $((r-k)j_1+(k-1)j_2,1-r)$ and $((r-k)j_1+kj_2,-r)$ with $k=1,\ldots,r$. Finally, it visits the points $((r-k)j_2,k-r)$ with $k=1,\ldots,r$, ending in $(0,0)$, where it started. \end{Proof} \begin{proposition}[Large contours with forks only] Proposition~\ref{P:Peifin}\label{P:finfork} can be strengthened in the sense that all sources $v\in V\backslash\{v_\circ\}$ are forks. \end{proposition} \begin{Proof} A Toom contour with two charges that is strongly present in $\Phi^{(r)}$ automatically has the property that all sources $v\in V\backslash\{v_\circ\}$ are forks, because of condition~(vi) of Definition~\ref{D:strongpres}. Thus, it suffices to prove the claim for Toom contours with three or more charges.
In this case, as pointed out in the proof of Proposition~\ref{P:Peifin}, the fact that all sources $v\in V\backslash\{v_\circ\}$ are forks is an automatic result of the construction used in the proof of Theorem~\ref{T:contex}. Since we used this same construction in the proof of Proposition~\ref{P:Peifin}, the contour constructed there also has this property. \end{Proof} \begin{Proof}[of Lemma~\ref{L:Peifin}] Let \begin{equation} {\cal T}'_{0,r}:=\big\{(V,{\cal E},v_\circ,\psi)\in{\cal T}'_0:\psi_{d+1}(v)\leq-r \mbox{ for all }v\in V_\ast\big\}. \end{equation} By assumption, $\displaystyle\sum_{T\in{\cal T}'_0}p^{n_\ast(T)}<\infty$, so we can choose $r$ sufficiently large such that \begin{equation} \varepsilon:=\sum_{T\in{\cal T}'_{0,r}}p^{n_\ast(T)}<1. \end{equation} Fix $j_s\in A_s(\phi)$ $(1\leq s\leq\sigma)$ and set $\Delta_r:={\mathbb Z}^d\cap{\rm Conv}(\{rj_1,\ldots,rj_\sigma\})$. Then Proposition~\ref{P:Peifin} allows us to estimate \begin{equation} \P\big[\overline X_{-r}(i)=0\ \forall i\in\Delta_r\big] \leq\sum_{T\in{\cal T}'_{0,r}}\P\big[T\mbox{ is present in }\Phi^{(r)}\big]\leq\varepsilon, \end{equation} where in the last step we have used that $\psi_{d+1}(v)\leq-r$ for all $v\in V_\ast$ and hence all sinks of $V$ must be mapped to space-time points $(i,t)$ where $\Phi^{(r)}_{(i,t)}=\Phi_{(i,t)}$. By translation invariance, \begin{equation} \P\big[\overline X_{-r}(i)=1\mbox{ for some }i\in\Delta_r\big] \leq\sum_{i\in\Delta_r}\P\big[\overline X_{-r}(i)=1\big] =|\Delta_r|\P\big[\overline X_0(0)=1\big]. \end{equation} Combining this with our previous formulas, we see that \begin{equation} \overline\rho(p)=\P\big[\overline X_0(0)=1\big]\geq|\Delta_r|^{-1}(1-\varepsilon)>0. \end{equation} For Toom contours with two charges, Proposition~\ref{P:Peifin} guarantees the strong presence of a large Toom contour, so we can argue similarly, replacing ${\cal T}'_0$ by ${\cal T}''_0$. \end{Proof} \noindent \textbf{Remark} In Peierls arguments, it is frequently extremely helpful to be able to draw conclusions based only on the fact that the Peierls sum is finite (but not necessarily less than one). These sorts of arguments played an important role in \cite{KSS14}, where we took inspiration for Lemma~\ref{L:Peifin}, and can be traced back at least to \cite[Section~6a]{Dur88}. \section{Cooperative branching and the identity map}\label{S:intbd} In this section, we study the monotone random cellular automaton that applies the maps $\phi^0,\phi^{\rm id}$, and $\phi^{\rm coop,d}$ with probabilities $p,q,r$, respectively. For each $p,r\geq 0$ such that $p+r\leq 1$, let $\overline\rho(p,r)$ denote the intensity of the upper invariant law of the process with parameters $p,1-p-r,r$. For each $0\leq r<1$, there exists a $p_{\rm c}(r)\in[0,1-r]$ such that $\overline\rho(p,r)>0$ for $0\leq p<p_{\rm c}(r)$ and $\overline\rho(p,r)=0$ for $p_{\rm c}(r)<p\leq 1-r$. We give lower bounds on $p_{\rm c}(r)$. Recall from Subsection~\ref{S:intrins} that we set $\sigma=2$ and for the sets $A_s(\phi_k)$ in (\ref{As}) we make the choices \begin{equation}\begin{array}{ll}\label{A12} \displaystyle A_1(\phi^{\rm id}):=A_1,\quad& A_2(\phi^{\rm id}):=A_1,\\[5pt] \displaystyle A_1(\phi^{\rm coop,d}):=A_1,\quad& A_2(\phi^{\rm coop, d}):=A_2, \end{array}\end{equation} with $A_1:=\{0\}$ and $A_2:=\{e_1,\dots, e_d\}$. Let $\Phi=(\Phi_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ be an i.i.d.\ collection of monotonic maps so that $\P[\Phi_{(i,t)}=\phi^0]=p$, $\P[\Phi_{(i,t)}=\phi^{\rm id}]=q$, and $\P[\Phi_{(i,t)}=\phi^{\rm coop, d}]=r$.
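Although it plays no role in the proofs, the model is straightforward to simulate. The following Python sketch (ours; a crude finite-volume approximation on a periodic $L\times L$ grid for $d=2$, with all function names hypothetical) estimates $\overline\rho(p,r)$ by iterating the dynamics started from the all-one configuration; here $\phi^{\rm coop,2}$ is evaluated through its minimal $1$-sets $A_1=\{0\}$ and $A_2=\{e_1,e_2\}$ from (\ref{A12}).

\begin{verbatim}
import numpy as np

def step(x, p, r, rng):
    # One synchronous update: each site independently applies phi^0
    # (prob p), phi^coop,2 (prob r), or phi^id (prob q = 1 - p - r),
    # where phi^coop,2(x)(i) = x(i) or (x(i + e1) and x(i + e2)).
    coop = x | (np.roll(x, -1, axis=0) & np.roll(x, -1, axis=1))
    u = rng.random(x.shape)
    return np.where(u < p, 0, np.where(u < p + r, coop, x)).astype(np.uint8)

def estimate_rho(p, r, L=100, steps=500, seed=0):
    # Crude estimate of the intensity of the upper invariant law,
    # started from the all-one configuration.
    rng = np.random.default_rng(seed)
    x = np.ones((L, L), dtype=np.uint8)
    for _ in range(steps):
        x = step(x, p, r, rng)
    return x.mean()
\end{verbatim}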
We let ${\cal T}_0$ denote the set of Toom contours $(V, \mathcal E, 0, \psi)$ rooted at the origin with respect to the given choice of $\sigma$ and the sets $A_s(\phi_k)$ in~\eqref{A12}. Theorem~\ref{T:strongpres} then implies the Peierls bound \begin{equation}\label{strPeicoop} 1-\overline\rho(p,r) \leq \sum_{T\in{\cal T}_0}\P\big[T\mbox{ is strongly present in }\Phi\big]. \end{equation} In the remainder of this section, we give an upper bound on this expression. Recall from Subsection~\ref{S:Tcycles} that if we reverse the direction of edges of charge 2, then the Toom graph becomes a directed cycle with edge set $\vec E_1 \cup\cev E_2$. For any set $A\subset{\mathbb Z}^d$, let us write $-A:=\{-i:i\in A\}$. For any $(v, w)\in \vec E_1 \cup\cev E_2$ we say that~$\psi\big((v, w)\big)$ is \begin{enumerate} \item \emph{outward}, if $\psi_{d+1}(w)=\psi_{d+1}(v)-1$ and $\vec\psi(w)-\vec\psi(v)\in A_2$, \item \emph{upward}, if $\psi_{d+1}(w)=\psi_{d+1}(v)-1$ and $\vec\psi(w)-\vec\psi(v)\in A_1$, \item \emph{inward}, if $\psi_{d+1}(w)=\psi_{d+1}(v)+1$ and $\vec\psi(w)-\vec\psi(v)\in -A_2$, \item \emph{downward}, if $\psi_{d+1}(w)=\psi_{d+1}(v)+1$ and $\vec\psi(w)-\vec\psi(v)\in -A_1$. \end{enumerate} The use of the words ``upward'' and ``downward'' is inspired by our habit of drawing negative time upwards in pictures. As $|A_2|=d$, we distinguish $d$ types of outward and inward edges: we say that~$\psi\big((v, w)\big)$ is of type $i$ if $|\vec\psi(w)-\vec\psi(v)|=e_i$. Our definitions in (\ref{A12}) together with Definitions~\ref{D:present} and \ref{D:strongpres} imply that a Toom contour is strongly present in $\Phi$ if and only if the following conditions are satisfied: \begin{itemize} \item[{\rm(i)}] $\displaystyle\Phi_{\psi(v)}=\phi^0$ for all $\displaystyle v\in V_\ast$, \item[{\rm(iia)}] $\displaystyle\Phi_{\psi(v)}\in\{\phi^{\rm id},\phi^{\rm coop, d}\}$ for all $\displaystyle v\in V_1\cup V_2\cup \{v_\circ\}$, \item[{\rm(iib)}] $\displaystyle\Phi_{\psi(v)}=\phi^{\rm coop, d}$ for all $\displaystyle v\in V_\circ\backslash\{v_\circ\}$, \item[{\rm(iiia)}] If $(v, w)\in \vec E^\ast_1$, then $\psi\big((v, w)\big)$ is upward, \item[{\rm(iiib)}] If $(v, w)\in \cev E^\ast_2$, then $\displaystyle\left\{\begin{array}{ll}\psi\big((v, w)\big)\mbox{ is downward }\quad\mbox{if }\Phi_{\psi(w)}=\phi^{\rm id},\\ \psi\big((v, w)\big)\mbox{ is inward }\quad\mbox{if }\Phi_{\psi(w)}=\phi^{\rm coop, d},\end{array}\right.$ \item[{\rm(iva)'}] If $(v, w)\in \vec E^\circ_1$, then $\psi\big((v, w)\big)$ is outward, \item[{\rm(ivb)'}] If $(v, w)\in \cev E^\circ_2$, then $\psi\big((v, w)\big)$ is downward, \end{itemize} where $\vec E^\circ_i$ and $\vec E^\ast_i$ are defined in~\eqref{Ecirc}. If $(V, {\cal E}, v_\circ, \psi)$ is a Toom contour rooted at $0$ that is strongly present in $\Phi$, then we can fully specify $\psi$ by saying for each $(v, w)\in \vec E_1\cup\cev E_2$ whether $\psi\big((v, w)\big)$ is upward, downward, inward or outward, and its type in the latter two cases. In other words, we can represent the contour by a word of length $n$ over the alphabet $\{o_1,\dots,o_d,u,d,i_1,\dots,i_d\}$, whose letters represent the different kinds of steps the cycle can take. This word must satisfy the following rules: \begin{itemize} \item Each outward step must be immediately preceded by a downward step.
\item Between two occurrences of the string $do_\cdot$, and also before the first occurrence of $do_\cdot$ and after the last occurrence, we first see a string consisting of the letter $u$ of length $\geq 0$, followed by a string consisting of the letters $d,i_1,\dots, i_d$, again of length $\geq 0$. \end{itemize} So, for example, the contour in the middle of Figure~\ref{fig:minexpl} is described by the following word: \begin{equation} \underbrace{uuuu}\underbrace{do_1}\underbrace{do_1}\underbrace{uu}\underbrace{do_2}\underbrace{do_2}\underbrace{di_1}\underbrace{di_1}\underbrace{do_1}\underbrace{do_1}\underbrace{di_1}\underbrace{di_1}\underbrace{di_2}\underbrace{di_2}. \end{equation} We call a sequence of length $\geq 0$ of consecutive downward/upward steps a downward/upward segment. We can alternatively represent $\psi$ by a word consisting of letters from $\{o_1,\dots,o_d,U,D,i_1,\dots,i_d, i^\circ_1,\dots,i^\circ_d\}$, where $U$ and $D$ represent upward and downward segments. Let us for the moment ignore the $\circ$ superscripts. Then we obtain a word consisting of these letters that must satisfy the following rules: \begin{itemize} \item Each outward step must be immediately preceded by a downward segment of length $\geq 1$ and followed by an upward segment of length $\geq 0$. \item The first step is an upward segment. \item Between two occurrences of the string $Do_\cdot U$, and also before the first and after the last occurrence, we see a sequence of the string $Di_\cdot$ of length $\geq 0$. \item The last step is a downward segment. \end{itemize} We add the superscript $\circ$ to each inward step whose endpoint overlaps with the image of a source other than the root that the cycle has already visited in one of the previous steps. For any Toom contour $T$, denote by $W(T)$ the corresponding word satisfying these rules. The structure of such a representation of a contour becomes clearer if we indicate the vertices in $V_1,V_2,V_\ast$, and $V_\circ$ with the symbols $1,2,\ast,\circ$, respectively. Then the contour in the middle of Figure~\ref{fig:minexpl} is described by the following word: \begin{equation}\label{eq:discreteword} \accentset{\circ}{|} U \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_1 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_1 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_2 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_2 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{2}{|} i_1} \accentset{2}{|} \underbrace{D\accentset{2}{|} i_1} \accentset{2}{|} \underbrace{D\accentset{\circ}{|} o_1 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_1 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{2}{|} i^\circ_1} \accentset{2}{|} \underbrace{D\accentset{2}{|} i_1} \accentset{2}{|} \underbrace{D\accentset{2}{|} i_2} \accentset{2}{|} \underbrace{D\accentset{2}{|} i_2} \accentset{2}{|} D\accentset{\circ}{|}. \end{equation} Finally, let $l^+(T)$, $l^-(T)$ and $l^{-,\circ}(T)$ denote the vectors containing the lengths of the upward segments, of the downward segments followed by $o_\cdot$ or $i_\cdot$ together with the final downward segment, and of the downward segments followed by $i^\circ_\cdot$, respectively, in the order in which we encounter them along the cycle. For the example above we have: \begin{equation}\begin{aligned} l^+(T)=&(4,0,2,0,0,0,0),\\ l^-(T)=&(1,1,1,1,0,0,1,1,0,0,0,0),\\ l^{-,\circ}(T)=&(0).
\end{aligned} \end{equation} \begin{claim}\label{claim:downwardlength} Let $T$ be a Toom contour strongly present in $\Phi$ rooted at 0. Then $W(T), l^+(T)$ and $l^-(T)$ uniquely determine $(V, {\cal E}, 0, \psi)$. \end{claim} \begin{Proof} Knowing the word describing $T$ together with the lengths of all upward and downward segments uniquely determines the contour, so it is enough to show that $W(T), l^+(T)$ and $l^-(T)$ determine $l^{-,\circ}(T)=(l_1, \dots, l_j)$ $(j\geq 0)$. Assume we know $l_1,\dots, l_i$ for some $0\leq i<j$. We then know the length and type of each step along the cycle up to the downward segment corresponding to $l_{i+1}$, that is, we know the coordinates of its starting point. This downward segment ends at a charge-$2$ internal vertex, and the next step is an inward step ending at a source other than the root that the cycle has already visited. The cycle enters each such source by a downward step and leaves it by an outward step, hence by the structure of the explanation graph the endpoints of this outward step must coincide with the endpoints of the inward step following the downward segment with length $l_{i+1}$. As each outward step is followed by an upward segment, the starting point of the subsequent upward segment must be the endpoint of our downward segment. The endpoint of every upward segment is a defective site, and each site along a downward segment (except maybe its endpoints) is an identity site, so this upward segment must contain every site of our downward segment. Furthermore, by (iii) of Definition~\ref{def:embedding} of an embedding there cannot be any other upward segment that overlaps with this downward segment. Therefore, given the starting coordinates of our downward segment, we check which upward segment visited these coordinates previously, and we let $l_{i+1}$ be the distance between the starting points of this upward segment and our downward segment. \end{Proof} By a small abuse of notation, let us also use the letters $o,i$ to indicate the number of times the symbols $o, i$ occur in our representation of the contour (regardless of the sub- and superscripts). As our contour is a cycle starting and ending at 0, we must have the same number of inward and outward steps; furthermore, the total lengths of upward and downward segments must be equal as well: \begin{equation}\label{eq:inoutward} o=i\quad\mbox{and}\quad \|l^+(T)\|_1 =\| l^-(T)\|_1+\|l^{-,\circ}(T)\|_1. \end{equation} We observe that each source (other than the root) is followed by an outward step; thus \begin{equation}\label{eq:inwarddefective} |V_\circ|=|V_\ast|=i+1. \end{equation} Finally, in the representation $W(T)$ of a contour the first and last steps are $U$ and $D$ respectively, and in between, $i$ strings of $DoU$ are interleaved with $i$ strings of $Di$. Thus, letting $0\leq j\leq i$ denote the number of inward steps with the superscript $\circ$ and using \eqref{eq:inoutward}, we have \begin{equation} l^+(T) \in \big({\mathbb Z}^+\cup\{0\}\big)^{i+1}, \quad l^-(T)\in \big({\mathbb Z}^+\cup\{0\}\big)^{2i-j+1},\quad l^{-,\circ}(T)\in \big({\mathbb Z}^+\cup\{0\}\big)^{j}. \end{equation} Let $W(i,j)$ denote the set of words over the alphabet $\{o_1,\dots,o_d,U,D,i_1,\dots,i_d, i^\circ_1,\dots,i^\circ_d\}$ that satisfy our rules and have $i$ inward steps, $j$ of which carry the superscript $\circ$. \begin{claim}\label{claim:Wij} For all $0\leq i,\; 0\leq j\leq i$ we have \begin{equation}\label{eq:Wij} |W(i,j)|\leq \binom{2i}{i}\binom{i}{j}d^{2i-j}.
\end{equation} \end{claim} \begin{Proof} In any $\mathcal W\in W(i, j)$, the first and last steps are $U$ and $D$ respectively, and in between, $i$ strings of $DoU$ are interleaved with $i$ strings of $Di$. Thus (ignoring the super- and subscripts) we can arrange these strings in $\binom{2i}{i}$ possible ways. We then choose $j$ inward steps to which we add the superscript $\circ$; this can be done in $\binom{i}{j}$ ways. Finally, we can assign the $o$'s and $i$'s subscripts $1,\dots, d$ one by one. As we have seen in the proof of Claim~\ref{claim:downwardlength}, an inward step with the superscript $\circ$ overlaps with an outward step previously visited by the cycle, so the type of this inward step is the same as the type of that outward step. Hence we can assign the types of $o$'s and $i$'s in $d^{2i-j}$ different ways. \end{Proof} \begin{claim}\label{claim:wordprob} Let $\mathcal W\in W(i, j)$ for some $0\leq i,\; 0\leq j\leq i$. Then \[ \sum_{T\in \mathcal T_0: W(T)=\mathcal W}\P\big[T\mbox{ is strongly present in }\Phi\big]\leq\binom{3i-j}{i} p^{i+1}r^{2i-j}\left(\frac 1 {1-q} \right)^{3i-j+1}. \] \end{claim} Using $q=1-p-r$ together with Claims~\ref{claim:Wij} and~\ref{claim:wordprob}, we can estimate the Peierls sum in (\ref{strPeicoop}) from above by \begin{equation}\label{Prq} \begin{aligned} \sum_{i=0}^\infty\sum_{j=0}^i |W(i,j)| \binom{3i-j}{i} p^{i+1}r^{2i-j}\left(\frac 1 {1-q} \right)^{3i-j+1}\\ < \frac {p} {p+r}\sum_{i=0}^\infty \left(\frac {16dpr\big((2d+1)r+p\big)} {(p+r)^3} \right)^{i}. \end{aligned}\end{equation} For any fixed $r$ this sum is finite as soon as $p<\big(\sqrt{(d+0.5)^2+1/(16d)}-d-0.5\big)r$. In particular, for $d=2$ we obtain the following bound on the critical parameter: \[p_{\rm c}(r)> 0.00624\, r.\] \begin{Proof}[of Claim~\ref{claim:wordprob}] The idea of the proof is similar to that of Lemma 9 in~\cite{GG82}. As the Toom cycle $T$ is strongly present in $\Phi$, each sink is mapped to a defective site, and each inward step ends and each outward step starts at a site where the cooperative branching map is applied. The definition of an embedding entails that sinks do not overlap, so using \eqref{eq:inwarddefective} they contribute a factor $p^{i+1}$. To estimate the contribution of the in- and outward steps, we need to recall the construction of the Toom cycle in Section~\ref{S:Tcycles}. We inductively add edges to the cycle by exploring its previously unexplored sites one by one. At an exploration step, starting at the site we are exploring, an upward, a downward, an outward and an inward step are added, in this order. Although during the loop erasion some of these steps might be erased, their relative order in the cycle does not change and the site is not visited again in later iterations. Therefore, each site is the starting point of at most one outward step and the endpoint of at most one inward step, and if both steps are present, the outward step is always visited first by the cycle. As outward steps start at a source, and the endpoint of each of the $j$ inward steps with the superscript $\circ$ coincides with the starting point of an outward step that has already been counted, the $i$ inward and $i$ outward steps together contribute a factor $r^{2i-j}$. Finally, the strong presence of $T$ implies that every downward step, except for the ones ending at a source other than the root, ends at a site where the identity map is applied. Equation~\eqref{eq:inoutward} then yields that the downward segments contribute a factor $q^{\|l^+(T)\|_1-i}$.
Let \begin{equation} \mathcal L(\mathcal W):=\{(l^+(T), l^-(T)): W(T)=\mathcal W\}. \end{equation} Recall that by Claim~\ref{claim:downwardlength}, $W(T)=\mathcal W$, $l^+(T)$ and $l^-(T)$ uniquely specify the Toom contour $T$. We then have \begin{equation} \sum_{T\in \mathcal T_0: W(T)=\mathcal W}\P\big[T\mbox{ is strongly present in }\Phi\big]\leq p^{i+1}r^{2i-j} \sum_{(l^+, l^-)\in \mathcal L(\mathcal W)}q^{\|l^+\|_1-i}. \end{equation} It remains to show that \begin{equation}\label{eq:endproof} q^{-i}\sum_{(l^+, l^-)\in \mathcal L(\mathcal W)}q^{\|l^+\|_1}\leq\binom{3i-j}{i}\left(\frac 1 {1-q} \right)^{3i-j+1}. \end{equation} From now on, we will omit the last coordinate of $l^-$. As we have seen in the proof of Claim~\ref{claim:downwardlength}, to determine the lengths in $l^{-,\circ}$ it is enough to know the type and length of each step along the cycle up to the corresponding downward step. Therefore, when the cycle visits the last downward segment, the length of every other down- and upward segment is already known. By~\eqref{eq:inoutward} we then have $l^-_{2i-j+1}=\|l^+\|_1-\|l^{-,\circ}\|_1-l^-_1-\dots-l^-_{2i-j}$. By a small abuse of notation we will denote $l^-=(l^-_1,\dots, l^-_{2i-j})$ and $l^+=(l^+_1, \dots, l^+_{i+1})$. Given~$l^-$ and $l^+$, we merge all the lengths into a single vector in a certain order, that is, we inductively construct two vectors $k\in \big(\mathbb Z^+\cup \{0\}\big)^{3i-j+1}$ and $k^\pm\in\{1,-1\}^{3i-j+1}$ in the following way. We let $K_0=k_0^+=k_0^-=0$ and for each $1\leq s\leq 3i-j+1$ \begin{itemize} \item if $l^-_{s-1}-l^+_{s-1}> K_{s-1}$ or $l^-_{s-1}-l^+_{s-1}= K_{s-1} < 0$, then \[k_s:=l^+_{k^+_{s-1}+1}, \quad k^\pm_s:=1, \quad k^+_s:=k^+_{s-1}+1, \quad k^-_s:=k^-_{s-1},\] \item otherwise \[k_s:=l^-_{k^-_{s-1}+1}, \quad k^\pm_s:=-1, \quad k^+_s:=k^+_{s-1}, \quad k^-_s:=k^-_{s-1}+1,\] \end{itemize} and we let \[K_s:=K_{s-1}+k_sk^\pm_s. \] Finally we let $k:=(k_1,\dots, k_{3i-j+1})$ and $k^\pm:=(k^\pm_1,\dots, k^\pm_{3i-j+1})$. Note that each element $k_s^\pm$ is $1$ or $-1$, depending on whether $k_s$ was chosen from $l^+$ or $l^-$ respectively; furthermore, the vectors $k$ and $k^\pm$ satisfy the property \begin{equation}\label{eq:kproperty} K_s\geq0 \quad\mbox{iff}\quad k^\pm_s=1\qquad\forall s. \end{equation} Informally, this means that we rearrange the lengths such that every upward step ends at a non-negative height and every downward step ends at a negative height. As $K_{3i-j+1}=\|l^+\|_1-\|l^-\|_1\geq 0$, this implies that $k^\pm_{3i-j+1}=1$, that is, the last element of $k$ is an upward length. Let us further denote the sum of upward and downward lengths in $k$ up to coordinate $s$ by \begin{equation}\begin{aligned} K_s^+&:=k_1 \mathbbm{1}\{k^\pm_1=1\}+\dots+k_s \mathbbm{1}\{k^\pm_s=1\},\\ K_s^-&:=k_1 \mathbbm{1}\{k^\pm_1=-1\}+\dots+k_s \mathbbm{1}\{k^\pm_s=-1\}. \end{aligned}\end{equation} Clearly,~$K_s^+\geq K_{s-1}^+$ and~$K_s^-\geq K_{s-1}^-$ for each $s$. Furthermore,~\eqref{eq:kproperty} implies \begin{equation}\label{eq:kpmbounds} \begin{cases}K_{s-1}^+< K_{s-1}^-\leq K_{s}^+, \quad &\text{if } k^\pm_{s-1}=-1, k^\pm_{s}=1,\\ K_{s-1}^-\leq K_{s-1}^+< K_{s}^-, \quad &\text{if } k^\pm_{s-1}=1, k^\pm_{s}=-1.
\end{cases} \end{equation} Let $\mathcal K$ denote the set of all pairs of vectors $(k, k^\pm)$ such that $k\in \big(\mathbb Z^+\cup \{0\}\big)^{3i-j+1},k^\pm\in\{1,-1\}^{3i-j+1}$ and that satisfy property~\eqref{eq:kproperty}, and let $\mathcal K^\pm$ denote the set of all vectors $k^\pm$ that contain $2i-j$ entries equal to $-1$ and $i+1$ entries equal to $1$ such that $k^\pm_{3i-j+1}=1$. We can then further bound \begin{equation}\label{eq:endproof2} \sum_{(l^+, l^-)\in \mathcal L(\mathcal W)}q^{\|l^+\|_1}\leq \sum_{k^\pm\in\mathcal K^\pm}\sum_{k: (k, k^\pm)\in\mathcal K} q^{K^{+}_{3i-j+1}}. \end{equation} Let us fix for the moment the vector $k^\pm$ and consider the sum \begin{equation}\label{eq:k1k2k3} \sum_{k: (k, k^\pm)\in\mathcal K} q^{K^{+}_{3i-j+1}}=\sum_{k_1\in\mathcal K_1}\dots \sum_{k_{3i-j+1}\in\mathcal K_{3i-j+1}}q^{K^{+}_{3i-j+1}}, \end{equation} where $\mathcal K_s(k_1,\dots, k_{s-1})$ denotes the set of all possible $k_s$'s given the first $s-1$ coordinates of $k$. For any $k^\pm_s=1$, we can estimate \begin{equation} \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{k_{s}\in\mathcal K_s} q^{K^{+}_s}\leq \begin{cases}\begin{aligned} \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{K_{s}^+=K_{s-1}^+}^\infty q^{K^{+}_s}= \frac{1}{1-q}\sum_{k_{s-1}\in\mathcal K_{s-1}}q^{K^{+}_{s-1}} \quad &\mbox{if }k^\pm_{s-1}=1, \\ \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{K_{s}^+=K_{s-1}^-}^\infty q^{K^{+}_s}= \frac{1}{1-q}\sum_{k_{s-1}\in\mathcal K_{s-1}}q^{K^{-}_{s-1}} \quad &\mbox{if }k^\pm_{s-1}=-1, \end{aligned} \end{cases} \end{equation} by a change of variable and using~$K_s^+\geq K_{s-1}^+$ in the first case and~\eqref{eq:kpmbounds} in the second. Similarly, for any $k^\pm_s=-1$, we can estimate \begin{equation} \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{k_{s}\in\mathcal K_s} q^{K^{-}_s}\leq \begin{cases}\begin{aligned} \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{K_{s}^-=K_{s-1}^+}^\infty q^{K^{-}_s}= \frac{1}{1-q}\sum_{k_{s-1}\in\mathcal K_{s-1}}q^{K^{+}_{s-1}} \quad &\mbox{if }k^\pm_{s-1}=1, \\ \sum_{k_{s-1}\in\mathcal K_{s-1}}\sum_{K_{s}^-=K_{s-1}^-}^\infty q^{K^{-}_s}= \frac{1}{1-q}\sum_{k_{s-1}\in\mathcal K_{s-1}}q^{K^{-}_{s-1}} \quad &\mbox{if }k^\pm_{s-1}=-1. \end{aligned} \end{cases} \end{equation} Finally, if a length $k_s$ with $k^\pm_s=-1$ corresponds to a downward segment ending at a source (of which we have $i$ in total), we have $k_s\geq 1$. Then we can bound $K_s^-\geq K_{s-1}^-+1$ if $k^\pm_{s-1}=-1$, and $K_s^-\geq K_{s-1}^++1$ if $k^\pm_{s-1}=1$, as we have a strict inequality in~\eqref{eq:kpmbounds} in this case. Thus these downward segments each contribute an additional factor of $q$. As $k^\pm_{3i-j+1}=1$, we can repeatedly apply these formulas in~\eqref{eq:k1k2k3} for all $s$ to obtain the upper bound $q^i\big(\frac{1}{1-q}\big)^{3i-j+1}$. Observing that $|\mathcal K^\pm|=\binom{3i-j}{i}$ and using~\eqref{eq:endproof2} we can conclude~\eqref{eq:endproof}. \end{Proof} \section{Continuous time}\label{S:cont} \subsection*{Outline} In this section, we consider monotone interacting particle systems with a finite collection $\phi_0, \phi_1,\ldots,\phi_m$ of monotonic maps such that $\phi_0=\phi^0$, $\phi_k\neq \phi^{\text{id}}$ for any $1\leq k\leq m$, and a collection of nonnegative rates $r_0, r_1,\ldots,r_m$, evolving according to~\eqref{traj}. We extend the definition of Toom contours to continuous time, and show how to use them to obtain explicit bounds for certain models.
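As a concrete instance to keep in mind (treated in detail in Subsection~\ref{S:contbounds} below), the sexual contact process on ${\mathbb Z}^d$ fits this framework with $m=1$ and \[ \phi_1=\phi^{\rm coop, d},\qquad r_0=1,\qquad r_1=\lambda, \] that is, each site applies the constant map $\phi^0$ at rate $1$ and the cooperative branching map at rate $\lambda$.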
\subsection{Toom contours in continuous time} Recall Definition~\ref{def:toomgraph} of a Toom graph $(V,\mathcal E)=(V, \vec E_1,\dots,\vec E_\sigma)$ with $\sigma$ charges and the definition of sources, sinks and internal vertices in~\eqref{eq:sourcesinkint}. \textit{Continuous Toom contours} are Toom graphs embedded in space-time $\mathbb Z^d\times \mathbb R$. \begin{defi}\label{def:contembedding} A \emph{continuous embedding} of $(V,{\cal E})$ is a map \begin{equation}\label{psicontin} V\ni v\mapsto\psi(v)=\big(\vec\psi(v),\psi_{d+1}(v)\big)\in{\mathbb Z}^d\times{\mathbb R} \end{equation} that has the following properties: \begin{enumerate} \item either $\displaystyle\psi_{d+1}(w)<\psi_{d+1}(v)$ and $\vec\psi(w)=\vec\psi(v)$, or $\displaystyle\psi_{d+1}(w)=\psi_{d+1}(v)$ and $\vec\psi(w)\neq\vec\psi(v)$ for all $(v,w)\in\vec E$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1\in V_\ast$ and $v_2\in V$ with $v_1\neq v_2$, \item $\psi(v_1)\neq\psi(v_2)$ for each $v_1,v_2\in V_s$ with $v_1\neq v_2$ $(1\leq s\leq\sigma)$, \item $\psi_{d+1}(v_3)\notin\big(\psi_{d+1}(v_2), \psi_{d+1}(v_1)\big)$ for each $(v_1, v_2)\in \vec E_s, \; v_3\in V_s \cup V_\ast$ with $\vec\psi(v_1)=\vec\psi(v_2)=\vec\psi(v_3)$ $(1\leq s\leq\sigma)$. \end{enumerate} \end{defi} \noindent We call $\psi((v, w))=(\psi(v), \psi(w))$ a \textit{vertical segment} if $\displaystyle\psi_{d+1}(w)<\psi_{d+1}(v)$, and a \textit{horizontal segment} if $\displaystyle\psi_{d+1}(w)=\psi_{d+1}(v)$. Then (i) implies that $\psi(\vec E)$ is the union of vertical and horizontal segments. Property (iv) says that an internal vertex of charge $s$ or a sink is not mapped into an interior point of a vertical segment in $\psi(\vec E_s)$ $(1\leq s\leq\sigma)$. Note that, unlike in the discrete time case, this definition of an embedding does not imply $|\vec E_1|=\dots=|\vec E_\sigma|$. \begin{defi}\label{def:conttoomcontour} A \emph{continuous Toom contour} is a quadruple $(V,{\cal E},v_\circ,\psi)$, where $(V,{\cal E})$ is a connected Toom graph, $v_\circ\in V_\circ$ is a specially designated source, and $\psi$ is a continuous embedding of $(V,{\cal E})$ that has the additional property that: \begin{enumerate}\addtocounter{enumi}{4} \item $\psi_{d+1}(v_\circ)>t$ for each $(i,t)\in\psi(V)\backslash\psi(\{v_\circ\})$. \end{enumerate} \end{defi} \noindent We set \be\begin{array}{r@{\,}c@{\,}l} \displaystyle V_\text{vert}&:=&\displaystyle\big\{v\in V: \psi((w, v)) \mbox{ is a vertical segment for some } (w, v)\in\vec E\big\},\\[5pt] \displaystyle V_\text{hor}&:=&\displaystyle\big\{v\in V: \psi((v, w)) \mbox{ is a horizontal segment for some } (v, w)\in\vec E\big\}, \end{array}\ee that is, $V_\text{vert}$ is the set of vertices in $V$ whose images under~$\psi$ are the endpoints of a vertical segment, and $V_\text{hor}$ is the set of vertices in $V$ whose images under~$\psi$ are the starting points of a horizontal segment. We let $\P^{\mathbf r}$ with $\mathbf r=(r_0, \dots, r_m)$ be a probability measure under which we define a family of independent Poisson processes on ${\mathbb R}$: \begin{equation}\label{eq:poi} \mathbf P_{i, k}\text{ for } i \in \mathbb Z^d,\; 0\leq k\leq m, \quad \text{each with rate }r_k. \end{equation} We regard each $\mathbf P_{i, k}$ as a random discrete subset of ${\mathbb R}$. Note that $\P^{\mathbf r}$-a.s. these sets are pairwise disjoint.
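For readers who wish to simulate this graphical representation, the following minimal Python sketch (our own illustration; the function names are ours, and everything is restricted to finitely many sites and a finite time window) generates a realization of the Poisson processes in~\eqref{eq:poi}, using the standard fact that, conditionally on the number of arrivals in $[0,T]$, the arrival times are i.i.d.\ uniform on $[0,T]$: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def poisson_points(rate, T):
    # Arrival times of a rate-`rate` Poisson process on [0, T]:
    # draw the number of arrivals, then place them i.i.d. uniformly.
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, size=n))

def realization(sites, rates, T):
    # One realization (P_{i,k}): a dict mapping (site i, map index k)
    # to its set of arrival times.
    return {(i, k): poisson_points(r, T)
            for i in sites for k, r in enumerate(rates)}

# Example: rates (r_0, r_1) = (1, 5) on the sites {0,...,9}, horizon T = 20.
P = realization(range(10), rates=[1.0, 5.0], T=20.0)
\end{verbatim} Since the arrival times have continuous distributions, the generated sets are almost surely pairwise disjoint, matching the remark above.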
$\mathbf P=\big(\mathbf P_{(i,k)}\big)_{i\in{\mathbb Z}^{d}, 0\leq k\leq m}$ almost surely determines a stationary process $(\overline X_t)_{t\in{\mathbb R}}$ that at each time $t$ is distributed according to the upper invariant law $\overline\nu$. As in the discrete time case, we need a special construction of this process. Let $\mathcal P=\big(\mathcal P_{i, k}\big)_{i\in{\mathbb Z}^d, 0\leq k\leq m}$ denote a realization of the Poisson processes. We will call a point in $\mathcal P_{i, k} \; (i\in {\mathbb Z}^d)$ a \textit{type $k$ arrival point}, and call type 0 arrival points \textit{defective points}. Furthermore, let $\{0,1\}^{{\mathbb Z}^{d}\times {\mathbb R}}$ denote the space of all space-time configurations $x=(x_t(i))_{i\in{\mathbb Z}^{d}, t\in {\mathbb R}}$. For $x\in\{0,1\}^{{\mathbb Z}^{d}\times {\mathbb R}}$ and $t\in{\mathbb R}$, we define $x_t\in\{0,1\}^{{\mathbb Z}^d}$ by $x_t:=(x_t(i))_{i\in{\mathbb Z}^d}$. By definition, a \emph{trajectory} of $\mathcal P$ is a space-time configuration $x$ such that \begin{equation} x_t(i)=\begin{cases} \phi_{k}(\theta_ix_{t-})\qquad &\mbox{if } t\in\mathcal P_{i, k} \mbox{ for some } 0\leq k\leq m, \\ x_{t-}(i)\qquad&\mbox{otherwise} \end{cases} \quad\big((i,t)\in{\mathbb Z}^{d}\times {\mathbb R}\big). \end{equation} We have the following continuous-time equivalents of Lemmas~\ref{L:maxtraj} and~\ref{L:maxup}. \begin{lemma}[Minimal and maximal trajectories] Let $\mathcal P$ be a realization of the Poisson processes defined in~\eqref{eq:poi}. Then there exist trajectories $\underline x$ and $\overline x$ that are uniquely characterised by the property that each trajectory $x$ of $\mathcal P$ satisfies $\underline x\leq x\leq\overline x$ (pointwise). \end{lemma} \begin{lemma}[The lower and upper invariant laws] Let $\phi_0,\ldots,\phi_m$ be monotonic functions, let $r_0,\ldots,r_m$ be nonnegative rates, and let $\underline\nu$ and $\overline\nu$ denote the lower and upper invariant laws of the corresponding monotone interacting particle system. Let $\mathbf P=\big(\mathbf P_{(i,k)}\big)_{i\in{\mathbb Z}^{d}, 0\leq k\leq m}$ be a family of independent Poisson processes, each with rate $r_k$, and let $\underline X$ and $\overline X$ be the minimal and maximal trajectories of $\mathbf P$. Then for each $t\in{\mathbb R}$, the random variables $\underline X_t$ and $\overline X_t$ are distributed according to the laws $\underline\nu$ and $\overline\nu$, respectively. \end{lemma} \noindent We omit the proofs, as they go along the same lines as those of the discrete-time statements. From now on, we fix a realization $\mathcal P$ of the Poisson processes such that the sets $\mathcal P_{i,k}$ are pairwise disjoint. Recall the definition of ${\cal A}(\phi_k)$ in (\ref{Aphi}). We fix an integer $\sigma\geq 2$ and for each $1\leq k\leq m$ and $1\leq s\leq\sigma$ we choose a set \begin{equation}\label{Ascontin} A_s(\phi_k)\in{\cal A}(\phi_k).
\end{equation} \begin{defi}\label{def:conttoomcontourpresent} A continuous Toom contour $(V,{\cal E},v_\circ,\psi)$ with $\sigma$ charges is \emph{present} in the realization of the Poisson processes $\mathcal P=\big(\mathcal P_{i, k}\big)_{i\in{\mathbb Z}^d, 0\leq k\leq m}$ if: \begin{enumerate} \item $\displaystyle\psi_{d+1}(v)\in \mathcal P_{\vec\psi(v), 0}$ if and only if $\displaystyle v\in V_\ast$, \item $\displaystyle\psi_{d+1}(v)\in \cup_{k=1}^m \mathcal P_{\vec\psi(v), k}$ for all $\displaystyle v\in V_\text{hor}\cup (V_\circ\backslash \{v_\circ\})$, \item $\displaystyle\psi_{d+1}(v)\in \mathcal P_{\vec\psi(v), k}$ for some $1\leq k\leq m$ such that $A_s(\phi_k)\neq \{(0, 0)\}$ for all $\displaystyle v\in V_s\cap V_\text{vert}$ $(1\leq s\leq\sigma)$, \item $\mathcal P_{\vec\psi(v), k}\cap \big(\psi_{d+1}(w), \psi_{d+1}(v)\big)=\emptyset$ for all $(v,w)\in\vec E_s$ such that $w\in V_\text{vert}$ and for all $1\leq k\leq m$ such that $(0, 0)\notin A_s(\phi_k)$ $(1\leq s\leq\sigma)$, \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in A_s(\phi_k)$ if $\displaystyle\psi_{d+1}(v)\in \mathcal P_{\vec\psi(v), k}$ for some $1\leq k\leq m$, for all $(v, w)\in \vec E^\ast$ with $v\in V_\text{hor}$ $(1\leq s\leq\sigma)$, \item $\displaystyle\vec\psi(w)-\vec\psi(v)\in\bigcup_{s=1}^\sigma A_s(\phi_{k})$ if $\displaystyle\psi_{d+1}(v)\in \mathcal P_{\vec\psi(v), k}$ for some $1\leq k\leq m$, for all $(v, w)\in \vec E^\circ$, \end{enumerate} where $\vec E^\circ$ and $\vec E^\ast$ are defined in~\eqref{Ecirc}. \end{defi} \noindent Condition (i) says that sinks and only sinks are mapped to defective points. Together with condition (iv) of Definition~\ref{def:contembedding} of a continuous embedding this implies that we cannot encounter any defective point along a vertical segment of the contour. Condition (ii) says that vertices in $V_\text{hor}$ and sources (except for the root) are mapped to type $k$ arrival points with $1\leq k\leq m$. As the other endpoint of a horizontal segment is not an arrival point, the subsequent segment must be vertical; furthermore, together with (i) this implies that there cannot be a defective point at either end of a horizontal segment. Condition (iii) says that internal vertices with charge~$s$ in $V_\text{vert}$ are mapped to type $k$ arrival points with $A_s(\phi_k)\neq \{(0,0)\}$. Condition (iv) says that we can only encounter type $k$ arrival points with $(0,0)\in A_s(\phi_k)$ along a vertical segment in $\psi(\vec E_s)$ $(1\leq s\leq\sigma)$. Condition~(v) says that if $\psi((v,w))$ is a horizontal segment such that $v$ is an internal vertex with charge $s$ or the root, and $v$ is mapped into a type $k$ arrival point ($1\leq k\leq m$), then $(v,w)$ is mapped to a pair of space-time points of the form $\big((i,t),(i+j,t)\big)$ with $j\in A_s(\phi_k)$. Condition~(vi) is similar, except that if $v$ is a source different from the root, then we only require that $j\in\bigcup_{s=1}^\sigma A_s(\phi_{k})$. Again, we can strengthen this definition for the $\sigma=2$ case.
\begin{defi}\label{def:conttoomcontourstrongpresent} A continuous Toom contour $(V,{\cal E},v_\circ,\psi)$ with $2$ charges is \emph{strongly present} in the realization of the Poisson processes $\mathcal P=\big(\mathcal P_{i, k}\big)_{i\in{\mathbb Z}^d, 0\leq k\leq m}$ if in addition to conditions (i)--(vi) of Definition~\ref{def:conttoomcontourpresent}, for each $v\in V_\circ\backslash\{v_\circ\}$ and $w_1,w_2\in V$ with $(v,w_s)\in\vec E_{s,{\rm out}}(v)$ $(s=1,2)$, one has: \begin{enumerate}\addtocounter{enumi}{6} \item $\displaystyle\vec\psi(w_i)-\vec\psi(v)\in A_{3-i}(\phi_k)$ if $\displaystyle\psi_{d+1}(v)\in \mathcal P_{\vec\psi(v), k}$ for some $1\leq k\leq m$ $(i=1, 2)$, \item $\vec\psi(w_1)\neq\vec\psi(w_2)$. \end{enumerate} \end{defi} Our aim is to show that $\overline x_0(0)=0$ implies the existence of a continuous Toom contour rooted at $(0,0)$ present in $\mathcal P$. To that end, we define ``connected components'' of space-time points in state 0, which will play the role of explanation graphs in continuous time. We first define oriented paths on the space-time picture of the process. For each~$t\in \mathcal P_{i, k}$ $(i\in{\mathbb Z}^d, 1\leq k\leq m)$ such that $\overline x_t(i)=0$ place an \textit{arrow} (an oriented edge) pointing from~$(i, t)$ to each~$(j, t)$ with $j-i\in A_s(\phi_k)$ such that $\overline x_t(j)=0$ ($1\leq s\leq\sigma$). It is easy to see that we place at least one arrow pointing into each set~$i+A_s(\phi_k)$, otherwise site $i$ would flip to state 1 at time $t$. Furthermore, for each~$t\in\mathcal P_{i, 0}$ place a \textit{death mark} at~$(i, t)$. A \textit{path} moves in the decreasing time direction without passing through death marks and possibly jumping along arrows in the direction of the arrow. More precisely, it is a function $\gamma: [t_1,t_2] \to \mathbb Z^d$ which is left continuous with right limits and satisfies, for all $t \in (t_1, t_2)$, \[ \begin{aligned} t &\notin \mathcal P_{\gamma(t), 0} \qquad \text{ and }\\ \gamma(t) &\neq \gamma(t+) \text{ implies } t \in \mathcal P_{\gamma(t),k}, \gamma(t+)-\gamma(t)\in A_s(\phi_k)\text{ and } \overline x_t(\gamma(t+))=0\\ & \qquad\text{ for some } 1\leq k\leq m, \; 1\leq s\leq\sigma. \end{aligned} \] We say that two points $(i, t), (j, s)$ with $t > s$ are connected by a path if there exists a path $\gamma: [s, t] \to\mathbb Z^d$ with $\gamma(t) = i$ and $\gamma(s) = j$. Define \begin{equation}\label{eq:gamma} \Gamma_{(i, t)}:=\{(j, s): (i, t) \mbox{ and }(j,s) \mbox{ are connected by a path} \} \end{equation} and $\Gamma^T_{(i, t)}:=\Gamma_{(i, t)}\cap \big({\mathbb Z}^d\times [t-T, t]\big)$. If $\overline x_{0}(0)=0$, then by the definition of the paths and arrows we have $\overline x_s(j)=0$ for all $(j, s)\in \Gamma_{(0,0)}$. \begin{theorem}[Presence of a continuous Toom contour]\label{T:contcontour} Let $\phi_0, \dots, \phi_m$ be monotonic functions where $\phi_0=\phi^0$ is the constant map that always gives the outcome zero, and let $r_0, \dots, r_m$ be nonnegative rates. Let $\mathcal P$ be a realization of the Poisson processes defined in~\eqref{eq:poi}, and denote its maximal trajectory by~$\overline x$. Let $\sigma\geq 2$ be an integer and for each $1\leq s\leq\sigma$ and $1\leq k\leq m$, let $A_s(\phi_k)\in{\cal A}(\phi_k)$ be fixed.
Then, if $\Gamma^T_{(0,0)}$ is bounded for all $T>0$, $\overline x_0(0)=0$ implies that with respect to the given choice of $\sigma$ and the sets $A_s(\phi_k)$, there is a continuous Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in $\mathcal P$ for $\sigma\geq 2$, and strongly present in $\mathcal P$ for $\sigma=2$. \end{theorem} The monotone interacting particle systems we consider here have the property that $\Gamma^T_{(0,0)}$ is bounded for all $T>0$ (see for example Chapter 4 of the lecture notes~\cite{Swart17}), if \begin{equation}\begin{aligned} &\sum_{k=0}^m r_k<\infty, \\ &\sum_{k=0}^m r_k \left(\Big| \bigcup_{A\in\mathcal A(\phi_k)} A\Big| -1\right)<\infty. \end{aligned} \end{equation} \begin{Proof} As~$\Gamma^T_{(0,0)}$ is bounded, the set~$\Gamma^T_{(0, 0)}\cap \big(\bigcup_{i\in\mathbb Z^d,\, 0\leq k\leq m} \{i\}\times \mathcal P_{i, k}\big)$ is finite for all $T>0$, therefore we can order the arrival points in~$\Gamma_{(0,0)}$ by decreasing time. Denote by~$(i_l, t_l)$ its elements with $0 \geq t_1 > t_2 >\dots$, and let $t_0:=0$. We define a monotonic flow ${\bm{\phh}}$ in ${\mathbb Z}^{d+1}$ as follows. For all $(i, t)\in{\mathbb Z}^{d+1}$ we let \begin{equation}\label{eq:discretemaps} \varphi_{(i,t)}:=\left\{\begin{array}{ll} \phi_k\quad&\mbox{if } (i, t)=(i_l, -2l) \mbox{ for some } t_l\in \mathcal P_{i_l, k} \quad(0\leq k\leq m),\\[5pt] \phi^{\text{id}}&\mbox{otherwise,} \end{array}\right. \end{equation} where~$\phi^{\text{id}}$ is the identity map defined in \eqref{phiid}. Denoting by $\overline x'$ the maximal trajectory of this monotonic flow, it is easy to see that~$\overline x'_0(0)=0$; thus Theorem \ref{T:contour} implies the existence of a Toom contour $(V', \mathcal E', v'_\circ, \psi')$ rooted at $(0, 0)$ present in ${\bm{\phh}}$ with respect to the given choice of $\sigma$ and the sets $A_s(\phi_k)$. We use this discrete-time contour to define the continuous-time one. For all $v\in V'$ such that $\psi'(v)=(i, -l)$ we let \begin{equation}\label{eq:contpsi} \psi(v):=\left\{\begin{array}{ll} \psi(w_1) &\mbox{if } \exists w_1, w_2: (w_1, v), (v, w_2)\in \vec E' \mbox{ and } \vec\psi'(w_1)=\vec\psi'(w_2)= \vec\psi'(v),\\[5pt] (i, t_{\lceil l/2\rceil})\quad&\mbox{otherwise.} \end{array}\right. \end{equation} Recall that for $v, w\in V'$, we write $v\leadsto_{\vec E'}w$ when we can reach $w$ from $v$ through directed edges of $\vec E'$. We define \begin{equation} \mathcal W(v):=\{w\in V': v\leadsto_{\vec E'}w \mbox{ and } \psi(w)=\psi(v)\} \quad \forall v\in V'. \end{equation} Note that $\mathcal W(v)=\{v\}$ for all $v\in V'_\ast$. Set $V:=\cup_{s=1}^\sigma V_s\cup V_\circ\cup V_\ast$ with \begin{equation}\label{eq:conttoomgraph} \begin{array}{l} V_\circ:=\{\mathcal W(v): v\in V'_\circ\},\\[5pt] V_\ast:=\{\mathcal W(v): v\in V'_\ast\},\\[5pt] V_s:=\{\mathcal W(v): v\in V'_s\setminus \cup_{w\in V'_\circ}\mathcal W(w)\} \quad(1\leq s\leq\sigma). \end{array} \end{equation} For all $W\in V$ we let $\psi(W):=\psi(w)$ for some $w\in W$. We further define \begin{equation} \vec E_s:=\{(W_1, W_2)\in V\times V:\exists w_i\in W_i \mbox{ such that }(w_1, w_2)\in \vec E'_s\} \quad (1\leq s\leq \sigma). \end{equation} Letting $v_\circ$ be the element of $V_\circ$ containing $v'_\circ$, we claim that $(V, \mathcal E, v_\circ, \psi)$ is a continuous Toom contour rooted at $(0, 0)$ present in $\mathcal P$ for $\sigma\geq 2$, and strongly present in $\mathcal P$ for $\sigma= 2$. (See Figure~\ref{fig:contcontour} for an example of the construction.) \begin{figure}[htb!]
\begin{center} \includegraphics[height=12cm]{figures/contcontour2.pdf} \caption{Top left: A realization of $\mathbf{P}$ that applies the maps $\phi^0$ and $\phi^{\rm coop}$ with rates $r_0$ and $r_1$ respectively. The points marked with a star are defective, ensuring that the origin (0,0,0) is in state 0. The connected component $\Gamma_{(0,0,0)}$ of the origin is marked in black. Right: The monotone cellular automaton ${\bm{\phh}}$ defined in~\eqref{eq:discretemaps} and the corresponding Toom contour rooted at (0,0,0). The sites marked with a star and an open dot apply $\phi^0$ and $\phi^{\rm coop}$ respectively; every other site applies the identity map. The origin is in state zero. Middle: The Toom graph corresponding to the Toom contour on the right. The green sets correspond to the vertices of the Toom graph of the continuous contour, defined in~\eqref{eq:conttoomgraph}. Bottom left: The Toom contour corresponding to the realization of $\mathbf{P}$ on the top left.} \label{fig:contcontour} \end{center} \end{figure} Let us start with some simple observations. By definition, in ${\bm{\phh}}$ at each height $-2l \; (1\leq l\leq n)$ there is exactly one site $(i, -2l)$ such that $\varphi_{(i, -2l)}\neq \phi^{\text{id}}$; every other site of $\mathbb Z^{d+1}$ applies the identity map. By the construction of the Toom contour a site with the identity map cannot be the image of a source, furthermore any edge in $\psi'(\vec E')$ starting at such a site is vertical. Any edge starting at a site with $\phi_k \; (1\leq k\leq m)$ has the form $\big((i, t), (j, t-1)\big)$ for some $t\in 2\mathbb Z$, $i, j\in\mathbb{Z}^d$. We call such an edge \emph{diagonal} if $i\neq j$. Thus, $\psi'(\vec E')$ is the union of vertical and diagonal edges, such that each diagonal edge points from an even height to an odd height. Furthermore, as $\varphi_{\psi'(v)}=\phi^0$ for all $v\in V'_\ast$, each sink is mapped to a space-time point with even height. Together with the defining properties of an embedding in Definition~\ref{def:embedding} these observations imply that \begin{equation}\label{eq:psiprimeadditional} (j, t)\notin\psi'(V'_s\cup V'_\ast) \mbox{ for each }\big((i, t), (j, t-1)\big)\in\psi'(\vec E'_s) \mbox{ with }i\neq j\quad (1\leq s\leq\sigma). \end{equation} As $\varphi_{(i, t)}\neq \phi^{\text{id}}$, we must have $\varphi_{(j, t)} = \phi^{\text{id}}$; furthermore, we have the identity map at every site at height $t-1$. As $\varphi_{(j,t)}\neq\phi^0$ and each sink is mapped to a site where $\phi^0$ is applied, clearly $(j, t)\notin\psi'(V'_\ast)$. Assume that $(j, t)=\psi'(v)$ for some $v\in V'_s$. Then there is a $w\in V'_s$ such that $(v, w)\in\vec E'$ and $\psi'((v, w))$ is vertical. This means that $\psi'(w)=(j, t-1)$, that is, a type $s$ vertex overlaps with another type $s$ vertex, contradicting property (iii) of Definition~\ref{def:embedding}. Let us now examine the image of $(V', \mathcal E')$ under $\psi$. By definition, for each $(v, w)\in \vec E'$ such that $\psi'((v, w))$ is diagonal we have $\psi_{d+1}(v)=\psi_{d+1}(w)$. Furthermore, $\vec\psi(v)=\vec\psi'(v)$ for all $v\in V'$, implying that $\psi(\vec E')$ is the union of horizontal and vertical segments. Observe that for any sequence of vertices $v_1, \dots, v_n\in V'_s \;(1\leq s\leq \sigma)$ such that $\psi'((v_i, v_{i+1}))$ is vertical for each $1\leq i\leq n-1$, the embedding $\psi$ maps $v_2,\dots, v_{n-1}$ to $\psi(v_1)$. Thus the starting points of vertical edges in $\psi'(\vec E')$ are eventually mapped into the endpoints of horizontal segments or sources under $\psi$.
From the definition of $V$ in~\eqref{eq:conttoomgraph} it is easy to see that, with the convention that $\psi((v,w))=(\psi(v),\psi(w))=\emptyset$ if $\psi(v)=\psi(w)$, we have \begin{equation}\label{eq:psipsiprime} \begin{array}{l} \psi(V_\circ)= \psi(V'_\circ),\quad \psi(v_\circ)= \psi(v'_\circ), \quad \psi(V_\ast)= \psi(V'_\ast),\\[5pt] \psi(V_s)= \psi(V'_s)\setminus \psi(V'_\circ), \quad \psi(\vec E_s)=\psi(\vec E'_s), \quad (1\leq s\leq\sigma).\\[5pt] \end{array} \end{equation} For any $(v, w)\in \vec E'$ such that $\psi'((v, w))$ is diagonal or $v\in V'_\circ$ we have $\varphi_{\psi'(v)}= \phi_k$ for some $1\leq k\leq m$, thus $\psi_{d+1}(v)$ is an arrival point of $\mathcal P_{\vec\psi(v), k}$. Finally, for each $v\in V'_\ast$ we have $\varphi_{\psi'(v)}= \phi_0$, thus $\psi_{d+1}(v)$ is a defective point. We are now ready to show that $(V, \mathcal E, v_\circ, \psi)$ is a continuous Toom contour rooted at $(0, 0)$. As $(V', \mathcal E')$ is a Toom graph, it is straightforward to check that $(V, \mathcal E)$ is a Toom graph as well. We have already seen that $\psi$ satisfies condition (i) of Definition~\ref{def:contembedding} of a continuous embedding. As $\psi'$ satisfies Definition~\ref{def:embedding}, its properties (ii) and (iii) together with \eqref{eq:contpsi} and~\eqref{eq:psiprimeadditional} easily yield conditions (ii) and (iii). Finally, assume that (iv) does not hold. Then by~\eqref{eq:conttoomgraph} there exist $v_1 \in V'_s\cup V'_\circ, v_2 \in V'_s, v_3\in V'_s\cup V'_\ast$ such that $\vec\psi'(v_1)=\vec\psi'(v_2)=\vec\psi'(v_3)$ and $\psi'_{d+1}(v_2)<\psi'_{d+1}(v_3)<\psi'_{d+1}(v_1)$ with $\psi'_{d+1}(v_i)\in \mathbb Z$ for each $i=1,2,3$. As there is a type $s$ charge travelling through $v_1$ and $v_2$ in $(V', \mathcal E')$ and the difference between the time coordinates of $\psi'$ of two consecutive vertices of a charge is 1, there must be a $w\in V'_s$ such that $\psi'(w)=\psi'(v_3)$, that is, a sink or an internal vertex of type $s$ overlaps with another internal vertex of type $s$. This contradicts conditions (ii) and (iii) of Definition~\ref{def:embedding}, therefore condition (iv) must hold. By Definition~\ref{def:toomcontour} and the definition of $\psi$ we have $\psi_{d+1}(v)\leq 0$ for all $v\in V'$ (hence for all $v\in V$ as well), and $\psi(v'_\circ)=\psi(v_\circ)=(0,0)$. By~\eqref{eq:conttoomgraph} any vertex $v\in V'$ such that $\psi(v)=(0,0)$ is contained in some $W\in V_\circ$, thus $(V, \mathcal E, v_\circ, \psi)$ satisfies the defining property of Definition \ref{def:conttoomcontour} of a continuous Toom contour rooted at $v_\circ$. We are left to show that this contour is (strongly) present in $\mathcal P$. As $(V', \mathcal E', v'_\circ, \psi')$ is a Toom contour rooted at $(0, 0)$ present in ${\bm{\phh}}$, it satisfies Definition~\ref{D:present}. We now check the conditions of Definition~\ref{def:conttoomcontourpresent}. We have already seen that conditions (i) and (ii) hold. Condition (iii) says that internal vertices with charge~$s$ in $V_\text{vert}$ are mapped to type $k$ arrival points with $A_s(\phi_k)\neq \{(0,0)\}$. As, for all $w\in V'$ such that $\psi'(w)$ is the starting point of a vertical edge in $\psi'(\vec E')$, $\psi(w)$ is the endpoint of a horizontal segment or the image of a source, we have that indeed $\varphi_{\psi'(v)}$ cannot be the identity map or a map $\phi_k$ with $A_s(\phi_k)= \{(0,0)\}$ for any $v\in V_\text{vert}\cap V_s$.
Condition (iv) says that we can only encounter type $k$ arrival points with $(0,0)\in A_s(\phi_k)$ along a vertical segment in $\psi(\vec E_s)$ $(1\leq s\leq\sigma)$. If $(0,0)\notin A_s(\phi_k)$ for an arrival point along the image of a charge $s$, then by the construction of the discrete contour the charge is diverted at this point in a horizontal direction, so it is necessarily the endpoint of that vertical segment. Finally, conditions (v) and (vi) are immediate from conditions (iii) and (iv) of Definition~\ref{D:present}. Since moreover $(V', \mathcal E', v'_\circ, \psi')$ satisfies Definition~\ref{D:strongpres} for $\sigma=2$, the defining properties of Definition~\ref{def:conttoomcontourstrongpresent} hold for $(V, \mathcal E, v_\circ, \psi)$. \end{Proof} \begin{remark}\label{rem:contcontour} We have observed before that, in the image under $\psi$ of a type $s$ charge ($1\leq s\leq \sigma$), horizontal segments are always followed by vertical segments. The construction of the continuous Toom contour described above also ensures that vertical segments either end at a defective point, or are followed by a horizontal segment. Thus, starting from the image of the source, we have an alternating sequence of horizontal and vertical edges ending with a vertical edge at the image of the sink. Furthermore, if $(0,0)\notin \cup_{k=0}^m \mathcal P_{0, k}$, then $\varphi_{(0,0)}=\phi^{\rm id}$, so the image $\psi'((v_\circ, w))$ of every edge $(v_\circ, w)\in\vec E'$ is vertical. Equation~\eqref{eq:contpsi} then implies that every segment in the continuous contour starting at $\psi(v_\circ)$ is also vertical. \end{remark} \subsection{Explicit bounds}\label{S:contbounds} \noindent \textbf{Sexual contact process on $\mathbb Z^d \; (d\geq 1)$.} Recall from Subsection~\ref{S:contfirst} that we define $A_1:=\{0\}$ and $A_2:=\{e_1,\dots, e_d\}$ and we have \begin{equation} {\cal A}(\phi^{\rm coop, d})=\big\{A_1,A_2\big\}. \end{equation} We set $\sigma:=|{\cal A}(\phi^{\rm coop, d})|=2$, and for the sets $A_s(\phi_k)$ in (\ref{As}) we make the choices \begin{equation} A_1(\phi^{\rm coop,d}):=A_1,\quad A_2(\phi^{\rm coop, d}):=A_2, \end{equation} that is, we have $A_s(\phi_1)\neq A_1$ only for $s=2$. Let $\mathbf P=\big(\mathbf P_{(i,k)}\big)_{i\in{\mathbb Z}^{d}, k=0,1}$ be a family of independent Poisson processes such that for each $i$, $\mathbf P_{(i,0)}$ has rate 1 and $\mathbf P_{(i,1)}$ has rate $\lambda$. In line with the terminology used for contact processes, we will call type 0 arrival points \emph{death marks} and type 1 arrival points \emph{birth marks}. Then Theorem~\ref{T:contcontour} implies the Peierls bound: \begin{equation}\label{eq:Peicont} 1-\overline\rho=\P[\overline X_0(0)=0] \leq\P\big[\mbox{a Toom contour rooted at 0 is strongly present in }\mathbf P\big]. \end{equation} In what follows, we give an upper bound on this probability.
Definitions~\ref{def:conttoomcontourpresent} and \ref{def:conttoomcontourstrongpresent} imply that a continuous Toom contour is strongly present in $\mathbf P$ if and only if the following conditions are satisfied: \begin{itemize} \item[{\rm(i)}] $\psi(v)$ is a death mark for all $\displaystyle v\in V_\ast$, \item[{\rm(ii)}] $\psi(v)$ is a birth mark for all $\displaystyle v\in V_{\text{hor}}$, \item[{\rm(iii)}] There are no death marks along vertical segments of $\psi(\vec E)$, \item[{\rm(iv)}] There are no birth marks along vertical segments of $\psi(\vec E_2)$, \item[{\rm(v)}] $v\in V_2\cup V_\circ$ for all $\displaystyle v\in V_{\text{hor}}$, \item[{\rm(vi)}] Horizontal and vertical segments alternate along each path between a source and a sink, \item[{\rm(viia)}] If $(v, w)\in \vec E^\circ_1$, then $\psi\big((v, w)\big)$ is a vertical segment, \item[{\rm(viib)}] If $(v, w)\in \vec E^\circ_2$, then $\psi\big((v, w)\big)$ is a horizontal segment, \item[{\rm(viii)}] If $(v, w)\in \vec E$ with $w\in V_\ast$, then $\psi\big((v, w)\big)$ is a vertical segment, \end{itemize} where $\vec E^\circ_i$ is defined in~\eqref{Ecirc}. As horizontal segments cannot start at the image of a type 1 internal vertex and they alternate with vertical segments along each path between a source and a sink, the image of a type 1 charge starting at a source and ending at a sink is either a single vertical segment (that is, there is no internal vertex along the path), or a horizontal segment followed by a vertical segment (that is, there is exactly one internal vertex along the path). Furthermore, by Remark~\ref{rem:contcontour}, $\mathbb P^{(1,\lambda)}$-a.s. the type 1 path starting at $v_\circ$ consists of a single vertical segment. We can now argue similarly as in the discrete time case in Section~\ref{S:intbd}. If we reverse the direction of the edges of charge 2, then the Toom graph becomes a directed cycle with edge set $\vec E_1 \cup\cev E_2$. We then call vertical segments in $\psi(\vec E_1)$ \textit{upward} and in $\psi(\cev E_2)$ \textit{downward}, and horizontal segments in $\psi(\vec E_1)$ \textit{outward} and in $\psi(\cev E_2)$ \textit{inward}. As $|A_2|=d$ we distinguish $d$ types of outward and inward segments: we say that~$\psi\big((v, w)\big)$ is of type $i$ if $|\vec\psi(w)-\vec\psi(v)|=e_i$. If $(V, {\cal E}, v_\circ, \psi)$ is a continuous Toom contour rooted at 0 that is strongly present in $\mathbf P$, then we can fully specify $\psi$ by saying for each $(v, w)\in \vec E_1\cup\cev E_2$ whether $\psi\big((v, w)\big)$ is an upward, a downward, an outward or an inward segment, together with its length in the former two and its type in the latter two cases. In other words, we can represent the contour by a word consisting of letters from the alphabet $\{o_1,\dots,o_d,U, D, i_1,\dots, i_d\}$, which represents the different kinds of steps the cycle can take, and a vector $l$ that contains the lengths of the vertical segments along the cycle in the order we encounter them. The resulting word must satisfy the following rules: \begin{itemize} \item The first step is an upward segment. \item Each outward segment must be immediately preceded by a downward segment and followed by an upward segment. \item Between two occurrences of the string $Do_\cdot U$, and also before the first and after the last occurrence, we see a (possibly empty) sequence of strings $Di_\cdot$. \item The last step is a downward segment.
\end{itemize} Notice that the structure of a possible word is exactly the same as in~\eqref{eq:discreteword}. The contour in the bottom left of Figure~\ref{fig:contcontour} is then described by the following word: \begin{equation} \accentset{\circ}{|} U \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_2 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{\circ}{|} o_1 \accentset{1}{|} U} \accentset{\ast}{|} \underbrace{D\accentset{2}{|} i_2} \accentset{2}{|} \underbrace{D\accentset{2}{|} i_1} \accentset{2}{|} D\accentset{\circ}{|}. \end{equation} For any continuous Toom contour $T$ denote by $W(T)$ the corresponding word and by $\mathbf W$ the set of all possible words satisfying these rules. We can then bound \begin{equation}\begin{aligned} \P[\overline X_0(0)=0]&\\ \leq\sum_{W\in\mathbf W} \P\big[&\mbox{a Toom contour } T \mbox{ with } W(T)=W \mbox{ rooted at 0 is strongly present in }\mathbf P\big].\end{aligned} \end{equation} From this point on, we can count the number of possible words and assign probabilities to each, following the same line of thought (adapted to continuous time) as in Section~\ref{S:intbd} for the discrete-time monotone cellular automaton that applies the cooperative branching and the identity map. We then recover the following Peierls bound: \begin{equation} \P[\overline X_0(0)=0] \leq\frac {1} {1+\lambda}\sum_{i=0}^\infty\left(\frac {16d\lambda\big((2d+1)\lambda+1\big)} {(\lambda+1)^3} \right)^{i}. \end{equation} The argument is similar to that of \cite[Lemmas~8 and 9]{Gra99}. Presenting it would be long and technical, but not particularly challenging, so we omit it. As we have mentioned earlier, we can think of this process as the limit of the random cellular automaton with time steps of size $\varepsilon$ where the maps $\phi^0,\phi^{\rm coop, d}$ and $\phi^{\rm id}$ are applied with probabilities $\varepsilon$, $\varepsilon \lambda$, and $1-\varepsilon(1+\lambda)$, respectively. Observe that we recover the exact same Peierls bound by substituting $p=\varepsilon$, $r=\varepsilon \lambda$, and $q=1-\varepsilon(1+\lambda)$ into~\eqref{Prq} and letting $\varepsilon\to 0$: with this substitution the prefactor in~\eqref{Prq} equals $\frac{1}{1+\lambda}$ and the ratio of the geometric series equals $\frac{16d\lambda((2d+1)\lambda+1)}{(1+\lambda)^3}$, independently of $\varepsilon$. In particular for $d=1$ we obtain the bound \begin{equation} \lambda_c (1)\leq 49.3242\dots, \end{equation} and for $d=2$ the bound \begin{equation} \lambda_c (2)\leq 161.1985\dots . \end{equation} \section{Minimal explanations}\label{S:expla} \subsection*{Outline} Our proof of Theorem~\ref{T:contour} started with Lemma~\ref{L:explan}, which shows that if $\overline x_0(0)=0$, then there is an explanation graph present in ${\bm{\phh}}$, in the sense of Definitions \ref{def:finiteexpl} and \ref{def:finexpres}. In this section, we explain how explanation graphs, whose definition looks somewhat complicated at first sight, naturally arise from a more elementary concept, which we will call a \emph{minimal explanation}. Our definition of a minimal explanation will be similar to, though different from, the definition of John Preskill \cite{Pre07}. We introduce minimal explanations in Subsection~\ref{S:finexpl} and then discuss their relation to explanation graphs in Subsection~\ref{S:exgr}.
\subsection{Finite explanations}\label{S:finexpl} For each monotonic map $\phi:\{0,1\}^{{\mathbb Z}^d}\to\{0,1\}$, we define \be\begin{array}{r@{\,}c@{\,}l} \displaystyle{\cal A}^\uparrow(\phi)&:=&\displaystyle\big\{A\subset{\mathbb Z}^d:\phi(1_A)=1\big\},\\[5pt] \displaystyle{\cal Z}^\uparrow(\phi)&:=&\displaystyle\big\{Z\subset{\mathbb Z}^d:\phi(1-1_Z)=0\big\}, \end{array}\ee where $1_A$ denotes the indicator function of $A$ and hence $1-1_Z$ is the configuration that is zero on $Z$ and one elsewhere. Clearly, ${\cal A}^\uparrow(\phi)$ is an increasing set in the sense that ${\cal A}^\uparrow(\phi)\ni A\subset A'$ implies $A'\in{\cal A}^\uparrow(\phi)$. Likewise ${\cal Z}^\uparrow(\phi)$ is increasing. We say that an element $A\in{\cal A}^\uparrow(\phi)$ is \emph{minimal} if $A,A'\in{\cal A}^\uparrow(\phi)$ and $A'\subset A$ imply $A'=A$. We define minimal elements of ${\cal Z}^\uparrow(\phi)$ in the same way and set \begin{equation}\label{upmin} {\cal A}(\phi):=\big\{A\in{\cal A}^\uparrow(\phi):A\mbox{ is minimal}\big\} \quad\mbox{and}\quad {\cal Z}(\phi):=\big\{Z\in{\cal Z}^\uparrow(\phi):Z\mbox{ is minimal}\big\}. \end{equation} Since monotonic maps are local (i.e., depend only on finitely many coordinates), it is not hard to see that \be\begin{array}{r@{\,}c@{\,}l}\label{minup} \displaystyle{\cal A}^\uparrow(\phi)&=&\displaystyle\big\{A\subset{\mathbb Z}^d:A\supset A'\mbox{ for some }A'\in{\cal A}(\phi)\big\},\\[5pt] \displaystyle{\cal Z}^\uparrow(\phi)&=&\displaystyle\big\{Z\subset{\mathbb Z}^d:Z\supset Z'\mbox{ for some }Z'\in{\cal Z}(\phi)\big\}. \end{array}\ee It follows that \begin{equation} \phi(x)=\bigvee_{A\in{\cal A}(\phi)}\bigwedge_{i\in A}x(i)=\bigwedge_{Z\in{\cal Z}(\phi)}\bigvee_{i\in Z}x(i). \end{equation} In particular, our present definition of ${\cal A}(\phi)$ coincides with the one given in (\ref{Aphi}). We note that ${\cal A}(\phi^0)=\emptyset$ and ${\cal A}(\phi^1)=\{\emptyset\}$, and similarly ${\cal Z}(\phi^0)=\{\emptyset\}$ and ${\cal Z}(\phi^1)=\emptyset$. One has \begin{equation} A\in{\cal A}^\uparrow(\phi)\quad\mbox{if and only if}\quad A\cap Z\neq\emptyset\quad\forall Z\in{\cal Z}^\uparrow(\phi), \end{equation} and by (\ref{minup}) the same is true with ${\cal Z}^\uparrow(\phi)$ replaced by ${\cal Z}(\phi)$. Similarly, \begin{equation}\label{Zchar} Z\in{\cal Z}^\uparrow(\phi)\quad\mbox{if and only if}\quad Z\cap A\neq\emptyset\quad\forall A\in{\cal A}(\phi). \end{equation} For monotonic maps $\phi$ and $\phi'$ defined on $\{0,1\}^{{\mathbb Z}^d}$, we write $\phi\leq\phi'$ if $\phi(x)\leq\phi'(x)$ for all $x\in\{0,1\}^{{\mathbb Z}^d}$. Moreover, we write \begin{equation} \phi\prec\phi'\quad\mbox{if and only if}\quad{\cal Z}(\phi)\subset{\cal Z}(\phi'). \end{equation} Note that $\phi\prec\phi'$ implies that $\phi\geq\phi'$. For monotonic flows ${\bm{\phh}}$ and ${\bm{\psi}}$, we write ${\bm{\phh}}\leq{\bm{\psi}}$ (resp.\ ${\bm{\phh}}\prec{\bm{\psi}}$) if $\varphi_{(i,t)}\leq\psi_{(i,t)}$ (resp.\ $\varphi_{(i,t)}\prec\psi_{(i,t)}$) for all $(i,t)\in{\mathbb Z}^{d+1}$. We let $\overline x^{\bm{\phh}}$ denote the maximal trajectory of a monotonic flow ${\bm{\phh}}$. By definition, a \emph{finite explanation} for $(0,0)$ is a monotonic flow ${\bm{\psi}}$ such that: \begin{enumerate} \item $\overline x_0^{\bm{\psi}}(0)=0$, \item $\psi_{(i,t)}\neq\phi^1$ for finitely many $(i,t)\in{\mathbb Z}^{d+1}$.
\end{enumerate} By definition, a \emph{minimal explanation} for $(0,0)$ is a finite explanation ${\bm{\psi}}$ that is minimal with respect to the partial order $\prec$, i.e., ${\bm{\psi}}$ has the property that if ${\bm{\psi}}'$ is a finite explanation for $(0,0)$ such that ${\bm{\psi}}'\prec{\bm{\psi}}$, then ${\bm{\psi}}'={\bm{\psi}}$. \begin{lemma}[Existence of a minimal explanation] Let\label{L:minexist} ${\bm{\phh}}$ be a monotonic flow. Then $\overline x_0^{\bm{\phh}}(0)=0$ if and only if there exists a minimal explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. \end{lemma} \begin{Proof} Assume that there exists a minimal explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. Then ${\bm{\psi}}\geq{\bm{\phh}}$ and hence $0=\overline x^{\bm{\psi}}_0(0)\geq\overline x^{\bm{\phh}}_0(0)$. To complete the proof, we must show that conversely, $\overline x_0^{\bm{\phh}}(0)=0$ implies the existence of a minimal explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. We first prove the existence of a finite explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. For each $s\in{\mathbb Z}$, we define $x^s$ as in (\ref{maxs}). Then (\ref{maxconv}) implies that $x^{-n}_0(0)=0$ for some $0\leq n<\infty$. For each $(i,t)\in{\mathbb Z}^{d+1}$, let \begin{equation} U(i,t):=\big\{(j,t-1):j\in A\mbox{ for some }A\in{\cal A}(\varphi_{(i,t)})\big\} \end{equation} denote the set of ``ancestors'' of $(i,t)$. For any $Z\subset{\mathbb Z}^{d+1}$, we set $U(Z):=\bigcup_{z\in Z}U(z)$ and we define inductively $U^0(Z):=Z$ and $U^{k+1}(Z):=U(U^k(Z))$ $(k\geq 0)$. Then $\bigcup_{k=0}^nU^k(0,0)$ is a finite set. Since $x^{-n}_0(0)=0$, it follows that setting \begin{equation} \psi_{(i,t)}:=\left\{\begin{array}{ll} \displaystyle\varphi_{(i,t)}\quad&\mbox{if }(i,t)\in\bigcup_{k=0}^nU^k(0,0),\\[5pt] \displaystyle\phi^1\quad&\mbox{otherwise} \end{array}\right. \end{equation} defines a finite explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. We observe that for a given monotonic map $\phi$, there are only finitely many monotonic maps $\phi'$ such that $\phi'\prec\phi$. Also, since ${\cal Z}(\phi^1)=\emptyset$, the only monotonic map $\phi$ such that $\phi\prec\phi^1$ is $\phi=\phi^1$. Therefore, since $\psi_{(i,t)}\neq\phi^1$ for finitely many $(i,t)\in{\mathbb Z}^{d+1}$, there exist only finitely many monotonic flows ${\bm{\psi}}'$ such that ${\bm{\psi}}'\prec{\bm{\psi}}$. It follows that the set of all finite explanations ${\bm{\psi}}'$ for $(0,0)$ that satisfy ${\bm{\psi}}'\prec{\bm{\psi}}$ must contain at least one minimal element ${\bm{\psi}}'$, which is a minimal explanation for $(0,0)$ with ${\bm{\psi}}'\prec{\bm{\phh}}$. \end{Proof} The following lemma gives a more explicit description of minimal explanations. In Figure~\ref{fig:minexpl} on the right, a minimal explanation ${\bm{\psi}}$ for $(0,0)$ is drawn with ${\bm{\psi}}\prec{\bm{\phh}}$, where ${\bm{\phh}}$ is a monotonic flow that takes values in $\{\phi^0,\phi^{\rm coop}\}$. For each $(i,t)\in{\mathbb Z}^{d+1}$ such that $\psi_{(i,t)}\neq\phi^1$, thick black lines join $(i,t)$ to the points $(j,t-1)$ with $j\in Z_{(i,t)}$, where $Z_{(i,t)}$ is the set defined in point~(v) below. Orange stars indicate points $(i,t)$ where $\psi_{(i,t)}=\phi^0$.
The minimal explanation drawn in Figure~\ref{fig:minexpl} has the special property that even if we replace $\psi_{(i,t)}$ by $\varphi_{(i,t)}$ in all points except for the defective points of ${\bm{\phh}}$, then it is still true that removing any of the defective points of ${\bm{\psi}}$ results in the origin having the value one. This means that the set of defective points drawn in Figure~\ref{fig:minexpl} corresponds to a ``minimal explanation'' in the sense defined by John Preskill in \cite{Pre07}, which is a bit stronger than our definition. \begin{lemma}[Minimal explanations] Let\label{L:minexpl} ${\bm{\psi}}$ be a finite explanation for $(0,0)$. Then ${\bm{\psi}}$ is a minimal explanation for $(0,0)$ if and only if in addition to conditions (i)--(iii) of the definition of a finite explanation, one has: \begin{enumerate}\addtocounter{enumi}{3} \item $\psi_{(i,t)}=\phi^1$ for all $(i,t)\in{\mathbb Z}^{d+1}\backslash\{(0,0)\}$ such that $t\geq 0$, \item for each $(i,t)\in{\mathbb Z}^{d+1}$ such that $\psi_{(i,t)}\neq\phi^1$, there exists a finite $Z_{(i,t)}\subset{\mathbb Z}^d$ such that ${\cal Z}(\psi_{(i,t)})=\{Z_{(i,t)}\}$, \item for each $(i,t)\in{\mathbb Z}^{d+1}\backslash\{(0,0)\}$ such that $\psi_{(i,t)}\neq\phi^1$, there exists a $j\in{\mathbb Z}^d$ such that $\psi_{(j,t+1)}\neq\phi^1$ and $i\in Z_{(j,t+1)}$. \end{enumerate} Moreover, each minimal explanation ${\bm{\psi}}$ for $(0,0)$ satisfies: \begin{enumerate}\addtocounter{enumi}{6} \item $\overline x^{\bm{\psi}}_t(i)=0$ for each $(i,t)\in{\mathbb Z}^{d+1}$ such that $\psi_{(i,t)}\neq\phi^1$. \end{enumerate} \end{lemma} \begin{Proof} We first show that a finite explanation ${\bm{\psi}}$ for $(0,0)$ satisfying (iv)--(vi) is minimal. By our definition of minimal explanations, we must check that if ${\bm{\psi}}'$ is a finite explanation such that ${\bm{\psi}}'\prec{\bm{\psi}}$, then ${\bm{\psi}}'={\bm{\psi}}$. Assume, conversely, that $\psi'_{(i,t)}\neq\psi_{(i,t)}$ for some $(i,t)\in{\mathbb Z}^{d+1}$. Then by (v) and the fact that $\psi'_{(i,t)}\prec\psi_{(i,t)}$, we must have that ${\cal Z}(\psi'_{(i,t)})=\emptyset$ and hence $\psi'_{(i,t)}=\phi^1$. Since $\psi'_{(i,t)}\neq\psi_{(i,t)}$, it follows that $\psi_{(i,t)}\neq\phi^1$. By (iv), this implies that either $(i,t)=(0,0)$ or $t<0$. Let $n:=-t$. Using (vi), we see that there exist $i=i_0,\ldots,i_n$ such that $\psi_{(i_k,t+k)}\neq\phi^1$ $(0\leq k\leq n)$ and $i_{k-1}\in Z_{(i_k,t+k)}$ $(0<k\leq n)$. By (iv), we must have $i_n=0$. Since $\psi'_{(i,t)}=\phi^1$ we have $\overline x^{{\bm{\psi}}'}_t(i)=1$. Using the fact that ${\bm{\psi}}'\prec{\bm{\psi}}$ and $i_{k-1}\in Z_{(i_k,t+k)}$ $(0<k\leq n)$, it follows that $\overline x^{{\bm{\psi}}'}_{t+k}(i_k)=1$ for all $0\leq k\leq n$. In particular, this shows that $\overline x^{{\bm{\psi}}'}_0(0)=1$, contradicting the fact that ${\bm{\psi}}'$ is a finite explanation for $(0,0)$. We next show that each minimal explanation ${\bm{\psi}}$ for $(0,0)$ satisfies (iv)--(vii). Property~(iv) follows from the fact that if $\psi_{(i,t)}\neq\phi^1$ for some $(i,t)\in{\mathbb Z}^{d+1}\backslash\{(0,0)\}$ such that $t\geq 0$, then setting $\psi'_{(i,t)}:=\phi^1$ and $\psi'_{(j,s)}:=\psi_{(j,s)}$ for all $(j,s)\neq(i,t)$ defines a finite explanation ${\bm{\psi}}'\prec{\bm{\psi}}$. Property~(vii) follows in the same way: if $\overline x^{\bm{\psi}}_t(i)=1$ for some $(i,t)\in{\mathbb Z}^{d+1}$ such that $\psi_{(i,t)}\neq\phi^1$, then we can replace $\psi_{(i,t)}$ by $\phi^1$ without changing the fact that ${\bm{\psi}}$ is a finite explanation.
To prove (v), we first observe that if $\psi_{(i,t)}\neq\phi^1$ for some $(i,t)\in{\mathbb Z}^{d+1}$, then $\overline x^{\bm{\psi}}_t(i)=0$ by (vii). It follows that there exists some $Z\in{\cal Z}(\psi_{(i,t)})$ such that $\overline x^{\bm{\psi}}_{t-1}(j)=0$ for all $j\in Z$. (Note that this in particular includes the case that $\psi_{(i,t)}=\phi^0$ and ${\cal Z}(\psi_{(i,t)})=\{\emptyset\}$.) If ${\cal Z}(\psi_{(i,t)})$ contains any other elements besides $Z$, then we can remove these without changing the fact that ${\bm{\psi}}$ is a finite explanation. Therefore, by minimality, we must have ${\cal Z}(\psi_{(i,t)})=\{Z\}$, proving (v). To prove (vi), finally, assume that $(i,t)\in{\mathbb Z}^{d+1}\backslash\{(0,0)\}$ and $\psi_{(i,t)}\neq\phi^1$, but there does not exist a $j\in{\mathbb Z}^d$ such that $\psi_{(j,t+1)}\neq\phi^1$ and $i\in Z_{(j,t+1)}$. Then we can replace $\psi_{(i,t)}$ by $\phi^1$ without changing the fact that ${\bm{\psi}}$ is a finite explanation, which contradicts minimality. This completes the proof. \end{Proof} \subsection{Explanation graphs revisited}\label{S:exgr} We claim that in the proof of many of our results, such as Theorems \ref{T:contour} and \ref{T:strongpres}, we can without loss of generality assume that \begin{equation}\label{wlog} {\cal A}(\phi_k)=\big\{A_s(\phi_k):1\leq s\leq\sigma\big\}\qquad(1\leq k\leq m). \end{equation} To see this, let ${\bm{\phh}}$ be a monotonic flow on $\{0,1\}^{{\mathbb Z}^d}$ taking values in $\{\phi_0,\ldots,\phi_m\}$, where $\phi_0=\phi^0$ is the constant map that always gives the outcome zero and $\phi_1,\ldots,\phi_m$ are non-constant. Let $\sigma\geq 2$ be an integer and for each $1\leq k\leq m$ and $1\leq s\leq\sigma$, fix $A_s(\phi_k)\in{\cal A}(\phi_k)$. We let ${\bm{\phh}}^\ast=(\varphi^\ast_{(i,t)})_{(i,t)\in{\mathbb Z}^{d+1}}$ denote the image of ${\bm{\phh}}$ under the map from $\{\phi_0,\ldots,\phi_m\}$ to $\{\phi^\ast_0,\ldots,\phi^\ast_m\}$ defined by setting $\phi_0^\ast:=\phi_0$ and \begin{equation}\label{phick} \phi^\ast_k(x):=\bigvee_{s=1}^\sigma\bigwedge_{i\in A_s(\phi_k)}x(i) \qquad\big(1\leq k\leq m,\ x\in\{0,1\}^{{\mathbb Z}^d}\big). \end{equation} We set $A_s(\phi^\ast_k):=A_s(\phi_k)$ $(1\leq k\leq m,\ 1\leq s\leq\sigma)$. We make the following simple observations. \begin{lemma}[Modified monotonic flow] The\label{L:wlog} modified monotonic flow ${\bm{\phh}}^\ast$ has the following properties: \begin{enumerate} \item ${\bm{\phh}}^\ast$ satisfies (\ref{wlog}), \item ${\bm{\phh}}^\ast\leq{\bm{\phh}}$, \item $\overline x^{{\bm{\phh}}^\ast}\leq\overline x^{\bm{\phh}}$, \item an explanation graph is present in ${\bm{\phh}}^\ast$ if and only if it is present in ${\bm{\phh}}$, \item a Toom contour is (strongly) present in ${\bm{\phh}}^\ast$ if and only if it is (strongly) present in ${\bm{\phh}}$. \end{enumerate} \end{lemma} \begin{Proof} Property~(iii) is a direct consequence of (ii) and all other properties follow directly from the definitions. \end{Proof} Because of Lemma~\ref{L:wlog}, in the proof of results such as Theorems \ref{T:contour} and \ref{T:strongpres} about the (strong) presence of Toom contours or Lemma~\ref{L:explan} about the presence of an explanation graph, we can without loss of generality assume that (\ref{wlog}) holds.
Indeed, by part (iii) of the lemma, $\overline x^{\bm{\phh}}_0(0)=0$ implies $\overline x^{{\bm{\phh}}^\ast}_0(0)=0$ so replacing ${\bm{\phh}}$ by ${\bm{\phh}}^\ast$, in view of parts (iv) and (v), it suffices to prove the presence of an explanation graph or the (strong) presence of a Toom contour in ${\bm{\phh}}^\ast$. We now come to the main subject of this subsection, which is to link minimal explanations to explanation graphs. We start with a useful observation. \begin{lemma}[Presence of an explanation graph] Assume\label{L:exnul} that ${\bm{\phh}}$ satisfies (\ref{wlog}). Then properties (ii) and (iii) of Definition~\ref{def:finexpres} imply property~(i). \end{lemma} \begin{Proof} Property~(ii) of Definition~\ref{def:finexpres} implies that \begin{equation}\label{U0} \overline x_t(i)=0\quad\forall (i,t)\in U_\ast. \end{equation} We next claim that for $(i,t)\in U\backslash U_\ast$, \begin{equation}\label{Unext} \overline x_{t-1}(j)=0\quad\forall\big((i,t),(j,t-1)\big)\in\vec H \quad\mbox{implies}\quad \overline x_t(i)=0. \end{equation} Indeed, if $\overline x_{t-1}(j)=0$ for all $\big((i,t),(j,t-1)\big)\in\vec H$, then by property~(iii) of Definition~\ref{def:finexpres}, for each $1\leq s\leq\sigma$, there is a $k\in A_s(\varphi_{(i,t)})$ such that $\overline x_{t-1}(i+k)=0$, which by (\ref{wlog}) implies that $\overline x_t(i)=0$. Define inductively $U_0:=U_\ast$ and $U_{n+1}:=\{u\in U:v\in U_n\ \forall (u,v)\in\vec H\}$. Then (\ref{U0}) and (\ref{Unext}) imply that $\overline x_t(i)=0$ for all $(i,t)\in\bigcup_{n=0}^\infty U_n=U$. \end{Proof} We now make the link between minimal explanations and the presence of explanation graphs as defined in Definitions \ref{def:finiteexpl} and \ref{def:finexpres}. As before, ${\bm{\phh}}$ is a monotonic flow on $\{0,1\}^{{\mathbb Z}^d}$ taking values in $\{\phi_0,\ldots,\phi_m\}$, where $\phi_0=\phi^0$ and $\phi_1,\ldots,\phi_m$ are non-constant. Moreover, we have fixed an integer $\sigma\geq 2$ and for each $1\leq k\leq m$ and $1\leq s\leq\sigma$, we have fixed $A_s(\phi_k)\in{\cal A}(\phi_k)$. \begin{lemma}[Minimal explanations and explanation graphs] Assume\label{L:expl} that ${\bm{\phh}}$ satisfies (\ref{wlog}) and that ${\bm{\psi}}$ is a minimal explanation for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$. For each $(i,t)\in{\mathbb Z}^{d+1}$ such that $\psi_{(i,t)}\neq\phi^1$, let $Z_{(i,t)}$ be as in point~(v) of Lemma~\ref{L:minexpl}. Then there is an explanation graph $(U,{\cal H})$ for $(0,0)$ present in ${\bm{\phh}}$ such that: \begin{equation}\begin{array}{c}\label{UUH} \displaystyle U=\big\{(i,t)\in{\mathbb Z}^{d+1}:\psi_{(i,t)}\neq\phi^1\big\},\quad U_\ast=\big\{(i,t)\in U:\psi_{(i,t)}=\phi^0\big\},\\[5pt] \displaystyle\mbox{and}\quad \vec H=\big\{\big((i,t),(j,t-1)\big):(i,t)\in U,\ j\in Z_{(i,t)}\big\}. \end{array}\ee \end{lemma} \begin{Proof} Let $U$ and $U^\ast$ be defined by (\ref{UUH}). Let $(i,t)\in U\backslash U_\ast$. Since ${\bm{\psi}}\prec{\bm{\phh}}$ we have ${\cal Z}(\psi_{(i,t)})\subset{\cal Z}(\varphi_{(i,t)})$ and hence $Z_{(i,t)}\in{\cal Z}(\varphi_{(i,t)})$, so by (\ref{Zchar}), for each $1\leq s\leq\sigma$, we can choose some $j_s(i,t)\in Z_{(i,t)}\cap A_s(\varphi_{(i,t)})$. We claim that $Z_{(i,t)}=\{j_1(i,t),\ldots,j_\sigma(i,t)\}$. To see this, set $Z'_{(i,t)}:=\{j_1(i,t),\ldots,j_\sigma(i,t)\}$. 
Then $Z'_{(i,t)}\subset Z_{(i,t)}$ and (\ref{wlog}) implies that $Z'_{(i,t)}\cap A\neq\emptyset$ for all $A\in{\cal A}(\varphi_{(i,t)})$, which by (\ref{Zchar}) implies that $Z'_{(i,t)}\in{\cal Z}^\uparrow(\varphi_{(i,t)})$. By (\ref{upmin}), $Z_{(i,t)}$ is a minimal element of ${\cal Z}^\uparrow(\varphi_{(i,t)})$, so we conclude that $Z'_{(i,t)}=Z_{(i,t)}$. We claim that setting \begin{equation} \vec H_s:=\big\{\big((i,t),(j_s(i,t),t-1)\big):(i,t)\in U\backslash U_\ast\big\} \qquad(1\leq s\leq\sigma) \end{equation} now defines an explanation graph that is present in ${\bm{\phh}}$. Properties (i), (ii), (iv) and (v) of Definition~\ref{def:finiteexpl} follow immediately from our definitions and the fact that $\psi_{(0,0)}\neq\phi^1$, since ${\bm{\psi}}$ is a minimal explanation for $(0,0)$. Property~(iii) follows from Lemma~\ref{L:minexpl}~(vi). This proves that $(U,{\cal H})$ is an explanation graph. To see that $(U,{\cal H})$ is present in ${\bm{\phh}}$, we must check conditions (i)--(iii) of Definition~\ref{def:finexpres}. Condition~(i) follows from Lemma~\ref{L:minexpl}~(vii) and conditions (ii) and (iii) are immediate from our definitions. \end{Proof} \subsection{Discussion} As before, let ${\bm{\phh}}$ be a monotonic flow on $\{0,1\}^{{\mathbb Z}^d}$ taking values in $\{\phi_0,\ldots,\phi_m\}$, where $\phi_0=\phi^0$ and $\phi_1,\ldots,\phi_m$ are non-constant. Let $\sigma\geq 2$ and for each $1\leq k\leq m$ and $1\leq s\leq\sigma$, let $A_s(\phi_k)\in{\cal A}(\phi_k)$ be fixed. Consider the following conditions: \begin{enumerate} \item $\overline x^{\bm{\phh}}_0(0)=0$, \item there exists a minimal explanation ${\bm{\psi}}$ for $(0,0)$ such that ${\bm{\psi}}\prec{\bm{\phh}}$, \item there is an explanation graph $(U,{\cal H})$ for $(0,0)$ present in ${\bm{\phh}}$, \item there is a Toom contour $(V,{\cal E},v_\circ,\psi)$ rooted at $(0,0)$ present in ${\bm{\phh}}$. \end{enumerate} Theorem~\ref{T:contour} and Lemmas \ref{L:explan} and \ref{L:minexist} say that conditions (i)--(iii) are equivalent and imply (iv). As the example in Figure~\ref{fig:minexpl} showed, (iv) is strictly weaker than the other three conditions. This raises the question of whether it is possible to prove Toom's stability theorem using a Peierls argument based on minimal explanations, as suggested in \cite{Pre07}. Let us say that $(i,t)$ is a \emph{defective site} for a finite explanation ${\bm{\psi}}$ if $\psi_{(i,t)}=\phi^0$. Let $\phi$ be an eroder and let $M_n$ denote the number of minimal explanations ${\bm{\psi}}$ for $(0,0)$ with $n$ defective sites that satisfy $\psi_{(i,t)}\prec\phi$ whenever $(i,t)$ is not defective. We pose the following open problem: \begin{quote} Do there exist finite constants $C,N$ such that $M_n\leq CN^n$ $(n\geq 0)$? \end{quote} If the answer to this question is affirmative, then it should be possible to set up a Peierls argument based on minimal explanations, rather than Toom contours. In principle, such an argument has the potential to be simpler and more powerful than the Peierls arguments used in this article, but, as we have seen, the relation between minimal explanations and Toom contours is not straightforward, and finding a good upper bound on the number of minimal explanations with a given number of defective sites seems even harder than in the case of Toom contours. \subsection*{Acknowledgment} We thank Anja Sturm, who was involved in the earlier phases of writing this paper, for her contributions to the discussions. We thank Ivailo Hartarsky for useful discussions.
The first author is supported by grant 20-08468S of the Czech Science Foundation (GA CR). The second and third authors are supported by ERC Starting Grant 680275 ``MALIG''.
\section{Introduction} Reinforcement Learning (RL)~\cite{kaelbling1996reinforcement,szepesvari2010algorithms,sutton2018reinforcement} is a classic online decision-making formulation, where an agent interacts with an unknown environment with the goal of maximizing the obtained reward. Despite the empirical success and theoretical progress of recent RL algorithms, e.g.,~\cite{szepesvari2010algorithms,agrawal2017optimistic,azar2017minimax,jin2018q,zanette2019tighter}, they focus mainly on the risk-neutral criterion, i.e., maximizing the expected cumulative reward. This criterion can neglect rare but disastrous situations and fails to guarantee safety in decision-making processes. As a result, existing algorithms cannot be applied to tackle risk-sensitive tasks in real-world scenarios such as autonomous driving~\cite{wen2020safe} and clinical treatment planning~\cite{healthcare}, where safety is a top-priority performance metric and policies that ensure a low risk of getting into catastrophic situations (e.g., accidents in autonomous driving) at all times are strongly preferred. Motivated by the above facts, we investigate Iterated CVaR RL, a novel episodic RL formulation equipped with an important risk-sensitive criterion, i.e., Iterated Conditional Value-at-Risk (CVaR). Here CVaR~\cite{artzner1999coherent,rockafellar2000optimization} is a popular static (single-stage) risk measure which represents the expected tail reward, and Iterated CVaR~\cite{hardy2004iterated,chu2014markov,bauerle2022markov}, defined upon CVaR by backward iteration, is a dynamic (multi-stage) risk measure first proposed in financial insurance~\cite{hardy2004iterated} that focuses on the worst portion of the reward-to-go at each stage. In the Iterated CVaR RL problem, an agent interacts with an \emph{unknown} episodic Markov Decision Process (MDP) in order to maximize the worst $\alpha$-fraction of the reward-to-go at each step, where $\alpha \in (0,1)$ is a given risk level. Under this model, we investigate two important performance metrics, i.e., Regret Minimization (RM), where the goal is to minimize the cumulative regret over all episodes, and Best Policy Identification (BPI), where the performance is measured by the number of episodes used for identifying an optimal policy. Different from existing risk-sensitive RL~\cite{chow2017risk,fei2020near_optimal,fei2021exponential,fei2021function_approx}, which considers the exponential utility criterion (restraining the overall variability) or the CVaR of the cumulative reward, Iterated CVaR RL focuses primarily on the bad tail (rather than both tails) of the reward, and is concerned with preventing catastrophic situations at all decision stages. This Iterated CVaR RL formulation enables us to model safety requirements throughout the decision process, and is most suitable for applications where such \emph{safety-at-all-time} is critical, e.g., autonomous driving~\cite{wen2020safe} and clinical treatment planning~\cite{healthcare}. For example, consider an unmanned helicopter control task~\cite{unmanned_helicopter}, where we fly an unmanned helicopter to complete a given mission. There is a small probability that, at each time during execution, the helicopter encounters a sensing or control failure and does not take the prescribed action.
In order to guarantee the safety of the helicopter (which is expensive) and of surrounding workers and buildings, it is important to make sure that even in bad situations, e.g., when a sensing or control failure occurs, the policy ensures that the helicopter does not crash and cause critical damage. Iterated CVaR RL faces several unique challenges. (i) The importance (contribution to regret) of a state in Iterated CVaR RL is not proportional to its visitation probability. In particular, some risky states are more critical, but difficult to collect information from. (ii) The value function for Iterated CVaR RL cannot be decomposed into the expected rewards at each step with respect to the visitation distribution. Since the alignment between state importance and visitation probability and the reward decomposition are both key to existing RL analyses~\cite{jaksch2010near,agrawal2017optimistic,azar2017minimax,zanette2019tighter}, their absence poses brand-new algorithm design and analysis challenges. To tackle these challenges, we design two efficient algorithms $\mathtt{ICVaR\mbox{-}RM}$ and $\mathtt{ICVaR\mbox{-}BPI}$ for the RM and BPI metrics, respectively, equipped with a delicate CVaR-adapted value iteration and exploration bonuses that allocate more attention to rare but potentially dangerous states. We also develop novel analytical techniques to bound the change of CVaR due to the value function shift and to decompose the regret via a distorted visitation distribution. Lower bounds for both metrics are established to demonstrate the optimality of our algorithms with respect to the number of episodes $K$. We further study an interesting limiting case of Iterated CVaR RL when $\alpha$ approaches $0$, called Worst Path RL, where the decision maker is extremely risk-averse and is concerned with the worst case (e.g., in autonomous driving~\cite{wen2020safe} and clinical treatment planning~\cite{healthcare}, where the worst case can be disastrous). In this limiting case, the goal becomes to maximize the minimum possible cumulative reward (optimize the worst path). We note that Worst Path RL cannot be simply solved by taking $\alpha=0$ in Iterated CVaR RL, since the results there have a dependency on $\frac{1}{\alpha}$. Thus, we design a simple yet efficient algorithm $\mathtt{MaxWP}$ for Worst Path RL, and provide constant upper and lower regret bounds which are independent of $K$. The contributions of this paper are summarized as follows. \begin{itemize} \item We study a novel Iterated CVaR RL formulation, where an agent interacts with an unknown environment with the objective of maximizing the tail of the reward-to-go at each step. This formulation enables us to model safety requirements throughout the decision process, and is most suitable for applications where such safety-at-all-times is critical. \item We investigate two important metrics of Iterated CVaR RL, i.e., Regret Minimization (RM) and Best Policy Identification (BPI). For both metrics, we propose efficient algorithms $\mathtt{ICVaR\mbox{-}RM}$ and $\mathtt{ICVaR\mbox{-}BPI}$, and establish matching regret/sample complexity upper and lower bounds with respect to $K$. Our results reveal the essential hardness of Iterated CVaR RL, i.e., any algorithm must suffer regret from exploring risk-sensitive but hard-to-reach states. \item We further investigate a limiting case of Iterated CVaR RL, called Worst Path RL, where the objective is to maximize the minimum possible cumulative reward.
For Worst Path RL, we develop a simple and efficient algorithm $\mathtt{MaxWP}$, and provide constant regret upper and lower bounds (independent of $K$). \end{itemize} Due to space limits, we defer all proofs to the Appendix. \section{Related Work} \textbf{CVaR-based Markov Decision Process (MDP).} \cite{chow2014algorithms} considers a CVaR-constrained MDP, which aims to minimize the expected total cost with a constraint on the CVaR of the total cost. \cite{bauerle2011markov,haskell2015convex,chow2015risk} investigate how to minimize the CVaR of the total cost in decision processes, i.e., CVaR MDPs, and propose approximate planning algorithms with convergence analysis (see Appendix~\ref{apx:comparison_cvar} for a detailed comparison between our formulation and CVaR MDPs). \cite{hardy2004iterated} first defines Iterated CVaR, demonstrates that it is a coherent and consistent dynamic risk measure, and shows that it is applicable to equity-linked insurance. \cite{osogami2012iterated,chu2014markov,bauerle2022markov} investigate MDPs with general iterated coherent risk measures, including Iterated CVaR, and demonstrate the existence of Markovian optimal policies for these MDPs. \cite{hardy2004iterated,coraluppi1997mixed,coraluppi1999risk} study MDPs with the goal of minimizing the worst-case discounted cost, develop dynamic programming for value functions, and design heuristic algorithms without theoretical analysis; in particular, \cite{hardy2004iterated} also gives a heuristic online RL algorithm but does not provide a regret guarantee. The above works mainly study mathematical properties (e.g., the existence of optimal policies), offline planning algorithms, and convergence guarantees for MDPs with known transition distributions, while our work investigates an online RL problem for unknown MDPs, designs online algorithms, and derives regret/sample complexity guarantees. \textbf{Risk-Sensitive Reinforcement Learning.} \cite{chow2017risk} studies a risk-constrained RL problem where risk is represented by a constraint on the CVaR of the total cost, and designs policy gradient and actor-critic algorithms with convergence analysis. \cite{fei2020near_optimal} considers risk-sensitive RL with the exponential utility criterion, and proposes efficient algorithms with regret guarantees. \cite{fei2021exponential} further improves the regret results of \cite{fei2020near_optimal} by developing an exponential Bellman equation and a Bellman backup analytical procedure. \cite{fei2021function_approx} extends the results in \cite{fei2020near_optimal,fei2021exponential} from the tabular setting to function approximation, and designs algorithms with sub-linear regret guarantees. The above works study RL with CVaR constraints or the exponential utility criterion, which greatly differ from our formulation, and their algorithms and results cannot be applied to our problem. \section{Problem Formulation} \label{sec:formulation} In this section, we present the problem formulations of Iterated CVaR RL and Worst Path RL. \textbf{Conditional Value-at-Risk (CVaR).} We first introduce two risk measures, i.e., Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). Let $X$ be a random variable with cumulative distribution function $F(x)=\Pr[X \leq x]$.
Given a risk level $\alpha \in (0,1)$, the VaR at risk level $\alpha$ is the $\alpha$-quantile of $X$, i.e., $\textup{VaR}^{\alpha}(X)=\min\{x|F(x) \geq \alpha\}$, and the CVaR at risk level $\alpha$ is defined as~\cite{rockafellar2000optimization}: \begin{align*} \textup{CVaR}^{\alpha}(X)=\sup_{x \in \mathbb{R}} \lbr{ x-\frac{1}{\alpha} \mathbb{E} \mbr{ (x-X)^{+} } }, \end{align*} where $(x)^{+}:=\max\{x,0\}$. If there is no probability atom at $\textup{VaR}^{\alpha}(X)$, CVaR can also be written as $ \textup{CVaR}^{\alpha}(X)=\mathbb{E} [X|X \leq \textup{VaR}^{\alpha}(X)] $ ~\cite{shapiro2021lectures}. Intuitively, $\textup{CVaR}^{\alpha}(X)$ is a distorted expectation of $X$ conditioned on its worst $\alpha$-fraction tail, which depicts the average value when bad situations happen. When $\alpha=1$, $\textup{CVaR}^{\alpha}(X)=\mathbb{E}[X]$, and when $\alpha \downarrow 0$, $\textup{CVaR}^{\alpha}(X)$ tends to $\min(X)$~\cite{chow2015risk}. \textbf{Iterated CVaR RL.} We consider an episodic Markov Decision Process (MDP) $\mathcal{M}(\mathcal{S},\mathcal{A},H,p,r)$. Here $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, and $H$ is the length of the horizon in each episode. $p$ is the transition distribution, i.e., $p(s'|s,a)$ gives the probability of transitioning to $s'$ when starting from state $s$ and taking action $a$. $r: \mathcal{S} \times \mathcal{A} \mapsto [0,1]$ is a reward function, so that $r(s,a)$ gives the deterministic reward for taking action $a$ in state $s$. A policy $\pi$ is defined as a collection of $H$ functions, i.e., $\pi=\{\pi_h: \mathcal{S} \mapsto \mathcal{A}\}_{h \in [H]}$, where $[H]:=\{1, 2, ..., H\}$. The online episodic RL game is as follows. In each episode $k$, an agent chooses a policy $\pi^k$, and starts from an initial state $s^k_1$, which is arbitrarily picked by the environment and is the same for all $k$ in the best policy identification setting, as in the literature~\cite{jin2018q,zanette2019tighter,dann2015sample,menard2021fast}. At each step $h \in [H]$, the agent observes the state $s^k_h$, takes an action $a^k_h=\pi^k_h(s^k_h)$, receives a reward $r(s^k_h,a^k_h)$, and then transitions to a next state $s^k_{h+1}$ according to the transition distribution $p(\cdot|s^k_h,a^k_h)$. After the agent takes an action and receives the reward at step $H$, this episode ends, and she enters the next episode. In Iterated CVaR RL, given a risk level $\alpha \in (0,1)$ and a policy $\pi$, let $V^{\pi}_h:\mathcal{S} \mapsto \mathbb{R}$ and $Q^{\pi}_h:\mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ denote the value function and Q-value function at step $h$, respectively~\cite{chu2014markov,bauerle2022markov}. Specifically, $V^{\pi}_h(s)$ and $Q^{\pi}_h(s,a)$ denote the cumulative reward that can be obtained when the worst $\alpha$-fraction event happens (transitioning to the worst $\alpha$-fraction of next states) at each step, starting from $s$ and $(s,a)$ at step $h$, respectively. Formally, $Q^{\pi}_h$ and $V^{\pi}_h$ are defined recursively as \begin{equation} \left\{ \begin{aligned} Q^{\pi}_h(s,a)&= r(s,a)+ \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{\pi}_{h+1}(s')) \\ V^{\pi}_h(s)&=Q^{\pi}_h(s,\pi_h(s)) \\ V^{\pi}_{H+1}(s)&=0, \ \forall s \in \mathcal{S} , \end{aligned} \right. \label{eq:cvar_bellman} \end{equation} where $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{\pi}_{h+1}(s'))$ denotes the CVaR value of the random variable $V^{\pi}_{h+1}(s')$ with $s' \sim p(\cdot|s,a)$ at risk level $\alpha$.
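To make the CVaR computation above concrete, here is a minimal Python sketch (ours, not the authors' code; all variable names are our own) that evaluates $\textup{CVaR}^{\alpha}(X)$ for a finite-support distribution by keeping the probability mass of the worst outcomes up to total mass $\alpha$ and renormalizing. This is exactly the quantity $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))$ that the algorithms below evaluate under empirical transition estimates.
\begin{verbatim}
import numpy as np

def discrete_cvar(values, probs, alpha):
    """CVaR at level alpha of a discrete distribution: the renormalized
    expectation over the worst alpha-mass of outcomes (lower tail)."""
    order = np.argsort(values)                  # worst outcomes first
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    cum = np.cumsum(p)
    # truncated weights: keep mass up to the alpha-quantile, drop the rest
    mu = np.clip(p - np.maximum(cum - alpha, 0.0), 0.0, None)
    return float(mu @ v) / alpha

vals = np.array([0.0, 0.5, 1.0])
probs = np.array([0.1, 0.4, 0.5])
print(discrete_cvar(vals, probs, 1.0))   # 0.7  = E[X]
print(discrete_cvar(vals, probs, 0.05))  # 0.0  -> min(X) as alpha -> 0
print(discrete_cvar(vals, probs, 0.2))   # 0.25 = (0.1*0 + 0.1*0.5)/0.2
\end{verbatim}
The two limiting cases printed above match the facts recalled earlier: $\textup{CVaR}^{1}(X)=\mathbb{E}[X]$ and $\textup{CVaR}^{\alpha}(X) \to \min(X)$ as $\alpha \downarrow 0$.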
Unfolding Eq.~\eqref{eq:cvar_bellman}, $Q^{\pi}_h$ and $V^{\pi}_h$ can also be expressed as \begin{align*} Q^{\pi}_h(s,a)& = r(s,a) + \textup{CVaR}^{\alpha}_{s_{h+1} \sim p(\cdot|s,a)} \bigg(r(s_{h+1},\pi_{h+1}(s_{h+1})) \\& \!\!\!+\textup{CVaR}^{\alpha}_{s_{h+2} \sim p(\cdot|s_{h+1},\pi_{h+1}(s_{h+1}))} \Big(\dots \textup{CVaR}^{\alpha}_{s_{H} \sim p(\cdot|s_{H-1},\pi_{H-1}(s_{H-1}) )}(r(s_{H},\pi_{H}(s_{H}) ) ) \Big) \bigg) , \\ V^{\pi}_h(s)&= r(s, \pi_h(s)) + \textup{CVaR}^{\alpha}_{s_{h+1} \sim p(\cdot|s,\pi_h(s))} \bigg(r(s_{h+1},\pi_{h+1}(s_{h+1})) \\& \!\!\!+\textup{CVaR}^{\alpha}_{s_{h+2} \sim p(\cdot|s_{h+1},\pi_{h+1}(s_{h+1}))} \Big(\dots \textup{CVaR}^{\alpha}_{s_{H} \sim p(\cdot|s_{H-1},\pi_{H-1}(s_{H-1}))}(r(s_{H},\pi_{H}(s_{H}))) \Big) \bigg) . \end{align*} Since $\mathcal{S}$, $\mathcal{A}$, and $H$ are finite and the Iterated CVaR RL problem satisfies the optimal substructure property, there exists an optimal policy $\pi^{*}$ which gives the optimal value $V^{*}_h(s)=\max_{\pi} V^{\pi}_h(s)$ for all $s \in \mathcal{S}$ and $h \in [H]$~\cite{chu2014markov,bauerle2022markov,sutton2018reinforcement}. The Bellman equation is given in Eq.~\eqref{eq:cvar_bellman}, and the Bellman optimality equation is as follows: \begin{equation} \left\{ \begin{aligned} Q^{*}_h(s,a)&= r(s,a)+ \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_{h+1}(s')) \\ V^{*}_h(s)&= \max_{a \in \mathcal{A}} Q^{*}_h(s,a) \\ V^{*}_{H+1}(s)&=0, \ \forall s \in \mathcal{S} . \end{aligned} \right. \label{eq:cvar_optimality_bellman} \end{equation} We consider two performance metrics for Iterated CVaR RL, i.e., Regret Minimization (RM) and Best Policy Identification (BPI). In Iterated CVaR RL-RM, the agent aims to minimize the cumulative regret in $K$ episodes, defined as \begin{align} \mathcal{R}(K)=\sum_{k=1}^{K} \sbr{V^{*}_1(s^k_1)-V^{\pi_k}_1(s^k_1)} . \label{eq:regret} \end{align} In Iterated CVaR RL-BPI, given a confidence parameter $\delta \in (0,1)$ and an accuracy parameter $\varepsilon>0$, the agent needs to use as few trajectories (episodes) as possible to identify an $\varepsilon$-optimal policy $\hat{\pi}$, which satisfies \begin{align*} V^{\hat{\pi}}_1(s_1) \geq V^{*}_1(s_1) - \varepsilon , \end{align*} where $s_1$ denotes the fixed initial state in the BPI setting. The performance of BPI is measured by sample complexity, i.e., the number of trajectories used. \textbf{Worst Path RL.} Furthermore, we investigate an interesting limiting case of Iterated CVaR RL when $\alpha \downarrow 0$, called Worst Path RL, in which the goal becomes to maximize the minimum possible cumulative reward. Note that this case cannot be simply solved by taking $\alpha=0$ in Iterated CVaR RL, as the results there have a dependency on $\frac{1}{\alpha}$, and changing from $\textup{CVaR}(\cdot)$ to $\min(\cdot)$ in Worst Path RL requires a brand-new algorithm design and analysis. For Worst Path RL, in this paper we mainly consider the regret minimization metric, and the definition of regret is the same as Eq.~\eqref{eq:regret}. In Worst Path RL, the recursive definitions of the Q-value/value functions and the Bellman equations are \begin{equation} \left\{ \begin{aligned} Q^{\pi}_h(s,a)&= r(s,a)+ \min_{s' \sim p(\cdot|s,a)}(V^{\pi}_{h+1}(s')) \\ V^{\pi}_h(s)&=Q^{\pi}_h(s,\pi_h(s)) \\ V^{\pi}_{H+1}(s)&=0, \ \forall s \in \mathcal{S} , \end{aligned} \right.
\ \left\{ \begin{aligned} Q^{*}_h(s,a)&= r(s,a)+ \min_{s' \sim p(\cdot|s,a)}(V^{*}_{h+1}(s')) \\ V^{*}_h(s)&= \max_{a \in \mathcal{A}} Q^{*}_h(s,a) \\ V^{*}_{H+1}(s)&=0, \ \forall s \in \mathcal{S} . \end{aligned} \right. \label{eq:min_bellman} \end{equation} where $\min_{s' \sim p(\cdot|s,a)}(V^{\pi}_{h+1}(s'))$ denotes the minimum possible value of the random variable $V^{\pi}_{h+1}(s')$ with $s' \sim p(\cdot|s,a)$. From Eq.~\eqref{eq:min_bellman}, we see that \begin{align*} Q^{\pi}_h(s,a) = \!\!\! \min_{(s_t,a_t) \sim \pi} \!\! \mbr{ \sum_{t=h}^{H} r(s_t,a_t) \Big| s_h=s,a_h=a,\pi } \!, \ V^{\pi}_h(s) = \!\!\! \min_{(s_t,a_t) \sim \pi} \!\! \mbr{ \sum_{t=h}^{H} r(s_t,a_t) \Big| s_h=s,\pi } \!. \end{align*} Thus, $Q^{\pi}_h(s,a)$ and $V^{\pi}_h(s)$ become the minimum possible cumulative reward under policy $\pi$, starting from $(s,a)$ and $s$ at step $h$, respectively. The optimal policy $\pi^{*}$ maximizes the minimum possible cumulative reward (optimizes the worst path) for all starting states and steps. Formally, $\pi^{*}$ gives the optimal value $V^{*}_h(s)=\max_{\pi} V^{\pi}_h(s)$ for all $s \in \mathcal{S}$ and $h \in [H]$. \section{Iterated CVaR RL with Regret Minimization} In this section, we investigate regret minimization (Iterated CVaR RL-RM), propose an algorithm $\mathtt{ICVaR\mbox{-}RM}$ with a CVaR-adapted exploration bonus, and demonstrate its sample efficiency. \subsection{Algorithm $\mathtt{ICVaR\mbox{-}RM}$ and Regret Upper Bound} We propose a value iteration-based algorithm $\mathtt{ICVaR\mbox{-}RM}$, which adopts a Brown-type~\cite{brown2007large} (CVaR-adapted) exploration bonus and pays more attention to rare but risky states. Algorithm~\ref{alg:cvar} illustrates the formal procedure of $\mathtt{ICVaR\mbox{-}RM}$. Specifically, in each episode $k$, $\mathtt{ICVaR\mbox{-}RM}$ computes the empirical CVaR of the values of next states, $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s'))$, and a Brown-type~\cite{brown2007large} exploration bonus $\frac{H}{\alpha}\sqrt{\frac{L}{n_k(s,a)}}$. Here $n_k(s,a)$ is the number of times $(s,a)$ was visited up to episode $k$, and $\hat{p}^{k+1}(s'|s,a)$ is the empirical estimate of the transition probability $p(s'|s,a)$. Then, $\mathtt{ICVaR\mbox{-}RM}$ constructs an optimistic Q-value function $\bar{Q}^k_h(s,a)$, a value function $\bar{V}^k_h(s)$, and a greedy policy $\pi^k$ with respect to $\bar{Q}^k_h(s,a)$. After calculating the value functions and policy, $\mathtt{ICVaR\mbox{-}RM}$ plays episode $k$ with policy $\pi^k$, observes a trajectory, and updates $n_{k+1}(s,a)$ and $\hat{p}^{k+1}(s'|s,a)$. We summarize the regret performance of $\mathtt{ICVaR\mbox{-}RM}$ as follows. \begin{algorithm}[t] \caption{$\mathtt{ICVaR\mbox{-}RM}$} \label{alg:cvar} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\delta$, $\alpha$, $\delta':=\frac{\delta}{5}$, $L:=\log(\frac{KHSA}{\delta'})$, $\bar{V}^k_{H+1}(s) = 0$ for any $k>0$ and $s \in \mathcal{S}$. \FOR{$k=1,2,\dots,K$} \FOR{$h=H,H-1,\dots,1$} \FOR{$s \in \mathcal{S}$} \FOR{$a \in \mathcal{A}$} \STATE $\bar{Q}^k_h(s,a) \leftarrow r(s,a) + \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s')) + \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s,a)}}$ \ENDFOR \STATE $\bar{V}^k_h(s) \leftarrow \max_{a \in \mathcal{A}} \bar{Q}^k_h(s,a)$.
$\pi^k_h(s) \leftarrow \operatornamewithlimits{argmax}_{a \in \mathcal{A}} \bar{Q}^k_h(s,a)$ \ENDFOR \ENDFOR \STATE Play the episode $k$ with policy $\pi^k$, and update $n_{k+1}(s,a)$ and $\hat{p}^{k+1}(s'|s,a)$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{theorem}[Regret Upper Bound]\label{thm:cvar_rm_ub} With probability at least $1-\delta$, the regret of algorithm $\mathtt{ICVaR\mbox{-}RM}$ is bounded by \begin{align*} O \Bigg( \frac{ HS \sqrt{KHA} }{\alpha \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)}} \log\sbr{\frac{KHSA}{\delta}} \Bigg) , \end{align*} where $w_{\pi,h}(s)$ denotes the probability of visiting state $s$ at step $h$ under policy $\pi$. \end{theorem} \textbf{Remark 1.} The factor $\min_{\pi,h,s:\ w_{\pi,h}(s)>0} w_{\pi,h}(s)$ stands for the minimum probability of visiting a reachable state under any feasible policy. Note that (i) the minimum is taken only over the feasible policies under which $s$ is reachable, so this factor is never zero; and (ii) the factor also appears in the lower bound (see Section~\ref{sec:regret_lb}), which characterizes the intrinsic hardness of Iterated CVaR RL, i.e., there exist critical states that are difficult to reach. In contrast to existing risk-sensitive RL~\cite{chow2017risk,fei2020near_optimal,fei2021exponential,fei2021function_approx}, which either considers the exponential utility criterion and incurs a regret exponential in $H$, or considers CVaR constraints without regret guarantees, our bound for Iterated CVaR RL is polynomial in all problem parameters, demonstrating sample efficiency. \emph{Challenges and Novelty in Regret Analysis.} The regret analysis for Iterated CVaR RL faces several unique challenges. First, in contrast to prior RL analyses~\cite{jaksch2010near,dann2017unifying,azar2017minimax,zanette2019tighter}, where the contribution of a state to the regret is proportional to its visitation probability, the Iterated CVaR criterion places more emphasis on risky but hard-to-reach states, which are difficult to learn. Second, since the value function is measured by the tail reward rather than the expectation, the regret is not simply a summation of the estimation error over all state-action pairs under the standard visitation distribution. To tackle these obstacles, we develop novel analytical techniques to bound the change of CVaR due to the value function shift and to decompose the regret into estimation error via a distorted visitation distribution. Below we present a proof sketch for Theorem~\ref{thm:cvar_rm_ub}. \emph{Proof sketch of Theorem~\ref{thm:cvar_rm_ub}.} First, we introduce a key inequality (Eq.~\eqref{eq:cvar_gap_V_shift_main_text}) to bound the change of CVaR when the true value function shifts to an optimistic one. Let $\beta^{\alpha,V}(\cdot|s,a) \in \mathbb{R}^{S}$ denote the assigned weights on $V(s')$ when computing $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))$, which satisfies $ \sum_{s' \in \mathcal{S}} \beta^{\alpha,V}(s'|s,a) \cdot V(s') = \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) $. Intuitively, $\beta^{\alpha,V}(\cdot|s,a)$ is a distorted version of the transition distribution $p(\cdot|s,a)$, which gives more weight to bad successor states.
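As an illustration (a sketch of our own, not from the paper, reusing the truncation idea from the CVaR snippet in Section~3), the distorted weights $\beta^{\alpha,V}(\cdot|s,a)$ for a finite state space can be computed by sorting successors by their values, truncating the transition mass at total mass $\alpha$, and renormalizing:
\begin{verbatim}
import numpy as np

def distorted_weights(p_next, v_next, alpha):
    """beta^{alpha,V}(.|s,a): transition mass p(.|s,a) truncated at the
    alpha-quantile of successor values V and renormalized, so that
    beta @ v_next equals CVaR^alpha_{s' ~ p(.|s,a)}(V(s'))."""
    order = np.argsort(v_next)                 # worst successors first
    inv = np.argsort(order)                    # map back to original order
    p = np.asarray(p_next, dtype=float)[order]
    cum = np.cumsum(p)
    mu = np.clip(p - np.maximum(cum - alpha, 0.0), 0.0, None)
    return (mu / alpha)[inv]

p = np.array([0.05, 0.80, 0.15])   # p(.|s,a) over 3 successors (hypothetical)
v = np.array([0.0, 10.0, 2.0])     # V(s') for the 3 successors
beta = distorted_weights(p, v, alpha=0.1)
print(beta)        # [0.5 0.  0.5]: all weight on the two bad successors
print(beta @ v)    # 1.0 = CVaR^{0.1}, vs. p @ v = 8.3 under the expectation
\end{verbatim}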
Then, for any $(s,a)$ and any optimistic and true value functions $\bar{V}, V$ such that $\bar{V}(s') \geq V(s')$ for all $s' \in \mathcal{S}$, it holds that \begin{align} \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) \leq \beta^{\alpha,V}(\cdot|s,a)^\top \sbr{\bar{V}-V} . \label{eq:cvar_gap_V_shift_main_text} \end{align} Eq.~\eqref{eq:cvar_gap_V_shift_main_text} implies that the gap of CVaR between the optimistic and true value functions can be bounded by their value deviation under a distorted transition distribution, which serves as the basis of our recursive regret decomposition. Now, using Eq.~\eqref{eq:cvar_gap_V_shift_main_text} and the fact that $\bar{V}^k_h$ is an optimistic estimate of $V^{*}_h$, we decompose the regret as \begin{align} \!\bar{V}^k_1(s^k_1) \!-\! V^{\pi^k}_1(s^k_1) \!= & \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s^k_1,a^k_1)}}+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) \nonumber\\& + \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(V^{\pi^k}_{2}(s')) \nonumber\\ \overset{\textup{(a)}}{\leq} & \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s^k_1,a^k_1)}}+ \frac{H}{\alpha}\sqrt{\frac{SL}{n_k(s^k_1,a^k_1)}} + \beta^{\alpha,V^{\pi^k}_{2}}(\cdot|s^k_1,a^k_1)^\top (\bar{V}^k_{2} - V^{\pi^k}_{2}) \nonumber\\ \overset{\textup{(b)}}{\leq} & \sum_{h=1}^{H} \sum_{(s,a)} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}\!\!(s,a) \cdot \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha \sqrt{n_k(s,a)}} . \label{eq:regret_per_episode_main_text} \end{align} Here $w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}\!\!(s,a)$ denotes the distorted visitation probability of $(s,a)$ under the CVaR criterion. Inequality (a) uses the concentration of CVaR and Eq.~\eqref{eq:cvar_gap_V_shift_main_text}, and inequality (b) follows from a repeated application of (a). Eq.~\eqref{eq:regret_per_episode_main_text} decomposes the regret into the estimation error over all state-action pairs under the distorted visitation distribution, which resolves the challenges due to the additional focus on risky states under Iterated CVaR. Finally, summing Eq.~\eqref{eq:regret_per_episode_main_text} over all episodes $k$, and computing the ratio of the distorted visitation probability to the standard visitation probability, we obtain Theorem~\ref{thm:cvar_rm_ub}. $\hfill \square$ \subsection{Regret Lower Bound}\label{sec:regret_lb} We now present a regret lower bound to demonstrate the optimality of algorithm $\mathtt{ICVaR\mbox{-}RM}$. \begin{theorem}[Regret Lower Bound] \label{thm:cvar_rm_lb} There exists an instance of Iterated CVaR RL-RM, for which the regret of any algorithm is at least \begin{align*} \Omega \Bigg( H \sqrt{\frac{AK}{\alpha \min \limits_{\begin{subarray}{c}\pi,h,s:\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s) }} \ \ \Bigg) . \end{align*} \end{theorem} \textbf{Remark 2.} Theorem~\ref{thm:cvar_rm_lb} demonstrates that the factor $\min_{\pi,h,s:\ w_{\pi,h}(s)>0} w_{\pi,h}(s)$ in our regret upper bound (Theorem~\ref{thm:cvar_rm_ub}) is inevitable in general, and reveals the intrinsic hardness of Iterated CVaR RL, i.e., any algorithm must suffer regret from exploring risky but hard-to-reach states. This lower bound also validates that $\mathtt{ICVaR\mbox{-}RM}$ is optimal with respect to $K$.
\begin{wrapfigure}[9]{r}{0.4\textwidth} \centering \vspace*{-2em} \includegraphics[width=0.38\textwidth]{fig/instance_lb_main_text.pdf} \caption{The instance for lower bounds.} \label{fig:lower_bound_main_text} \end{wrapfigure} \emph{Hard Instances.} In the lower bound analysis, we construct an instance with a hard-to-reach bandit state (which has an optimal action and multiple sub-optimal actions), and show that this state is critical to minimizing the regret, but difficult for any algorithm to learn. As shown in Figure~\ref{fig:lower_bound_main_text}, we consider an MDP with $A$ actions, $n$ regular states $s_1,\dots,s_n$, and three absorbing states $x_1,x_2,x_3$. The reward function $r(s,a)$ depends only on the state, i.e., $s_1,\dots,s_n$ generate zero reward and $x_1,x_2,x_3$ generate rewards $1$, $0.8$ and $0.2$, respectively. Under all actions, state $s_1$ transitions to $s_2,x_1,x_2,x_3$ with probabilities $\alpha$, $1-3\alpha$, $\alpha$ and $\alpha$, respectively, where $\alpha$ is the risk level, and state $s_i$ ($2 \leq i \leq n-1$) transitions to $s_{i+1},x_1$ with probabilities $\alpha$ and $1-\alpha$, respectively. For the bandit state $s_n$, under the optimal action, $s_n$ transitions to $x_2,x_3$ with probabilities $1-\alpha+\eta$ and $\alpha-\eta$, respectively, for some small $\eta>0$. Under sub-optimal actions, $s_n$ transitions to $x_2,x_3$ with probabilities $1-\alpha$ and $\alpha$, respectively. In this MDP, under the Iterated CVaR criterion, $V^{\pi}_1$ mainly depends on the path $s_1 \rightarrow s_2 \rightarrow \dots \rightarrow s_n \rightarrow x_2/x_3$, and especially on the action choice in the bandit state $s_n$. However, when $\alpha$ is small, it is difficult for any policy to reach (and collect information from) $s_n$. Thus, to learn the optimal action in $s_n$, any algorithm must suffer a regret dependent on the probability of visiting $s_n$, which is exactly the minimum visitation probability of any state under a feasible policy, $\min_{\pi,h,s:\ w_{\pi,h}(s)>0} w_{\pi,h}(s)$. \begin{algorithm}[t] \caption{$\mathtt{ICVaR\mbox{-}BPI}$} \label{alg:cvarbpi} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\varepsilon$, $\delta$, $\alpha$, $\delta':=\frac{\delta}{5}$, $\tilde{L}(k):=\log(\frac{2HSAk^3}{\delta'})$ for any $k>0$, $J^k_{H+1}(s)=\bar{V}^k_{H+1}(s)=\underline{V}^k_{H+1}(s)=0$ for any $k>0$ and $s \in \mathcal{S}$. \FOR{$k=1,2,\dots,K$} \FOR{$h=H,H-1,\dots,1$} \FOR{$s \in \mathcal{S}$} \FOR{$a \in \mathcal{A}$} \STATE $\bar{Q}^k_h(s,a) \leftarrow r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s')) + \frac{H}{\alpha}\sqrt{\frac{\tilde{L}(k)}{n_k(s,a)}}$ \STATE $\underline{Q}^k_h(s,a) \leftarrow r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s')) - \frac{H}{\alpha}\sqrt{\frac{S\tilde{L}(k)}{n_k(s,a)}}$ \STATE $G^k_h(s,a) \leftarrow \frac{H(1+\sqrt{S})\sqrt{\tilde{L}(k)}}{\alpha \sqrt{n_k(s,a)}}+\hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a)^\top J^k_{h+1}$ \label{line:bpi_beta} \ENDFOR \STATE $\pi^k_h(s) \leftarrow \operatornamewithlimits{argmax}_{a \in \mathcal{A}} \bar{Q}^k_h(s,a)$. $\bar{V}^k_h(s) \leftarrow \max_{a \in \mathcal{A}} \bar{Q}^k_h(s,a)$. $\underline{V}^k_h(s) \leftarrow \underline{Q}^k_h(s,\pi^k_h(s))$. \STATE $J^k_h(s) \leftarrow G^k_h(s,\pi^k_h(s))$.
\ENDFOR \ENDFOR \STATE {\bfseries if} $J^k_1(s)\leq \varepsilon$, {\bfseries return} $\pi^k(s)$ \STATE {\bfseries else} Play the episode $k$ with policy $\pi^k$, and update $n_{k+1}(s,a)$ and $\hat{p}^{k+1}(s'|s,a)$ \ENDFOR \end{algorithmic} \end{algorithm} \section{Iterated CVaR RL with Best Policy Identification} In this section, we turn to best policy identification (Iterated CVaR RL-BPI). We design an efficient algorithm $\mathtt{ICVaR\mbox{-}BPI}$, and establish rigorous sample complexity upper and lower bounds. To the best of our knowledge, these are the first sample complexity results for risk-sensitive RL. \subsection{Algorithm $\mathtt{ICVaR\mbox{-}BPI}$ and Sample Complexity Upper Bound} Algorithm $\mathtt{ICVaR\mbox{-}BPI}$ (Algorithm~\ref{alg:cvarbpi}) constructs optimistic and pessimistic value functions, an estimation error, and a hypothesized optimal policy in each episode, and returns the hypothesized optimal policy once the estimation error falls below $\varepsilon$. Specifically, in each episode $k$, $\mathtt{ICVaR\mbox{-}BPI}$ calculates the empirical CVaR of the values of next states, $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s'))$ and $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s'))$, and exploration bonuses $\frac{H}{\alpha}\sqrt{\frac{\tilde{L}(k)}{n_k(s,a)}}$ and $\frac{H}{\alpha}\sqrt{\frac{S\tilde{L}(k)}{n_k(s,a)}}$, to establish the optimistic and pessimistic Q-value functions $\bar{Q}^k_h(s,a)$ and $\underline{Q}^k_h(s,a)$, respectively. $\mathtt{ICVaR\mbox{-}BPI}$ further maintains a hypothesized optimal policy $\pi^k$, which is greedy with respect to $\bar{Q}^k_h(s,a)$. Let $\hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a) \in \mathbb{R}^{S}$ (Line~\ref{line:bpi_beta}) denote the assigned weights on $\underline{V}^k_{h+1}(s')$ when computing $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s'))$, which satisfies $\sum_{s' \in \mathcal{S}} \hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(s'|s,a) \cdot \underline{V}^k_{h+1}(s') = \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s'))$. Then, $\mathtt{ICVaR\mbox{-}BPI}$ computes the estimation errors $G^k_h(s,a)$ and $J^k_h(s)$ using the assigned weights $\hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a)$ for next states under CVaR. Once the estimation error $J^k_1(s)$ falls below the accuracy parameter $\varepsilon$, $\mathtt{ICVaR\mbox{-}BPI}$ returns the hypothesized optimal policy $\pi^k$. The sample complexity of algorithm $\mathtt{ICVaR\mbox{-}BPI}$ is presented as follows. \begin{theorem} [Sample Complexity Upper Bound] \label{thm:bpi_ub} The number of trajectories used by algorithm $\mathtt{ICVaR\mbox{-}BPI}$ to return an $\varepsilon$-optimal policy with probability at least $1-\delta$ is bounded by \begin{align*} O \Bigg( \frac{ H^3 S^2 A}{\varepsilon^2 \alpha^2 \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s) } \log\sbr{\frac{SAH}{\delta} } \Bigg) . \end{align*} \end{theorem} As in Theorem~\ref{thm:cvar_rm_ub}, this sample complexity result also contains the factor $\min_{\pi,h,s:\ w_{\pi,h}(s)>0} w_{\pi,h}(s)$, which is the minimum probability of visiting a reachable state over all feasible policies. This factor also appears in the lower bound and is indispensable in general (see Section~\ref{sec:sample_complexity_lb}).
Theorem~\ref{thm:bpi_ub} corroborates the sample efficiency of $\mathtt{ICVaR\mbox{-}BPI}$, i.e., it uses only polynomially many trajectories (in $H$, $S$, $A$, $1/\varepsilon$ and $1/\alpha$) to identify a near-optimal policy. \subsection{Sample Complexity Lower Bound} \label{sec:sample_complexity_lb} We now present a lower bound for Iterated CVaR RL-BPI. We say that an algorithm $\mathcal{A}$ is $(\delta,\varepsilon)$-correct if $\mathcal{A}$ returns an $\varepsilon$-optimal policy $\hat{\pi}$ such that $ V^{\hat{\pi}}_1(s_1) \geq V^{*}_1(s_1) - \varepsilon $ with probability at least $1-\delta$. \begin{theorem}[Sample Complexity Lower Bound] \label{thm:bpi_lb} There exists an instance of Iterated CVaR RL-BPI, for which the number of trajectories used by any $(\delta,\varepsilon)$-correct algorithm is at least \begin{align*} \Omega \Bigg( \frac{ H^2 A}{\varepsilon^2 \alpha \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log \sbr{\frac{1}{\delta}} \Bigg) . \end{align*} \end{theorem} Theorem~\ref{thm:bpi_lb} corroborates the tightness of the factor $\min_{\pi,h,s:\ w_{\pi,h}(s)>0} w_{\pi,h}(s)$ in the upper bound (Theorem~\ref{thm:bpi_ub}), and reveals the intrinsic hardness of Iterated CVaR RL, i.e., one needs to spend a substantial number of trajectories exploring critical but hard-to-reach states in order to identify an optimal policy. \section{Worst Path RL} In this section, we investigate an interesting limiting case of Iterated CVaR RL when $\alpha \downarrow 0$, called Worst Path RL, in which the agent aims to maximize the minimum possible cumulative reward. Note that the Worst Path RL problem has the \emph{unique feature} that the value function (Eq.~\eqref{eq:min_bellman}) depends only on the minimum value over successor states and is independent of the transition probabilities. Therefore, as long as we learn the connectivity among states, we can perform planning to compute the optimal policy. Yet, this feature does not make the Worst Path RL problem trivial, because it is difficult to distinguish whether a successor state is hard to reach or does not exist. As a result, a careful scheme is needed to both explore undetected successor states and exploit observations to minimize regret. \subsection{Algorithm $\mathtt{MaxWP}$ and Regret Upper Bound} \begin{algorithm}[t] \caption{$\mathtt{MaxWP}$} \label{alg:min} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\delta$, $\delta':=\frac{\delta}{2}$, $L:=\log(\frac{SA}{\delta'})$, $\hat{V}^k_{H+1}(s)=0$ for any $k>0$ and $s \in \mathcal{S}$. \FOR{$k=1,2,\dots,K$} \FOR{$h=H,H-1,\dots,1$} \FOR{$s \in \mathcal{S}$} \FOR{$a \in \mathcal{A}$} \STATE $\hat{Q}^k_h(s,a) \leftarrow r(s,a)+\min_{s' \sim \hat{p}^k(\cdot|s,a)}(\hat{V}^k_{h+1}(s'))$ \ENDFOR \STATE $\hat{V}^k_h(s) \leftarrow \max_{a \in \mathcal{A}} \hat{Q}^k_h(s,a)$ \STATE $\pi^k_h(s) \leftarrow \operatornamewithlimits{argmax}_{a \in \mathcal{A}} \hat{Q}^k_h(s,a)$ \ENDFOR \ENDFOR \STATE Play the episode $k$ with policy $\pi^k$, and update $n_{k+1}(s,a)$ and $\hat{p}^{k+1}(s'|s,a)$ \ENDFOR \end{algorithmic} \end{algorithm} We design a simple yet efficient algorithm $\mathtt{MaxWP}$ (Algorithm~\ref{alg:min}), which carefully combines the exploration of critical but hard-to-reach successor states and the exploitation of current best actions.
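Before the formal description, the following minimal Python sketch (our own tabular rendering, not the authors' implementation; array shapes and the toy instance are hypothetical) makes the min-backup of $\mathtt{MaxWP}$ explicit. Because the minimum is taken over the successors observed so far, the empirical Q-value can only over-estimate the true one, which is the one-sided error property the discussion below relies on.
\begin{verbatim}
import numpy as np

def maxwp_planning(r, support, H):
    """One planning pass of MaxWP's backup (tabular sketch).
    r[s, a]       : deterministic reward, shape (S, A)
    support[s][a] : non-empty set of successor states observed so far
                    for (s, a), i.e., the support of the empirical
                    transition estimate \hat p^k(.|s,a)
    Returns a greedy policy pi[h, s] and values V[h, s] for h = 1..H."""
    S, A = r.shape
    V = np.zeros((H + 2, S))               # V_{H+1} = 0
    pi = np.zeros((H + 1, S), dtype=int)
    for h in range(H, 0, -1):
        for s in range(S):
            # Q(s,a) = r(s,a) + min over observed successors of V_{h+1}
            q = np.array([r[s, a] + min(V[h + 1][t] for t in support[s][a])
                          for a in range(A)])
            pi[h, s] = int(q.argmax())
            V[h, s] = q.max()
    return pi, V

# toy usage (hypothetical 2-state, 2-action instance):
r = np.array([[0.0, 0.5], [1.0, 0.2]])
support = [[{0, 1}, {0}], [{1}, {0, 1}]]
pi, V = maxwp_planning(r, support, H=3)
\end{verbatim}
After each episode, the observed transitions are added to \texttt{support}, so an over-estimated action is either corrected or revealed to be optimal.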
Specifically, in each episode $k$, $\mathtt{MaxWP}$ constructs an empirical Q-value function $\hat{Q}^k_h(s,a)$ and value function $\hat{V}^k_h(s)$ using the lowest value of next states $\min_{s' \sim \hat{p}^k(\cdot|s,a)}(\hat{V}^k_{h+1}(s'))$, and maintains a greedy policy $\pi^k_h(s)$ with respect to $\hat{Q}^k_h(s,a)$. Then, $\mathtt{MaxWP}$ executes policy $\pi^k_h(s)$ in episode $k$, and updates the number of visitations $n_{k+1}(s,a)$ and the empirical transition distribution $\hat{p}^{k+1}(s'|s,a)$. The intuition behind $\mathtt{MaxWP}$ is as follows. Since the Q-value function for Worst Path RL uses the $\min$ operator, if the Q-value function is not accurately estimated, it can only be over-estimated (not under-estimated). If over-estimation happens, $\mathtt{MaxWP}$ will explore an over-estimated action, driving its empirical Q-value back toward the true Q-value. Otherwise, if the Q-value function is already accurate, $\mathtt{MaxWP}$ simply selects the optimal action. In other words, $\mathtt{MaxWP}$ combines the exploration of over-estimated actions (which lead to undetected bad successor states) and the exploitation of current best actions. Below we provide the regret guarantee for algorithm $\mathtt{MaxWP}$. \begin{theorem} \label{thm:min_regret_ub} With probability at least $1-\delta$, the regret of algorithm $\mathtt{MaxWP}$ is bounded by \begin{align*} O \Bigg( \sum_{(s,a)} \frac{H}{\min \limits_{\begin{subarray}{c} \pi: \ \upsilon_{\pi}(s,a)>0 \end{subarray}} \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } \log\sbr{ \frac{SA}{\delta} } \Bigg) , \end{align*} where $\upsilon_{\pi}(s,a)$ denotes the probability that $(s,a)$ is visited at least once in an episode under policy $\pi$. \end{theorem} \textbf{Remark 3.} The factor $\min_{ \pi: \ \upsilon_{\pi}(s,a)>0} \upsilon_{\pi}(s,a)$ stands for the minimum probability of visiting $(s,a)$ at least once in an episode over all feasible policies, and $\min_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a)$ denotes the minimum transition probability over all successor states of $(s,a)$. We will show that these two factors also appear in the lower bound (discussed in Section~\ref{sec:min_lb}), and are unavoidable in general. Theorem~\ref{thm:min_regret_ub} shows that algorithm $\mathtt{MaxWP}$ enjoys a constant regret, since it effectively utilizes and adapts to this feature of Worst Path RL and efficiently explores the connectivity among states. \subsection{Regret Lower Bound} \label{sec:min_lb} We now establish a regret lower bound for Worst Path RL. To exclude trivial instance-specific algorithms and formally state our lower bound, we first define an $o(K)$-consistent algorithm as an algorithm that guarantees $o(K)$ regret on every instance of Worst Path RL. \begin{theorem} \label{thm:min_regret_lb} There exists an instance of Worst Path RL, for which the regret of any $o(K)$-consistent algorithm is at least \begin{align*} \Omega \Bigg( \max_{ (s,a):\ \exists h,\ a \neq \pi^{*}_h(s) } \frac{ \Delta_{\min} }{\min \limits_{ \pi:\ \upsilon_{\pi}(s,a)>0 } \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } \Bigg) , \end{align*} where $\max_{\begin{subarray}{c} (s,a): \exists h,\ a \neq \pi^{*}_h(s) \end{subarray}}$ takes the maximum over all $(s,a)$ such that $a$ is sub-optimal in state $s$ at some step, and $\Delta_{\min}$ denotes the minimum regret that a sub-optimal policy must suffer in an episode.
\end{theorem} The insight behind this lower bound is as follows: for a critical but hard-to-reach state $s$, any $o(K)$-consistent algorithm must explore all actions $a$ in state $s$, in order to detect their induced successor states $s'$ and distinguish between the optimal and sub-optimal actions. Hence, this process incurs a regret depending on the factors $\min_{\pi: \upsilon_{\pi}(s,a)>0} \upsilon_{\pi}(s,a)$ and $\min_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a)$. \section{Conclusion} \label{sec:conclusion} In this paper, we investigate a novel Iterated CVaR RL problem under two objectives, Regret Minimization and Best Policy Identification. We design two efficient algorithms $\mathtt{ICVaR\mbox{-}RM}$ and $\mathtt{ICVaR\mbox{-}BPI}$, and provide matching regret/sample complexity upper and lower bounds with respect to $K$. We also investigate a limiting case called Worst Path RL, and propose a simple and efficient algorithm $\mathtt{MaxWP}$ with rigorous regret guarantees. Novel techniques are developed to analyze the change of CVaR due to the value function shift and to decompose the regret via a distorted visitation distribution. \bibliographystyle{plainnat} \section*{Appendix} \section{Comparison with CVaR MDP} \label{apx:comparison_cvar} In this section, we compare our Iterated CVaR MDP formulation with the CVaR MDP~\cite{bauerle2011markov,haskell2015convex,chow2015risk}. The objective of the CVaR MDP is to maximize the CVaR value of the total reward, defined as\footnote{Some prior works~\cite{bauerle2011markov,chow2015risk} also interpret the reward as a cost, and consider minimizing the CVaR value of the total cost.} \begin{align*} \max_{\pi} \ \textup{CVaR}^{\alpha}_{(s_h,a_h) \sim p,\pi} \sbr{\sum_{h=1}^{H} r(s_h,a_h)} . \end{align*} Below we present an illustrative example of financial investment to show that, compared to the CVaR MDP, our Iterated CVaR MDP has a stronger incentive to avoid getting into catastrophic states. As shown in Figure~\ref{fig:comparison_cvar}, starting from an initial state $s_1$, we choose one of two multi-stage financial products to invest in, denoted by actions $a_1,a_2$. In state $s_1$, with actions $a_1,a_2$, we transition to $s_2,s_3$ deterministically, respectively. In states $s_2,\dots,s_{13}$, actions have the same transition distribution. We receive a reward only at the final step, and the value of a reward stands for the ratio of the current amount of money to the invested amount. The transition probabilities and rewards are specified in the figure. Let the risk level be $\alpha=0.05$. We can see that, under the CVaR criterion, we will choose $a_2$, which has a higher CVaR value. However, under the Iterated CVaR criterion, we will select $a_1$, which generates a higher Iterated CVaR value. The intuition is that Iterated CVaR places more weight on the catastrophic states $s_8,s_{11}$, where we lose all our money. Hence, compared to the CVaR MDP, the Iterated CVaR MDP has a stronger effect on preventing us from getting into catastrophic states. In addition, from the computational perspective, the CVaR MDP does not have Markovian optimal policies and is computationally intractable even for planning~\cite{bauerle2011markov}. Existing works for the CVaR MDP~\cite{haskell2015convex,chow2015risk} mainly investigate how to design approximate planning algorithms with convergence analysis, rather than developing online RL algorithms with regret guarantees as in our work.
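The exact numbers of Figure~\ref{fig:comparison_cvar} are not reproduced here, but the qualitative divergence of the two criteria can be checked on a toy two-stage example of our own (all numbers hypothetical): a safe action $a_1$ yields total reward $0.6$ surely, while a risky action $a_2$ leads to a good state (probability $0.9$, then reward $1$) or a bad state (probability $0.1$, then reward $0$ or $1$ with probability $0.5$ each).
\begin{verbatim}
import numpy as np

def discrete_cvar(values, probs, alpha):
    # same truncation as the CVaR sketch in Section 3
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]
    mu = np.clip(p - np.maximum(np.cumsum(p) - alpha, 0.0), 0.0, None)
    return float(mu @ v) / alpha

alpha = 0.2
# CVaR of the *total* reward under a2 (static criterion):
totals = np.array([1.0, 0.0, 1.0])       # paths: good, bad->0, bad->1
path_p = np.array([0.9, 0.05, 0.05])
cvar_total_a2 = discrete_cvar(totals, path_p, alpha)               # 0.75

# Iterated CVaR under a2: back up stage by stage.
v_good = 1.0
v_bad = discrete_cvar(np.array([0.0, 1.0]),
                      np.array([0.5, 0.5]), alpha)                 # 0.0
iter_a2 = discrete_cvar(np.array([v_good, v_bad]),
                        np.array([0.9, 0.1]), alpha)               # 0.5

print(cvar_total_a2, iter_a2)   # 0.75 vs 0.5; a1 gives 0.6 under both
\end{verbatim}
Under the static CVaR criterion the risky $a_2$ wins ($0.75 > 0.6$), while under Iterated CVaR the safe $a_1$ wins ($0.6 > 0.5$): the stage-wise backup charges the catastrophic intermediate state in full, mirroring the comparison above.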
\section{Proofs for Iterated CVaR RL with Regret Minimization} In this section, we present the proofs of the regret upper and lower bounds (Theorems~\ref{thm:cvar_rm_ub} and~\ref{thm:cvar_rm_lb}) for Iterated CVaR RL-RM. \subsection{Proofs for Regret Upper Bound} In order to prove the regret upper bound (Theorem~\ref{thm:cvar_rm_ub}), we first introduce several important lemmas (Lemmas~\ref{lemma:concentration_V_star}-\ref{lemma:cvar_increase_V}) and define the concentration event $\mathcal{E}$. \subsubsection{Concentration} \begin{lemma}[Concentration for $V^{*}$] \label{lemma:concentration_V_star} It holds that \begin{align*} \Pr \Bigg[ & \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{KHSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall k \in [K], \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-2\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:concentration_V_star}] Using Brown's inequality~\cite{brown2007large} (Theorem 2 in \cite{thomas2019concentration}) and a union bound over $(s,a) \in \mathcal{S} \times \mathcal{A}$ and $n_k(s,a) \in [KH]$, we can obtain this lemma. \end{proof} \begin{figure} [t!] \centering \includegraphics[width=0.99\textwidth]{fig/comparison_CVaR.pdf} \caption{Illustrative example for the comparison with the CVaR MDP.} \label{fig:comparison_cvar} \end{figure} \begin{lemma}[Concentration for any $V$] \label{lemma:concentration_any_V} It holds that \begin{align*} \Pr \Bigg[ &\abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))} \leq \frac{2H}{\alpha}\sqrt{\frac{2S\log \sbr{\frac{KHSA}{\delta'}}}{n_k(s,a)}} , \\ & \forall V(\cdot): \mathcal{S} \mapsto [0,H], \forall k \in [K], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-2\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:concentration_any_V}] As shown in Figure~\ref{fig:concentration_any_V}, we sort all states $s \in \mathcal{S}$ by $V(s)$ in ascending order (from left to right), and add a virtual line at the $\alpha$-quantile, called ``the $\alpha$-quantile line''. Let $\mu(s'|s,a)$ and $\hat{\mu}^{k}(s'|s,a)$ denote the truncated weights imposed on the support $V(s')$ when computing $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))$ and $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s'))$, respectively, i.e., \begin{align*} \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) = & \frac{\sum_{s' \in \mathcal{S}} \mu(s'|s,a) \cdot V(s')}{\alpha} , \\ \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) = & \frac{\sum_{s' \in \mathcal{S}} \hat{\mu}^{k}(s'|s,a) \cdot V(s')}{\alpha} . \end{align*} Fix the support $V(\cdot)$. When the transition probabilities change from $p(\cdot|s,a)$ to $\hat{p}^k(\cdot|s,a)$, the $\alpha$-quantile line shifts to the left or to the right. Denote the $\alpha$-quantile lines before and after the shift by Line 1 and Line 2, respectively. Then, denote the states to the left of Lines 1 and 2 by $\mathcal{S}_{left}$, the states between Lines 1 and 2 by $\mathcal{S}_{middle}$, and the states to the right of Lines 1 and 2 by $\mathcal{S}_{right}$.
It holds that \begin{align*} \sum_{s' \in \mathcal{S}_{left}} \abr{\hat{\mu}^{k}(s'|s,a) - \mu(s'|s,a)} = & \sum_{s' \in \mathcal{S}_{middle}} \abr{\hat{\mu}^{k}(s'|s,a) - \mu(s'|s,a)} , \\ \sum_{s' \in \mathcal{S}_{left}} \abr{\hat{\mu}^{k}(s'|s,a) - \mu(s'|s,a)} = & \sum_{s' \in \mathcal{S}_{left}} \abr{\hat{p}^{k}(s'|s,a) - p(s'|s,a)} , \\ \hat{\mu}^{k}(s'|s,a) = & \mu(s'|s,a) = 0 , \ \forall s' \in \mathcal{S}_{right}. \end{align*} Thus, we have \begin{align} & \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) } \nonumber\\ = & \frac{ \abr{\sum_{s' \in \mathcal{S}} \sbr{\hat{\mu}^{k}(s'|s,a)-\mu(s'|s,a)} \cdot V(s')} }{\alpha} \nonumber\\ \leq & \frac{\sum_{s' \in \mathcal{S}} \abr{\hat{\mu}^{k}(s'|s,a)-\mu(s'|s,a)} \cdot H}{\alpha} \nonumber\\ = & \frac{2 \sum_{s' \in \mathcal{S}_{left}} \abr{\hat{\mu}^{k}(s'|s,a)-\mu(s'|s,a)} \cdot H}{\alpha} \nonumber\\ = & \frac{2 \sum_{s' \in \mathcal{S}_{left}} \abr{\hat{p}^{k}(s'|s,a)-p(s'|s,a)} \cdot H}{\alpha} \nonumber\\ \leq & \frac{2 \sum_{s' \in \mathcal{S}} \abr{\hat{p}^{k}(s'|s,a)-p(s'|s,a)} \cdot H}{\alpha} . \label{eq:con_for_any_V_ell_1} \end{align} Using Eq. (55) in \cite{zanette2019tighter} (which originates from \cite{weissman2003inequalities}), we have that with probability at least $1-2\delta'$, for any $k \in [K]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, \begin{align} \sum_{s' \in \mathcal{S}} \abr{\hat{p}^{k}(s'|s,a)-p(s'|s,a)} \leq \sqrt{\frac{2S\log \sbr{\frac{KHSA}{\delta'}}}{n_k(s,a)}} . \label{eq:p_ell_1_concentration} \end{align} Plugging Eq.~\eqref{eq:p_ell_1_concentration} into Eq.~\eqref{eq:con_for_any_V_ell_1}, we have that with probability at least $1-2\delta'$, for any $k \in [K]$, $(s,a) \in \mathcal{S} \times \mathcal{A}$ and function $V:\mathcal{S} \mapsto [0,H]$, \begin{align*} \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))} \leq \frac{2H}{\alpha}\sqrt{\frac{2S\log \sbr{\frac{KHSA}{\delta'}}}{n_k(s,a)}} . \end{align*} \end{proof} \begin{figure} [t!] \centering \includegraphics[width=0.9\textwidth]{fig/Lemma_concentration_for_any_V.pdf} \caption{Illustrative example for Lemma~\ref{lemma:concentration_any_V}.} \label{fig:concentration_any_V} \end{figure} For any $k \in [K]$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, let $n_{kh}(s,a)$ denote the number of times $(s,a)$ was visited at step $h$ up to episode $k$, and let $n_{k}(s,a):=\sum_{h=1}^{H} n_{kh}(s,a)$ denote the number of times $(s,a)$ was visited up to episode $k$. \begin{lemma}[Concentration of Visitation]\label{lemma:con_visitation} It holds that \begin{align*} \Pr \Bigg[ n_k(s,a) \geq \frac{1}{2} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} , \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:con_visitation}] Applying Lemma F.4 in \cite{dann2017unifying}, we have that, for a fixed $h \in [H]$, \begin{align*} \Pr \mbr{ n_{kh}(s,a) \geq \frac{1}{2} \sum_{k'<k} w_{k'h}(s,a) - \log \sbr{\frac{HSA}{\delta'}} , \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} } \geq 1-\frac{\delta'}{H} . \end{align*} By a union bound over $h \in [H]$, we have \begin{align*} \Pr \mbr{ n_{k}(s,a) \geq \frac{1}{2} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} , \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} } \geq 1-\delta' .
\end{align*} \end{proof} \textbf{Concentration Events.} For ease of notation, we summarize the concentration events which will be used in our proof as follows: \begin{align*} \mathcal{E}_1:= & \Bigg\{ \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{KHSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall k \in [K], \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{E}_2:= & \Bigg\{ \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))} \leq \frac{2H}{\alpha}\sqrt{\frac{2S\log \sbr{\frac{KHSA}{\delta'}}}{n_k(s,a)}} , \\ & \forall V(\cdot): \mathcal{S} \mapsto [0,H], \forall k \in [K], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{E}_3:= & \Bigg\{ n_k(s,a) \geq \frac{1}{2} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} , \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{E}:= & \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \end{align*} \begin{lemma}\label{lemma:rm_con_event} Letting $\delta'=\frac{\delta}{5}$, it holds that \begin{align*} \Pr \mbr{\mathcal{E}} \geq 1-\delta . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:rm_con_event}] This lemma can be obtained by combining Lemmas~\ref{lemma:concentration_V_star},\ref{lemma:concentration_any_V},\ref{lemma:con_visitation}. \end{proof} \subsubsection{Optimism, Visitation and CVaR Gap} Let $L:=\log \sbr{\frac{KHSA}{\delta'}}$. \begin{lemma}[Optimism] \label{lemma:optimism} Suppose that event $\mathcal{E}$ holds. Then, for any $k \in [K]$, $h \in [H]$ and $s \in \mathcal{S}$, we have \begin{align*} \bar{V}^k_h(s) \geq V^{*}_h(s) . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:optimism}] We prove this lemma by induction. First, for any $k \in [K]$, $s \in \mathcal{S}$, it holds that $\bar{V}^k_{H+1}(s) = V^{*}_{H+1}(s)=0$. Then, for any $k \in [K]$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \bar{Q}^k_h(s,a) = & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s')) + \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s,a)}} \\ \overset{\textup{(a)}}{\geq} & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_{h+1}(s')) + \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s,a)}} \\ \overset{\textup{(b)}}{\geq} & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_{h+1}(s')) \\ = & Q^{*}_h(s,a) , \end{align*} where (a) uses the induction hypothesis and (b) comes from Lemma~\ref{lemma:concentration_V_star}. Thus, we have \begin{align*} \bar{V}^k_h(s) \geq \bar{Q}^k_h(s,\pi^{*}_h(s)) \geq Q^{*}_h(s,\pi^{*}_h(s)) = V^{*}_h(s) , \end{align*} which concludes the proof. \end{proof} For any episode $k$, define the set of state-action pairs \begin{align*} \mathcal{L}_k := \lbr{ (s,a) \in \mathcal{S} \times \mathcal{A}: \frac{1}{4} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) \geq H \log \sbr{\frac{HSA}{\delta'}} +H } . \end{align*} Let $\bar{n}_{k}(s,a):= \sum_{k' \leq k} \sum_{h=1}^{H} w_{k'h}(s,a)$. \begin{lemma}[Sufficient Visitation] \label{lemma:sufficient_visit} Suppose that event $\mathcal{E}$ holds. Then, for any $k$ and $(s,a) \in \mathcal{L}_k$, \begin{align*} n_k(s,a) \geq \frac{1}{4} \sum_{k' \leq k} \sum_{h=1}^{H} w_{k'h}(s,a) = \frac{1}{4} \bar{n}_k(s,a) . 
\end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:sufficient_visit}] This proof is the same as that of Lemma 6 in \cite{zanette2019tighter}. Using Lemma~\ref{lemma:con_visitation} and the definition of $\mathcal{L}_k$, we have \begin{align*} n_k(s,a) \geq & \frac{1}{2} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} \\ = & \frac{1}{4} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) + \frac{1}{4} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} \\ \geq & \frac{1}{4} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) + H \\ \geq & \frac{1}{4} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) + \sum_{h=1}^{H} w_{kh}(s,a) \\ = & \frac{1}{4} \sum_{k'\leq k} \sum_{h=1}^{H} w_{k'h}(s,a) \\ = & \frac{1}{4} \bar{n}_k(s,a) \end{align*} \end{proof} For any $k \in [K]$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, let $w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)$ denote the distorted probability (weight) of visiting $(s,a)$ at step $h$ in episode $k$ under the $\textup{CVaR}$ metric. It holds that $\sum_{s,a} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)=1$. \begin{lemma}[Insufficient Visitation] \label{lemma:insufficient_visit} It holds that \begin{align*} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \leq & \frac{ 8SAH }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} , \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:insufficient_visit}] First, using the definition of $\mathcal{L}_k$, we have \begin{align*} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w_{kh}(s,a) = & \sum_{s,a} \sum_{k=1}^{K} \sum_{h=1}^{H} w_{kh}(s,a) \cdot \indicator{(s,a) \notin \mathcal{L}_k} \\ < & \sum_{s,a} \sbr{4H \log \sbr{\frac{HSA}{\delta'}} +4H} \\ \leq & 8SAH \log\sbr{\frac{HSA}{\delta'}} \end{align*} Then, we have \begin{align*} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \overset{\textup{(a)}}{=} & \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} \frac{w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)}{w_{kh}(s,a)} \cdot w_{kh}(s,a) \cdot \\& \indicator{w_{kh}(s,a) \neq 0} \\ \leq & \frac{1}{\min \limits_{\begin{subarray}{c}\pi,h,(s,a):\\ w_{\pi,h}(s,a)>0\end{subarray}} w_{\pi,h}(s,a)} \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w_{kh}(s,a) \\ \overset{\textup{(b)}}{\leq} & \frac{ 8SAH }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} , \end{align*} Here (a) is due to that if $w_{kh}(s,a) = 0$, $w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)=0$. (b) comes from that for any deterministic policy $\pi$, we have either $w_{kh}(s,a)=w_{kh}(s)$ or $w_{kh}(s,a)=0$. \end{proof} \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{fig/Lemma_CVaR_gap_V_shift.pdf} \caption{Illustrating example for Lemma~\ref{lemma:cvar_increase_V}.} \label{fig:cvar_increase_V} \end{figure} For any risk level $z \in (0,1]$, function $V:\mathcal{S} \mapsto [0,H]$ and distribution $p(\cdot|s,a) \in \triangle_{\mathcal{S}} $, let $\beta^{\alpha,V}(\cdot|s,a)$ denote the distorted distribution (re-normalized weights) on $\mathcal{S}$ when computing $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))$, i.e., \begin{align*} \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) = \sum_{s' \in \mathcal{S}} \beta^{\alpha,V}(s'|s,a) \cdot V(s') . 
\end{align*} \begin{lemma}[CVaR Gap due to Value Function Shift] \label{lemma:cvar_increase_V} For any $(s,a) \in \mathcal{S} \times \mathcal{A}$ and functions $V,\bar{V}:\mathcal{S} \mapsto [0,H]$ such that $\bar{V}(s') \geq V(s')$ for any $s' \in \mathcal{S}$, \begin{align*} \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) \leq \beta^{\alpha,V}(\cdot|s,a)^\top \sbr{\bar{V}-V} . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:cvar_increase_V}] As shown in Figure~\ref{fig:cvar_increase_V}, let $\mu^{V}(s'|s,a)$, $\mu^{\bar{V}}(s'|s,a)$ denote the truncated weights imposed on the supports $V(s')$, $\bar{V}(s')$ when computing $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))$, $\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s'))$, respectively. It holds that \begin{align*} \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) = & \frac{\sum_{s' \in \mathcal{S}} \mu^{V}(s'|s,a) \cdot V(s')}{\alpha} = \sum_{s' \in \mathcal{S}} \beta^{\alpha,V}(s'|s,a) \cdot V(s') , \\ \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s')) = & \frac{\sum_{s' \in \mathcal{S}} \mu^{\bar{V}}(s'|s,a) \cdot \bar{V}(s')}{\alpha} = \sum_{s' \in \mathcal{S}} \beta^{\alpha,\bar{V}}(s'|s,a) \cdot \bar{V}(s') . \end{align*} Add a virtual line at the $\alpha$-quantile for the shifted value function (i.e., at $\textup{VaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s'))$), called ``the shifted $\alpha$-quantile line''. We divide all states $\mathcal{S}$ into three subsets $\mathcal{S}_{asc}$, $\mathcal{S}_{des}$ and $\mathcal{S}_{unch}$ according to how $\mu^{V}(s'|s,a)$ changes to $\mu^{\bar{V}}(s'|s,a)$ when $V(s')$ shifts to $\bar{V}(s')$, as follows: For any $s' \in \mathcal{S}_{asc}$, $\mu^{\bar{V}}(s'|s,a) \leq \mu^{V}(s'|s,a)$, the rank of $s'$ ascends, and $s'$ lies on or to the right of the shifted $\alpha$-quantile line. For any $s' \in \mathcal{S}_{des}$, $\mu^{\bar{V}}(s'|s,a) \geq \mu^{V}(s'|s,a)$, the rank of $s'$ descends, and $s'$ lies on or to the left of the shifted $\alpha$-quantile line. For any $s' \in \mathcal{S}_{unch}$, $\mu^{\bar{V}}(s'|s,a) = \mu^{V}(s'|s,a)$, the rank of $s'$ remains unchanged, and $s'$ lies to the right of the shifted $\alpha$-quantile line. It holds that \begin{align} \sum_{s' \in \mathcal{S}_{asc}} \sbr{\mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a)} + \sum_{s' \in \mathcal{S}_{des}} \sbr{\mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a)} =0 .
\label{eq:V_shift_asc_des} \end{align} Then, we have \begin{align*} & \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(\bar{V}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s')) \\ = & \frac{1}{\alpha} \cdot \Bigg( \sum_{s' \in \mathcal{S}_{asc}} \sbr{ \mu^{\bar{V}}(s'|s,a) \cdot \bar{V}(s') - \mu^{V}(s'|s,a) \cdot V(s') } \\& + \sum_{s' \in \mathcal{S}_{des}} \sbr{ \mu^{\bar{V}}(s'|s,a) \cdot \bar{V}(s') - \mu^{V}(s'|s,a) \cdot V(s') } \\& + \sum_{s' \in \mathcal{S}_{unch}} \sbr{ \mu^{\bar{V}}(s'|s,a) \cdot \bar{V}(s') - \mu^{V}(s'|s,a) \cdot V(s') } \Bigg) \\ = & \frac{1}{\alpha} \cdot \Bigg( \sum_{s' \in \mathcal{S}_{asc}} \bigg( \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } + \sbr{ \mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a) } \cdot \bar{V}(s') \bigg) \\& + \sum_{s' \in \mathcal{S}_{des}} \bigg( \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } + \sbr{ \mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a) } \cdot \bar{V}(s') \bigg) \\& + \sum_{s' \in \mathcal{S}_{unch}} \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } \Bigg) \\ = & \frac{1}{\alpha} \cdot \Bigg( \sum_{s' \in \mathcal{S}} \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } - \sum_{s' \in \mathcal{S}_{asc}} \sbr{\mu^{V}(s'|s,a) - \mu^{\bar{V}}(s'|s,a) } \cdot \bar{V}(s') \\ & + \sum_{s' \in \mathcal{S}_{des}} \sbr{ \mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a) } \cdot \bar{V}(s') \Bigg) \\ \overset{\textup{(a)}}{\leq} & \frac{1}{\alpha} \cdot \Bigg( \sum_{s' \in \mathcal{S}} \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } - \min_{s' \in \mathcal{S}_{asc}} \bar{V}(s') \cdot \sum_{s' \in \mathcal{S}_{asc}} \sbr{ \mu^{V}(s'|s,a) - \mu^{\bar{V}}(s'|s,a) } \\ & + \min_{s' \in \mathcal{S}_{asc}} \bar{V}(s') \cdot \sum_{s' \in \mathcal{S}_{des}} \sbr{ \mu^{\bar{V}}(s'|s,a) - \mu^{V}(s'|s,a) } \Bigg) \\ \overset{\textup{(b)}}{=} & \frac{1}{\alpha} \cdot \sum_{s' \in \mathcal{S}} \mu^{V}(s'|s,a) \cdot \sbr{ \bar{V}(s') - V(s') } \\ = & \beta^{\alpha,V}(\cdot|s,a)^\top \sbr{ \bar{V} - V } . \end{align*} Here (a) holds because $\mu^{\bar{V}}(s'|s,a) \leq \mu^{V}(s'|s,a)$ for any $s' \in \mathcal{S}_{asc}$, and because $\bar{V}(s_{1}) \geq \bar{V}(s_{2})$ for any $s_{1} \in \mathcal{S}_{asc}$ and $s_{2} \in \mathcal{S}_{des}$. (b) comes from Eq.~\eqref{eq:V_shift_asc_des}. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:cvar_rm_ub}} \begin{proof}[Proof of Theorem~\ref{thm:cvar_rm_ub}] Suppose that event $\mathcal{E}$ holds.
Then, for any $k \in [K]$, \begin{align} V^{*}_1(s^k_1)-V^{\pi^k}_1(s^k_1) \overset{\textup{(a)}}{\leq} & \bar{V}^k_1(s^k_1)-V^{\pi^k}_1(s^k_1) \nonumber\\ = & \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) + \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s^k_1,a^k_1)}} - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(V^{\pi^k}_{2}(s')) \nonumber\\ = & \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s^k_1,a^k_1)}}+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) \nonumber\\& + \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(\bar{V}^k_{2}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s^k_1,a^k_1)}(V^{\pi^k}_{2}(s')) \nonumber\\ \overset{\textup{(b)}}{\leq} & \frac{H}{\alpha}\sqrt{\frac{L}{n_k(s^k_1,a^k_1)}}+ \frac{H}{\alpha}\sqrt{\frac{SL}{n_k(s^k_1,a^k_1)}} + \beta^{\alpha,V^{\pi^k}_{2}}(\cdot|s^k_1,a^k_1)^\top (\bar{V}^k_{2} - V^{\pi^k}_{2}) \nonumber\\ \overset{\textup{(c)}}{\leq} & \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \cdot \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha\sqrt{n_k(s,a)}} \nonumber\\& + \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \cdot 2H , \label{eq:V_diff_an_episode} \end{align} where (a) uses Lemma~\ref{lemma:optimism}, (b) uses Lemmas~\ref{lemma:concentration_any_V} and~\ref{lemma:cvar_increase_V}, and (c) unrolls the recursion over steps $h \in [H]$ and uses that the estimation error term $\frac{H\sqrt{L}+H\sqrt{SL}}{\alpha\sqrt{n_k(s,a)}}$ has a universal upper bound $2H$, which is applied to the pairs outside $\mathcal{L}_k$. Since the second term in Eq.~\eqref{eq:V_diff_an_episode} can be bounded by Lemma~\ref{lemma:insufficient_visit}, in the following, we analyze the first term. Summing the first term in Eq.~\eqref{eq:V_diff_an_episode} over $k \in [K]$ and using the Cauchy--Schwarz inequality, we have \begin{align*} & \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha\sqrt{n_k(s,a)}} \\ \leq & \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha} \sqrt{ \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)}{n_k(s,a)}} \cdot \sqrt{\sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)} \\ \overset{\textup{(a)}}{\leq} & \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha} \sqrt{ \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)}{n_k(s,a)} \cdot \indicator{w_{kh}(s,a) \neq 0} } \cdot \sqrt{KH} \\ = & \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha} \sqrt{KH} \sqrt{ \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)}{w_{kh}(s,a)} \cdot \frac{w_{kh}(s,a)}{n_k(s,a)} \cdot \indicator{w_{kh}(s,a) \neq 0} } \\ \leq & \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha} \cdot \frac{ \sqrt{KH}}{\sqrt{\min \limits_{\begin{subarray}{c}\pi,h,(s,a):\\ w_{\pi,h}(s,a)>0\end{subarray}} w_{\pi,h}(s,a)}} \sqrt{ \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{w_{kh}(s,a)}{n_k(s,a)} } \\ \overset{\textup{(b)}}{\leq} & \frac{\sbr{H\sqrt{L}+H\sqrt{SL}} \sqrt{KH}}{\alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)}} \sqrt{SAL} \\ \leq & \frac{ 2HSL \sqrt{KHA} }{\alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)}} . \end{align*} Here (a) holds because $w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a)=0$ whenever $w_{kh}(s,a) = 0$, and because $\sum_{(s,a)} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) = 1$ for each pair $(k,h)$.
(b) uses Lemma 13 in \cite{zanette2019tighter} and the fact that for any deterministic policy $\pi$, either $w_{kh}(s,a)=w_{kh}(s)$ or $w_{kh}(s,a)=0$. Then, summing Eq.~\eqref{eq:V_diff_an_episode} over $k \in [K]$ and using Lemma~\ref{lemma:insufficient_visit}, we have \begin{align*} \mathcal{R}(K) = & \sum_{k=1}^{K} \sbr{V^{*}_1(s^k_1)-V^{\pi^k}_1(s^k_1)} \\ \leq & \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \frac{H\sqrt{L}+H\sqrt{SL}}{\alpha\sqrt{n_k(s,a)}} \\& + \sum_{k=1}^{K} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} w^{\textup{CVaR},\alpha,V^{\pi^k}}_{kh}(s,a) \cdot 2H \\ \leq & \frac{ 2HS \sqrt{KHA} }{\alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)}} \log\sbr{\frac{KHSA}{\delta'}} + \frac{ 16SAH^2 }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} . \end{align*} \end{proof} When $K$ is large enough, the first term dominates the bound, and thus we obtain Theorem~\ref{thm:cvar_rm_ub}. \subsection{Proofs for Regret Lower Bound} In this subsection, we prove the regret lower bound (Theorem~\ref{thm:cvar_rm_lb}) for Iterated CVaR RL-RM. \begin{figure} [t!] \centering \includegraphics[width=0.9\textwidth]{fig/instance_lb_apx.pdf} \caption{The instance for lower bounds (Theorems~\ref{thm:cvar_rm_lb},\ref{thm:bpi_lb}).} \label{fig:lower_bound} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:cvar_rm_lb}] Consider the instance shown in Figure~\ref{fig:lower_bound} (the same as Figure~\ref{fig:lower_bound_main_text} in the main text): The state space is $\mathcal{S}=\{s_1,s_2,\dots,s_n,x_1,x_2,x_3\}$, where $n=S-3$ and $s_1$ is the initial state. Let $H>S$ and $0<\alpha<\frac{1}{4}$. The reward functions are as follows: For any $a \in \mathcal{A}$, $r(x_1,a)=1$, $r(x_2,a)=0.8$, $r(x_3,a)=0.2$. For any $i \in [n]$ and $a \in \mathcal{A}$, $r(s_i,a)=0$. The transition distributions are as follows: For any $a \in \mathcal{A}$, $p(s_2|s_1,a)=\alpha$, $p(x_1|s_1,a)=1-3\alpha$, $p(x_2|s_1,a)=\alpha$ and $p(x_3|s_1,a)=\alpha$. For any $i\in\{2,\dots,n-1\}$ and $a \in \mathcal{A}$, $p(s_{i+1}|s_i,a)=\alpha$ and $p(x_1|s_i,a)=1-\alpha$. $x_1$, $x_2$ and $x_3$ are absorbing states, i.e., for any $a \in \mathcal{A}$, $p(x_1|x_1,a)=1$, $p(x_2|x_2,a)=1$ and $p(x_3|x_3,a)=1$. Let $a_{J}$ denote the optimal action in state $s_n$, where the index $J$ is drawn uniformly from $[A]$. For the optimal action $a_J$, $p(x_2|s_n,a_{J})=1-\alpha+\eta$ and $p(x_3|s_n,a_{J})=\alpha-\eta$, where $\eta$ is a parameter which satisfies $0<\eta<\alpha$ and will be specified later. For any suboptimal action $a \in \mathcal{A} \setminus \{a_{J}\}$, $p(x_2|s_n,a)=1-\alpha$ and $p(x_3|s_n,a)=\alpha$. For any $a_j \in \mathcal{A}$, let $\mathbb{E}_j[\cdot]$ and $\Pr_j[\cdot]$ denote the expectation and probability operators under the instance with $a_J=a_j$. Let $\mathbb{E}_{unif}[\cdot]$ and $\Pr_{unif}[\cdot]$ denote the expectation and probability operators under the uniform instance where all actions $a \in \mathcal{A}$ in state $s_n$ have the same transition distribution, i.e., $p(x_2|s_n,a)=1-\alpha$ and $p(x_3|s_n,a)=\alpha$. Fix an algorithm $\mathcal{A}$. Let $\pi^k$ denote the policy taken by $\mathcal{A}$ in episode $k$. Let $N_{s_n,a_j}=\sum_{k=1}^{K} \indicator{\pi^k(s_n)=a_j}$ denote the number of episodes in which the policy chooses $a_j$ in state $s_n$.
Let $V_{s_n,a_j}$ denote the number of episodes in which algorithm $\mathcal{A}$ visits $(s_n,a_j)$. Let $w(s_n)$ denote the probability of visiting $s_n$ in an episode (this probability is the same for any policy). Then, it holds that $\mathbb{E}[V_{s_n,a_j}]=w(s_n) \cdot \mathbb{E}[N_{s_n,a_j}]$. According to the definitions of the value function and regret in CVaR RL, we have \begin{align*} V^{*}_1(s_1) = & \frac{(\alpha-\eta)\cdot 0.2(H-n-1)+\eta \cdot 0.8(H-n-1)}{\alpha} , \\ V^{\pi}_1(s_1) = & \frac{(\alpha-\eta)\cdot 0.2(H-n-1)+\eta \cdot 0.8(H-n-1)}{\alpha} \cdot \indicator{\pi(s_n)=a_{J}} \\& + 0.2(H-n-1) \cdot \sbr{1-\indicator{\pi(s_n)=a_{J}}} . \end{align*} Thus, \begin{align} \mathbb{E} \mbr{\mathcal{R}(K)} = & \frac{1}{A} \sum_{j=1}^{A} \sum_{k=1}^{K} \sbr{V^{*}_1(s_1)-V^{\pi^k}_1(s_1)} \nonumber\\ = & \frac{1}{A} \sum_{j=1}^{A} \frac{\eta}{\alpha} \cdot 0.6 (H-n-1) \sbr{K-\mathbb{E}_j[N_{s_n,a_j}] } \nonumber\\ = & 0.6 (H-n-1) \cdot \frac{\eta}{\alpha} \cdot \sbr{K-\frac{1}{A} \sum_{j=1}^{A} \mathbb{E}_j[N_{s_n,a_j}] } . \label{eq:regret_lb_half} \end{align} For any $j \in [A]$, using the standard upper bound on the KL divergence between two Bernoulli distributions by the $\chi^2$ divergence, together with $0<\alpha<\frac{1}{4}$, we have that $\textup{KL}(p_{unif}(s_n,a_j)\|p_j(s_n,a_j)) = \textup{KL}(\mathtt{Ber}(\alpha)\|\mathtt{Ber}(\alpha-\eta)) \leq \frac{\eta^2}{(\alpha-\eta) (1-\alpha+\eta)} \leq \frac{c_1 \eta^2}{\alpha}$ for some constant $c_1$ and a small enough $\eta$. Then, using Lemma~A.1 in \cite{auer2002nonstochastic}, we have that for any $j \in [A]$, \begin{align*} \mathbb{E}_j[N_{s_n,a_j}] \leq & \mathbb{E}_{unif}[N_{s_n,a_j}] + \frac{K}{2} \sqrt{ \mathbb{E}_{unif}[V_{s_n,a_j}] \cdot \textup{KL}\sbr{p_{unif}(s_n,a_j)||p_j(s_n,a_j)} } \nonumber\\ \leq & \mathbb{E}_{unif}[N_{s_n,a_j}] + \frac{K}{2} \sqrt{ w(s_n) \cdot \mathbb{E}_{unif}[N_{s_n,a_j}] \cdot \frac{c_1 \eta^2}{\alpha} } . \end{align*} Then, using $\sum_{j=1}^{A} \mathbb{E}_{unif}[N_{s_n,a_j}]=K$ and the Cauchy–Schwarz inequality, we have \begin{align} \frac{1}{A} \sum_{j=1}^{A} \mathbb{E}_j[N_{s_n,a_j}] \leq & \frac{1}{A} \sum_{j=1}^{A} \mathbb{E}_{unif}[N_{s_n,a_j}] + \frac{K \eta}{2A} \sum_{j=1}^{A} \sqrt{ \frac{c_1}{\alpha} \cdot w(s_n) \cdot \mathbb{E}_{unif}[N_{s_n,a_j}] } \nonumber\\ \leq & \frac{1}{A} \sum_{j=1}^{A} \mathbb{E}_{unif}[N_{s_n,a_j}] + \frac{K \eta}{2A} \sqrt{ A \sum_{j=1}^{A} \frac{c_1}{\alpha} \cdot w(s_n) \cdot \mathbb{E}_{unif}[N_{s_n,a_j}] } \nonumber\\ \leq & \frac{K}{A} + \frac{K \eta}{2} \sqrt{ \frac{ c_1 \cdot w(s_n) K}{\alpha A} } . \label{eq:avg_N_a_j} \end{align} By plugging Eq.~\eqref{eq:avg_N_a_j} into Eq.~\eqref{eq:regret_lb_half}, we have \begin{align*} \mathbb{E} \mbr{\mathcal{R}(K)} \geq & 0.6 (H-n-1) \cdot \frac{\eta}{\alpha} \cdot \sbr{K - \frac{K}{A} - \frac{K \eta}{2} \sqrt{ \frac{c_1 \cdot w(s_n) K}{\alpha A} } } . \end{align*} Let $\eta=c_2\sqrt{\frac{\alpha A}{w(s_n) K}}$ for a small enough constant $c_2$. We have \begin{align*} \mathbb{E} \mbr{\mathcal{R}(K)} = & \Omega \sbr{ H \sqrt{\frac{A}{\alpha \cdot w(s_n) K}} \cdot K } \\ = & \Omega \sbr{ H \sqrt{\frac{AK}{\alpha \cdot w(s_n) }} } . \end{align*} Since $\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)=w(s_n)$ in the constructed instance (Figure~\ref{fig:lower_bound}), we have \begin{align*} \mathbb{E} \mbr{\mathcal{R}(K)} = & \Omega \sbr{ H \sqrt{\frac{AK}{\alpha \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s) }} } .
\end{align*} \end{proof} \section{Proofs for Iterated CVaR RL with Best Policy Identification} In this section, we give the proofs of the sample complexity upper and lower bounds (Theorems~\ref{thm:bpi_ub} and~\ref{thm:bpi_lb}) for Iterated CVaR RL-BPI. \subsection{Proofs for Sample Complexity Upper Bound} To prove the sample complexity upper bound (Theorem~\ref{thm:bpi_ub}), we first introduce the following lemmas (Lemmas~\ref{lemma:bpi_concentration_V_star}-\ref{lemma:estimate_error}) and define the concentration event $\mathcal{F}$. \subsubsection{Concentration} \begin{lemma}[Concentration for $V^{*}$ -- BPI] \label{lemma:bpi_concentration_V_star} It holds that \begin{align*} \Pr \Bigg[ & \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{2 k^3 HSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall k, \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-2\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bpi_concentration_V_star}] Using the same analysis as in the proof of Lemma~\ref{lemma:concentration_V_star}, we have that for a fixed $k$, \begin{align*} \Pr \Bigg[ & \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{2 k^3 HSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-2 \cdot \frac{\delta'}{2 k^2} . \end{align*} By a union bound over $k=1,2,\dots$, we have \begin{align*} \Pr \Bigg[ & \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{2 k^3 HSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall k, \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \\ \geq & 1-2 \cdot \sum_{k=1}^{\infty} \sbr{\frac{\delta'}{2 k^2}} \\ \geq & 1-2\delta' . \end{align*} \end{proof} \begin{lemma}[Concentration for any $V$ -- BPI] \label{lemma:bpi_concentration_any_V} It holds that \begin{align*} \Pr \Bigg[ &\abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))} \leq \frac{2H}{\alpha}\sqrt{\frac{2S\log \sbr{\frac{2k^3HSA}{\delta'}}}{n_k(s,a)}} , \\ & \forall V(\cdot): \mathcal{S} \mapsto [0,H], \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg] \geq 1-2\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bpi_concentration_any_V}] Using the same analysis as in the proof of Lemma~\ref{lemma:concentration_any_V} and a union bound over $k=1,2,\dots$, we can obtain this lemma. \end{proof} \textbf{Concentration Events.} For ease of notation, in the following, we summarize the concentration events which will be used in the proof for BPI, and recall the aforementioned event $\mathcal{E}_3$.
\begin{align*} \mathcal{F}_1:= & \Bigg\{ \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{*}_h(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{*}_h(s'))} \leq \frac{H}{\alpha}\sqrt{\frac{ \log \sbr{\frac{2k^3HSA}{\delta'}} }{n_k(s,a)}} , \\ & \forall k, \forall h \in [H], \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{F}_2:= & \Bigg\{ \abr{\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V(s'))} \leq \frac{2H}{\alpha}\sqrt{\frac{2S\log \sbr{\frac{2k^3HSA}{\delta'}}}{n_k(s,a)}} , \\ & \forall V(\cdot): \mathcal{S} \mapsto [0,H], \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{E}_3:= & \Bigg\{ n_k(s,a) \geq \frac{1}{2} \sum_{k'<k} \sum_{h=1}^{H} w_{k'h}(s,a) - H \log \sbr{\frac{HSA}{\delta'}} , \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{F}:= & \mathcal{F}_1 \cap \mathcal{F}_2 \cap \mathcal{E}_3 \end{align*} \begin{lemma}\label{lemma:bpi_con_event} Letting $\delta'=\frac{\delta}{5}$, it holds that \begin{align*} \Pr \mbr{\mathcal{F}} \geq 1-\delta . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bpi_con_event}] This lemma can be obtained by combining Lemmas~\ref{lemma:bpi_concentration_V_star}, \ref{lemma:bpi_concentration_any_V} and~\ref{lemma:con_visitation}. \end{proof} \subsubsection{Optimism and Estimation Error} For any positive integer $k$, let $\tilde{L}(k):=\log \sbr{\frac{2HSA k^3}{\delta'}}$. \begin{lemma}[Optimism and Pessimism] \label{lemma:bpi_optimism_pessimism} Suppose that event $\mathcal{F}$ holds. Then, for any $k$, $h \in [H]$ and $s \in \mathcal{S}$, \begin{align*} \bar{V}^k_h(s) & \geq V^{*}_h(s) , \\ \underline{V}^k_h(s) & \leq V^{\pi^k}_h(s) . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bpi_optimism_pessimism}] The proof of $\bar{V}^k_h(s) \geq V^{*}_h(s)$ is similar to that of Lemma~\ref{lemma:optimism}. Below we prove $\underline{V}^k_h(s) \leq V^{\pi^k}_h(s)$ by induction. First, for any $k$ and $s \in \mathcal{S}$, it holds that $\underline{V}^k_{H+1}(s) = V^{\pi^k}_{H+1}(s)=0$. Then, for any $k$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \underline{Q}^k_h(s,a) = & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s')) - \frac{H}{\alpha}\sqrt{\frac{S\tilde{L}(k)}{n_k(s,a)}} \\ \overset{\textup{(a)}}{\leq} & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{\pi^k}_{h+1}(s')) - \frac{H}{\alpha}\sqrt{\frac{S\tilde{L}(k)}{n_k(s,a)}} \\ \overset{\textup{(b)}}{\leq} & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{\pi^k}_{h+1}(s')) \\& - \sbr{ \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(V^{\pi^k}_{h+1}(s')) - \textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{\pi^k}_{h+1}(s')) } \\ = & r(s,a)+\textup{CVaR}^{\alpha}_{s' \sim p(\cdot|s,a)}(V^{\pi^k}_{h+1}(s')) \\ = & Q^{\pi^k}_h(s,a) , \end{align*} where (a) uses the induction hypothesis and (b) comes from event $\mathcal{F}_2$ (Lemma~\ref{lemma:bpi_concentration_any_V}). Thus, we have \begin{align*} \underline{V}^k_h(s) = \underline{Q}^k_h(s,\pi^k_h(s)) \leq Q^{\pi^k}_h(s,\pi^k_h(s)) = V^{\pi^k}_h(s) , \end{align*} which concludes the proof. \end{proof} \begin{lemma}[Estimation Error] \label{lemma:estimate_error} Suppose that event $\mathcal{F}$ holds.
Then, for any $k$, \begin{align*} V^{*}_1(s_1)-V^{\pi^k}_1(s_1) \leq J^k_1(s_1) . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:estimate_error}] We prove by induction that for any $k$, $h \in [H]$ and $s \in \mathcal{S}$, \begin{align*} \bar{V}^k_h(s)-\underline{V}^k_h(s) \leq J^k_h(s) . \end{align*} First, for any $k$, it holds that $\bar{V}^k_{H+1}(s)-\underline{V}^k_{H+1}(s) = J^k_{H+1}(s)=0$. Let $\hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a)$ denote the distorted distribution (re-normalized weights) when computing $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s'))$, i.e., \begin{align*} \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s')) = \sum_{s' \in \mathcal{S}} \hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(s'|s,a) \cdot \underline{V}^k_{h+1}(s') . \end{align*} Then, for any $k$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \bar{Q}^k_h(s,a)-\underline{Q}^k_h(s,a) = & \frac{H}{\alpha}\sqrt{\frac{\tilde{L}(k)}{n_k(s,a)}} + \frac{H}{\alpha}\sqrt{\frac{S \tilde{L}(k)}{n_k(s,a)}} \\ & + \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\bar{V}^k_{h+1}(s')) - \textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s')) \\ \overset{\textup{(a)}}{\leq} & \frac{H\sqrt{\tilde{L}(k)}(1+\sqrt{S})}{\alpha\sqrt{n_k(s,a)}} + \hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a)^\top \sbr{\bar{V}^k_{h+1}-\underline{V}^k_{h+1}} \\ \overset{\textup{(b)}}{\leq} & \frac{H\sqrt{\tilde{L}(k)}(1+\sqrt{S})}{\alpha\sqrt{n_k(s,a)}} + \hat{\beta}^{k;\alpha,\underline{V}^k_{h+1}}(\cdot|s,a)^\top J^k_{h+1} \\ = & G^k_h(s,a) , \end{align*} where (a) uses Lemma~\ref{lemma:cvar_increase_V} and (b) is due to the induction hypothesis. Thus, \begin{align*} \bar{V}^k_h(s)-\underline{V}^k_h(s)=\bar{Q}^k_h(s,\pi^k_h(s))-\underline{Q}^k_h(s,\pi^k_h(s)) \leq G^k_h(s,\pi^k_h(s)) = J^k_h(s) , \end{align*} which completes the induction. Thus, for any $k$, \begin{align*} \bar{V}^k_1(s_1)-\underline{V}^k_1(s_1) \leq J^k_1(s_1) . \end{align*} Using Lemma~\ref{lemma:bpi_optimism_pessimism}, we have \begin{align*} V^{*}_1(s_1)-V^{\pi^k}_1(s_1) \leq \bar{V}^k_1(s_1)-\underline{V}^k_1(s_1) \leq J^k_1(s_1) . \end{align*} \end{proof} \subsubsection{Proof of Theorem~\ref{thm:bpi_ub}} \begin{proof}[Proof of Theorem~\ref{thm:bpi_ub}] Suppose that event $\mathcal{F}$ holds. First, we prove the correctness. Using Lemma~\ref{lemma:estimate_error}, when algorithm $\mathtt{ICVaR\mbox{-}BPI}$ stops, we have \begin{align*} V^{*}_1(s_1)-V^{\pi^k}_1(s_1) \leq J^k_1(s_1) \leq \varepsilon . \end{align*} Thus, the output policy $\pi^k$ is $\varepsilon$-optimal. Next, we prove the sample complexity. Let $\hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a)$ denote the distorted probability (weight) of visiting $(s,a)$ at step $h$ in episode $k$ under the distorted transitions induced by $\textup{CVaR}^{\alpha}_{s' \sim \hat{p}^k(\cdot|s,a)}(\underline{V}^k_{h+1}(s'))$. Let $K$ denote the episode in which algorithm $\mathtt{ICVaR\mbox{-}BPI}$ stops. Then, for any $k \in [K-1]$, we have $\varepsilon < J^k_1(s_1) $.
Summing over $k \in [K-1]$ and unfolding $J^k_1(s_1)$, we have \begin{align*} (K-1) \cdot \varepsilon < & \sum_{k=1}^{K-1} J^k_1(s_1) \\ \overset{\textup{(a)}}{\leq} & \sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a) \frac{H (1+\sqrt{S}) \sqrt{\tilde{L}(k)}}{\alpha \sqrt{n_k(s,a)}} \\&+ \sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \notin \mathcal{L}_k} \hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a) \cdot H \\ \overset{\textup{(b)}}{\leq} & \frac{H (1+\sqrt{S}) \sqrt{\tilde{L}(K-1)}}{\alpha} \sqrt{\sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{\hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a)}{n_k(s,a)}} \cdot \\& \sqrt{\sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a)} + \frac{ 8SAH^2 }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} \\ \leq & \frac{H (1+\sqrt{S}) \sqrt{\tilde{L}(K-1)} }{\alpha} \cdot \sqrt{(K-1)H} \cdot \\& \sqrt{\sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{\hat{w}^{\textup{CVaR},\alpha,\underline{V}^k_{h+1}}_{kh}(s,a)}{w_{kh}(s,a)} \cdot \frac{w_{kh}(s,a)}{n_k(s,a)} \indicator{w_{kh}(s,a) \neq 0} } \\ & + \frac{ 8SAH^2 }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} \\ \overset{\textup{(c)}}{\leq} & \frac{(1+\sqrt{S})H\sqrt{H \cdot \tilde{L}(K-1) \cdot (K-1)}}{\alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,(s,a):\\ w_{\pi,h}(s,a)>0\end{subarray}} w_{\pi,h}(s,a)} } \sqrt{ \sum_{k=1}^{K-1} \sum_{h=1}^{H} \sum_{(s,a) \in \mathcal{L}_k} \frac{w_{kh}(s,a)}{\frac{1}{4} \bar{n}_k(s,a)} } \\& + \frac{ 8SAH^2 }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} \\ \leq & \frac{4SH \cdot \tilde{L}(K-1) \cdot \sqrt{HA(K-1)}}{\alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} } + \frac{ 8SAH^2 }{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}}, \end{align*} where (a) unfolds $J^k_1(s_1)$ over steps $h \in [H]$ and uses that each estimation error term has a universal upper bound $H$ (applied to the pairs outside $\mathcal{L}_k$), (b) uses the Cauchy--Schwarz inequality and Lemma~\ref{lemma:insufficient_visit}, (c) uses Lemma~\ref{lemma:sufficient_visit}, and the last step uses Lemma 13 in \cite{zanette2019tighter} together with the fact that for any deterministic policy $\pi$, either $w_{kh}(s,a)=w_{kh}(s)$ or $w_{kh}(s,a)=0$.
Thus, we have \begin{align*} K-1 \leq & \frac{4SH\sqrt{HA}}{\varepsilon \alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} } \cdot \sqrt{K-1} \cdot \log\sbr{\frac{2HSA(K-1)^3}{\delta'}} \\& + \frac{ 8SAH^2 }{\varepsilon \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log\sbr{\frac{HSA}{\delta'}} . \end{align*} Using Lemma 13 in \cite{menard2021fast} with (in the notation of that lemma) $A=B=1,\ E=0,\ \tau=K-1,\ \alpha=\frac{2HSA}{\delta'},\ C=\frac{12SH\sqrt{HA}}{\varepsilon \alpha \cdot \sqrt{\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} }$ and $ D=\frac{ 8SAH^2 }{\varepsilon \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} $, we have that the number of used trajectories is bounded as \begin{align*} K-1 = O \sbr{ \frac{ H^3 S^2 A}{\varepsilon^2 \alpha^2 \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s) } \cdot \log\sbr{\frac{HSA}{\delta \cdot \varepsilon \alpha \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s) } } } . \end{align*} \end{proof} \subsection{Proofs for Sample Complexity Lower Bound} In this subsection, we give the proof of the sample complexity lower bound (Theorem~\ref{thm:bpi_lb}) for Iterated CVaR RL-BPI. \begin{proof}[Proof of Theorem~\ref{thm:bpi_lb}] This proof uses a similar analytical procedure to that of Theorem 2 in \cite{dann2015sample}. Consider the same instance as in the proof of Theorem~\ref{thm:cvar_rm_lb} (Figure~\ref{fig:lower_bound}). Fix an algorithm $\mathcal{A}$. Define $\mathcal{E}_{s_n}:=\{ \hat{\pi}(s_n)=a_J \}$ as the event that the output policy $\hat{\pi}$ of algorithm $\mathcal{A}$ chooses the optimal action in state $s_n$. Then, we have \begin{align*} V^{*}_1(s_1)-V^{\hat{\pi}}_1(s_1) = & 0.6 (H-n-1) \cdot \frac{\eta}{\alpha} \cdot (1-\indicator{ \mathcal{E}_{s_n} }) . \end{align*} For $\hat{\pi}$ to be $\varepsilon$-optimal, we need \begin{align*} \varepsilon \geq V^{*}_1(s_1)-V^{\hat{\pi}}_1(s_1) = 0.6 (H-n-1) \cdot \frac{\eta}{\alpha} \cdot (1-\indicator{ \mathcal{E}_{s_n} }) , \end{align*} which is equivalent to \begin{align*} \indicator{ \mathcal{E}_{s_n} } \geq 1- \frac{\varepsilon \alpha}{0.6 (H-n-1) \cdot \eta} . \end{align*} Let $\eta=\frac{8 e^4 \varepsilon \alpha}{0.6c_0 (H-n-1)}$ for some constant $c_0$. Then, for $\hat{\pi}$ to be $\varepsilon$-optimal, we need \begin{align*} \indicator{ \mathcal{E}_{s_n} } \geq 1-\frac{c_0}{8e^4} . \end{align*} Let $\phi:=1-\frac{c_0}{8e^4}$. For algorithm $\mathcal{A}$ to be $(\varepsilon,\delta)$-correct, we need \begin{align*} 1-\delta \leq & \Pr[ V^{*}_1(s_1)-V^{\hat{\pi}}_1(s_1) \leq \varepsilon] \\ \leq & \Pr[\indicator{ \mathcal{E}_{s_n} } \geq \phi] \\ \leq & \frac{\mathbb{E}[\indicator{ \mathcal{E}_{s_n} }]}{\phi} \\ = & \frac{1}{\phi} \Pr[\mathcal{E}_{s_n}] , \end{align*} where the third step uses Markov's inequality. This is equivalent to \begin{align*} \Pr[\bar{\mathcal{E}}_{s_n}] = 1- \Pr[\mathcal{E}_{s_n}] \leq 1-\phi+\phi \delta . \end{align*} Let $V_{s_n}$ be the number of times that algorithm $\mathcal{A}$ visits state $s_n$. To ensure $\Pr[\bar{\mathcal{E}}_{s_n}] \leq 1-\phi+\phi \delta$, we need \begin{align*} \mathbb{E}[V_{s_n}] \geq & \frac{c_1 \cdot \alpha A}{\eta^2} \log \sbr{\frac{c_2}{1-\phi+\phi \delta}} \\ = & \frac{c_1 \cdot \alpha A \cdot 0.6^2 c_0^2 (H-n-1)^2}{64 e^8 \varepsilon^2 \alpha^2} \log \sbr{\frac{c_2}{\frac{c_0}{8e^4} + \delta}} , \end{align*} for some constants $c_1,c_2$. Let $c_0$ be a small constant such that $\frac{c_0}{8e^4} < \delta$.
Let $w(s_n)$ be the probability of visiting $s_n$ in an episode (this probability is the same for any policy). Then, the number of required trajectories for $\mathcal{A}$ to be $(\varepsilon,\delta)$-correct is at least \begin{align*} K \geq & \frac{c_1 A \cdot 0.6^2 c_0^2 (H-n-1)^2}{64 e^8 \varepsilon^2 \alpha \cdot w(s_n)} \log \sbr{\frac{c_2}{\frac{c_0}{8e^4} + \delta}} \\ = & \Omega \sbr{ \frac{ H^2 A}{\varepsilon^2 \alpha \cdot w(s_n)} \log \sbr{\frac{1}{\delta}} } . \end{align*} Since $\min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)=w(s_n)$ in the constructed instance (Figure~\ref{fig:lower_bound}), we have \begin{align*} K=\Omega \sbr{ \frac{ H^2 A}{\varepsilon^2 \alpha \cdot \min \limits_{\begin{subarray}{c}\pi,h,s:\\ w_{\pi,h}(s)>0\end{subarray}} w_{\pi,h}(s)} \log \sbr{\frac{1}{\delta}} } . \end{align*} \end{proof} \section{Proofs for Worst Path RL} In this section, we provide the proofs of the regret upper and lower bounds (Theorems~\ref{thm:min_regret_ub} and~\ref{thm:min_regret_lb}) for Worst Path RL. \subsection{Proofs for Regret Upper Bound} In order to prove the regret upper bound, we first introduce the following lemmas (Lemmas~\ref{lemma:bernoulli_upsilon(s,a)}-\ref{lemma:min_con_event}) and define the concentration event $\mathcal{G}$. \subsubsection{Concentration} Recall that $n_k(s,a)$ is the number of times that the algorithm has visited $(s,a)$ up to episode $k$. For any $k$ and $(s',s,a)\in \mathcal{S} \times \mathcal{S} \times \mathcal{A}$, let $n_k(s',s,a)$ denote the number of times that the algorithm has visited $(s,a)$ and transitioned to $s'$ up to episode $k$. For any policy $\pi$ and $(s,a)\in \mathcal{S} \times \mathcal{A}$, let $\upsilon_{\pi}(s,a)$ be the probability that $(s,a)$ is visited at least once in an episode under policy $\pi$. \begin{lemma} \label{lemma:bernoulli_upsilon(s,a)} It holds that \begin{align*} \Pr \mbr{n_k(s,a) \geq \frac{1}{2} \sum_{k'<k} \upsilon_{\pi^{k'}}(s,a) - \log \sbr{\frac{SA}{\delta'}}, \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A}} \geq 1-\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bernoulli_upsilon(s,a)}] For any $k$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, conditioning on the filtration of episodes $1,\dots,k-1$, whether the algorithm visits $(s,a)$ at least once in episode $k$ is a Bernoulli random variable with success probability $\upsilon_{\pi^k}(s,a)$. Then, using Lemma F.4 in \cite{dann2017unifying}, we can obtain this lemma. \end{proof} \begin{lemma} \label{lemma:bernoulli_p(s'|s,a)} It holds that \begin{align*} \Pr \Bigg\{ & n_k(s',s,a) \geq \frac{1}{2} \cdot n_k(s,a) \cdot p(s'|s,a) - 2 \log \sbr{\frac{SA}{\delta'}}, \\ & \forall k, \forall (s',s,a) \in \mathcal{S} \times \mathcal{S} \times \mathcal{A} \Bigg\} \geq 1-\delta' . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:bernoulli_p(s'|s,a)}] For any $k$, $h \in [H]$ and $(s,a) \in \mathcal{S} \times \mathcal{A}$, conditioning on the event $\{s^k_h=s,a^k_h=a\}$, the indicator $\indicator{s^k_{h+1}=s'}$ is a Bernoulli random variable with success probability $p(s'|s,a)$. Then, using Lemma F.4 in \cite{dann2017unifying}, we can obtain this lemma.
\end{proof} \textbf{Concentration Events.} For ease of notation, we summarize the concentration events which will be used in the proof for algorithm $\mathtt{MaxWP}$ as follows: \begin{align*} \mathcal{G}_1:= & \Bigg\{ n_k(s,a) \geq \frac{1}{2} \sum_{k'<k} \upsilon_{\pi^{k'}}(s,a) - \log \sbr{\frac{SA}{\delta'}}, \ \forall k, \forall (s,a) \in \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{G}_2:= & \Bigg\{ n_k(s',s,a) \geq \frac{1}{2} \cdot n_k(s,a) \cdot p(s'|s,a) - 2 \log \sbr{\frac{SA}{\delta'}}, \ \forall k, \forall (s',s,a) \in \mathcal{S} \times \mathcal{S} \times \mathcal{A} \Bigg\} \\ \mathcal{G}:= & \mathcal{G}_1 \cap \mathcal{G}_2 \end{align*} \begin{lemma}\label{lemma:min_con_event} Letting $\delta'=\frac{\delta}{2}$, it holds that \begin{align*} \Pr \mbr{\mathcal{G}} \geq 1-\delta . \end{align*} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma:min_con_event}] This lemma can be obtained by combining Lemmas~\ref{lemma:bernoulli_upsilon(s,a)} and~\ref{lemma:bernoulli_p(s'|s,a)}. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:min_regret_ub}} \begin{proof}[Proof of Theorem~\ref{thm:min_regret_ub}] Suppose that event $\mathcal{G}$ holds. Let $$ \bar{T}:= \sum \limits_{(s,a)} \frac{1}{\min \limits_{\begin{subarray}{c} \pi: \\ \upsilon_{\pi}(s,a)>0 \end{subarray}} \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } \cdot 8 \sbr{2\log\sbr{ \frac{SA}{\delta} }+1} . $$ For any $(s,a) \in \mathcal{S} \times \mathcal{A}$, let $$ T(s,a):= \frac{1}{\min \limits_{\begin{subarray}{c} \pi: \\ \upsilon_{\pi}(s,a)>0 \end{subarray}} \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } \cdot 8 \sbr{2\log\sbr{ \frac{SA}{\delta} }+1} . $$ It holds that $\bar{T}=\sum_{s,a} T(s,a)$. To prove Theorem~\ref{thm:min_regret_ub}, it suffices to prove that algorithm $\mathtt{MaxWP}$ never takes a sub-optimal policy after episode $\bar{T}$. In the following, we prove this statement by contradiction. Suppose that in some episode $k>\bar{T}$, algorithm $\mathtt{MaxWP}$ takes a sub-optimal policy $\pi^{k}$. Note that, under the $\min$ metric, if a Q-value/V-value is not accurately estimated, it can only be overestimated (not underestimated). In addition, the overestimation of a Q-value/V-value stems from one of the following two reasons: (i) Algorithm $\mathtt{MaxWP}$ has not detected a bad successor state; (ii) The V-value of the successor state is overestimated. By this supposition, in episode $k$, algorithm $\mathtt{MaxWP}$ chooses some $(s,a_{sub})$ with an overestimated Q-value at some step $h$, i.e., $\hat{Q}^{k}_h(s,a_{sub}) > Q^{*}_h(s,a_{sub})$ and $\upsilon_{\pi^k}(s,a_{sub})>0$. Then, there exists some $(s',a')$ with an overestimated Q-value and an accurate future V-value at step $h' \geq h$, i.e., $\hat{Q}^{k}_{h'}(s',a') > Q^{*}_{h'}(s',a')$, $\hat{V}^{k}_{h'+1}(\cdot)=V^{*}_{h'+1}(\cdot)$ point-wise, and $\upsilon_{\pi^k}(s',a')>0$. In other words, the overestimation of $\hat{Q}^{k}_{h'}(s',a')$ is purely due to the fact that algorithm $\mathtt{MaxWP}$ has not yet detected a bad successor state of $(s',a')$. For any $(s,a) \in \mathcal{S} \times \mathcal{A}$, let $\mathcal{T}^k(s,a)=\{k' < k: \upsilon_{\pi^{k'}}(s,a)>0\}$ denote the subset of episodes $1,\dots,k-1$ in which $(s,a)$ is reachable, i.e., in which $\upsilon_{\pi^{k'}}(s,a)>0$.
\paragraph{Case (1):} If $|\mathcal{T}^k(s',a')| \geq T(s',a')$, using Lemma~\ref{lemma:bernoulli_upsilon(s,a)}, we have \begin{align*} n_k(s',a') \geq & \frac{1}{2} \sum_{k'<k} \upsilon_{\pi^{k'}}(s',a') - \log \sbr{\frac{SA}{\delta'}} \\ \geq & \frac{1}{2} \cdot T(s',a') \cdot \min \limits_{\begin{subarray}{c} \pi: \\ \upsilon_{\pi}(s',a')>0 \end{subarray}} \upsilon_{\pi}(s',a') - \log\sbr{ \frac{SA}{\delta} } \\ = & \frac{ 4 \sbr{2\log\sbr{ \frac{SA}{\delta} }+1} }{\min \limits_{s'' \in \textup{supp}( p(\cdot|s',a'))} p(s''|s',a') } - \log\sbr{ \frac{SA}{\delta} } \\ \geq & \frac{ 2 \sbr{2\log\sbr{ \frac{SA}{\delta} }+1} }{\min \limits_{s'' \in \textup{supp}( p(\cdot|s',a'))} p(s''|s',a') } . \end{align*} Then, using Lemma~\ref{lemma:bernoulli_p(s'|s,a)}, we have that for any $s'' \in \textup{supp}(p(\cdot|s',a'))$, \begin{align*} n_k(s'',s',a') \geq & \frac{1}{2} \cdot n_k(s',a') \cdot \min \limits_{s'' \in \textup{supp}( p(\cdot|s',a'))} p(s''|s',a') - 2 \log\sbr{ \frac{SA}{\delta} } \\ \geq & 1 , \end{align*} which contradicts the overestimation of $\hat{Q}^{k}_{h'}(s',a')$: every successor state of $(s',a')$ has then been observed at least once, and since $\hat{V}^{k}_{h'+1}(\cdot)=V^{*}_{h'+1}(\cdot)$ point-wise, $\hat{Q}^{k}_{h'}(s',a')$ is estimated exactly. \paragraph{Case (2):} If $|\mathcal{T}^k(s',a')| < T(s',a')$, then we exclude all the episodes in $\mathcal{T}^k(s',a')$, and consider the last episode $\tilde{k} < k$ in which the algorithm took a sub-optimal policy. Note that the excluded state-action pair $(s',a')$ cannot be visited in episode $\tilde{k}$. Then, we repeatedly apply the argument in this proof. Once Case (1) happens, we derive a contradiction and complete the proof. Otherwise, Case (2) happens repeatedly and we eventually exclude the episodes in $\mathcal{T}^k(s,a)$ for all $(s,a) \in \mathcal{S} \times \mathcal{A}$. Since $\sum_{(s,a)} |\mathcal{T}^k(s,a)| < \sum_{(s,a)} T(s,a)=\bar{T}<k$, there exists an episode $k_1 < k$ where $\upsilon_{\pi^{k_1}}(s,a)=0$ for any $(s,a) \in \mathcal{S} \times \mathcal{A}$, which gives a contradiction. \end{proof} \subsection{Proofs for Regret Lower Bound} In this subsection, we prove the regret lower bound (Theorem~\ref{thm:min_regret_lb}) for Worst Path RL. \begin{figure} [t!] \centering \includegraphics[width=0.9\textwidth]{fig/instance_lb_min.pdf} \caption{The instance for lower bound under the $\min$ metric (Theorem~\ref{thm:min_regret_lb}).} \label{fig:lower_bound_min} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:min_regret_lb}] Consider the instance $\mathcal{I}$ as shown in Figure~\ref{fig:lower_bound_min}: The action space contains two actions, i.e., $\mathcal{A}=\{a_1,a_2\}$. The state space is $\mathcal{S}=\{s_1,s_2,\dots,s_n,x_1,x_2,x_3\}$, where $n=S-3$ and $s_1$ is the initial state. Let $H>S$ and $0<\alpha<\frac{1}{4}$. The reward functions are as follows: For any $a \in \mathcal{A}$, $r(x_1,a)=1$, $r(x_2,a)=0.8$ and $r(x_3,a)=0.2$. For any $i \in [n]$ and $a \in \mathcal{A}$, $r(s_i,a)=0$. The transition distributions are as follows: For any $a \in \mathcal{A}$, $p(s_2|s_1,a)=\alpha$, $p(x_1|s_1,a)=1-3\alpha$, $p(x_2|s_1,a)=\alpha$ and $p(x_3|s_1,a)=\alpha$. For any $i\in\{2,\dots,n-1\}$ and $a \in \mathcal{A}$, $p(s_{i+1}|s_i,a)=\alpha$ and $p(x_1|s_i,a)=1-\alpha$. $x_1$, $x_2$ and $x_3$ are absorbing states, i.e., for any $a \in \mathcal{A}$, $p(x_1|x_1,a)=1$, $p(x_2|x_2,a)=1$ and $p(x_3|x_3,a)=1$. The state $s_n$ is a bandit state, which has an optimal action and a suboptimal action. Let $a_*$ denote the optimal action in state $s_n$, which is uniformly drawn from $\{a_1,a_2\}$, and let $a_{sub}$ denote the other, sub-optimal action in state $s_n$. For the optimal action $a_*$, $p(x_2|s_n,a_*)=1$.
For the sub-optimal action $a_{sub}$, $p(x_2|s_n,a_{sub})=1-\alpha$ and $p(x_3|s_n,a_{sub})=\alpha$. Fix an $o(K)$-consistent algorithm $\mathcal{A}$, i.e., an algorithm that guarantees sub-linear regret on every instance of Worst Path RL. We have that $\mathcal{A}$ needs to observe the transition from $(s_n,a_{sub})$ to $x_3$ at least once. Otherwise, $\mathcal{A}$ cannot distinguish whether the sub-optimal action in state $s_n$ is $a_1$ or $a_2$. Specifically, whichever of $a_1$ and $a_2$ the algorithm $\mathcal{A}$ commits to in state $s_n$, it suffers linear regret on the instance in which the other action is optimal. Thus, any $o(K)$-consistent algorithm must observe the transition from $(s_n,a_{sub})$ to $x_3$ at least once, and needs at least $$ \frac{1}{\upsilon_{\pi_{sub}}(s_n,a_{sub}) \cdot p(x_3|s_n,a_{sub})} $$ episodes with sub-optimal policies in expectation. Here $\pi_{sub}$ denotes a policy which chooses $a_{sub}$ in state $s_n$, and $\upsilon_{\pi_{sub}}(s_n,a_{sub})$ denotes the probability that $(s_n,a_{sub})$ is visited at least once in an episode under policy $\pi_{sub}$. Therefore, $\mathcal{A}$ must incur at least $$ \Omega \sbr{\frac{1}{\upsilon_{\pi_{sub}}(s_n,a_{sub}) \cdot p(x_3|s_n,a_{sub})} \cdot \Delta_{\min}} $$ regret in expectation. Since in the constructed instance (Figure~\ref{fig:lower_bound_min}) \begin{align*} &\max_{\begin{subarray}{c} (s,a):\\ \exists h, a \neq \pi^{*}_h(s) \end{subarray}} \frac{1}{\min \limits_{\begin{subarray}{c} \pi: \\ \upsilon_{\pi}(s,a)>0 \end{subarray}} \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } = \frac{1}{\upsilon_{\pi_{sub}}(s_n,a_{sub}) \cdot p(x_3|s_n,a_{sub})} , \end{align*} we have that $\mathcal{A}$ must incur at least \begin{align*} \Omega \sbr{ \max_{\begin{subarray}{c} (s,a):\\ \exists h, a \neq \pi^{*}_h(s) \end{subarray}} \frac{ \Delta_{\min} }{\min \limits_{\begin{subarray}{c} \pi: \\ \upsilon_{\pi}(s,a)>0 \end{subarray}} \upsilon_{\pi}(s,a) \cdot \min \limits_{s' \in \textup{supp}( p(\cdot|s,a))} p(s'|s,a) } } \end{align*} regret in expectation. \end{proof}
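\paragraph{Remark (numerical illustration).} The distorted distribution $\beta^{\alpha,V}$ is the main CVaR-specific ingredient used in the analyses above. The following minimal Python sketch (not part of the formal development; the function and variable names are ours and purely illustrative) computes $\textup{CVaR}^{\alpha}$ together with $\beta^{\alpha,V}$ for a discrete distribution, and numerically checks the inequality of Lemma~\ref{lemma:cvar_increase_V} on random instances.
\begin{verbatim}
import numpy as np

def cvar_and_beta(values, probs, alpha):
    # CVaR^alpha of a discrete distribution: the mean of the worst
    # alpha-fraction of the probability mass.  Also returns the
    # distorted weights beta, so that CVaR = beta @ values.
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mu = np.zeros_like(probs)        # truncated weights mu
    remaining = alpha
    for i in np.argsort(values):     # fill mass from the worst outcome up
        mu[i] = min(probs[i], remaining)
        remaining -= mu[i]
        if remaining <= 1e-12:
            break
    beta = mu / alpha                # re-normalized distorted weights
    return float(beta @ values), beta

rng = np.random.default_rng(0)
for _ in range(10000):               # random check of the CVaR gap lemma
    probs = rng.dirichlet(np.ones(5))
    V = rng.uniform(0.0, 1.0, 5)
    Vbar = V + rng.uniform(0.0, 1.0, 5)   # Vbar >= V point-wise
    alpha = rng.uniform(0.05, 1.0)
    cv, beta = cvar_and_beta(V, probs, alpha)
    cvbar, _ = cvar_and_beta(Vbar, probs, alpha)
    assert cvbar - cv <= beta @ (Vbar - V) + 1e-9
\end{verbatim}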
\section*{Introduction} We call a geometric figure $T$, that is, a set of points in Euclidean space, an \emph{$r$-gentile} if $T$ admits an \emph{$r$-gentiling}, that is, a subdivision of $T$ into $r \geq 2$ figures (\emph{tiles}) $T_1,...,T_r$, such that each of the figures $T_1,...,T_r$ is similar to $T$. In other words, $T$ is an $r$-gentile if we can tile it with $r$ smaller copies of itself. This generalizes the concept of \emph{reptiles}, coined by Golomb~\cite{golomb}: a figure $T$ is an \emph{$r$-reptile} if $T$ admits an \emph{$r$-reptiling}, that is, a subdivision of $T$ into $r \geq 2$ figures $T_1,...,T_r$, such that each of the figures $T_1,...,T_r$ is similar to $T$ \emph{and all figures $T_1,...,T_r$ are mutually congruent}. In other words, $T$ is an $r$-reptile if we can tile it with $r$ equally large smaller copies of itself. Interest in reptile tetrahedra (or triangles, for that matter) exists, among other reasons, because of their application in meshes for scientific computing~\cite{bader,Liu}. In this realm certain reptile-based techniques are well-developed in two dimensions~\cite{bader}, but three-dimensional space poses great challenges~\cite{parcotalk}. It is known which triangles are $r$-reptiles~\cite{snover} and $r$-gentiles~\cite{freese,kaiser}, and for which $r$. However, for tetrahedra the situation is much less clear; in fact the identification of reptile and gentile tetrahedra and tetrahedra that tile space has been a long-standing open problem. The shape of a tetrahedron has five degrees of freedom and these have not been fully explored yet. Matou\v sek and Safernov\'a argued that $r$-reptilings with tetrahedra exist if and only if $r$ is a cube number~\cite{safernova}. In particular, it is known that all so-called \emph{Hill tetrahedra} (attributed to Hill~\cite{Hill} by Hertel~\cite{Hertel} and Matou\v sek and Safernov\'a~\cite{safernova}) are 8-reptiles. It has been conjectured that the Hill tetrahedra are the only reptile tetrahedra~\cite{Hertel}, but this conjecture is false: two non-Hill tetrahedra are known that have been recognized as 8-reptiles by Liu and Joe~\cite{Liu}. To the best of our knowledge, the Hill tetrahedra and the two non-Hill tetrahedra from Liu and Joe are the only tetrahedra known to be reptiles, but there might be others. This paper provides a small contribution to answering the question: exactly which tetrahedra are reptiles? In mesh construction applications one typically needs to enforce certain quality constraints on the mesh elements. This has motivated studies into acute triangles~\cite{matzke} and \emph{acute tetrahedra}~\cite{ungor}: \begin{definition} A tetrahedron is \emph{acute} if each pair of its facets has a dihedral angle strictly less than $\pi/2$. \end{definition} All facets of an acute tetrahedron are acute triangles themselves (Eppstein et al.~\cite{ungor}, Lemma 2). The Hill tetrahedra, as well as the two non-Hill tetrahedra from Liu and Joe, all have right dihedral angles. Thus, no acute reptile tetrahedra are known. \section*{Results} In this article we will prove the following statement, which may serve as evidence that acute reptile tetrahedra are probably hard to find, if they exist at all: \begin{theorem}\label{thm:main} Let $T$ be an acute tetrahedron subdivided into $r \geq 2$ acute tetrahedra $T_1,...,T_r$. If the diameter (longest edge) of each tetrahedron $T_i$ is smaller than the diameter (longest edge) of $T$, then $r \geq 9$.
\end{theorem} In particular we get: \begin{corollary}\label{cor:gentile} No acute tetrahedron is an $r$-gentile for any $r < 9$. \end{corollary} With the result from Matou\v sek and Safernov\'a that $r$-reptile tetrahedra can only exist when $r$ is a cube number~\cite{safernova}, we get: \begin{corollary}\label{cor:reptile} No acute tetrahedron is an $r$-reptile for any $r < 27$. \end{corollary} \section*{The proof} Note that if a tetrahedron $T$ is subdivided into tetrahedra $T_1,...,T_r$ with smaller diameter than $T$, then at least one tetrahedron $T_i$, for some $i \in \{1,...,r\}$, must have a vertex $v$ in the interior of the longest edge of $T$. For the proof of Theorem~\ref{thm:main} we analyse ${\cal S}_v$, the subdivision of an infinitesimal sphere around $v$ that is induced by the facets of $T$ and $T_1,...,T_r$. In such a subdivision, we find:\begin{itemize} \item faces: each face is either a spherical triangle, corresponding to a tetrahedron $T_i$ of which $v$ is a vertex, or a spherical diangle (also called lune), corresponding to a tetrahedron that has $v$ on the interior of an edge; \item edges: the edges of ${\cal S}_v$ are segments of great circles and correspond to facets of $T_1,...,T_r$ that contain $v$; the angle between two adjacent edges on a face of ${\cal S}_v$ corresponds to the dihedral angle of the corresponding facets of a tetrahedron $T_i$; \item vertices: each vertex of ${\cal S}_v$ corresponds to an edge of a tetrahedron $T_i$ that contains $v$. \end{itemize} Thus, ${\cal S}_v$ consists of a spherical diangle $D$ corresponding to $T$, subdivided into a number of spherical triangles, and possibly some spherical diangles, that correspond to the tetrahedra from $T_1,...,T_r$ that touch $v$. Below we will see that ${\cal S}_v$ must contain at least nine faces (not counting the outer face, that is, the complement of $D$), which proves Theorem~\ref{thm:main}. In what follows, when we talk about diangles and triangles, we will mean \emph{acute}, \emph{spherical} diangles and \emph{acute}, \emph{spherical} triangles on a sphere with radius~1. Note that the faces are diangles or triangles in the geometric sense, but they may have more than two or three vertices on their boundary. More precisely, a diangle or triangle has, respectively, exactly two or three vertices, called \emph{corners}, where its boundary has an acute angle, and possibly a number of other vertices where its boundary has a straight angle. A chain of edges of a diangle or triangle from one corner to the next is called a \emph{side}. Note that ${\cal S}_v$ contains at least one triangle, since $v$ is a vertex of at least one tetrahedron $T_i$. Therefore, in what follows we consider a subdivision ${\cal S}$ of a diangle $D$ into a number of diangles and triangles, at least one of which is a triangle. We call such subdivisions \emph{valid}. Henceforth, we will assume that ${\cal S}$ has the smallest number of faces out of all possible valid subdivisions of all possible diangles $D$. Our goal is now to prove that ${\cal S}$ contains at least 9 faces. \begin{lemma}\label{lem:alltriangles} Each face of ${\cal S}$ is a triangle. \end{lemma} \begin{proof} If ${\cal S}$ had a diangular face $F$, it would have to have the same corners as $D$, because the corners of any diangle must be an antipodal pair and there is only one antipodal pair within $D$. The removal of $F$ would separate ${\cal S}$ into at most two diangular components with the same corners as $D$.
By construction, at least one of these components contains a triangle. That component would then constitute a valid subdivision that has fewer faces than ${\cal S}$, contradicting our choice of ${\cal S}$. \end{proof} In ${\cal S}$, we distinguish \emph{boundary vertices} (vertices on the boundary of $D$) and \emph{interior vertices} (vertices in the interior of $D$). Among the boundary vertices, we distinguish \emph{poles} (the corners of $D$) and \emph{side vertices} (the remaining boundary vertices). Among the interior vertices we distinguish \emph{full vertices} and \emph{hanging vertices}: a vertex $v$ is a full vertex if it is a corner of each face incident on $v$; a vertex $v$ is a hanging vertex if it is a non-corner vertex of one of the faces incident on $v$. We will now derive a few properties of ${\cal S}$ from the acuteness of its angles. \begin{lemma}\label{lem:shortedges} Each side of each face of ${\cal S}$ has length strictly less than $\pi/2$. \end{lemma} \begin{proof} Consider any face $F$ of ${\cal S}$. Let $a$ be the length of a particular side of $F$, let $\alpha$ be the angle in the opposite corner of $F$, and let $\beta$ and $\gamma$ be the angles in the other two corners of $F$. Since $F$ is acute, the sines and cosines of $\alpha, \beta$ and $\gamma$ are all positive. By the supplementary cosine rule (\cite{todhunter}, Art.~47) we have $\cos \alpha = -\cos\beta\cos\gamma + \sin\beta\sin\gamma\cos a$, so $\cos a = (\cos\alpha + \cos\beta\cos\gamma)/(\sin\beta\sin\gamma) > 0$. It follows that $a < \pi/2$. \end{proof} \begin{lemma}\label{lem:fourboundary} There are at least four side vertices: two on each side of $D$. \end{lemma} \begin{proof} For the sake of contradiction, suppose one side of $D$ contains at most one side vertex. Then this side would consist of at most two edges, at least one of which has length at least $\pi/2$, contradicting Lemma~\ref{lem:shortedges}. \end{proof} \begin{lemma}\label{lem:mindegree} Each pole is incident on at least two edges.\\ Each hanging vertex and each side vertex is incident on at least four edges.\\ Each full vertex is incident on at least five edges. \end{lemma} \begin{proof} The poles are incident on at least two edges by definition. If a side vertex or a hanging vertex were incident on only three edges, then two of these edges would make a straight angle on one side, while the third edge would divide the straight angle on the other side. Thus, at least one of the angles that results from this division would be non-acute. If a full vertex were incident on at most four edges, then at least two of those edges would make an angle of at least $2\pi/4 = \pi/2$ on their common face, again contradicting the assumption that all faces are acute. \end{proof} Now we can combine these properties with Euler's formula and find: \begin{lemma}\label{lem:minfaces} The number of faces equals $2f + h + s$, where $f \geq 2$ is the number of full vertices, $h$ is the number of hanging vertices, and $s \geq 4$ is the number of side vertices. \end{lemma} \begin{proof} Let $v = f + h + s + 2$ be the number of vertices, let $e$ be the number of edges and $r$ be the number of triangles of ${\cal S}$. By Lemma~\ref{lem:alltriangles} all faces are triangles, so by Euler's formula we have $v + r = f + h + s + 2 + r = e + 1$, hence $2e = 2f + 2h + 2s + 2 + 2r$. We say that a hanging vertex is \emph{owned} by the triangle of which it is a non-corner vertex; each hanging vertex is owned by exactly one triangle.
If we add up the edges of all triangles, we count, for each triangle, three edges plus the number of hanging vertices it owns, making $3r + h$ in total. This counts every edge twice, except the $s + 2$ edges on the boundary, which are counted only once. Therefore we have $2e - s - 2 = 3r + h$. Hence we have $2e = 3r + h + s + 2 = 2f + 2h + 2s + 2 + 2r$, which solves to $r = 2f + h + s$. Thus we have $2e = 3r + h + s + 2 = 6f + 4h + 4s + 2$. By Lemma~\ref{lem:mindegree} we also have $2e \geq 5f + 4h + 4s + 4$, so $6f + 4h + 4s + 2 \geq 5f + 4h + 4s + 4$, which solves to $f \geq 2$. The condition $s \geq 4$ is given by Lemma~\ref{lem:fourboundary}. \end{proof} \begin{figure} \centering \includegraphics[scale=1.0]{hopeless.pdf} \caption{In grey: the boundary of $D$ sketched as a smooth loop, without indicating the location of the poles. In black: the only network of edges that subdivides $D$ into at most eight faces and complies with Lemma~\ref{lem:minfaces}. However, this contradicts Lemma~\ref{lem:mindegree}.} \label{fig:hopeless} \end{figure} \begin{lemma}\label{lem:thelemma} The number of faces of ${\cal S}$ is at least nine. \end{lemma} \begin{proof} Suppose, for the sake of contradiction, that ${\cal S}$ has at most eight faces. Then, by Lemma~\ref{lem:minfaces}, we have four side vertices, two full vertices, and no hanging vertices. Each of the two interior vertices is incident on five faces, so in order to have at most eight faces in total, the interior vertices must share two of their incident faces; see Figure~\ref{fig:hopeless}. Adding further edges in the interior of $D$ is not possible, as this would increase the number of faces. Thus there are at least six boundary vertices, four of which are incident on only one interior edge and therefore on only three edges in total, and at least two of these must be side vertices. This contradicts Lemma~\ref{lem:mindegree}. \end{proof} This concludes the proof of Theorem~\ref{thm:main}. \section*{Bold conjectures} The proof of Theorem~\ref{thm:main} is based on the following observation: any dissection of an acute spherical diangle into acute spherical triangles requires at least nine triangles. I do not know whether this bound is tight. It seems that a dissection into ten triangles is easy to achieve: in Figure~\ref{fig:hopeless}, add an edge connecting the two side vertices on the left, add an edge connecting the two side vertices on the right, place the poles on the left and the right end of the figure, and distort the construction to make all angles acute. So far, I have not been able to find a solution with nine triangles, so I conjecture: \begin{conjecture}\label{con:ten} Any dissection of an acute spherical diangle into acute spherical triangles requires at least ten triangles. \end{conjecture} Note that proving this would immediately strengthen Corollary~\ref{cor:gentile} to: no acute tetrahedron is an $r$-gentile for any $r < 10$. Now let $b$ be this lower bound on the required number of triangles, proven to be at least 9 and conjectured to be 10. Suppose we can prove Conjecture~\ref{con:ten}, then how could we go about improving Corollary~\ref{cor:reptile}? A first step could be the following. If $T$ is an acute tetrahedron with diameter $d$, and we can identify three segments $x$, $y$ and $z$ of length $d/3$ on the edges of $T$, then the arguments presented in this paper tell us that any 27-reptiling of $T$ must contain at least $b$ tiles that intersect $x$, $b$ tiles that intersect $y$, and $b$ tiles that intersect $z$.
If additionally, one can ensure that $x$, $y$ and $z$ lie at distance more than $d/3$ from each other, these three sets of $b$ tiles must be mutually disjoint, so there must be at least $3b$ tiles in total. With $b = 10$, this would contradict the existence of a 27-reptiling and thus improve Corollary~\ref{cor:reptile} to: no acute tetrahedron is an $r$-reptile for any $r < 64$. However, given that no acute tetrahedron can be an $r$-reptile or $r$-gentile for small values of $r$ (and given, in general, that the past hundred years did not turn up any reptile tetrahedra without right dihedral angles), we may rather restate the obvious: \begin{conjecture} There are no reptile acute tetrahedra. \end{conjecture} A stronger conjecture would be: \begin{conjecture} There are no gentile acute tetrahedra. \end{conjecture} We conclude with an even bolder conjecture: \begin{conjecture} There are no gentile tetrahedra that do not have a dihedral angle of exactly $\pi/2$. \end{conjecture}
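\section*{A numerical note} Lemma~\ref{lem:shortedges} admits a quick numerical illustration (a sketch for the reader's convenience, not part of the argument; the Python function name is ours). The supplementary cosine rule can be evaluated directly: for an acute spherical triangle the computed cosine of each side is positive, so each side is shorter than $\pi/2$.
\begin{verbatim}
import math

def side_opposite(alpha, beta, gamma):
    # Length of the side opposite the corner with angle alpha,
    # via the supplementary cosine rule on the unit sphere.
    cos_a = (math.cos(alpha) + math.cos(beta) * math.cos(gamma)) \
            / (math.sin(beta) * math.sin(gamma))
    return math.acos(cos_a)

# An equilateral acute spherical triangle: all angles equal to
# 1.2 rad < pi/2, with angle sum 3.6 > pi, as required on the sphere.
a = side_opposite(1.2, 1.2, 1.2)
print(a, a < math.pi / 2)   # approximately 0.966, True
\end{verbatim}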
\section{Introduction} Given a cohomology theory $h^{\bullet}$, there is a well-known abstract way to define the dual homology theory $h_{\bullet}$, using the theory of spectra. In particular, if $h^{\bullet}$ is representable via a spectrum $E = \{E_{n}, e_{n}, \varepsilon_{n}\}_{n \in \mathbb{Z}}$, for $e_{n}$ the marked point of $E_{n}$ and $\varepsilon_{n}: \Sigma E_{n} \rightarrow E_{n+1}$ the structure map, one can define on a space with marked point $(X, x_{0})$ \cite{Rudyak}: \[h_{n}(X, x_{0}) := \pi_{n}(E \wedge X). \] In \cite{Jakob} the author provides a more geometric construction of $h_{\bullet}$, using a generalization of the bordism groups. In particular, he shows that, for a given pair $(X,A)$, a class of $h_{n}(X,A)$ can be represented by a triple $(M, \alpha, f)$, where $M$ is a compact $h^{\bullet}$-manifold with boundary of dimension $n+q$, $\alpha \in h^{q}(M)$ and $f: (M, \partial M) \rightarrow (X,A)$ is a continuous function. On such triples one must impose a suitable equivalence relation, which is defined via the natural generalization of the bordism equivalence relation and via the notion of vector bundle modification. In this paper we provide a variant of that construction, which seems to be more natural. In particular, we replace the notion of vector bundle modification with the Gysin map, the latter being the natural push-forward in cohomology. The vector bundle modification becomes a particular case, arising when the underlying map is a section of a sphere bundle. We prove that the two constructions are equivalent, since there is a natural isomorphism between the geometric homology groups defined in \cite{Jakob} and their variant defined in the present paper. The paper is organized as follows. In section \ref{Preliminaries} we recall the definition of the Gysin map, also for manifolds with boundary, and the geometric construction of the homology groups provided in \cite{Jakob}. In section \ref{DualRevisited} we introduce the variant of the geometric construction we discussed above, and in section \ref{Equivalence} we prove that the two constructions are equivalent. \section{Preliminaries}\label{Preliminaries} We call $\mathcal{FCW}_{2}$ the category of pairs of spaces having the homotopy type of finite CW-complexes. Let $h^{\bullet}$ be a multiplicative cohomology theory on $\mathcal{FCW}_{2}$. We recall the construction of the Gysin map for smooth maps between differentiable manifolds with boundary (v.\ \cite{Karoubi, FR} for manifolds without boundary). \subsection{Gysin map} Let $h^{\bullet}$ be a cohomology theory on $\mathcal{FCW}_{2}$. A smooth $h^{\bullet}$-manifold is a smooth manifold with an $h^{\bullet}$-orientation, the latter being a Thom class of its tangent bundle or, equivalently, of its stable normal bundle. Given two compact smooth $h^{\bullet}$-manifolds with boundary $X$ and $Y$ and a map $f: (Y, \partial Y) \rightarrow (X, \partial X)$, we can define the Gysin map $f_{!}: h^{\bullet}(Y) \rightarrow h^{\bullet + \dim\,X - \dim\,Y}(X)$ as: \begin{equation}\label{GysinLefschetz} f_{!}(\alpha) := L_{X}^{-1} f_{*} L_{Y} (\alpha), \end{equation} where: \begin{equation}\label{Lefschetz} L_{X}: h^{\bullet}(X) \rightarrow h_{\dim X - \bullet}(X, \partial X) \end{equation} is the Lefschetz duality \cite{Switzer}. The problem with this definition is that it involves the homology groups, which are exactly what we have to define; therefore we need a construction involving only the cohomology groups.
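To fix the degrees involved in (\ref{GysinLefschetz}), note that, for $\alpha \in h^{q}(Y)$: \[ h^{q}(Y) \xrightarrow{\;L_{Y}\;} h_{\dim Y - q}(Y, \partial Y) \xrightarrow{\;f_{*}\;} h_{\dim Y - q}(X, \partial X) \xrightarrow{\;L_{X}^{-1}\;} h^{q + \dim X - \dim Y}(X), \] consistently with the degree shift $\dim X - \dim Y$ stated above. As a standard example (recalled here only as a sanity check): for $Y$ a point, $X = S^{n}$ and $h^{\bullet}$ ordinary singular cohomology, the Gysin map of the inclusion $f: \{pt\} \hookrightarrow S^{n}$ sends $1 \in H^{0}(pt; \mathbb{Z})$ to the Poincar\'e dual of a point, i.e.\ to a generator of $H^{n}(S^{n}; \mathbb{Z})$.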
When $f$ is a neat map, one can define the Gysin map in a way similar to the one shown in \cite{Karoubi}, pp.\ 230-234, about topological K-theory on manifolds without boundary. Then the definition can be easily extended to any map between $h^{\bullet}$-manifolds. We start with embeddings. We call $\mathbb{R}^{n}_{+} := \{(x_{1}, \ldots, x_{n}) \in \mathbb{R}^{n} \,\vert\, x_{n} \geq 0\}$. Given a manifold $X$ and a point $x \in \partial X$, by definition there exists a chart $(U, \varphi)$ of $X$ in $x$, with $U \subset X$ open and $\varphi: U \rightarrow \mathbb{R}^{n}_{+}$, such that $\varphi(x) = 0$ and $\varphi(\partial X \cap U) = (\mathbb{R}^{n-1} \times \{0\}) \cap \varphi(U)$. We call such a chart a \emph{boundary chart}. For $m \leq n-1$, we call $\overline{\mathbb{R}}^{m}$ the subspace of $\mathbb{R}^{n}$ containing those vectors whose first $n-m$ components vanish, i.e.\ $\overline{\mathbb{R}}^{m} = \{0\} \times \mathbb{R}^{m} \subset \mathbb{R}^{n}$, and $\overline{\mathbb{R}}^{m}_{+} := \overline{\mathbb{R}}^{m} \cap \mathbb{R}^{n}_{+}$. \begin{Def} An embedding of manifolds $i: (Y, \partial Y) \hookrightarrow (X, \partial X)$, where $\dim Y = m$, is \emph{neat} \cite{Hirsh, Kosinski} if: \begin{itemize} \item $i(\partial Y) = i(Y) \cap \partial X$; \item for every $y \in \partial Y$ there exists a boundary chart $(U, \varphi)$ of $X$ in $i(y)$ such that $U \cap i(Y) = \varphi^{-1}(\overline{\mathbb{R}}^{m}_{+})$. \end{itemize} \end{Def} The importance of neat embeddings in this context relies on the fact that the properties of tubular neighborhoods are similar to the ones holding for manifolds without boundary. \begin{Def} Let $(Y, \partial Y)$ be a neat submanifold of $(X, \partial X)$. A tubular neighborhood $U$ of $Y$ in $X$ is \emph{neat} \cite{Kosinski} if $U \cap \partial X$ is a tubular neighborhood of $\partial Y$ in $\partial X$. \end{Def} \begin{Theorem} If $(Y, \partial Y)$ is a neat submanifold of $(X, \partial X)$, there exists a neat tubular neighborhood of $Y$ in $X$ and it is unique up to isotopy. \end{Theorem} The proof can be found in \cite[Chapter~4.6]{Hirsh} and in \cite[Chapter~III.4]{Kosinski}. Let $i: (Y, \partial Y) \hookrightarrow (X, \partial X)$ be a neat embedding of smooth compact manifolds of codimension $n$, such that the normal bundle $N_{Y}X$ is $h^{\bullet}$-orientable. Let $U$ be a tubular neighborhood of $Y$ in $X$, and $\varphi_{U}: U \rightarrow N_{Y}X$ a homeomorphism, which exists by definition. The map: \[i_{!}: h^{\bullet}(Y) \rightarrow h^{\bullet + n}(X) \] is defined in the following way: \begin{itemize} \item we apply the Thom isomorphism $T: h^{\bullet}(Y) \rightarrow h^{\bullet + n}_{\cpt}(N_{Y}X) = \tilde{h}^{\bullet + n}(N_{Y}X^{+})$, for $N_{Y}X^{+}$ the one-point compactification of $N_{Y}X$; \item we extend $\varphi_{U}$ to $\varphi_{U}^{+}: U^{+} \rightarrow N_{Y}X^{+}$ in the natural way and apply $(\varphi_{U}^{+})^{*}: h^{\bullet}_{\cpt}(N_{Y}X) \rightarrow h^{\bullet}_{\cpt}(U)$; \item considering the natural map $\psi: X \rightarrow U^{+}$ given by: \[\psi(x) = \left\{\begin{array}{ll} x & \text{if } x \in U \\ \infty & \text{if } x \in X \setminus U \end{array}\right. \] we apply $\psi^{*}: \tilde{h}^{\bullet}(U^{+}) \rightarrow \tilde{h}^{\bullet}(X)$. \end{itemize} Summarizing: \begin{equation}\label{GysinMap} i_{!}(\alpha) := \psi^{*} \circ (\varphi_{U}^{+})^{*} \circ T(\alpha).
\end{equation} We now define the Gysin map associated to a generic neat map $f: (Y, \partial Y) \rightarrow (X, \partial X)$, not necessarily an embedding. \begin{Def} A smooth map $f: (Y, \partial Y) \rightarrow (X, \partial X)$ is \emph{neat} (v.\ \cite[Appendix~C]{HS} and references therein) if: \begin{itemize} \item $f^{-1}(\partial X) = \partial Y$; \item for every $y \in \partial Y$, the map $df_{y}: T_{y}Y/T_{y}\partial Y \rightarrow T_{f(y)}X/T_{f(y)}\partial X$ is an isomorphism. \end{itemize} \end{Def} If $f$ is an embedding this definition is equivalent to the previous one. In the case of manifolds without boundary, in order to construct the Gysin map one considers an embedding $j: Y \rightarrow \mathbb{R}^{N}$, and the embedding $(f, j): Y \rightarrow X \times \mathbb{R}^{N}$. This does not apply to manifolds with boundary, since $j$ is not a neat map, and, if we consider $\mathbb{R}^{N}_{+}$ instead of $\mathbb{R}^{N}$, then it is more complicated to define the integration map. Nevertheless, a similar construction is possible thanks to the following theorem (v.\ \cite[Appendix~C]{HS} and references therein). \begin{Theorem} Let $f: (Y, \partial Y) \rightarrow (X, \partial X)$ be a neat map. Then there exists a neat embedding $\iota: (Y, \partial Y) \rightarrow (X \times \mathbb{R}^{N}, \partial X \times \mathbb{R}^{N})$, stably unique up to isotopy, such that $f = \pi_{X} \circ \iota$ for $\pi_{X}: X \times \mathbb{R}^{N} \rightarrow X$ the projection. \end{Theorem} Therefore we consider the Gysin map: \[\iota_{!}: h^{\bullet}(Y) \rightarrow h_{\cpt}^{\bullet + (N + \dim X - \dim Y)}(X \times \mathbb{R}^{N}) \] as previously defined, followed by the integration map: \begin{equation}\label{IntegrationMap} \int_{\mathbb{R}^{N}}: \; h^{\bullet+N}_{\cpt}(X \times \mathbb{R}^{N}) \rightarrow h^{\bullet}(X) \end{equation} defined in the following way: \begin{itemize} \item $h^{\bullet+N}_{\cpt}(X \times \mathbb{R}^{N}) = \tilde{h}^{\bullet+N}((X \times \mathbb{R}^{N})^{+}) \simeq \tilde{h}^{\bullet+N}(\Sigma^{N}(X_{+}))$, for $X_{+} = X \sqcup \{\infty\}$; \item we apply the suspension isomorphism $\tilde{h}^{\bullet+N}(\Sigma^{N}(X_{+})) \simeq \tilde{h}^{\bullet}(X_{+}) \simeq h^{\bullet}(X)$. \end{itemize} Summarizing: \[f_{!}(\alpha) := \int_{\mathbb{R}^{N}} \iota_{!}(\alpha). \] In order to prove that the Gysin map so defined does not depend on the choices involved in the construction (the tubular neighborhood $U$, the diffeomorphism $\varphi_{U}$ with the normal bundle, the embedding $\iota$), the proof in \cite{Karoubi} carries over to the case of manifolds with boundary. In fact, the independence from the tubular neighborhood and the associated diffeomorphism is a consequence of the uniqueness up to isotopy (in particular, homotopy) of such a neighborhood. As far as the embedding $\iota$ is concerned, the proof of \cite{Karoubi}, Prop.\ 5.24, p.\ 233, applies as well. In particular, $f_{!}$ only depends on the homotopy class of $f$. If $f: (Y, \partial Y) \rightarrow (X, \partial X)$ is a generic map, not necessarily neat, we can define $f_{!}$ via the following lemma. \begin{Lemma}\label{HomotopicNeat} Any smooth map $f: (Y, \partial Y) \rightarrow (X, \partial X)$ between compact manifolds is homotopic to a neat map relatively to $\partial Y$. \end{Lemma} \paragraph{Proof:} We can choose two collar neighborhoods $U$ of $\partial Y$ and $V$ of $\partial X$, such that $f(U) \subset V$. Hence we think of $f\vert_{U}$ as a map from $\partial Y \times [0, 1)$ to $\partial X \times [0, 1)$.
We consider the following homotopy: $F_{t}(y, u) = (\pi_{\partial X}f(y, u), (1-t)\pi_{[0,1)}f(y,u) + tu)$. It follows that $F_{0} = f$ and $F_{1}(y, u) = (\pi_{\partial X}f(y, u), u)$, and the latter is neat. Gluing $F_{t}$ and $f\vert_{Y \setminus U}$ via a bump function, we obtain the desired homotopy. $\square$ \\ Since the Gysin map $f_{!}$, for $f$ neat, only depends on the homotopy class of $f$, we can define it for a generic map $f$ by simply considering any neat map homotopic to it. The Gysin map commutes with the restrictions to the boundaries up to a sign, as the following theorem shows. \begin{Theorem}\label{GysinCommBoundary} Let $f: (Y, \partial Y) \rightarrow (X, \partial X)$ be a map between $h^{\bullet}$-oriented smooth manifolds, and $f': \partial Y \rightarrow \partial X$ the restriction to the boundaries. Then, for $\alpha \in h^{\bullet}(Y)$: \[f'_{!}(\alpha\vert_{\partial Y}) = (-1)^{\dim X - \dim Y}(f_{!}\alpha)\vert_{\partial X} \] where the orientations on the boundaries are naturally induced from the ones of $X$ and $Y$. \end{Theorem} \paragraph{Proof:} It is enough to prove the statement for embeddings, since the integration over $\mathbb{R}^{N}$, which is actually the suspension isomorphism, commutes with the restrictions. Therefore, let us suppose that $f$ is a neat embedding, and let us identify $N_{Y}X$ with a neat tubular neighborhood of $Y$ in $X$. Then $N_{\partial Y}\partial X = N_{Y}X\vert_{\partial Y}$, but the orientation induced by the ones of $\partial Y$ and $\partial X$ on $N_{\partial Y}\partial X$ differs from the restriction of the one induced by $Y$ and $X$ by a factor $(-1)^{\dim X - \dim Y}$. Therefore, if: \[T: h^{\bullet}(Y) \rightarrow h^{\bullet + \dim X - \dim Y}_{\cpt}(N_{Y}X), \qquad T': h^{\bullet}(\partial Y) \rightarrow h^{\bullet + \dim X - \dim Y}_{\cpt}(N_{\partial Y}\partial X) \] are the Thom isomorphisms, it follows that: \[T'(\alpha\vert_{\partial Y}) = (-1)^{\dim X - \dim Y}T(\alpha)\vert_{N_{\partial Y}\partial X}. \] Then, since the pull-backs commute with the restrictions, the claim follows. If $f$ is a generic embedding, not necessarily neat, since $f$ is homotopic to a neat embedding relatively to the boundary, $f'$ remains unchanged under the homotopy and the claim follows. $\square$ \subsection{Geometric homology} We recall the geometric definition of the homology theory dual to a given cohomology theory $h^{\bullet}$, as defined in \cite{Jakob}. Let $M$ be a paracompact space, $\pi_{V}: V \rightarrow M$ a real vector bundle of rank $r+1$ with metric and $h^{\bullet}$-orientation, and $\sigma: M \rightarrow V$ a section of norm $1$. Then, $\sigma$ induces an isomorphism $V \simeq E \oplus 1$, for $\pi_{E}: E \rightarrow M$ a vector bundle of rank $r$ with metric, such that $\sigma(m) \simeq (0, 1)_{m}$. We identify $V$ with $E \oplus 1$. The unit sphere bundle $SV$ of $V$ can be thought of as the union of two disc bundles, the two hemispheres, joined on the equator: the two disc bundles are isomorphic to the unit disc bundle of $E$, therefore we call them $D^{+}E$ and $D^{-}E$, while the bundle of the equators is isomorphic to the sphere bundle of $E$, which we call $SE$. Moreover, the north pole $(0, 1)_{m}$ of $D^{+}E_{m}$ is $\sigma(m)$, therefore $D^{+}E$ is a tubular neighborhood of the image of $\sigma$.
There is a natural map: \begin{equation}\label{VBM} \sigma_{!}: h^{\bullet}(M) \rightarrow h^{\bullet+r}(SV) \end{equation} defined in the following way: \begin{itemize} \item we apply the Thom isomorphism $T: h^{\bullet}(M) \rightarrow h^{\bullet+r}(E^{+}) \simeq h^{\bullet+r}(D^{+}E, SE)$; \item by excision $h^{\bullet+r}(D^{+}E, SE) \simeq h^{\bullet+r}(SV, D^{-}E)$; \item from the inclusion of couples $(SV, \emptyset) \subset (SV, D^{-}E)$ we get a map $h^{\bullet+r}(SV, D^{-}E) \rightarrow h^{\bullet+r}(SV)$. \end{itemize} The map \eqref{VBM} coincides with the Gysin map associated to $\sigma$ \cite{Jakob}. \begin{Def}\label{CyclesJakob} For $(X, A) \in \Ob(\mathcal{FCW}_{2})$ and $n \in \mathbb{Z}$ fixed, we consider the quadruples $(M, u, \alpha, f)$ where: \begin{itemize} \item $(M, u)$ a smooth compact $h^{\bullet}$-manifold, possibly with boundary, whose connected components $\{M_{i}\}$ have dimension $n+q_{i}$, with $q_{i}$ arbitrary; we think of $u$ as a Thom class of the tangent bundle; \item $\alpha \in h^{\bullet}(M)$, such that $\alpha\vert_{M_{i}} \in h^{q_{i}}(M_{i})$; \item $f: (M, \partial M) \rightarrow (X, A)$ is a map. \end{itemize} Two quadruples $(M, u, \alpha, f)$ and $(N, v, \beta, g)$ are equivalent if there exists an orientation-preserving diffeomorphism $F: (M, u) \rightarrow (N, v)$ such that $f = g \circ F$ and $\alpha = F^{*}\beta$. The \emph{group of $n$-cycles} $C_{n}(X, A)$ is the free abelian group generated by equivalence classes of such quadruples. \end{Def} We now consider the group $G_{n}(X, A)$ defined as the quotient of $C_{n}(X, A)$ by the subgroup generated by elements of the form: \begin{itemize} \item $[(M, u, \alpha, f)] - [(M_{1}, u\vert_{M_{1}}, \alpha\vert_{M_{1}}, f\vert_{M_{1}})] - [(M_{2}, u\vert_{M_{2}}, \alpha\vert_{M_{2}}, f\vert_{M_{2}})]$, for $M = M_{1} \sqcup M_{2}$; \item $[(M, u, \alpha + \beta, f)] - [(M, u, \alpha, f)] - [(M, u, \beta, f)]$. \end{itemize} Moreover, we define the subgroup $U_{n}(X, A) \leq G_{n}(X, A)$ as the one generated by elements: \begin{itemize} \item $[(M, u, \alpha, f)] - [(S(E \oplus 1), \tilde{u}, \sigma_{!}\alpha, f \circ \pi)]$, where $S(E \oplus 1)$ is the sphere bundle induced by an $h^{\bullet}$-oriented vector bundle\footnote{The vector bundle $E$ may have different rank on different connected components of $M$.} $E \rightarrow M$ with metric, $\tilde{u}$ is the orientation canonically induced on $S(E \oplus 1)$ as a manifold from $u$ and the orientation of $E$, $\sigma: M \rightarrow E \oplus 1$ is the section $\sigma(m) = (0, 1)_{m}$ and $\sigma_{!}$ is the vector bundle modification \eqref{VBM}; \item $[(M, u, \alpha, f)]$ for which there exists $[(W, U, A, F)] \in G_{n+1}(X, X)$ such that $M \subset \partial W$ is a regularly embedded submanifold of codimension $0$ and $F(\partial W \setminus M) \subset A$, $u = U\vert_{M}$, $\alpha = A\vert_{M}$, $f = F\vert_{M}$. \end{itemize} Finally: \begin{Def} The geometric homology groups are defined as $h_{n}(X, A) := G_{n}(X, A) / U_{n}(X, A)$. \end{Def} \section{Geometric homology revisited}\label{DualRevisited} We now redefine the homology groups using only the Gysin map instead of the vector bundle modification. We also define cycles and boundaries in a slightly different way.
\begin{Def} On a pair $(X, A) \in \mathcal{FCW}_{2}$, we define the group of \emph{$n$-precycles of $h_{\bullet}$} as the free abelian group generated by the quadruples $(M, u, \alpha, f)$, for: \begin{itemize} \item $(M, u)$ a smooth compact $h^{\bullet}$-manifold, possibly with boundary, whose connected components $\{M_{i}\}$ have dimension $n+q_{i}$, with $q_{i}$ arbitrary; \item $\alpha \in h^{\bullet}(M)$, such that $\alpha\vert_{M_{i}} \in h^{q_{i}}(M_{i})$; \item $f: (M, \partial M) \rightarrow (X, A)$ a continuous map. \end{itemize} \end{Def} Contrary to definition \ref{CyclesJakob}, we do not quotient out with respect to orientation-preserving diffeomorphisms, since it will turn out not to be necessary. We define cycles and boundaries in the following way. \begin{Def} The group of \emph{$n$-cycles of $h_{\bullet}$}, denoted by $z_{n}(X, A)$, is the quotient of the group of $n$-precycles by the free subgroup generated by elements of the form: \begin{itemize} \item $(M, u, \alpha, f) - (M_{1}, u\vert_{M_{1}}, \alpha\vert_{M_{1}}, f\vert_{M_{1}}) - (M_{2}, u\vert_{M_{2}}, \alpha\vert_{M_{2}}, f\vert_{M_{2}})$, for $M = M_{1} \sqcup M_{2}$; \item $(M, u, \alpha + \beta, f) - (M, u, \alpha, f) - (M, u, \beta, f)$; \item $(M, u, \varphi_{!}\alpha, f) - (N, v, \alpha, f \circ \varphi)$ for $\varphi: (N, \partial N) \rightarrow (M, \partial M)$ a map. \end{itemize} \end{Def} The use of the Gysin map in this definition is more natural than that of the vector bundle modification, which is just a particular case, while the Gysin map is the natural push-forward, defined for any $\varphi: (N, \partial N) \rightarrow (M, \partial M)$. Moreover, it is not necessary to explicitly deal with diffeomorphisms: in fact, if $\varphi: (M, u) \rightarrow (N, v)$ is an orientation preserving diffeomorphism, it is trivial to show from the definition that $\varphi_{!} = (\varphi^{-1})^{*}$, therefore the quotient in definition \ref{CyclesJakob} is just another particular case of the Gysin map. \begin{Def} The group of \emph{$n$-boundaries of $h_{\bullet}$}, denoted by $b_{n}(X, A)$, is the subgroup of $z_{n}(X, A)$ containing the cycles which are representable by a precycle $(M, u, \alpha, f)$ such that there exists a precycle $(W, U, A, F)$ of $(X, X)$ with: \begin{itemize} \item $M \subset \partial W$ is a regularly embedded submanifold of codimension $0$; \item $F(\partial W \setminus M) \subset A$; \item $u = U\vert_{M}$, $\alpha = A\vert_{M}$, $f = F\vert_{M}$. \end{itemize} \end{Def} Of course we define: \[h_{n}(X,A) := z_{n}(X,A) / b_{n}(X,A). \] For $g: (X, A) \rightarrow (Y, B)$ a map, the push-forward $g_{*}: h_{\bullet}(X,A) \rightarrow h_{\bullet}(Y, B)$ is naturally defined as $g_{*}[(M, u, \alpha, f)] = [(M, u, \alpha, g \circ f)]$, while the connecting homomorphism $\partial_{n}: h_{n}(X,A) \rightarrow h_{n-1}(A)$ is defined as: \[\partial_{n}[(M, u, \alpha, f)] = (-1)^{\abs{\alpha}}[(\partial M, u\vert_{\partial M}, \alpha\vert_{\partial M}, f\vert_{\partial M})] \] where $(-1)^{\abs{\alpha}}$ depends on the connected component and $u\vert_{\partial M}$ is the orientation naturally induced by $u$ on the boundary. It is well-defined thanks to theorem \ref{GysinCommBoundary}. The exterior product and the cap product are defined as in \cite{Jakob}. \section{Equivalence}\label{Equivalence} We call $h''_{n}(X, A)$ the geometric homology groups defined in \cite{Jakob}, $h'_{n}(X,A)$ the ones defined in the present paper and $h_{n}(X,A)$ the ones defined via spectra.
There is a natural map: \begin{equation}\label{EquivalenceMap} \begin{split} \Psi_{n}(X, A):\; &h''_{n}(X, A) \rightarrow h'_{n}(X,A) \\ & [(M, u, \alpha, f)] \rightarrow [(M, u, \alpha, f)], \end{split} \end{equation} where of course the square brackets denote two different equivalence relations in the domain and in the codomain. It is easy to show that $\Psi_{n}$ is well-defined, since the quotient by diffeomorphisms and vector bundle modifications in the domain corresponds to the quotient by the Gysin map in the codomain, therefore equivalent quadruples are sent to equivalent quadruples. Moreover, the quotient by boundaries, disjoint union of base manifolds and addition of cohomology classes are defined in the same way in the two cases. \begin{Theorem}\label{EquivalenceTheorem} The maps $\Psi_{\bullet}$ defined in \eqref{EquivalenceMap} induce an equivalence of homology theories on $\mathcal{FCW}_{2}$ between $h''_{\bullet}$ and $h'_{\bullet}$. \end{Theorem} \paragraph{Proof:} $\Psi_{n}$ is a group homomorphism by construction, since it is defined on the generators of the free abelian group $C_{n}(X, A)$, and $h'_{n}(X, A)$ is defined by successive quotients of $C_{n}(X, A)$. It is clearly surjective, since the elements of the form $[(M, u, \alpha, f)]$ are generators also for $h'_{n}(X,A)$. Therefore, it remains to prove injectivity. Every element of $h''_{n}(X, A)$ can be written as a single class $[(M, u, \alpha, f)]$, since a sum of such classes can be reduced to a single one via the disjoint union of the base manifolds. Therefore, since $\Psi_{n}[(M, u, \varphi_{!}\alpha, f)] = \Psi_{n}[(N, v, \alpha, f \circ \varphi)]$ for $\varphi: (N, \partial N) \rightarrow (M, \partial M)$ a map, we must prove that $[(M, u, \varphi_{!}\alpha, f)] = [(N, v, \alpha, f \circ \varphi)]$ also in $h''_{n}(X,A)$. The equivalence between $h''_{n}(X,A)$ and $h_{n}(X,A)$ is given in \cite{Jakob} by the maps $\Phi_{\bullet}: h''_{\bullet}(X,A) \rightarrow h_{\bullet}(X,A)$ defined by $\Phi_{n}[(M, u, \alpha, f)] := f_{*}(\alpha \cap [M])$, for $[M]$ the fundamental class of $M$. By definition $\alpha \cap [M] = L_{M}(\alpha)$ for $L_{M}$ the Lefschetz duality \eqref{Lefschetz}. Hence, because of \eqref{GysinLefschetz}: \[\begin{split} & \Phi_{n}[(M, u, \varphi_{!}\alpha, f)] = f_{*} L_{M}(\varphi_{!}\alpha) \\ & \Phi_{n}[(N, v, \alpha, f \circ \varphi)] = (f \circ \varphi)_{*} L_{N}(\alpha) = f_{*} \varphi_{*} L_{N}(\alpha) = f_{*} L_{M}(\varphi_{!}\alpha). \end{split}\] Since the map $\Phi_{n}$ is injective, it follows that $[(M, u, \varphi_{!}\alpha, f)] = [(N, v, \alpha, f \circ \varphi)]$ also in $h''_{n}(X,A)$. The fact that $\Psi_{\bullet}$ is a morphism of homology theories then follows from the fact that the boundary morphism and the push-forward are defined in the same way for $h'_{n}(X,A)$ and $h''_{n}(X,A)$. $\square$ \section*{Acknowledgements} The author is financially supported by FAPESP (Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo). We would like to thank Ryan Budney for having suggested the proof of lemma \ref{HomotopicNeat} on http://mathoverflow.net/.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} It is now well accepted that the cosmological data consistently indicate that the expansion of the Universe began to accelerate~\cite{Riess, Perlmutter, WMAP, Planck, Planck2016} around $z=0.64$~\cite{Moresco}. Thus, every model used to describe the cosmic background evolution must display this transition in its dynamics. Of course, $\Lambda$CDM presents this transition as well, and it can be understood as the transition between the dark matter (DM) dominated era and the era dominated by dark energy (DE). Nevertheless, despite the fact that the $\Lambda$CDM model has been very successful in explaining the cosmological data, it presents the following weak points from the theoretical point of view: i) why is the estimated value of $\Lambda$ 120 orders of magnitude smaller than the physically predicted one? This is the well known cosmological constant problem~\cite{Weinberg, Carroll, Turner, Sahni, Carroll2001, Padmanabhan2003, Peebles}, which can be represented mainly by the two following open questions: a) why does the observed vacuum energy have such an unnaturally small but non-vanishing value? and b) why do we observe the vacuum density to be so close to the matter density, even though their ratio can vary by up to 120 orders of magnitude during the cosmic evolution? (known as the coincidence problem)~\cite{Steinhardt, Zlatev}. This model presents serious specific observational challenges and tensions as well (for a brief review see, for example, \cite{Perivo}). As an alternative to $\Lambda$CDM, unified DM models do not invoke a cosmological constant. In the framework of general relativity, non-perfect fluids can drive the accelerated expansion due to the negativity of the viscous pressure, which arises from the presence of bulk viscosity. Therefore, a Cold DM (CDM) viscous component represents a kind of unified DM model that could, in principle, explain the above-mentioned acceleration transition without the inclusion of a DE component. It is worth mentioning that measurements of the Hubble constant show tension with the values obtained from large scale structure (LSS) and Planck CMB data, which can be alleviated when viscosity is included in the DM component~\cite{Anand}. The new era of gravitational-wave detectors has also opened the possibility of detecting dissipative effects in DM and DE through the dispersion and dissipation experienced by these waves while propagating in a non-perfect fluid. Some constraints on those effects were found in~\cite{Goswami}. For neutralino CDM it was pointed out in~\cite{Hofmann} that a bulk viscosity appears in the CDM fluid due to the energy transferred from the CDM fluid to the radiation fluid. From the point of view of cosmological perturbations, it has been shown that viscous fluid dynamics provides a simple and accurate framework for extending the description of cosmological perturbations into the nonlinear regime~\cite{Blas}. Dissipative DM also appears as a component residing in a hidden sector, and can reproduce several observational properties of disk galaxies \cite{Foot_1}, \cite{Foot_2}. Alternatively, the direct study of astrophysical scenarios, such as the Bullet Cluster, has been an arena to place constraints on the DM-DM elastic scattering cross-section~\cite{Randall},~\cite{Kahlhoefer}. Simulations of this cluster with the inclusion of self-interacting DM and gas were performed in~\cite{Robertson}, finding a cross-section of around $\sigma/m = 2\,{\rm cm^{2}\,g^{-1}}$.
Another study presents an indication that self-interacting DM would require a cross-section that varies with the relative velocity between DM particles in order to modify the structure of dwarf galaxy dark matter haloes~\cite{Harvey}. In spite of the fact that the Bullet Cluster revealed that baryonic matter has a viscosity much larger than the DM viscosity, its dissipative negative-pressure contribution to the accelerated expansion of the universe can be neglected due to its very low density in comparison with that of the DM. At the background level, where a homogeneous and isotropic space describes the Universe as a whole, only bulk viscosity is present in the cosmic fluid, and the dissipative pressure must be described by some relativistic thermodynamical approach to non-perfect fluids. This raises a crucial point for a fully consistent physical description of the expansion of the Universe using dissipative processes to generate the transition. While in the $\Lambda$CDM model the acceleration is due to a cosmological constant and the entropy remains constant, in the case of non-perfect fluids it is necessary to find a solution that not only consistently describes the kinematics of the Universe, but also satisfies the thermodynamical requirements. In the description of viscous fluids, Eckart's theory~\cite{Eckart, Eckart2} has been widely investigated due to its simplicity and became the starting point for shedding some light on the behavior of dissipative effects in late-time cosmology~\cite{Avelino2009, Avelino2010, Montiel, Avelino2013} or in inflationary scenarios~\cite{Padmanabhan}. In the framework of an interacting dark sector, a recent work explores a flat universe with a radiation component and a viscous fluid (DM plus baryons) that interacts with a perfect fluid (DE)~\cite{Almada2020}. Also, a $\Lambda$CDM model with a dissipative DM, where the viscosity is a polynomial function of the redshift, has been constrained in~\cite{AlmadaH2020}. Nevertheless, it is a well known result that Eckart's theory is non-causal, presenting the problem of superluminal propagation velocities and some instabilities. So, from the point of view of a consistent description of the relativistic thermodynamics of non-perfect fluids, it is necessary to include a causal description such as the one given by the Israel-Stewart (IS) theory~\cite{Israel, Israel1979, Pavon, Chimento1993, Maartens, Zimdahl, Maartens1996}. The aim of this paper is to constrain the free parameters appearing in the recent exact cosmological solutions found in~\cite{Gonzalez} for a universe filled only with a dissipative dark matter component. The constraints were obtained using Type Ia supernova (SNe Ia) and Observational Hubble Data (OHD). These solutions were found in the framework of the causal thermodynamics described by the IS theory, and are compatible with a transition between decelerated and accelerated expansion at the background level, within a certain range of the parameters involved. Since the solution found describes a universe containing only a dissipative DM as the main component, it should only be considered an adequate approximation for the late-time evolution, where cold DM dominates. In this sense, these models cannot be expected to be fairly representative of the early-time evolution, where ultrarelativistic matter dominates.
For these solutions a barotropic EoS was assumed for the fluid that fills the universe, i.e., \begin{equation} p=\left(\gamma-1\right)\rho, \label{EoS} \end{equation} where $p$ is the barotropic pressure, and $\rho$ is the energy density. These solutions describe the evolution of a universe with dissipative DM with positive pressure, therefore the EoS parameter considered lies in the range $1\leq\gamma <2$, where $\gamma=1$ corresponds to a particular solution. Furthermore, the following Ansatz for the bulk viscosity coefficient, $\xi(\rho)$, \begin{equation} \xi (\rho)=\xi_{0}\rho^{s}, \label{xirho} \end{equation} was chosen, which has been widely considered a suitable relation between the bulk viscosity and the energy density of the main fluid. $\xi_{0}$ must be a positive constant because of the second law of thermodynamics~\cite{Weinberg1971}. The nonlinear ordinary differential equation of the IS theory obtained with the above assumptions has been solved, for example, for different values of the parameter $s$ in~\cite{Cornejo}; for $s=1/4$ and stiff matter in~\cite{Harko}. Inflationary solutions were found in~\cite{Harko1998}. Stability of inflationary solutions was investigated in~\cite{Chimento1998, Chimento}. For an extensive review on viscous cosmology in early- and late-time cosmology see~\cite{Brevik}. It is worth mentioning that in the formulation of the thermodynamical approaches to relativistic viscous fluids it is assumed that the viscous pressure is lower than the equilibrium pressure of the fluid (the near-equilibrium condition). Whenever there are solutions with acceleration at some stage, like, for example, bulk viscous inflation at early times, or a transition between decelerated and accelerated expansion at late times, the above condition cannot be fulfilled. Therefore, the application of the above approach is not clearly justified in such situations. To overcome this issue, deviations from the near-equilibrium condition have been considered within a nonlinear extension of IS, such as the one proposed in~\cite{Maartens1997}. Using this proposal, nonlinear extensions in accelerated eras occurring at early times (inflation) and at late times (phantom behavior) were investigated in~\cite{Chimento1} and~\cite{Cruzphantom}, respectively. The current accelerated expansion was addressed with a nonlinear model for viscosity in~\cite{Beesham}. Also, a phase space analysis of a cosmological model with both viscous radiation and non-viscous dust was performed in~\cite{Beesham1}, where the viscous pressure obeys a nonlinear evolution equation. It is worth mentioning that in~\cite{Cruz2018} it was shown that the inclusion of a cosmological constant along with a dissipative DM component makes it possible to obey the near-equilibrium condition within, in principle, the linear IS theory. The analytical solution we will analyse in the present article was obtained using the general expression for the relaxation time $\tau$~\cite{Maartens1996}, derived from the study of the causality and stability of the IS theory in~\cite{Hiscock} \begin{equation} \frac{\xi}{\left(\rho+p\right)\tau}=c_{b}^{2}, \label{relaxationtime} \end{equation} where $c_{b}$ is the speed of bulk viscous perturbations (the non-adiabatic contribution to the speed of sound in a dissipative fluid without heat flux or shear viscosity).
Since the dissipative speed of sound $V$ is given by $V^{2}= c_{s}^{2}+c_{b}^{2}$, where $c_{s}^{2}=(\partial p/\partial \rho)_{s}$ is the adiabatic contribution, then for a barotropic fluid $c_{s}^{2}=\gamma-1$ and thus $c_{b}^{2}=\epsilon\left(2-\gamma\right)$ with $0<\epsilon\leq 1$, known as the causality condition. The solution generalizes the solution found in~\cite{Mathew2017}, where the particular expression $\tau=\xi/\rho$ was used, taking in addition the particular values $s=1/2$ and $\gamma=1$. In a previous work, which included Eq.(\ref{relaxationtime}) for the relaxation time and a pressureless main fluid, the IS equation was solved by using an Ansatz for the viscous pressure~\cite{Piattella}. The conclusion indicates that the full causal theory seems to be disfavored by the cosmological data. Nevertheless, in the truncated version of the theory, similar results to those of the $\Lambda$CDM model were found for a bulk viscous speed within the interval $10^{-11}\ll c_{b}^{2}\lesssim 10^{-8}$. This last constraint on $c_{b}^{2}$, even though it was obtained with a suitable Ansatz and does not represent an exact solution of the theory, teaches us that the non-adiabatic contribution to the speed of sound must be very small to be consistent with the cosmological data. The free parameters of the general analytical solution we will constrain against observational data in the present article are $h$, $\xi_{0}$, $\epsilon$ and $\gamma$. In the case where a single CDM component is taken from the beginning ($\gamma=1$), only $h$, $\xi_{0}$ and $\epsilon$ remain free, and we find the constraints required to obtain a solution that presents a transition from decelerated to accelerated expansion. We will also analyse the constraints for the case where all the parameters are taken as free. Using the observational constraints obtained for the parameters $h$, $\xi_{0}$ and $\epsilon$ for both cases, $\gamma=1$ and $\gamma$ free, we will discuss the consistency of a fluid description during the cosmic evolution of the exact solutions representing a dissipative DM component. To this aim we evaluate the consistency condition $\tau H < 1$ in terms of the constrained parameter values, with $\tau$ being the relaxation time and $H$ the Hubble parameter. This paper is organized as follows: In section II we describe briefly the causal Israel-Stewart theory and we recall the general evolution equation for the Hubble parameter $H$. We also write down the constraints for the parameters of the model in order to guarantee a consistent fluid description. In section III we present the expressions corresponding to the analytic solution found in~\cite{Gonzalez} for arbitrary $\gamma$ and for the dust case, $\gamma=1$, respectively. In section IV we constrain the free parameters of the solutions with the Supernovae Ia (SNe Ia) and Observational Hubble Data (OHD). In section V we discuss these results, comparing them with the $\Lambda$CDM model, and the implications of the parameter values obtained, including their thermodynamic consequences. Finally, in section VI we present our conclusions, taking into account the kinematic properties of the solutions, as well as the consistency of a fluid description. \section{Israel-Stewart formalism} The model of a dissipative DM component is described by Einstein's equations for a flat FRW metric: \begin{equation} 3H^{2}=\rho, \label{constraint} \end{equation} and \begin{equation} 2\dot{H}+3H^{2}=-p-\Pi, \label{acceleration} \end{equation} where natural units defined by $8\pi G=c=1$ were used.
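For later reference it is helpful to note (a straightforward combination of the above equations, written out here for clarity) that Eqs.(\ref{constraint}) and (\ref{acceleration}), together with the barotropic EoS (\ref{EoS}), give the viscous pressure directly in terms of the Hubble parameter: \begin{equation} \Pi=-2\dot{H}-3H^{2}-p=-2\dot{H}-3\gamma H^{2}, \end{equation} whose time derivative, inserted into the transport equation below, is what leads to the master equation for $H$.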
In addition, in the IS framework, the transport equation for the viscous pressure $\Pi$ reads~\cite{Israel1979} \begin{equation} \tau\dot{\Pi}+\Pi= -3\xi(\rho) H-\frac{1}{2}\tau\Pi \left(3H+\frac{\dot{\tau}}{\tau}-\frac{\dot{\xi(\rho)}}{\xi(\rho)}-\frac{\dot{T}}{T} \right), \label{eqforPi} \end{equation} where ``dot" accounts for the derivative with respect to the cosmic time, $H$ is the Hubble parameter and $T$ is the barotropic temperature, which takes the form $T=T_{0}\rho^{\left(\gamma-1\right)/\gamma}$ (Gibbs integrability condition when $p=\left(\gamma-1\right)\rho$), with $T_{0}$ being a positive parameter. Using Eqs.(\ref{EoS}), (\ref{xirho}) in Eq.(\ref{relaxationtime}) we obtain the following expression for the relaxation time \begin{equation} \tau=\frac{\xi_{0}}{\epsilon\gamma\left(2-\gamma\right)}\rho^{s-1}=\frac{3^{s-1}\xi_{0}}{\epsilon\gamma\left(2-\gamma\right)} H^{2(s-1)} . \label{relaxationtime1} \end{equation} In order to obtain a differential equation in terms of the Hubble parameter, it is necessary to evaluate the ratios $\dot{\tau}/\tau, \,\,\dot{\xi}/\xi$ and $\dot{T}/T$, which appear in Eq.(\ref{eqforPi}). From Eqs.(\ref{constraint}) and (\ref{acceleration}) the expression for the viscous pressure and its time derivative can be obtained. Using the above expressions a nonlinear second order differential equation can be obtained for the Hubble parameter: \begin{widetext} \begin{equation} \begin{split} & \ddot{H}+3H\dot{H}+(3)^{1-s}\xi_{0}^{-1}\epsilon\gamma\left(2-\gamma\right)H^{2-2s}\dot{H}-\frac{(2\gamma-1)}{\gamma}H^{-1}\dot{H}^{2}+\frac{9}{4}\gamma\left[1-2\epsilon\left(2-\gamma\right)\right]H^{3} \\ & +\frac{1}{2}(3)^{2-s}\xi_{0}^{-1}\epsilon\gamma^{2}\left(2-\gamma\right)H^{4-2s}=0. \label{eqforH} \end{split} \end{equation} \end{widetext} We refer the reader to Ref.~\cite{Gonzalez} for the technical details. As we shall see in the next section, the exact solution was obtained for the special case $s=1/2$, which allows an important simplification of Eq.(\ref{eqforH}). In fact, in this case the simple form $H\left(t\right)=A\left(t_{s}-t\right)^{-1}$ is a solution of Eq.(\ref{eqforH}) with a phantom behavior, with $A>0$, $\epsilon=1$ and the restriction $0<\gamma<3/2$~\cite{Cruz2017}. Besides, the solution $H\left(t\right)=A\left(t-t_{s}\right)^{-1}$ can represent accelerated universes if $A>1$, Milne universes if $A=1$ and decelerated universes if $A<1$, all of them having an initial singularity at $t=t_{s}$~\cite{Cruz2017a}. As mentioned in Section I, an important issue that we will discuss after constraining the parameters $\xi_{0}$ and $\epsilon$, for both cases $\gamma=1$ and $\gamma$ free, is whether the values found satisfy the condition for keeping the fluid description of the dissipative dark matter component, expressed by the constraint $\tau H<1$. Using Eq.(\ref{constraint}) for the case $s=1/2$ and Eq.(\ref{relaxationtime1}), the above inequality leads to the following constraint between the parameters $\xi_{0}$, $\epsilon$ and $\gamma$ (note that for $s=1/2$ the product $\tau H$ is constant in time, so the condition either holds during the whole evolution or not at all) \begin{eqnarray} \frac{\xi _{0}}{\sqrt{3}\epsilon \gamma(2-\gamma)} \ll 1 . \label{eq:eq8} \end{eqnarray} We will discuss this condition later, using the values of $\xi_{0}$ and $\epsilon$, with and without a fixed $\gamma$ value, obtained from the SNe Ia observations. \section{The exact cosmological solutions} Now, we will briefly discuss the two solutions of Eq.(\ref{eqforH}) found in~\cite{Gonzalez} for $s=1/2$ and for the special cases of $\gamma\neq 1$ and $\gamma=1$.
\textbf{i)} In the case of $\gamma\neq 1$, the solution of Eq.(\ref{eqforH}) can be written as a function of the redshift $z$ as \begin{equation} H(z)=C_{3}\left(1+z\right)^{\alpha}\cosh^{\gamma}{\left[\beta\left(\ln{\left(1+z\right)}+C_{4}\right)\right]}, \label{Hgamma} \end{equation} where $C_{3}$ and $C_{4}$ are constants given by \begin{equation} C_{3}=\frac{H_{0}}{\cosh^{\gamma}{\left(\beta C_{4}\right)}}=H_{0}\left[1-\frac{\left(q_{0}+1-\alpha\right)^{2}}{\gamma^{2}\beta^{2}}\right]^{\frac{\gamma}{2}}, \label{defofC3} \end{equation} \begin{equation} C_{4}=\frac{1}{\beta}\mathop{\mathrm{arctanh}}\left[\frac{\left(q_{0}+1\right)-\alpha}{\gamma\beta}\right], \label{defofC4} \end{equation} \begin{equation} \alpha=\frac{\sqrt{3}\gamma}{2\xi_{0}}\left[\sqrt{3}\xi_{0}+\epsilon\gamma\left(2-\gamma\right)\right], \label{defofalpha} \end{equation} \begin{equation} \beta=\frac{\sqrt{3}}{2\xi_{0}}\sqrt{6\xi_{0}^{2}\epsilon\left(2-\gamma\right)+\epsilon^{2}\gamma^{2}\left(2-\gamma\right)^{2}}. \label{defofbeta} \end{equation} In the above expressions $H_{0}$ and $q_{0}$ are the Hubble and deceleration parameters at the present time, respectively, where the deceleration parameter is defined through $q=-1-\dot{H}/H^{2}$. The initial condition $a_{0}=1$ is also used. This solution has a constraint that arises from Eqs.(\ref{defofC3}) and (\ref{defofC4}) that reads \begin{equation} \left(\alpha-\gamma\beta\right)-1<q_{0}<\left(\alpha+\gamma\beta\right)-1. \label{constraintq0gamma} \end{equation} Since the value of $q_{0}$ will be taken from the observed data, we will check whether the above constraint is fulfilled by the values determined for the parameters $\xi_{0}$, $\epsilon$ and $\gamma$ after fitting the SNe Ia data. \textbf{ii)} In the case of $\gamma=1$, the solution of Eq.(\ref{eqforH}) can be written as \begin{equation} H(z)=H_{0}\left[C_{1}\left(1+z\right)^{m_{1}}+C_{2}\left(1+z\right)^{m_{2}}\right], \label{solforH} \end{equation} where $H_{0}$ is the Hubble parameter at the present time, and \begin{equation} m_{1}=\frac{\sqrt{3}}{2\xi_{0}}\left(\sqrt{3}\xi_{0}+\epsilon+\sqrt{6\xi_{0}^{2}\epsilon+\epsilon^{2}}\right), \label{defofm1} \end{equation} \begin{equation} m_{2}=\frac{\sqrt{3}}{2\xi_{0}}\left(\sqrt{3}\xi_{0}+\epsilon-\sqrt{6\xi_{0}^{2}\epsilon+\epsilon^{2}}\right), \label{defofm2} \end{equation} \begin{equation} C_{1}=\frac{\left(q_{0}+1\right)-m_{2}}{m_{1}-m_{2}}, \label{defofc1} \end{equation} \begin{equation} C_{2}=\frac{m_{1}-\left(q_{0}+1\right)}{m_{1}-m_{2}}. \label{defofc2} \end{equation} In the above equations $q_{0}$ is the deceleration parameter at the present time, and the conditions $a_{0}=1$ and $C_{1}+C_{2}=1$ have been set. This solution was previously found and discussed in~\cite{Mathew2017}, but with a particular relation for the relaxation time of the form $\xi_{0}\rho^{s-1}$ (which corresponds to $\alpha=\xi_{0}$ of our Ansatz), instead of the more general relation, Eq.(\ref{relaxationtime1}), which was used to obtain Eq.(\ref{solforH}) in~\cite{Gonzalez}. Moreover, this solution has three different behaviors depending on the signs of the constants $C_{1}$ and $C_{2}$. So, for fitting purposes, we limit our analysis to the solution that is most similar to the $\Lambda$CDM model, which corresponds to the Hubble parameter fulfilling the constraint \begin{equation} m_{2}-1<q_{0}<m_{1}-1, \label{constraintHpositive} \end{equation} which leads to an always-positive Hubble parameter.
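As a quick numerical illustration (a minimal sketch added for clarity, with purely illustrative parameter values rather than best-fit results), the $\gamma=1$ solution (\ref{solforH}) can be evaluated directly, checking along the way the positivity constraint (\ref{constraintHpositive}): \begin{verbatim}
import numpy as np

# Sketch: evaluate the gamma = 1 solution, Eq. (solforH), for given
# dimensionless parameters (h, xi0, eps) and initial condition q0.
def H_gamma1(z, h, xi0, eps, q0=-0.6):
    H0 = 100.0 * h                           # km s^-1 Mpc^-1
    root = np.sqrt(6.0 * xi0**2 * eps + eps**2)
    m1 = np.sqrt(3.0) / (2.0 * xi0) * (np.sqrt(3.0) * xi0 + eps + root)
    m2 = np.sqrt(3.0) / (2.0 * xi0) * (np.sqrt(3.0) * xi0 + eps - root)
    assert m2 - 1.0 < q0 < m1 - 1.0          # Eq. (constraintHpositive)
    C1 = ((q0 + 1.0) - m2) / (m1 - m2)       # Eq. (defofc1)
    C2 = (m1 - (q0 + 1.0)) / (m1 - m2)       # Eq. (defofc2)
    return H0 * (C1 * (1.0 + z)**m1 + C2 * (1.0 + z)**m2)

# Illustrative parameter values (not fit results):
print(H_gamma1(np.linspace(0.0, 2.0, 5), h=0.73, xi0=2.3, eps=0.55))
\end{verbatim}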
\section{Constraining the solutions with Supernova Ia and Observational Hubble data sets} \begin{table*} \centering \resizebox{17.94cm}{!} { \begin{tabular}{|ll|lllll|lll|} \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c@{\hspace{0.5cm}}}{} & \multicolumn{5}{c}{Best fit values} & \multicolumn{1}{c@{\hspace{0.5cm}}}{} & \multicolumn{2}{c}{Goodness of fit} \\ \cline{3-7}\cline{9-10} \multicolumn{1}{c}{Data} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{$\Omega_{m}$} & \multicolumn{1}{c}{$x$} & \multicolumn{1}{c}{$\epsilon$} & \multicolumn{1}{c}{$\gamma$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$\chi^{2}_{min}$} & \multicolumn{1}{c}{$BIC$} \\ \hline \multicolumn{10}{c}{$\Lambda$CDM model} \\ \multicolumn{1}{c}{SNe Ia} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.732_{-0.018}^{+0.017}$} & \multicolumn{1}{c}{$0.299_{-0.022}^{+0.022}$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{1026.9} & \multicolumn{1}{c}{1040.8} \\ \multicolumn{1}{c}{OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.715_{-0.010}^{+0.010}$} & \multicolumn{1}{c}{$0.248_{-0.014}^{+0.015}$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{27.9} & \multicolumn{1}{c}{35.7} \\ \multicolumn{1}{c}{SNe Ia + OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.705_{-0.009}^{+0.009}$} & \multicolumn{1}{c}{$0.265_{-0.012}^{+0.013}$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{1057.1} & \multicolumn{1}{c}{1071.1} \\ \hline \multicolumn{10}{c}{Exact cosmological solution with $\gamma\neq 1$} \\ \multicolumn{1}{c}{SNe Ia} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.732_{-0.017}^{+0.017}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.288_{-0.276}^{+0.199}$} & \multicolumn{1}{c}{$0.709_{-0.143}^{+0.181}$} & \multicolumn{1}{c}{$1.194_{-0.139}^{+0.177}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$1030.0$} & \multicolumn{1}{c}{$1057.8$} \\ \multicolumn{1}{c}{OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.735_{-0.007}^{+0.007}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.422_{-0.192}^{+0.108}$} & \multicolumn{1}{c}{$0.445_{-0.056}^{+0.177}$} & \multicolumn{1}{c}{$1.108_{-0.082}^{+0.213}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$62.2$} & \multicolumn{1}{c}{77.9} \\ \multicolumn{1}{c}{SNe Ia + OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.731_{-0.006}^{+0.006}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.488_{-0.125}^{+0.061}$} & \multicolumn{1}{c}{$0.396_{-0.022}^{+0.045}$} & \multicolumn{1}{c}{$1.044_{-0.033}^{+0.078}$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$1089.2$} & \multicolumn{1}{c}{$1117.2$} \\ \hline \multicolumn{10}{c}{Exact cosmological solution with $\gamma=1$} \\ \multicolumn{1}{c}{SNe Ia} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.732_{-0.017}^{+0.017}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.161_{-0.314}^{+0.282}$} & \multicolumn{1}{c}{$0.553_{-0.068}^{+0.126}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$1027.2$} & \multicolumn{1}{c}{$1048.1$} \\ \multicolumn{1}{c}{OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.733_{-0.006}^{+0.006}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.408_{-0.203}^{+0.119}$} & \multicolumn{1}{c}{$0.378_{-0.016}^{+0.032}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$39.7$} & 
\multicolumn{1}{c}{$51.5$} \\ \multicolumn{1}{c}{SNe Ia + OHD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$0.730_{-0.006}^{+0.006}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.479_{-0.132}^{+0.068}$} & \multicolumn{1}{c}{$0.371_{-0.009}^{+0.018}$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$1083.0$} & \multicolumn{1}{c}{$1104.0$} \\ \hline\hline \end{tabular} } \caption{Best-fit values for the free parameters $\theta$ of each model, as well as the respective goodness of fit criteria, obtained in the MCMC analysis. The first block shows the best-fit values for the standard cosmological model $\Lambda$CDM, while the second and third blocks show the best-fit parameters of the exact cosmological solution with $\gamma\neq 1$ and $\gamma =1$, respectively. The uncertainties correspond to a $1\sigma$ ($68.3\%$) confidence level (CL). The best-fit values for the $\Lambda$CDM model are used for the sake of comparison with the exact cosmological solutions.} \label{bestfittable} \end{table*} \begin{figure*} \includegraphics[scale=0.5]{TriangleLCDM} \caption{Joint and marginalized constraints on $h$ and $\Omega_{m}$ obtained in the MCMC analysis for the $\Lambda$CDM model. The admissible regions correspond to $1\sigma\left(68.3\%\right)$, $2\sigma\left(95.5\%\right)$, and $3\sigma\left(99.7\%\right)$ confidence level (CL), respectively. The best-fit values for each parameter are shown in Table \ref{bestfittable}.} \label{triangleLCDM} \end{figure*} \begin{figure*} \includegraphics[scale=0.5]{Trianglegeneral} \caption{Joint and marginalized constraints on $h$, $x$, $\epsilon$ and $\gamma$ obtained in the MCMC analysis for the exact cosmological solution with $\gamma\neq 1$. The admissible regions correspond to $1\sigma\left(68.3\%\right)$, $2\sigma\left(95.5\%\right)$, and $3\sigma\left(99.7\%\right)$ confidence level (CL), respectively. The best-fit values for each parameter are shown in Table \ref{bestfittable}.} \label{trianglegeneral} \end{figure*} \begin{figure*} \includegraphics[scale=0.5]{Triangleparticular} \caption{Joint and marginalized constraints on $h$, $x$ and $\epsilon$ obtained in the MCMC analysis for the exact cosmological solution with $\gamma=1$. The admissible regions correspond to $1\sigma\left(68.3\%\right)$, $2\sigma\left(95.5\%\right)$, and $3\sigma\left(99.7\%\right)$ confidence level (CL), respectively. The best-fit values for each parameter are shown in Table \ref{bestfittable}.} \label{triangleparticular} \end{figure*} We shall constrain the free parameters of the solutions presented in the above section with the Supernova Ia data (SNe Ia) and the Observational Hubble Data (OHD). To do so, we compute the best-fit parameters with the affine-invariant Markov Chain Monte Carlo Method (MCMC)~\cite{Goodman}, implemented in the pure-Python code \textit{emcee}~\cite{Foreman}, by setting 50 chains or ``walkers" with 10000 steps and 10000 burn-in steps; the latter in order to let the walkers explore the parameter space and settle near the maximum of the probability density. As a likelihood function we use the following Gaussian distribution \begin{equation} \mathcal{L}\propto\exp{\left(-\frac{\chi_{I}^{2}}{2}\right)}, \label{distribution} \end{equation} where $\chi_{I}^{2}$ is the merit function with $I$ representing each data set, namely SNe Ia, OHD and their joint analysis $\chi_{joint}^{2}=\chi_{SNe}^{2}+\chi_{OHD}^{2}$.
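For concreteness, the sampling setup just described can be sketched in a few lines of Python (a minimal illustration: \texttt{chi2\_joint} is a placeholder standing in for the merit functions constructed below, and the priors anticipate those specified later in this section, including the Gaussian prior $h=0.7324\pm 0.0174$): \begin{verbatim}
import numpy as np
import emcee

def chi2_joint(theta):
    # Placeholder: to be replaced by chi2_SNe(theta) + chi2_OHD(theta),
    # the merit functions constructed in the remainder of this section.
    return 0.0

def log_prob(theta):
    # ln L = -chi2/2, Eq. (distribution), plus the priors:
    # Gaussian on h (Riess et al. 2016), flat on x and eps.
    h, x, eps = theta
    if not (0.0 < x < np.pi / 2.0 and 0.0 < eps < 1.0):
        return -np.inf
    lp = -0.5 * ((h - 0.7324) / 0.0174) ** 2
    return lp - 0.5 * chi2_joint(theta)

ndim, nwalkers = 3, 50
p0 = np.array([0.73, 1.2, 0.5]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
state = sampler.run_mcmc(p0, 10000)   # burn-in steps
sampler.reset()
sampler.run_mcmc(state, 10000)        # production steps
\end{verbatim}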
Therefore, to impose the constraints, we use the Pantheon SNe Ia sample~\cite{Scolnic} and the compilation of OHD provided by Magaña \textit{et al.}~\cite{Magana}. The first sample consists of 1048 data points in the redshift range $0.01\leq z\leq 2.3$; it is a compilation of 279 SNe Ia discovered by the Pan-STARRS1 (PS1) Medium Deep Survey, combined with the distance estimates of SNe Ia from the Sloan Digital Sky Survey (SDSS), the Supernova Legacy Survey (SNLS), and various low-$z$ and Hubble Space Telescope (HST) samples, where the distance estimator is obtained using a modified version of the Tripp formula~\cite{Tripp} with two nuisance parameters calibrated to zero with the method ``BEAMS with Bias Correction'' (BBC) proposed by Kessler and Scolnic~\cite{Kessler}. Hence, the observational distance modulus for each SNe Ia at a certain redshift $z_{i}$ is given by the formula \begin{equation} \mu_{i} = m_{B,i}-\mathcal{M} \label{mu}, \end{equation} where $m_{B,i}$ is the apparent B-band magnitude of a fiducial SNe Ia and $\mathcal{M}$ is a nuisance parameter. Considering that the Pantheon sample gives directly the corrected apparent magnitude for each SNe Ia, the merit function for the SNe Ia data sample can be constructed in matrix notation as \begin{equation} \chi_{SNe}^{2}=\mathbf{M}^{\dagger}\mathbf{C^{-1}}\mathbf{M} \label{MeritSNe}, \end{equation} where $\mathbf{M}=\mathbf{m}_{B}-\boldsymbol\mu_{th}-\boldsymbol{\mathcal{M}}$ and $\mathbf{C}$ is the total covariance matrix, given by \begin{equation} \mathbf{C}=\mathbf{D}_{stat}+\mathbf{C}_{sys} \label{CovarianceMatrix}, \end{equation} where the diagonal matrix $\mathbf{D}_{stat}$ denotes the statistical uncertainties obtained by adding in quadrature the uncertainties of $m_{B}$ for each redshift, associated with the BBC method, while $\mathbf{C}_{sys}$ denotes the systematic uncertainties in the BBC approach. On the other hand, the theoretical distance modulus for each SNe Ia at a certain redshift $z_{i}$ in a flat FLRW spacetime for a given model is defined through the relation \begin{equation} \mu_{th}\left(z_{i},\theta\right)=5\log_{10}{\left[\frac{d_{L}\left(z_{i},\theta\right)}{Mpc}\right]}+\bar{\mu}, \label{mutheoretical} \end{equation} where $\theta$ encompasses the free parameters of the respective solution, $\bar{\mu}=5\log_{10}{(c)}+25$ with $c$ the speed of light, and $d_{L}$ is the luminosity distance, which takes the form \begin{equation} d_{L}(z_{i},\theta)=(1+z_{i})\int_{0}^{z_{i}}{\frac{dz'}{H(z',\theta)}} \label{luminosity}. \end{equation} In order to reduce the number of free parameters and to marginalize over the nuisance parameter $\mathcal{M}$, we define $\bar{\mathcal{M}}=\bar{\mu}+\mathcal{M}$ and the merit function (\ref{MeritSNe}) can be expanded as~\cite{Lazkoz} \begin{equation} \chi^{2}_{SNe}=A(z,\theta)-2B(z,\theta)\bar{\mathcal{M}}+C(z,\theta)\bar{\mathcal{M}}^{2} \label{expandedmerit}, \end{equation} where \begin{equation} A(z,\theta)=\mathbf{M}(z,\theta,\bar{\mathcal{M}}=0)^{\dagger}\mathbf{C^{-1}}\mathbf{M}(z,\theta,\bar{\mathcal{M}}=0), \end{equation} \begin{equation} B(z,\theta)=\mathbf{M}(z,\theta,\bar{\mathcal{M}}=0)^{\dagger}\mathbf{C^{-1}}\mathbf{1}, \end{equation} \begin{equation} C(z,\theta)=\mathbf{1}^{\dagger}\mathbf{C^{-1}}\mathbf{1}.
\end{equation} Hence, minimizing the expression (\ref{expandedmerit}) with respect to $\bar{\mathcal{M}}$ gives $\bar{\mathcal{M}}=B/C$ and the merit function reduces to \begin{equation} \chi_{SNe}^{2}\Bigr\rvert_{min}=A(z,\theta)-\frac{B(z,\theta)^{2}}{C} \label{MeritSNefinal}, \end{equation} which clearly only depends on the free parameters of the respective solution. It is important to note that the merit function given by (\ref{MeritSNe}) provides the same information as the function given by (\ref{MeritSNefinal}); this is because the best-fit parameters minimize the merit function. Then, $\chi^{2}_{min}$ gives an indication of the goodness of fit: the smaller its value, the better the fit. The second sample consists of 51 data points in the redshift range $0.07\leq z\leq 2.36$, of which 31 are obtained by the Differential Age (DA) method~\cite{Jimenez}, which implies that these data points are model independent. The remaining 20 points come from BAO measurements, assuming that the $H(z)$ data obtained come from independent measurements. Hence, the merit function for the OHD data sample can be constructed as \begin{equation} \chi^{2}_{OHD}=\sum_{i=1}^{51}{\left[\frac{H_{i}-H_{th}(z_{i},\theta)}{\sigma_{H,i}}\right]^{2}} \label{MeritOHD}, \end{equation} where $H_{i}$ is the observational Hubble parameter at redshift $z_{i}$ with associated error $\sigma_{H,i}$, provided by the OHD sample considered, $H_{th}$ is the theoretical Hubble parameter at the same redshift, provided by the solutions, and $\theta$ encompasses the free parameters of the respective solution. The two cosmological solutions are contrasted with the SNe Ia and OHD data through their corresponding Hubble parameters (\ref{Hgamma}) and (\ref{solforH}). Because the solutions correspond to only matter as a dominant component, we have to impose $\Omega_{m}=1$. So, for the solution with $\gamma\neq 1$ the free parameters are $\theta=\{H_{0},\xi_{0},\epsilon,\gamma\}$, while for the solution with $\gamma = 1$ they are $\theta=\{H_{0},\xi_{0},\epsilon\}$. Moreover, dimensionless parameters are required for the fit; $\epsilon$ and $\gamma$ are already dimensionless. So, we replace $H_{0}$ by the dimensionless Hubble parameter $h$, where \begin{equation} H_{0}=100\;km\;s^{-1}\;Mpc^{-1}\times h \label{H0dimensionless}, \end{equation} and a dimensionless $\xi_{0}$ requires the following redefinition \begin{equation} \xi_{0}\rightarrow H_{0}^{1-2s}\xi_{0}, \label{xi0dimensionless} \end{equation} where, considering that the solutions are obtained for $s=1/2$, $\xi_{0}$ is in this particular case also dimensionless. Consequently, for $h$ we use a Gaussian prior according to the value reported by A. G. Riess \textit{et al.} in~\cite{Riess2016}, $H_{0}=73.24\pm 1.74 \;km\;s^{-1}\;Mpc^{-1}$, which is measured with a $2.4\%$ uncertainty; for $\epsilon$ and $\gamma$ we use the flat priors $0<\epsilon<1$ and $1<\gamma<2$; and for $\xi_{0}$ we make the change of variable $\xi_{0}=\xi_{0}(x)=\tan{(x)}$, for which we use the flat prior $0<x<\pi/2$, the latter in order to simplify the sampling of the full parameter space used in the MCMC analysis. It is important to mention that in both cases we use the current value of the deceleration parameter, $q_{0}=-0.6$, as an initial condition~\cite{Planck}.
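The merit functions (\ref{MeritSNefinal}) and (\ref{MeritOHD}) translate directly into code (again a minimal sketch: the data arrays and the function \texttt{H\_of\_z} are placeholders for the Pantheon and OHD compilations and for the Hubble parameter of the model under consideration): \begin{verbatim}
import numpy as np
from scipy.integrate import quad

def mu_shape(z_i, H_of_z, theta):
    # 5 log10 of (1+z) * int_0^z dz'/H, Eqs. (mutheoretical)-(luminosity),
    # up to the additive constant absorbed into Mbar and marginalized.
    dC, _ = quad(lambda zp: 1.0 / H_of_z(zp, theta), 0.0, z_i)
    return 5.0 * np.log10((1.0 + z_i) * dC)

def chi2_sne(theta, mB, z_sn, Cinv, H_of_z):
    # Marginalized merit function, Eq. (MeritSNefinal): chi2 = A - B^2/C.
    M = mB - np.array([mu_shape(z, H_of_z, theta) for z in z_sn])
    ones = np.ones_like(M)
    A = M @ Cinv @ M
    B = M @ Cinv @ ones
    C = ones @ Cinv @ ones
    return A - B ** 2 / C

def chi2_ohd(theta, z_ohd, H_obs, sigma_H, H_of_z):
    # Uncorrelated chi-square over the 51 H(z) points, Eq. (MeritOHD).
    H_th = np.array([H_of_z(z, theta) for z in z_ohd])
    return np.sum(((H_obs - H_th) / sigma_H) ** 2)
\end{verbatim}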
In the solution with $\gamma\neq 1$ we need to use as a prior the restriction given by Eq.(\ref{constraintq0gamma}), in order to avoid a complex Hubble parameter during the fit; and in the solution with $\gamma=1$ we need to use as a prior the restriction given by Eq.(\ref{constraintHpositive}), in order to obtain a positive Hubble parameter. Moreover, the $a$ parameter in the \textit{emcee} code is modified in order to obtain a mean acceptance fraction between $0.2$ and $0.5$~\cite{Foreman}. \section{Results and discussion} Both solutions will be compared with the standard cosmological model $\Lambda$CDM, whose Hubble parameter as a function of the redshift is given by \begin{equation} H_{\Lambda CDM}(z)=H_{0}\sqrt{1-\Omega_{m}+\Omega_{m}\left(1+z\right)^{3}}, \label{LambdaCDM} \end{equation} with the free parameters $\theta=\{h,\Omega_{m}\}$, where for $\Omega_{m}$ we use the flat prior $0<\Omega_{m}<1$, and for $h$ we use the same Gaussian prior as for the exact cosmological solutions. Moreover, in order to compare the goodness of the fits statistically, we will use the Bayesian Information Criterion (BIC)~\cite{Schwarz}, defined as \begin{equation} BIC=\theta_{N}\ln{\left(n\right)}-2\ln{\left(\mathcal{L}_{max}\right)}, \label{BIC} \end{equation} where $\mathcal{L}_{max}$ is the maximum value of the likelihood function, calculated for the best-fit parameters, $\theta_{N}$ is the total number of free parameters of the model, and $n$ is the size of the respective data sample. This criterion addresses the over-fitting that results from increasing the likelihood simply by adding free parameters. To do so, it introduces a penalty that depends on both the total number of free parameters of each model and the total number of data points. The model statistically favored by observations, as compared to the others, corresponds to the one with the smallest value of BIC, where a difference of $2-6$ in BIC between two models is considered evidence against the model with the higher BIC, a difference of $6-10$ in BIC is already strong evidence, and a difference $>10$ in BIC definitely represents very strong evidence. \begin{table}[h] \centering \resizebox{7cm}{!} { \begin{tabular}{ccc} \hline\hline Data & \multicolumn{1}{c@{\hspace{0.5cm}}}{} & $\xi_{0}$ \\ \hline \multicolumn{3}{c}{Exact cosmological solution with $\gamma\neq 1$} \\ SNe Ia & & $3.441_{-1.842}^{+8.464}$ \\ OHD & & $6.671_{-3.851}^{+17.828}$ \\ SNe Ia + OHD & & $12.050_{-7.307}^{+33.822}$ \\ \hline \multicolumn{3}{c}{Exact cosmological solution with $\gamma= 1$} \\ SNe Ia & & $2.302_{-1.171}^{+5.480}$ \\ OHD & & $6.088_{-3.478}^{+16.730}$ \\ SNe Ia + OHD & & $10.863_{-6.470}^{+31.152}$ \\ \hline\hline \end{tabular} } \caption{Best-fit values for the free parameter $\xi_{0}$ obtained from the best-fit values of $x$, indicated in Table \ref{bestfittable}, and the relation $\xi_{0}=\tan{(x)}$.} \label{xi0table} \end{table} The best-fit values for the $\Lambda$CDM model and the exact cosmological solutions, as well as the goodness of fit criteria, are shown in Table \ref{bestfittable}. In Figs.\ref{triangleLCDM}-\ref{triangleparticular} we depict their respective joint credible regions for combinations of their respective free parameters. From them, we are able to conclude the following: \begin{itemize} \item[1.-] The $\Lambda$CDM model has the lowest values of $\chi^{2}_{min}$ and BIC, i.e., it is the model most favored by the observations.
Focusing on the values of $\chi^{2}_{min}$, the solution with $\gamma=1$ is as well suited to describe the SNe Ia data as the $\Lambda$CDM model, with a difference in $\chi^{2}_{min}$ smaller than $0.3$. However, from the joint credible region plots it is possible to see that the SNe Ia data constrain the free parameters less than the OHD data and the joint data analysis do. On the other hand, focusing on the BIC criterion, the smallest BIC difference occurs between the $\Lambda$CDM model and the solution with $\gamma=1$, and it already reaches the value $7.3$ for the SNe Ia data. As a consequence, the solutions with $\gamma \neq 1$ are even more disfavored by the data. Moreover, the observations favor models where the recent accelerated expansion of the Universe is due to DE over models where the acceleration is due to the dissipative effects experienced by the DM. Furthermore, the exact cosmological solution with $\gamma=1$ has lower values of $\chi^{2}_{min}$ and BIC than the solutions with $\gamma\neq 1$, i.e., the observations favor the solutions where CDM is considered. \item[2.-] The main issue of the solutions arises from the best-fit values obtained for $\epsilon$, which are clearly inconsistent with the range $10^{-11}\ll\epsilon\lesssim 10^{-8}$ reported in~\cite{Piattella} as required for consistency with the properties of structure formation. \item[3.-] In order to fulfill the condition $\tau H\ll 1$, given by Eq.(\ref{eq:eq8}), it is necessary, in the best scenario, that $\xi_{0}\ll 2\sqrt{3}$. From the values of $\xi_{0}$ shown in Table \ref{xi0table}, it is possible to see that, at the lower end of the interval, the SNe Ia data give for both solutions a value close to $\sqrt{3}$, the OHD data a value close to $2\sqrt{3}$, and the joint data analysis values clearly greater than $2\sqrt{3}$. Therefore, the condition $\tau H\ll 1$ is not fulfilled by the exact cosmological solution in either case; nevertheless, there is the possibility that this fluid condition can be fulfilled in some regime, with improved data fits, or under new considerations when studying the proposed cosmological model. Hence, this claim is not conclusive. \item[4.-] In natural units $\xi_{0}$ is a dimensionless parameter. In terms of physical units, it does not carry viscosity units due to the form in which it was defined. Nevertheless, it is possible to evaluate the dissipative pressure, for example, at the present time, in order to get an estimation of the size of the values involved. For the present time we obtained $\Pi\approx 10^{-20}\,Pa$, which is a very low pressure in comparison, for example, with the values obtained in Eckart's framework (see~\cite{Brevik11}). \item[5.-] A possible explanation for the principal drawbacks presented by the exact cosmological solutions for $\gamma\neq1$ and $\gamma=1$ could be related to the particular choice of the bulk viscosity coefficient (see Eq.(\ref{xirho})), which is in this case proportional to the square root of the DM density and is responsible for the recent accelerated expansion of the Universe in this model. Because $\rho\rightarrow\infty$ when $z\rightarrow\infty$, and $\rho\rightarrow 0$ when $z\rightarrow -1$, the bulk viscosity becomes relevant in the past and negligible in the future, which is when the Universe experiences the acceleration of its expansion.
Therefore, in order for the bulk viscosity to become relevant at present and future times, it is necessary to increase the value of $\xi_{0}$, which inevitably prevents fulfilling the near-equilibrium condition $\tau H\ll 1$; alternatively, an increase of the $\epsilon$ value would be required. This fact can be observed in Figs.~\ref{trianglegeneral} and \ref{triangleparticular} (most clearly in Fig.~\ref{triangleparticular}), where lower values of $\xi_{0}$ correspond to larger $\epsilon$ values and vice versa. It is worth mentioning that, because $\epsilon$ cannot be larger than one, $\xi_{0}$ has a non-zero lower bound; conversely, as $\epsilon$ approaches its minimum value, $\xi_{0}\rightarrow\infty$. \end{itemize} \section{Conclusions} We have tested a cosmological model described by an analytical solution recently found in~\cite{Gonzalez} for $s=1/2$ and for arbitrary $\gamma$, including the particular case $\gamma=1$, by constraining it against Supernovae Ia and Observational Hubble Data. The solution gives the time evolution of the Hubble parameter in the framework of the full causal thermodynamics of Israel-Stewart. This solution was obtained considering a bulk viscous coefficient with the dependence $\xi=\xi_{0}\rho^{1/2}$, the general expression given by Eq.(\ref{relaxationtime}) for the relaxation time, and a fluid with a barotropic EoS $p=\left(\gamma-1\right)\rho$. The results of the constraints still indicate that the $\Lambda$CDM model is statistically the model most favored by the observations. The lesson that we have learned here is that unified DM models succeed in displaying the transition between decelerated and accelerated expansion, an essential feature supported by the observational data, without invoking a cosmological constant or some other form of dark energy. Nevertheless, as was found in~\cite{Piattella}, only a very small value of $\epsilon$ is consistent with structure formation, while the numerical value we found from the best fit to the data leads to inconsistencies with the values required at the perturbative level. It is relevant to mention that the exact solution constrained in this paper naturally displays, for some parameter values, the transition between decelerated and accelerated expansions. Other solutions found in the literature within the IS framework and for the same choice of the bulk viscosity coefficient (see for instance \cite{Cruzpowerlaw}), which are described by the power-law behavior $H\left( t\right) =A\left( t_{s}-t\right) ^{-1}$, do not display the same natural transition. In fact, depending on the parameter $A$, they represent monotonically accelerated or decelerated solutions. For the case of an accelerated expansion, a large non-adiabatic contribution to the speed of sound is required. These two investigated solutions have in common the choice of a bulk viscosity coefficient which grows with the energy density of DM. This Ansatz has been made due to the simplicity the master equation acquires within the IS formalism. Nevertheless, from the physical point of view, this choice implies that the negative dissipative pressure grows with the redshift, while it is the inverse behavior that leads to an accelerated late-time expansion. The above results indicate that, in the framework of the causal thermodynamics of dissipative fluids, accelerated solutions can in fact be obtained; nevertheless, the non-adiabatic contribution to the speed of sound happens to be large, in contradiction with the conclusions of the perturbation analysis.
This result can be inferred when the general expression for the relaxation time $\tau$, given by Eq.(\ref{relaxationtime}), is used. In some previous results, like the one displayed in~\cite{Mathew2017}, where $\epsilon$ is set equal to one from the beginning, the consequences of this drawback were not properly acknowledged. This result is also consistent with the mathematical condition found in~\cite{Gonzalez}, where the exact solution displays an accelerated expansion only if $\epsilon>1/18$. Moreover, we have shown that the values of the parameters found from the data constraints lead to an inconsistency in the fluid description of the dissipative dark matter component. In fact, the best-fit parameters indicate that the required condition $\tau H\ll 1$ cannot be fulfilled by the solution. This result is consistent with the basic assumption in the thermodynamic approaches to relativistic viscous fluids, which asserts that the viscous stress should be lower than the equilibrium pressure of the fluid. This is the so-called near-equilibrium condition. When the negative pressure comes only from the dissipation, the above condition is not fulfilled. The condition $\tau H\ll 1$ means that the particles of the fluid have an interaction rate that allows them to maintain thermal equilibrium, adjusting more rapidly than the natural time-scale defined by the expansion time $H^{-1}$~\cite{Maartens1996}. Therefore, it is expected that the condition $\tau H\ll 1$ will not be fulfilled by the parameters of an exact solution when this describes accelerated expansions. Nevertheless, as previously mentioned, this feature does not rule out the possibility that this condition be fulfilled in some other region of the solution. Extensions of the IS approach which consider non-linear effects allow deviations from equilibrium. This could represent a possible solution to the technical difficulties just mentioned, and to some extent one such scenario has been explored in~\cite{Cruz2017} for phantom-type solutions. We expect to go further and extend the analytic solution including this nonlinear generalization elsewhere. \begin{acknowledgments} We thank Arturo Avelino for useful discussions. This article was partially supported by Dicyt from Universidad de Santiago de Chile, through Grants $N^{\circ}$ $041831PA$ (G.P.) and $N^{\circ}$ $041831CM$ (N.C.). E.G. was supported by Proyecto POSTDOC\_DICYT, c\'odigo 041931CM\_POSTDOC, Universidad de Santiago de Chile and partially supported by CONICYT-PCHA/Doctorado Nacional/2016-21160331. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction} One of the fundamental achievements of quantum topology was a discovery of a non-trivial connection between monoidal categories and 3-manifolds. This connection was first observed by O. Viro and V. Turaev and later generalized in the papers of J. Barrett, B. Westbury, A. Ocneanu, S. Gelfand, D. Kazhdan and others. Their results may be summarized by saying that every spherical fusion category $\mathcal C$ over $\mathbb C$ with $\dim (\mathcal C)\neq 0$ gives rise to a topological invariant $\vert M\vert_{\mathcal C}\in \mathbb C$ of any closed oriented 3-dimensional manifold $M$. A prototypical example of a spherical fusion category is the category ${\textsc{Rep}} (G)$ of finite-dimensional complex representations of a finite group $G$. This category allows nice operations on objects and morphisms: direct sums, tensor products, left and right dualization. Moreover, ${\textsc{Rep}} (G)$ contains a finite family of \lq\lq simple" objects (= irreducible representations) such that all objects split as direct sums of the objects of this family. Certainly, the sets of morphisms in ${\textsc{Rep}} (G)$ are finite-dimensional complex vector spaces. Axiomatizing these properties, one obtains a notion of a fusion category, see \cite{ENO}. The condition of sphericity \cite{BW2} on a fusion category $\mathcal C$ says that the \lq\lq left" and \lq\lq right" dimensions of the objects of $\mathcal C$ are equal. The resulting $\mathbb C$-valued dimension of objects of $\mathcal C$ is preserved under isomorphisms in~$\mathcal C$. This allows one to define the dimension $\dim (\mathcal C)$ of a spherical fusion category $\mathcal C$ as the sum of the squares of the dimensions of the isomorphism classes of simple objects. For $\mathcal C={\textsc{Rep}} (G)$, the dimension of objects is the usual dimension of the underlying vector spaces and $\dim ({\textsc{Rep}} (G))=\vert G\vert$. The class of spherical fusion categories includes the categories of type ${\textsc{Rep}} (G)$ and many other categories some of which will be discussed below. The class of spherical fusion categories is believed to be \lq\lq big but not too big" so that one may hope for some kind of classification. The invariant of a 3-manifold $M$ associated with $ {\textsc{Rep}} (G)$ is nothing but the number of homomorphisms from the fundamental group of $M$ to $G$. In general, the invariant $\vert M\vert_{\mathcal C}$ associated with a spherical fusion category $\mathcal C$ is not determined by the fundamental group. The definition of~$\vert M\vert_{\mathcal C}$ is rather subtle and proceeds in terms of state sums on a triangulation of~$M$. The key algebraic ingredients of these state sums are the so-called $6j$-symbols associated with~$\mathcal C$. The formula $(M,\mathcal C)\mapsto \vert M\vert_{\mathcal C}$ defines a pairing between homeomorphism classes of closed oriented 3-manifolds and spherical fusion categories of non-zero dimension. A study of this pairing leads to natural questions both in algebra and topology. One usually studies the topological aspects. Is the pairing $(M,\mathcal C)\mapsto \vert M\vert_{\mathcal C}$ sufficiently strong to distinguish the 3-sphere from other 3-manifolds? (The answer is \lq\lq yes"). Is it sufficiently strong to distinguish arbitrary 3-manifolds up to homeomorphism? (The answer is \lq\lq no", see \cite{Fun}).
We shall focus on algebraic questions and specifically on the following reconstruction problem: to what extent can a spherical fusion category be reconstructed from the associated 3-manifold invariants? The rationale for this problem is that the number $\vert M\vert_{\mathcal C}$ may be viewed as a generalized dimension of $\mathcal C$ determined by $M$. The reconstruction problem is intriguing already for the categories of type ${\textsc{Rep}} (G)$. Is it true that for any non-isomorphic finite groups $G_1, G_2$, there is a closed oriented 3-manifold $M$ such that the numbers of homomorphisms from $\pi_1(M)$ to $G_1$ and $G_2$ are different? We do not know the answer. In this paper, we study the reconstruction problem for an interesting class of spherical fusion categories recently introduced by Tambara and Yamagami \cite{TY}. A Tambara-Yamagami category $\mathcal{TY}(A,\chi,\nu)$ is determined by a bicharacter $\chi$ on a finite abelian group $A$ and a sign $\nu=\pm 1$. By a {\it bicharacter} on $A$ we mean a non-degenerate symmetric bilinear pairing $\chi:A\times A\longrightarrow S^1$; the non-degeneracy of $\chi$ means that the adjoint homomorphism $A\to \mathrm{Hom} (A, S^1)$ is bijective. The pair $(A, \chi)$ will be called a {\it bicharacter pair}. Two bicharacter pairs $(A, \chi)$ and $(A', \chi')$ are said to be isomorphic if there is an isomorphism $A \cong A' $ transforming $\chi $ into $\chi' $. It is known that two Tambara-Yamagami categories, $\mathcal{TY}(A,\chi,\nu)$ and $\mathcal{TY}(A',\chi',\nu')$, are monoidally equivalent if and only if the pairs $(A,\chi)$ and $(A',\chi')$ are isomorphic and $\nu=\nu'$. Each bicharacter pair $(A, \chi)$ splits uniquely as an orthogonal sum $$(A,\chi)= \bigoplus_{p} (A^{(p)}, \chi^{(p)}),$$ where $p$ runs over all prime natural numbers, $A^{(p)}\subset A$ is the abelian $p$-group consisting of the elements of $A$ annihilated by a sufficiently big power of $p$, and $\chi^{(p)}:A^{(p)}\times A^{(p)} \longrightarrow S^1$ is the restriction of $\chi$ to $A^{(p)}$. In the sequel, the order of a group $A$ is denoted $\vert A\vert$. \begin{theorem}\label{thm-one} Let $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ and $\mathcal C'=\mathcal{TY}(A',\chi',\nu')$ be two Tambara-Yamagami categories such that $\vert M \vert_{\mathcal C}= \vert M \vert_{\mathcal C'}$ for all closed oriented 3-manifolds~$M$. (a) We have $\vert A\vert=\vert A'\vert$ and if $\vert A\vert$ is not a positive power of $4$, then $\nu=\nu'$. (b) For every odd prime $p$, the pairs $(A^{(p)}, \chi^{(p)})$ and $(A'^{(p)}, \chi'^{(p)})$ are isomorphic. \end{theorem} Combining the claims (a) and (b) we obtain the following corollary. \begin{corollary}\label{cor-one} Let $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ and $\mathcal C'=\mathcal{TY}(A',\chi',\nu')$ be two Tambara-Yamagami categories such that $\vert M \vert_{\mathcal C}= \vert M \vert_{\mathcal C'}$ for all closed oriented 3-manifolds~$M$. If $\vert A\vert$ is odd, then the bicharacter pairs $(A , \chi )$ and $(A' , \chi' )$ are isomorphic and $\nu=\nu'$. \end{corollary} We conjecture a similar claim in the case where $\vert A\vert$ is even. The proof of Theorem~\ref{thm-one} is based on an explicit computation of $\vert M \vert_{\mathcal C}$ for the lens spaces $L_k=L_{k,1}$ with $k=0,1, 2,\ldots$ Recall that $L_k $ is the closed oriented 3-manifold obtained from the 3-sphere $S^3$ by surgery along a trivial knot in $S^3$ with framing $k$. In particular, $L_0=S^1 \times S^2$, $L_1=S^3$, and $L_2=\mathbb{R}P^3$.
The manifolds $\{L_k\}_{k}$ are pairwise non-homeomorphic; they are distinguished by the fundamental group $\pi_1(L_k)=\mathbb{Z}/k\mathbb{Z}$. To formulate our computation of $\vert L_k \vert_{\mathcal C}$, we recall the notion of a Gauss sum. Let $A$ be a finite abelian group and $\chi:A\times A\longrightarrow S^1$ be a symmetric bilinear form (possibly degenerate). A {\it quadratic map} associated with~$\chi$ is a map $\mu:A\to S^1$ such that for all $a,b\in A$, \begin{equation*} \mu(a+b)=\chi(a,b)\, \mu(a)\, \mu(b) . \end{equation*} In other words, the coboundary of $\mu$ is equal to $\chi$. Such a $\mu$ always exists (see, for example, \cite{Klep}) and determines the normalized {\it Gauss sum} $$\gamma(\mu)= \vert A\vert^{-1/2} \vert A^\perp_\chi \vert^{-1/2} \sum_{a\in A } \mu(a) \in \mathbb C,$$ where $$A^\perp_\chi=\{a\in A\, \vert \, \chi(a,b)=1 \,\ {\text {for all}}\,\ b\in A\}$$ is the annihilator of $\chi$. (If $\chi$ is a bicharacter, then $A^\perp_\chi=\{0\}$.) The normalization is chosen so that either $\gamma(\mu)=0$ or $\vert \gamma(\mu)\vert=1$ (see Lemma~\ref{lemma-twominus} below). Denote by $Q_\chi$ the set of quadratic maps associated with $\chi$. This set has precisely $\vert A\vert$ elements; this follows from the fact that any two quadratic maps associated with~$\chi$ differ by a homomorphism $A\to S^1$. Every integer $k\geq 0$ determines a subgroup $A_k=\{a\in A\, |\, ka =0\}$ of $A$ and a number $$\zeta_k ( \chi)= \vert A\vert^{-1/2} \vert A_k \vert^{-1/2} \sum_{ \mu\in Q_\chi} \gamma(\mu)^k\in \mathbb C.$$ For example, $A_0=A$ and $\zeta_0(\chi)=1$. \begin{theorem}\label{thm-two} Let $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ be a Tambara--Yamagami category. For any odd integer $k\geq 1$, we have \begin{equation}\label{eq-odd} \vert L_{k} \vert_{\mathcal C}= \frac {\vert A_{k}\vert }{2\vert A\vert} . \end{equation} For any even integer $k\geq 0$, we have \begin{equation}\label{eq-even} \vert L_{k}\vert_{\mathcal C}=\frac{\vert A_{k}\vert +\nu^{k/2}\vert A\vert^{1/2}\vert A_{k/2}\vert^{1/2}\zeta_{k/2} (\chi)}{2\vert A\vert}. \end{equation} \end{theorem} For $k=0$, Formula~\eqref{eq-even} gives $\vert S^1 \times S^2 \vert_{\mathcal C} =1$ which is known to be true for all spherical fusion categories $\mathcal C$. Our proof of Theorem~\ref{thm-two} is based on two results. The first is the equality $\vert M \vert_{\mathcal C}= \tau_{\mathcal Z(\mathcal C)} (M)$ recently established in \cite{TVi}. Here $\mathcal C$ is an arbitrary spherical fusion category of non-zero dimension, $\mathcal Z(\mathcal C)$ is the Drinfeld-Joyal-Street center of $\mathcal C$, and $\tau_{\mathcal Z(\mathcal C)}(M)$ is the Reshetikhin-Turaev invariant of $M$. The second result is the computation of the center of $\mathcal C=\mathcal{TY}(A, \chi,\nu)$ in \cite{GNN}. The paper is organized as follows. In Section~\ref{sec-1} we recall the Tambara-Yamagami category and its center and prove Theorem~\ref{thm-two}. In Sections~\ref{sec-3} and~\ref{sec-3(a)} we prove respectively claims (a) and (b) of Theorem~\ref{thm-one}. This paper was started during the visit of VT to the University of Caen in June 2010. VT would like to thank the University of Caen for hospitality. The work of V.~Turaev was partially supported by the NSF grant DMS-0904262. \section{The Tambara-Yamagami categories and their centers}\label{sec-1} In this section, $(A,\chi)$ is a bicharacter pair, $\nu=\pm 1$, and $n=\vert A\vert$. 
\subsection{The category $\mathcal{TY}(A,\chi,\nu)$}\label{subsec-1} The simple objects of the Tambara-Yamagami category $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ are all elements $a$ of $ A$ and an additional object $m$. The unit object of $\mathcal C$ is the zero element $0\in A$. All other objects of $\mathcal C$ are finite direct sums of the simple objects. The tensor product in $\mathcal C$ is determined by the following fusion rules: $$ a\otimes b=a+b \, \, {\text {and}} \, \, a\otimes m=m\otimes a=m \, \ {\text{for all}} \,\ a,b\in A,\ {\text {and}}\ \, m\otimes m=\bigoplus_{a\in A} a. $$ The category $\mathcal C$ is associative but generally speaking not strictly associative. For any simple objects $U,V,W$ of $\mathcal C$, the associativity isomorphism $\phi_{ U,V,W}: (U\otimes V) \otimes W\to U \otimes (V \otimes W)$ is given by the following formulas (where $a,b,c$ run over $A$):\\ $$\begin{array}{lcl} \phi_{a,b,c} = id_{a+b+c},\hfill \phi_{a,b,m} = id_m,\hfill \phi_{m,a,b} = id_m,\\ \phi_{a,m,b} = \chi(a,b)id_m,\hfill \phi_{a,m,m} = \displaystyle\bigoplus_{b\in A}id_b,\hfill \phi_{m,m,a} = \displaystyle\bigoplus_{b\in A}id_b,\\ \phi_{m,a,m} = \displaystyle\bigoplus_{b\in A}\chi(a,b)id_b,\, \hfill \phi_{m,m,m} = ( {\nu} n^{-1/2} \chi(a,b)^{-1}id_m)_{a,b}. \end{array}$$ The unit isomorphisms are trivial. The duality in $\mathcal C$ is defined by $a^*=-a$ for all $a\in A$ and $m^*=m$. The left duality morphisms in $\mathcal C$ are the identity maps $ 0\to a\otimes a^*,\ a^*\otimes a\to 0$ for $a\in A$, the inclusion $ 0\hookrightarrow m\otimes m $ and $\nu n^{1/2}$ times the obvious projection $ m\otimes m \to 0$. The right duality morphisms in $\mathcal C$ are the identity maps $ 0\to a^*\otimes a,\ a\otimes a^*\to 0$ for $a\in A$, $\nu$ times the inclusion $ 0\hookrightarrow m\otimes m $ and $n^{1/2}$ times the obvious projection $ m\otimes m\to 0$. We define a fusion category as a $\mathbb C$-linear monoidal category with compatible left and right dualities such that all objects are direct sums of simple objects, the number of isomorphism classes of simple objects is finite, and the unit object is simple. (An object $V$ is simple if ${\rm End } (V)=\mathbb C \, id_V$.) The condition of sphericity says that the left and right dimensions of all objects are equal. A basic reference on the theory of fusion categories is \cite{ENO}. It is easy to see that $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ is a spherical fusion category of dimension $2n$. \subsection{The center}\label{subsec-2} The center $ {\mathcal Z(\mathcal C)} $ of $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ was computed in \cite{GNN}, Prop.\@ 4.1. The category $ {\mathcal Z(\mathcal C)} $ has three types of simple objects whose description together with the corresponding quantum dimensions and twists is as follows: (1) $2n$ invertible objects $X_{(a,\varepsilon)}$, where $ a$ runs over $ A$ and $ \varepsilon$ runs over complex square roots of $\chi(a,a)^{-1}$. Here $\dim(X_{(a,\varepsilon)})=1$ and $ \theta_{(a,\varepsilon)}=\chi(a,a)^{-1}$; (2) $\frac{n(n-1)}{2}$ objects $Y_{(a,b)}$ parameterized by unordered pairs $(a,b)$, where $a,b\in A,\ a\neq b$. Here $\dim(Y_{(a,b)})=2$ and $ \theta_{(a,b)}=\chi(a,b)^{-1}$; (3) $2n$ objects $Z_{(\mu,{\Delta})}$, where $\mu $ runs over $Q_\chi$ and ${\Delta}$ runs over the square roots of $ {\nu} \gamma(\mu)$. Here $\dim(Z_{(\mu,{\Delta})})= n^{1/2}$ and $\theta_{(\mu,{\Delta})}={\Delta}$. Denote by $I$ the set of the (isomorphism classes of) simple objects of $\mathcal Z(\mathcal C)$. 
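For instance, take $A=\mathbb{Z}/2\mathbb{Z}$, so that $n=2$ and necessarily $\chi(1,1)=-1$. One checks directly from the definitions that $Q_\chi=\{\mu_+,\mu_-\}$ with $\mu_{\pm}(1)=\pm i$ and $\gamma(\mu_{\pm})=2^{-1/2}(1\pm i)=e^{\pm i\pi/4}$. Accordingly, $\mathcal Z(\mathcal C)$ has four invertible objects $X_{(a,\varepsilon)}$ (with $\varepsilon=\pm 1$ for $a=0$ and $\varepsilon=\pm i$ for $a=1$), one object $Y_{(0,1)}$ of dimension $2$, and four objects $Z_{(\mu_{\pm},\Delta)}$ of dimension $2^{1/2}$ with $\Delta^2=\nu e^{\pm i\pi/4}$; in total, $4\times 1+1\times 4+4\times 2=16=4n^2$, in agreement with the general computation below.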
The dimension of $\mathcal Z(\mathcal C)$ is computed by $$\dim \mathcal Z(\mathcal C)=\sum_{i\in I} (\dim(i))^2=2n \times 1 + \frac{n(n-1)}{2} \times 4+ 2n \times n=4n^2.$$ We will need the following more general computation. \begin{lemma}\label{lemma-one} For an integer $k\geq 0$, set $ \tau_{k}= \sum_{i\in I}\theta_i^{k}(\dim(i))^2 $, where $\theta_i$ and $\dim(i)$ are the twist and the dimension of $i\in I$. If $k$ is odd, then $ \tau_{k} = 2n \vert A_{k}\vert$. If $k$ is even, then $ \tau_{k} = 2n ({\vert A_{k}\vert} + {\nu^{k/2}}\vert A\vert^{1/2}\vert A_{k/2}\vert^{1/2} \zeta_{k/2} (\chi)) $. \end{lemma} \begin{proof} A direct computation shows that $ \tau_{k}=2u_k+nv_k$, where $$ u_k=\sum_{a\in A}\chi(a,a)^{-k}+\sum_{(a,b)\in A^2, a\neq b}\chi(a,b)^{-k} $$ and $v_k=\sum_{({\mu}, \Delta) }{\Delta}^k $. Since $\chi$ is non-degenerate, $$ u_k=\sum_{a,b\in A} \chi(a,b)^{-k}=\sum_{a,b\in A}\chi(a,-kb) =n \, |A_k|. $$ If $k$ is odd, then the contributions of the pairs $({\mu}, \Delta)$ and $({\mu}, -\Delta)$ to $v_k$ cancel each other so that $v_k=0$ and $ \tau_{k} = 2n \vert A_{k}\vert$. For even $k$, $$v_k=\sum_\mu 2 ({\nu} \gamma (\mu) )^{k/2}=2 {\nu^{k/2}}\vert A\vert^{1/2}\vert A_{k/2}\vert^{1/2}\zeta_{{k/2}} (\chi).$$ \end{proof} \subsection{Proof of Theorem \ref{thm-two}}\label{sec-2} Since $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ is a spherical fusion category of non-zero dimension, it determines for any closed oriented 3-manifold $M$ a state sum invariant $\vert M\vert_{\mathcal C}\in \mathbb C$, see \cite{TV}, \cite{BW}. By a theorem of M\"uger \cite{Mug}, the category $\mathcal Z(\mathcal C)$ is modular in the sense of~\cite{Tur}. A modular category endowed with a square root $\mathcal D$ of its dimension gives rise to the Reshetikhin-Turaev invariant of any $M$ as above. The RT-invariant of $M$ determined by $\mathcal Z(\mathcal C)$ and the square root $\mathcal D=2n=\tau_1$ of $\dim \mathcal Z(\mathcal C)$ will be denoted by $\tau_{\mathcal Z(\mathcal C)}(M)$. A theorem of Virelizier and Turaev \cite{TVi} implies that $\vert M\vert_{\mathcal C}=\tau_{\mathcal Z(\mathcal C)}(M)$ for all $M$. By \cite{Tur}, Chapter II, 2.2, for all $k\geq 0$, \begin{equation*} \tau_{\mathcal Z(\mathcal C)}(L_{k})={\mathcal D}^{-2}\sum_{i\in I}\theta_i^{k}(\dim(i))^2=\frac{\tau_k}{4n^{2}}. \end{equation*} Substituting the expression for $\tau_k$ provided by Lemma~\ref{lemma-one}, we obtain the claim of the theorem. \section{Proof of Theorem~\ref{thm-one}(a)}\label{sec-3} We start with a well-known lemma. In this lemma we call a quadratic map $\mu :A\to S^1$ {\it homogeneous} if $\mu(na)=(\mu(a))^{n^2}$ for all $n\in \mathbb{Z}$ and $a\in A$. \begin{lemma}\label{lemma-twominus} Let $A$ be a finite abelian group and $\mu :A\to S^1 $ be a quadratic map associated with a symmetric bilinear form $\chi:A\times A \to S^1$. Set $A^\perp=A^\perp_\chi\subset A$. - If $\mu(A^\perp)\neq 1$, then $\gamma(\mu)= 0$. - If $\mu(A^\perp)= 1$, then $\vert \gamma(\mu)\vert=1$. - If $\mu(A^\perp)= 1$ and $\mu$ is homogeneous, then $\gamma(\mu)$ is an 8-th complex root of unity. \end{lemma} \begin{proof} We have $$ \vert A\vert \, \vert A^\perp \vert\, \vert \gamma(\mu)\vert^2= \vert \sum_{a\in A} \mu(a)\vert^2= \sum_{a, b\in A}\mu(a)\overline {\mu(b)} =\sum_{a, b\in A}\mu(a){\mu(b)}^{-1}$$ $$=\sum_{a, b\in A} \mu(a+b) {\mu(b)}^{-1} =\sum_{a, b\in A} \chi (a,b) \mu(a).$$ When $b$ runs over $A$, the complex number $\chi(a,b)$ runs over a finite subgroup of $S^1$.
We have $\sum_{b\in A} \chi(a,b)=0$ unless this subgroup is trivial. The latter holds if and only if $a\in A^\perp$ and in this case $\sum_{b\in A} \chi(a,b)=\vert A\vert$. Therefore, $$\vert A\vert \, \vert A^\perp \vert\, \vert \gamma(\mu)\vert^2= \vert A\vert \, \sum_{a\in A^\perp} \mu(a).$$ The restriction of $\mu$ to $A^\perp$ is a group homomorphism $A^\perp\to S^1 $. If $\mu(A^\perp)\neq 1$, then $\sum_{a\in A^\perp} \mu(a)=0$ and therefore $\gamma(\mu)=0$. Suppose now that $\mu(A^\perp)= 1$. Then $\sum_{a\in A^\perp} \mu (a)=\vert A^\perp\vert$ and therefore $\vert \gamma(\mu)\vert=1$. The equality $\mu(A^\perp)= 1$ also ensures that $\mu$ is the composition of the projection $A\to A'= A/A^\perp$ with a quadratic map $\mu':A'\to S^1 $ associated with the non-degenerate symmetric bilinear form $ A'\times A' \to S^1$ induced by $\chi$. It follows from the definitions that $\gamma(\mu)=\gamma(\mu')$. If $\mu$ is homogeneous, then so is $\mu'$. It is known (see, for instance, \cite{Sc}, Chapter 5, Section 2) that for any homogeneous quadratic map on a finite abelian group associated with a non-degenerate symmetric bilinear form, the corresponding invariant $\gamma$ is an $8$-th root of unity. This implies the last claim of the lemma. \end{proof} \begin{lemma}\label{lemma-two} Let $(A,\chi)$ be a bicharacter pair. For any integer $k \geq 1 $, either $\zeta_k (\chi)=0$ or $ \zeta_k (\chi) $ is an $8$-th root of unity. If $k=1$ or $k$ is divisible by $8\vert A\vert$, then $\zeta_k(\chi)=1$. \end{lemma} \begin{proof} Pick a quadratic map $\mu_0:A\to S^1$ associated with $\chi$. Observe that for every integer $k $, the function $\mu_0^{k}:A\to S^1$ carrying any $ c\in A$ to $ (\mu_0(c))^{k}$ is a quadratic map associated with the symmetric bilinear form $\chi^{k}:A\times A\to S^1$ defined by $\chi^{k}(a,b)= (\chi (a,b) )^{k}$. We claim that for all $k\in \mathbb{Z}$, \begin{equation}\label{prin}\zeta_k(\chi)=\gamma (\mu_0^{-k})\, (\gamma (\mu_0))^k. \end{equation} Indeed, since $\chi$ is non-degenerate, any quadratic map $\mu:A\to S^1$ associated with $\chi$ can be expanded in the form $\mu(a)= \chi (a, c)\, \mu_0(a)$ for a unique $c=c(\mu)\in A$. Since $\chi(a,c) \, \mu_0(a)=\mu_0(a+c)\, \mu_0(c)^{-1}$ for all $a,c\in A$, we have $$ \zeta_k(\chi)= |A|^{-1/2} |A_k|^{-1/2}\sum_{\mu\in Q_\chi}(|A|^{-1/2}\sum_{a\in A} \mu(a))^k $$ $$ = |A|^{-1/2} |A_k|^{-1/2} \sum_{c\in A} (|A|^{-1/2}\sum_{a\in A}\chi(a,c)\, \mu_0(a))^k$$ $$= \{ |A|^{-1/2} |A_k|^{-1/2} \sum_{c\in A}\mu_0(c)^{-k}\}\{|A|^{-1/2}\sum_{b\in A}\mu_0(b)\}^{k}$$ $$=\gamma (\mu_0^{-k})\, (\gamma (\mu_0))^k. $$ In the last equality we use the obvious fact that $A^\perp_{\chi^{-k}}=A_k$. We can always choose $\mu_0:A\to S^1$ to be homogeneous. Then $\mu_0^{-k}$ also is homogeneous. Since $\chi$ is non-degenerate, the previous lemma implies that $\gamma (\mu_0)$ is an $8$-th root of unity and $\gamma (\mu_0^{-k})$ is either zero or an $8$-th root of unity. This implies the first claim of the lemma. For $k=1$, Formula~\eqref{prin} gives $$\zeta_1(\chi)= \gamma (\mu_0^{-1})\, \gamma (\mu_0) ={\gamma (\overline{\mu_0})} \, \gamma (\mu_0)=\overline{\gamma (\mu_0)} \, \gamma (\mu_0)= 1,$$ where the overbar is the complex conjugation. Observe that $\mu_0^{2n}=1$ for $n=\vert A\vert$. Indeed, for any $a\in A$, $$1=\mu_0(0)=\mu_0(2na)= (\mu_0(a))^{2n} \chi(a,a)^{n(n-1)}$$ $$= (\mu_0(a))^{2n} \chi(na,(n-1) a)= (\mu_0(a))^{2n}.$$ Therefore for all $k\in 2n\mathbb{Z}$, we have $\gamma (\mu_0^{-k})=1$. 
If $k \in 8 \mathbb{Z}$, then $(\gamma (\mu_0))^k=1$. Hence, if $k\in 8n\mathbb{Z}$, then $\zeta_k(\chi)=\gamma (\mu_0^{-k})\, (\gamma (\mu_0))^k=1$. \end{proof} \subsection{Proof of Theorem~\ref{thm-one}(a)} For $k=1$, Formula~\eqref{eq-odd} gives $\vert L_{1}\vert_{\mathcal C}=(2\vert A\vert )^{-1}$. Thus, $$\vert A\vert =\vert L_{1}\vert_{\mathcal C}^{-1}/2=\vert L_{1}\vert_{\mathcal C'}^{-1}/2=\vert A'\vert.$$ This and Formula~\eqref{eq-odd} imply that $\vert A_k\vert =\vert A'_k\vert$ for all odd $k\geq 1$. Set $n=\vert A\vert=\vert A'\vert $. Suppose that $\nu\neq\nu'$. Assume for concreteness that $\nu=-1$ and $\nu'=+1$. Formula~\eqref{eq-even} with $k=2$ and Lemma~\ref{lemma-two} show that $$\vert A_{2}\vert -n^{1/2}=2n\vert L_{2}\vert_{\mathcal C}=2n\vert L_{2}\vert_{\mathcal C'}=\vert A'_{2}\vert + n^{1/2}.$$ Thus, $\vert A_{2}\vert -\vert A'_{2}\vert= 2n^{1/2}$. Therefore, $n=m^2$ for an integer $m\geq 1$. Since $n$ is not a positive power of $4$, either $m=1$ or $m$ is not a power of $2$. If $m=1$, then $A=A'=\{0\}$ and so $A_{2}= A'_{2}=\{0\}$ which contradicts the equality $\vert A_{2}\vert -\vert A'_{2}\vert= 2m$. Suppose that $m=n^{1/2}$ is not a power of $2$. Pick an odd divisor $\ell\geq 3$ of $m $. Applying Formula~\eqref{eq-even} to $k=2\ell$, we obtain that $$\vert A_{k}\vert - m\vert A_{\ell} \vert^{1/2} \zeta_{\ell} ( \chi)=\vert A'_{k}\vert + m \vert A'_{\ell} \vert^{1/2} \zeta_{\ell} ( \chi').$$ Note that $\vert A_{k}\vert=\vert A_{2}\vert\, \vert A_{\ell}\vert$ and similarly for $A'$. Since $\ell$ is odd, we have $\vert A_\ell\vert =\vert A'_\ell \vert$. Therefore $$\vert A_{2}\vert -\vert A'_{2}\vert= m \vert A_{\ell} \vert^{-1/2} (\zeta_{\ell} ( \chi')+\zeta_{\ell} ( \chi)).$$ The right-hand side of this equality must be a real number that cannot exceed $2m \vert A_{\ell} \vert^{-1/2}$ by Lemma~\ref{lemma-two}. Thus, $\vert A_{2}\vert -\vert A'_{2}\vert \leq 2m \vert A_{\ell} \vert^{-1/2}$. Since $\ell$ divides $n$, we have $A_\ell\neq \{0\}$ so that $ \vert A_{\ell} \vert \geq 2$. This gives $\vert A_{2}\vert -\vert A'_{2}\vert \leq 2m /\sqrt{2}$ which contradicts the equality $\vert A_{2}\vert -\vert A'_{2}\vert= 2m$. This contradiction shows that $\nu=\nu'$. \subsection{Remarks} (i) It is easy to extend the argument above to show that the conclusion of Theorem~\ref{thm-one}(a) holds also for $\vert A\vert=4$. (ii) Let in the proof above $\vert A\vert=\vert A'\vert=n$ be a positive power of $2$ and $\nu=-1,\nu'=1$. Formula~\eqref{eq-even} with $k=2\ell$, where $\ell\geq 3$ is odd, shows that $$\vert A_{2\ell}\vert-n^{1/2}\vert A_{\ell}\vert^{1/2}\zeta_{\ell}(\chi)=2n\vert L_{2\ell}\vert_{\mathcal C}= 2n\vert L_{2\ell}\vert_{\mathcal C'}=\vert A'_{2\ell}\vert+n^{1/2}\vert A'_{\ell}\vert^{1/2}\zeta_{\ell}(\chi').$$ But now $A_{\ell}=\{0\}$, so $\vert A_{\ell}\vert=1$, $\vert A_{2\ell}\vert=\vert A_{2}\vert$ and similarly for $A'$. This gives $\vert A_{2}\vert- \vert A'_{2}\vert=n^{1/2}(\zeta_{\ell}(\chi')+\zeta_{\ell}(\chi))$. Comparing with the equality $\vert A_{2}\vert-\vert A'_{2}\vert= 2n^{1/2}$ obtained above, we conclude that $\zeta_{\ell}(\chi')+\zeta_{\ell}(\chi)=2$. By Lemma~\ref{lemma-two}, this is possible if and only if $\zeta_{\ell}(\chi)= \zeta_{\ell}(\chi')= 1$ for all odd $\ell\geq 3$. (iii) The number $\zeta_k(\chi)$ is closely related to the Frobenius-Schur indicator $\nu_{2k}(m)$ of the object $m$ of the category $\mathcal C=\mathcal{TY}(A,\chi,\nu)$ computed by Shimizu \cite{Sh}.
Indeed, substituting $n=2k, V=m$ in formula (3) of \cite{Sh} and taking into account that $\dim(\mathcal C)=2|A|$, $ \theta_m=\Delta$, $ \dim(m)= |A|^{1/2}$, we obtain $$ \nu_{2k}(m)=\frac{1}{2|A|^{1/2}}\sum_{\mu,\Delta}\Delta^{2k}= |A|^{-1/2}\sum_{\mu\in Q_{\chi}} [\nu\gamma(\mu)]^k= \nu^k |A_k|^{1/2} \zeta_k(\chi) $$ (our sign $\nu$ is equal to Shimizu's $sgn(\tau)$). This and Lemma~\ref{lemma-two} give another proof of the following results of Shimizu (see \cite{Sh}, Theorem 3.5): the number $|A_k|^{-1/2} \nu_{2k}(m) $ is either $0$ or an 8-th complex root of unity for all $k$; this number is $0$ if and only if for some (and then for any) $\mu\in Q_\chi$, there is $a=a_\chi\in A_k$ such that $\mu(a)^k\neq 1$. The latter claim follows from Lemma~\ref{lemma-twominus}, Formula~\eqref{prin}, and the equality $A^\perp_{\chi^{-k}}=A_k$. \section{Proof of Theorem~\ref{thm-one}(b)}\label{sec-3(a)} \subsection{Preliminaries on bicharacters}\label{prelimbich} Any finite abelian group $A$ splits uniquely as a direct sum $A=\oplus_{p}\, A^{(p)}$, where $p\geq 2$ runs over all prime integers and $A^{(p)}$ consists of all elements of $A$ annihilated by a sufficiently big power of $p$. The group $A^{(p)}$ is a $p$-group, i.e., an abelian group annihilated by a sufficiently big power of $p$. Given a bicharacter $\chi$ of $A$, we have $\chi (A^{(p)}, A^{(p')})=1$ for any distinct $p,p'$. Therefore the restriction, $\chi^{(p)}$, of $\chi$ to $A^{(p)}$ is a bicharacter and we have an orthogonal splitting $(A,\chi)=\oplus_{p} \, (A^{(p)} , \chi^{(p)})$. Fix a prime integer $p\geq 2$ and recall the properties of bicharacters on $p$-groups, see, for example, \cite{De} for a survey. Given a bicharacter $\chi$ on a finite abelian $p$-group $A$, there is an orthogonal splitting $(A,\chi)=\oplus_{{s}\geq 1} (A_{s}, \chi_{s})$, where $A_{s}$ is a direct sum of several copies of $\mathbb{Z}/p^{{s}}\mathbb{Z}$ and $\chi_{s}:A_{s} \times A_{s}\to S^1$ is a bicharacter. The rank of $A_{s}$ as a $\mathbb{Z}/p^{{s}}\mathbb{Z}$-module depends only on $A$ and is denoted ${r}_{p,s} (A)$. Assume from now on that $p\neq 2$. Then the splitting $(A,\chi)=\oplus_{{s}\geq 1} (A_{s}, \chi_{s})$ is unique up to isomorphism and each $\chi_{s}$ is an orthogonal sum of bicharacters on ${r}_{p,s} (A)$ copies of the cyclic abelian group $\mathbb{Z}/p^{{s}}\mathbb{Z}$. Using the canonical injection $\mathbb{Z}/p^{{s}}\mathbb{Z}\hookrightarrow S^1, z\mapsto e^{2\pi i z/p^{{s}}}$, we can view $\chi_{s}$ as a pairing with values in the ring~$\mathbb{Z}/p^{{s}}\mathbb{Z}$. This allows us to consider the determinant $\det\chi_{s}\in \mathbb{Z}/p^{{s}}\mathbb{Z}$ of $\chi_{s}$. Since $\chi_{s}$ is non-degenerate, $\det \chi_{s} $ is coprime with $p$. Let $$\sigma_{p, s}(\chi)=(\frac{\det \chi_{s} }{p}) \in \{\pm 1\} $$ be the corresponding Legendre symbol. Recall that for an integer $d$ coprime with~$p$, the Legendre symbol $(\frac{d}{p})$ is equal to $1$ if $d \,(\mathrm{mod}\ p)$ is a quadratic residue and to $-1$ otherwise, see, for example, \cite{IR}. If ${r}_{p,s} (A )=0$, then by definition $\sigma_{p, s}=1$. It follows from the definitions that the integers $\{{r}_{p,s}\}_{s}$ are additive and the signs $\{\sigma_{p,s}\}_{s}$ are multiplicative with respect to orthogonal summation of bicharacter pairs. A theorem due to H. Minkowski, E.
Seifert, and C.T.C.\ Wall says that these invariants form a complete system: two bicharacters, $\chi_1$ and $\chi_2$, on $p$-groups $A_1$ and $A_2$, respectively, are isomorphic if and only if ${r}_{p,s}(A_1)={r}_{p,s}(A_2)$ and $\sigma_{p,s}(\chi_1)=\sigma_{p,s}(\chi_2)$ for all ${s}\geq 1$. For shortness, when $p$ is specified, we denote ${r}_{p,s} (A)$ and $\sigma_{p, s}(\chi)$ by $r_s(A)$ and $\sigma_{ s}(\chi)$, respectively. \subsection{Computation of $\zeta_k$}\label{compos} Consider the $\mathbb C$-valued invariants $\{\zeta_k\}_{k\geq 1}$ of bicharacters defined in the introduction. It is easy to deduce from the definitions that $ \zeta_k(\chi\oplus\chi') = \zeta_k(\chi)\, \zeta_k(\chi') $ for any bicharacters $\chi, \chi'$ and any $k$. Thus, the formula $\chi\mapsto\zeta_k(\chi)$ defines a multiplicative function from the semigroup of bicharacter pairs (with operation being the orthogonal sum $\oplus$) to $\mathbb C$. Fix an odd prime $p \geq 3$. We now compute $\zeta_k$ on the bicharacters on $p$-groups. For any odd integer $a$, set $\varepsilon_{a}=i=\sqrt{-1}$ if $a\equiv 3 (\mathrm{mod}\ 4)$ and $\varepsilon_{a}=1$ otherwise. For any integers $ k, s \geq 1$, we have $gcd(k,p^s)=p^t$ with $0\leq t \leq s$. Set $$ \alpha_{ k,s}= k s + s-t\quad {\text {and}} \quad \beta_{ k,s}= \frac{\varepsilon^k_{p^s}}{\varepsilon_{p^{s-t}}} \, (\frac{h}{p})^{k s + s-t} \, (\frac{k'}{p})^{s-t} \in \{\pm 1, \pm i\},$$ where $h=({p^s+1})/{2}\in \mathbb{Z}$ and $k'= {k}/{p^t}\in \mathbb{Z}$. Note that $gcd(h,p)=1$ so that the Legendre symbol $(\frac{h}{p})$ is defined. If $t<s$, then $gcd(k',p)=1$ so that the Legendre symbol $(\frac{k'}{p})$ is defined; if $t=s$, then by definition, $(\frac{k'}{p})^{s-t}=1$. \begin{lemma} \label{zetal} For any $ k \geq 1$ and any bicharacter $\chi$ on a $p$-group $A$, \begin{equation} \label{zetal1} \zeta_k(\chi)= \prod_{s\geq 1} \, \beta_{ k, s}^{r_{s}(A)}\, [\sigma_{s} (\chi)]^{\alpha_{ k,s}}. \end{equation} \end{lemma} \begin{proof} The proof is based on the following classical Gauss formula: for any integer $d$ coprime with $p$, \begin{equation} \label{gauss} \sum_{j=0}^{p^s-1} \exp(\frac{2\pi i}{p^s}dj^2) =p^{\frac{s}{2}}\varepsilon_{p^{s}} (\frac{d}{p})^{s}. \end{equation} A more general formula holds for any integer $d$: if $gcd(d,p^s)=p^t$ with $0\leq t\leq s$ and $d'= {d}/{p^t}$, then \begin{equation} \label{gauss+} \sum_{j=0}^{p^s-1} \exp(\frac{2\pi i}{p^s}dj^2)=p^t\sum_{j=0}^{p^{s-t}-1} \exp(\frac{2\pi i} {p^{s-t}}d'j^2)=p^{\frac{s+t}{2}}\varepsilon_{p^{s-t}} (\frac{d'}{p})^{s-t}, \end{equation} where, by definition, for $t=s$, the expression $(\frac{d'}{p})^{s-t}$ is equal to 1. We now prove \eqref{zetal1}. It is clear that both sides of \eqref{zetal1} are multiplicative with respect to orthogonal summation of bicharacters. The results stated in Section~\ref{prelimbich} allow us to reduce the proof of \eqref{zetal1} to the case where $A=\mathbb{Z}/p^s \mathbb{Z}$ for some $s\geq 1$. We must prove that for any bicharacter $\chi:A\times A\to S^1$, \begin{equation} \label{zetal1--} \zeta_k(\chi)= \beta_{ k, s} \, [\sigma_{s} (\chi)]^{\alpha_{ k,s}}. \end{equation} Set as above $h=({p^s+1})/{2} $ and $k'= {k}/{p^t} $, where $gcd(k,p^s)=p^t$ with $0\leq t \leq s$. The bicharacter $\chi$ is given by $\chi(a,b)= \exp(\frac{2\pi i}{p^s}\Delta ab)$ for all $ a,b\in A$, where $\Delta$ is an integer coprime with $p$. Observe that the map $\mu_0:A\to S^1$ carrying any $a\in A$ to $ \exp(\frac{2\pi i}{p^s}h\Delta a^2)$ is a quadratic map associated with $ \chi$. 
Formula (\ref{gauss}) and the multiplicativity of the Legendre symbol imply that $$ \gamma(\mu_0 )=p^{-s/2}\sum_{j=0}^{p^s-1} \exp(\frac{2\pi i}{p^s}h\Delta j^2)= \varepsilon_{p^{s}} (\frac{h}{p})^{s} (\frac{\Delta}{p})^{s} = \varepsilon_{p^{s}} (\frac{h}{p})^{s}\, [\sigma_s(\chi)]^s. $$ Similarly, Formula (\ref{gauss+}) implies that $$ \sum_{c\in A}\mu_0(c)^{-k}= \sum_{j=0}^{p^s-1}\overline{\exp(\frac{2\pi i}{p^s}k h\Delta j^2)}= p^{(s+t)/2} {\varepsilon_{p^{s-t}}^{-1}}(\frac{h}{p})^{s-t}(\frac{k'}{p})^{s-t} [\sigma_s(\chi)]^{s-t}. $$ Since $|A|=p^s$ and $|A_k|=gcd(k,p^s)=p^t$, we have $$\gamma(\mu_0^{-k})= |A|^{-1/2} |A_k|^{-1/2}\sum_{c\in A}\mu_0(c)^{-k}= {\varepsilon_{p^{s-t}}^{-1}}(\frac{h}{p})^{s-t}(\frac{k'}{p})^{s-t} [\sigma_s(\chi)]^{s-t}.$$ These computations and Formula~\eqref{prin} imply that $$ \zeta_k(\chi)= \gamma (\mu_0^{-k})\, (\gamma (\mu_0))^k= \frac{\varepsilon^k_{p^s}}{\varepsilon_{p^{s-t}}} (\frac{h}{p})^{k s + s-t}(\frac{k'}{p})^{s-t} [\sigma_s(\chi)]^{k s + s-t}. $$ This is equivalent to Formula~\eqref{zetal1--}. \end{proof} Note one special case of Lemma~\ref{zetal}: if $k$ is divisible by $2\vert A\vert$, then $\zeta_k(\chi)= \prod_{s\geq 1} \, \beta_{ k, s}^{r_{s}(A)}$. Indeed, in this case for all $s$ such that $\mathbb{Z}/p^s \mathbb{Z}$ is a direct summand of $A$, we have $gcd(k,p^s)=p^s$ and $\alpha_{ k,s} = k s \in 2\mathbb{Z}$. For all other $s$, we have $\sigma_{s} (\chi)=1$. Therefore $[\sigma_{s} (\chi)]^{\alpha_{ k,s}}=1$ for all $s$. \subsection{Proof of Theorem~\ref{thm-one}(b)} We begin with a few remarks concerning the subgroups $(A_k)_{k }$ of $A$ defined in the introduction. Using the splitting $A=\oplus_p \, A^{(p)}$, one easily checks that $A_{kl}=A_k \oplus A_l$ for any relatively prime integers $k, l$. For any prime $p$, the integers $(\vert A_{p^m}\vert)_{m\geq 1}$ depend only on the group $A^{(p)}$ and determine the isomorphism class of $A^{(p)}$. Indeed, $A^{(p)}=\oplus_{s\geq 1}(\mathbb{Z}/p^s\mathbb{Z})^{r_{p,s}}$ for $r_{p,s}=r_{p,s}(A)\geq 0$. Given $m\geq 1$, $$A_{p^m}=(A^{(p)})_{p^m}= \oplus_{s= 1}^m (\mathbb{Z}/p^s\mathbb{Z})^{r_{p,s}}\, \oplus \, \oplus_{s>m}(\mathbb{Z}/p^m\mathbb{Z})^{r_{p,s}}.$$ Hence, $$ \log_p (\vert A_{p^{m+1}}\vert/\vert A_{p^m}\vert)=r_{p,m+1}+r_{p,m+2} + \cdots .$$ Therefore, the sequence $(\vert A_{p^m}\vert)_{m\geq 1}$ determines the sequence $\{r_{p,s}(A)\}_{s\geq 1}$ and so determines the isomorphism type of $A^{(p)}$. Formula~\eqref{eq-odd} and the assumptions of the theorem imply that, for all odd $k\geq 1$, $$\vert A_k\vert=2n\vert L_{k} \vert_{\mathcal C}=2n \vert L_{k}\vert_{\mathcal C'}=\vert A'_k\vert,$$ where $n=\vert A\vert=\vert A'\vert $. By the previous paragraph, $A^{(p)}\cong A'^{(p)}$ for all prime $p \neq 2$, and for all $s\geq 1$, \begin{equation}\label{tok} r_{p,s}(A^{(p)})=r_{p,s}(A)=r_{p,s}(A')=r_{p,s}(A'^{(p)}). \end{equation} Since $n =\prod_{p\geq 2} \vert A^{(p)}\vert $, we also have $\vert A^{(2)}\vert=\vert A'^{(2)}\vert$. Let $N\geq 2$ be a positive power of 2 annihilating both $A^{(2)}$ and $A'^{(2)}$. Then $A_{{N}}= A^{(2)}$ and $A'_{{N}}= A'^{(2)}$. For any odd integer $\ell\geq 1$, $$\vert A_{{N}\ell}\vert=\vert A_{{N}}\vert\, \vert A_\ell\vert=\vert A^{(2)}\vert\, \vert A_\ell\vert= \vert A'^{(2)}\vert\, \vert A'_\ell\vert=\vert A'_{{N}}\vert\,\vert A'_\ell\vert=\vert A'_{{N}\ell}\vert.$$ Similarly, $\vert A_{2N\ell}\vert=\vert A'_{2N \ell} \vert$. Applying \eqref{eq-even} to $k=2N \ell$, we obtain $\zeta_{ {N} \ell} (\chi)=\zeta_{ {N} \ell} (\chi')$. 
Fix from now on an odd prime $p$. The identity \eqref{tok} shows that to prove that the bicharacter pairs $(A^{(p)}, \chi^{(p)})$ and $(A'^{(p)}, \chi'^{(p)})$ are isomorphic, it is enough to verify that $\sigma_{ s} (\chi^{(p)})=\sigma_{ s} (\chi'^{(p)})$ for all $s\geq 1$. Set $$\ell= \frac{\vert A \vert}{\vert A^{(2)}\vert \vert A^{(p)}\vert}= \prod_{q\geq 3, q\neq p} \vert A^{(q)} \vert= \prod_{q\geq 3, q\neq p} \vert A'^{(q)} \vert=\frac{\vert A' \vert}{\vert A'^{(2)}\vert \vert A'^{(p)}\vert}, $$ where $q$ runs over all odd primes distinct from $p$. Clearly, $\ell $ is an odd integer. For any $N$ as above, $\zeta_{ {N} \ell} (\chi)=\zeta_{ {N} \ell} (\chi')$. Observe that $$\zeta_{ {N} \ell} (\chi)=\zeta_{ {N} \ell} (\chi^{(2)}) \prod_{q\geq 3} \zeta_{ {N} \ell} (\chi^{(q)}),$$ where $q$ runs over all odd primes. Since $N\ell$ is divisible by $2\vert A^{(q)} \vert$ for $q\neq p$, the remark at the end of Section~\ref{compos} implies that $ \zeta_{ {N} \ell} (\chi^{(q)})= \zeta_{ {N} \ell} (\chi'^{(q)}) \neq 0$ for all $q\neq p$. Replacing if necessary $N$ by a bigger power of 2, we can assume that $N$ is divisible by $8 \vert A^{(2)} \vert=8 \vert A'^{(2)} \vert$. The last claim of Lemma~\ref{lemma-two} yields $ \zeta_{ {N} \ell} (\chi^{(2)})= \zeta_{ {N} \ell} (\chi'^{(2)})=1$. Combining these equalities, we obtain that $\zeta_{ {N} \ell} (\chi^{(p)})=\zeta_{ {N} \ell} (\chi'^{(p)})$. Expanding both sides as in Formula~\eqref{zetal1} and using Formula~\eqref{tok} and the inclusions $ \sigma_{ s} (\chi^{(p)}) , \sigma_{ s} (\chi'^{(p)})\in \{\pm 1\} $, we obtain that $$\prod_{{\rm {odd}}\, s\geq 1} \sigma_{ s} (\chi^{(p)})= \prod_{{\rm {odd}}\, s\geq 1} \sigma_{ s} (\chi'^{(p)}).$$ Replacing in this argument $\ell$ by $\ell p, \ell p^2, \ell p^3 , \ldots $, we similarly obtain that for all odd $u\geq 1$ and even $v\geq 2$, $$\prod_{{\rm {odd}}\, s\geq u} \sigma_{ s} (\chi^{(p)})= \prod_{{\rm {odd}}\, s\geq u} \sigma_{ s} (\chi'^{(p)}), \quad \prod_{{\rm {even}}\, s\geq v} \sigma_{ s} (\chi^{(p)})= \prod_{{\rm {even}}\, s\geq v} \sigma_{ s} (\chi'^{(p)}) .$$ These equalities easily imply that $\sigma_{ s} (\chi^{(p)})= \sigma_{ s} (\chi'^{(p)})$ for all $s$.
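The classical Gauss formula~(\ref{gauss}) underlying the computations above is easy to test numerically; the following short Python sketch (provided only as an illustrative sanity check, not as part of the argument) verifies it for a few odd prime powers: \begin{verbatim}
import numpy as np

def legendre(d, p):
    # Legendre symbol (d/p) via Euler's criterion, for gcd(d, p) = 1
    r = pow(d % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def eps(a):
    # eps_a = i if a = 3 (mod 4) and 1 otherwise, for odd a
    return 1j if a % 4 == 3 else 1.0

for (d, p, s) in [(2, 3, 1), (1, 5, 2), (4, 7, 1)]:
    m = p ** s
    lhs = sum(np.exp(2j * np.pi * d * j * j / m) for j in range(m))
    rhs = p ** (s / 2) * eps(m) * legendre(d, p) ** s
    assert abs(lhs - rhs) < 1e-8
\end{verbatim}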
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \IEEEPARstart{C}{ombinatorial} optimization deals with problems in which an objective function has to be maximized over a set of combinatorial alternatives \cite{Korte2011}. The most naive way of obtaining the optimal solution is to list all the feasible solutions, evaluate them using the objective function, and select the optimal one. Nevertheless, a brute-force approach lacks practicality when the size of the problem is too large, as the time needed to solve it grows exponentially with the problem size. Combinatorial optimization problems (COPs) have been traditionally approached either by exact methods or heuristic methods. Exact methods guarantee an optimal solution but, as their algorithmic complexity is high, they become useless for medium to large problem sizes. Conversely, heuristic methods do not guarantee to reach the optimal solution, but they try to reach a good solution in a reasonable time budget. The effectiveness of a heuristic method depends on its ability to identify and exploit the relevant information of the problem at hand, in order to optimize it efficiently. In this line, as a generalization of heuristic algorithms, metaheuristics (MH) introduce higher-level generic procedures that guide the search process, making them ``general-purpose methods'' \cite{Blum2003}. Even though a lot of work has been done in the development of MH algorithms, in the last few decades the research in that area has reached a point of maturity, with hardly any considerable improvements being made. In contrast, Deep Learning (DL) methods have recently entered the optimization paradigm, introducing a breath of fresh air. The recent success of DL in fields such as machine translation \cite{Cho2014}, biology \cite{Jumper2021} and board games \cite{Schrittwieser2020} has not only attracted the attention of many machine learning practitioners into the optimization field, but has also captured the interest of the optimization community about the possibilities of DL. The main building blocks of DL algorithms, Neural Networks (NN), were proposed in the 80s to solve COPs as a promising component to incorporate the relevant information of a problem by learning \cite{Hopfield1985}. However, only in the last few decades have NNs become an interesting alternative, due to the increase in computational capacity and the development of complex NN models, which enable the design of competitive algorithms. Recent reviews \cite{Bengio2021, Talbi2021} present two taxonomies that differentiate the ways and the stages in which DL can be applied to COPs. In general, two main groups can be distinguished: end-to-end methods and hybrid methods. \textbf{End-to-end methods} are designed to solve a particular problem by means of NN models without needing any additional algorithm. Regarding \textbf{hybrid methods}, they combine DL models with the previously mentioned exact and heuristic methods. According to \cite{Bengio2021, Talbi2021}, hybrid methods can be split into two groups: \begin{enumerate} \item \textbf{Learning to configure algorithms}. Almost all the metaheuristics that have been proposed in the literature have a set of configurable parameters and operators that must be tuned carefully for the algorithm to perform efficiently. This parameter tuning has concerned the community for years \cite{Yu2020}, and different proposals have been developed using DL to learn how to configure those parameters \cite{Huang2019}. \item \textbf{DL alongside optimization algorithms}.
This group includes two-level algorithms, in which a heuristic calls a DL model to assist in some decisions, such as the most appropriate crossover or mutation operator, or selecting the most promising branching variable. For instance, an NN can be used to select the next branch and improve the exploration of a Monte Carlo Tree Search algorithm \cite{Xing2020}. \end{enumerate} Though there is much yet to come, hybrid methods generally recreate previously studied techniques with the addition of DL modules \cite{Talbi2021}. Conversely, end-to-end proposals work on the basis of an NN model as the core of the algorithm, and contrary to the conventional methods, the optimization process consists of two steps: a learning phase, in which the model is trained, and an inference phase, in which the model gives a solution to a new problem instance. Therefore, this approach is significantly different from that used by traditional optimization algorithms and, even if interesting proposals have been presented in the literature, the benefits and limitations of end-to-end proposals have not been studied in-depth. Thus, in this paper a discussion about end-to-end models is opened, focusing on the most relevant aspects: (1) \textbf{Performance}. How good are the solutions provided by these models? Are they competitive with the state-of-the-art methods? (2) \textbf{Training data \& Transferability}. Which type of data is required to train the model? Is it necessary to have hundreds or even thousands of instances of the real-world problem to train the model? Can random instances be used instead? (3) \textbf{Computational cost}. Considering the computational resources and time required to train these NN-based models, are they affordable for medium to large size problems? (4) \textbf{Generalization to large size instances}. Related to the previous issue, in the case when a large enough model cannot be trained due to a memory limitation, is it possible to competitively apply a model trained for small size instances to larger instances? Moreover, can a model trained for a problem be successfully applied to another problem? In addition to analysing these aspects in the works presented in the literature, for illustrative purposes, we take a practical case study to apply the analysis to end-to-end models. Particularly, we develop an end-to-end model to solve the Linear Ordering Problem (LOP) \cite{Ceberio2015}, a well-known NP-hard COP. Our purpose is not to present a state-of-the-art algorithm, but to guide the reader during the process of implementation and evaluation of the method and address the aforementioned aspects. Moreover, we conduct an experimental evaluation of its capabilities with a broad comparison, using a diverse set of conventional methods, including an exact solver \cite{Achterberg2009}, a constructive heuristic \cite{Becker1967} and two state-of-the-art metaheuristics \cite{Lugo2021,Santucci2020}. The conducted experiments show that the end-to-end model is competitive against the constructive heuristic, although it falls behind the metaheuristic algorithms. However, the NN model is capable of generalizing the learnt knowledge to larger instances and transferring it to other types of instances. Not limited to that, a number of promising research lines are described for future investigations on the application of end-to-end models to COPs. The rest of the paper is organized as follows. Section \ref{related_work} presents a review of meaningful works in the neural end-to-end framework.
Section \ref{analysis} presents four features of the critical analysis that arise from the introduction of DL algorithms to the optimization field. We try to find answers to a number of relevant questions on the basis of a case study, the Linear Ordering Problem, which is defined in Section \ref{problem_definition}, and an end-to-end approach, presented in Section \ref{architecture}. A broad set of experiments is conducted in Section \ref{experiments}. The obtained results are discussed in Section \ref{discussion}, where we also suggest new directions for future work in the area of learning algorithms. Finally, Section \ref{conclusion} concludes the paper. \section{Related Work} \label{related_work} As stated previously, end-to-end models for combinatorial optimization problems are the scope of this paper. In that sense, this section analyses an exhaustive set of this type of proposals, highlighting for each case the methodological and practical advances introduced. In general terms, the optimization process of end-to-end models follows a training-inference pipeline. In the training phase, a set of instances is used to learn the parameters of the NN model. In this step, two main learning scenarios arise: Supervised Learning (SL) and Reinforcement Learning (RL). In SL, the NN model learns to imitate an optimal (or good) policy \cite{Gasse2019}. The NN model is fed with a collection of solved instances that have been previously computed by an exact or an approximate solver (see SL in Fig. \ref{fig:nco}). An example of this approach can be seen in \cite{Vinyals2015}, where the authors propose an end-to-end algorithm to solve the Travelling Salesman Problem\footnote{Among the most popular combinatorial problems, the Travelling Salesman Problem (TSP) has been one of the most studied problems. The goal in TSP is to find the shortest route through a set of cities, each of which must be visited just once.}. The proposed algorithm embeds the information of the instance by an NN model and constructs a solution for the problem iteratively, adding one city to the solution at a time. Under this approach, Vinyals et al. \cite{Vinyals2015} use an architecture called Pointer Network, a sequence-to-sequence model that points to one of the input cities. However, using SL has some serious drawbacks, since obtaining a large set of labels is usually intractable, which affects the scalability of the method \cite{Bengio2021}. Moreover, imitation learning may fail to abstract the problem knowledge when the imitated policy is suboptimal or there are multiple optimal solutions. Instead, RL proposes a more suitable procedure for solving COPs, where an agent learns how to act, without supervision, based on the rewards it receives through its history, i.e., by experience \cite{Sutton2018} (see RL in Fig. \ref{fig:nco}). In this learning scenario, Bello et al. \cite{Bello2016} introduced a framework called Neural Combinatorial Optimization (NCO), which uses RL and NN models to learn approximate solutions for (a set of) combinatorial problems in an end-to-end manner. In that paper, the authors improve the performance of the model presented by Vinyals et al. \cite{Vinyals2015}, using a similar architecture but replacing SL with RL. Moreover, Bello et al. \cite{Bello2016} achieved better results than a classical heuristic (Christofides \cite{Christofides1976}) and OR-Tools' local search algorithm \cite{Google2016}.
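To make the RL scheme concrete, the following PyTorch-style sketch illustrates a REINFORCE update of the kind used in NCO; it is a simplified illustration under our own assumptions (in particular, the functions policy.sample, tour_length and baseline are hypothetical placeholders), not the implementation of \cite{Bello2016}: \begin{verbatim}
import torch

# Hedged sketch of one REINFORCE training step for a tour-constructing
# policy; `policy`, `tour_length` and `baseline` are hypothetical.
def reinforce_step(policy, optimizer, instances, baseline, tour_length):
    # Autoregressive decoding: sampled tours and their log-probabilities
    tours, log_probs = policy.sample(instances)
    lengths = tour_length(instances, tours)         # objective (minimize)
    advantage = lengths - baseline(instances)       # variance reduction
    loss = (advantage.detach() * log_probs).mean()  # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return lengths.mean().item()
\end{verbatim}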
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{NCO_framework.jpg} \caption{General structure of different learning methods. In \textbf{Supervised Learning}, the model imitates an exact (or good enough) solver. Usually, a static dataset of pre-solved instances is needed, from which the NN model takes the labels for training. In \textbf{Reinforcement Learning}, the NN model learns based on the reward obtained from the inferred solution. Regarding the learning data, the use of random generators is common in RL methods.} \label{fig:nco} \end{figure}

Motivated by the good results obtained by Bello et al. \cite{Bello2016}, most of the papers dealing with end-to-end models have followed a similar learning scheme, and the scientific advances have focused on changes in the way the solutions are inferred. In this context, two main strategies are present in the literature: (1) \textbf{one-shot} methods predict a probability matrix that describes the probability of an item belonging to a specific part of the solution. The model infers the matrix in one go, and solutions are then obtained by performing a beam search over the probability matrix \cite{Joshi2019}. (2) \textbf{Autoregressive} methods construct the solutions sequentially, adding one item to the solution in each iteration and updating the current state \cite{Joshi2020}. In addition to the learning scenarios, recent works have also introduced changes in the NN model architectures. As an improvement over Pointer Networks \cite{Vinyals2015, Bello2016}, Graph Neural Networks (GNN) \cite{Cappart2021} provide order-invariant inputs, i.e., GNNs are capable of representing the features of the problem without considering any specific order of the input sequence. In \cite{Khalil2017}, the authors propose a GNN architecture which automatically learns policies for several graph problems, such as Maximum Cut, Minimum Vertex Cover and the TSP. They implement a greedy meta-algorithm design in which solutions are constructed by sequentially adding graph nodes (items) to the solution, based on the graph structure, so as to satisfy the constraints of the problem. A widely used alternative to GNNs is the attention mechanism, the main component of the well-known Transformer architecture \cite{Vaswani2017}, which is able to selectively focus on segments of the input vector, bringing the capability of processing large-sized inputs. Deudon et al. \cite{Deudon2018} and Kool et al. \cite{Kool2018} trained architectures based on attention mechanisms, improving previously reported results \cite{Vinyals2015, Bello2016, Khalil2017}. Joshi et al. \cite{Joshi2020} analyse the generalization of these architectures to instances larger than those seen in training. The authors stated that, based on their findings, NCO methods are not robust enough to handle real-world situations beyond what they see in training and do not reliably scale to practical sizes. As we will see in what follows, apart from the generalization issue reported by Joshi et al. \cite{Joshi2020}, the application of NCO to COPs poses a number of questions that are worth investigating. For that purpose, in this paper we present a broad analysis addressing those concerns.
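To make the two inference strategies discussed above concrete, the following Python sketch of ours illustrates autoregressive decoding; the callable \texttt{model} and its interface are hypothetical placeholders rather than an API from any of the cited works.

\begin{verbatim}
import numpy as np

def autoregressive_decode(model, instance):
    # `model` is assumed to return a probability vector over the items,
    # with zero mass on items already placed in the partial solution.
    n = len(instance)
    solution = []
    for _ in range(n):
        probs = model(instance, solution)
        solution.append(int(np.argmax(probs)))  # greedy: most likely item
    return solution
\end{verbatim}

A one-shot method would instead query the model a single time to obtain the full probability matrix and then extract a solution from it, e.g., via beam search.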
\section{Critical Analysis} \label{analysis}

As shown above, NCO models represent a breakthrough in the combinatorial optimization field, beating conventional constructive heuristic algorithms (in terms of solution quality) and exact methods (in terms of computational cost) \cite{Bello2016, Deudon2018} by learning a policy from scratch. However, from the viewpoint of a combinatorial optimization practitioner, a number of questions remain unanswered. The main difference between the application of conventional methods and DL models comes from the optimization pipeline. The general pipeline followed in a conventional optimization process starts with an instance (or a set of instances) to be solved and a computational budget. Depending on the problem class and the budget, an algorithm (or several) is selected along with its hyper-parameters. Subsequently, the algorithm starts the optimization process and, once the time expires, a result (a solution to the problem instance) is provided. Conversely, DL, and consequently NCO, brings a different pipeline. After tuning the algorithm so that it meets the problem constraints, there is a training process that calibrates the parameters of a model. Once the model is trained, it can be used repeatedly to perform inference at a very low computational cost. Placing both approaches side by side, it is easy to see that NCO introduces an optimization pipeline that clashes with the conventional framework in a number of aspects. In what follows, we address the unanswered questions and group them into four interrelated points that need to be studied in order to fairly compare these models with conventional algorithms and contribute to scientific progress: (1) the evaluation of the performance of the algorithm regarding the quality of the solution, (2) the training instances and the model's transferability to other instances, (3) the computational cost of the algorithm and (4) the generalization of models to larger instance sizes.

\subsection{Performance Analysis}

When conducting an experimental comparison of an NCO method with Conventional Optimization Algorithms (COAs), the aim, generally, is to identify the best performing algorithm according to the quality of the solution. Recent works on NCO, using enhanced RL methods computed for several hours, were applied to the TSP, claiming to improve heuristics such as Random Insertion and Farthest Insertion \cite{Kool2018}. However, as seen in \cite{Kool2018}, there are exact methods \cite{Applegate2006} that can solve large size instances in only a few minutes. Similarly, Pan et al. \cite{Pan2021} propose an RL framework that tackles the permutation flow shop scheduling problem. The authors compare the proposal with classical heuristics and improved versions of these. Even though such methods are useful as a baseline, they do not reflect the state-of-the-art, which relies on sophisticated hybrid metaheuristics \cite{Santucci2014}. In fact, a bibliographic comparison suggests that the results obtained by NCO are less competitive than the current state-of-the-art methods, which casts doubt on the real utility of NCO. Making a fair comparison between NCO and COAs is not trivial, as the experimental setups used in the two paradigms differ. Traditionally, two different stopping criteria are used for comparison: a limited computation time or a fixed number of objective-value evaluations, each of them having its supporters and detractors.
In fact, most COAs are able to improve their results if a larger budget is available, which also makes it difficult to establish a limit that is fair to all the algorithms included in a comparison. However, reporting the actual objective values obtained by each of the proposals in the comparison, even when the budgets are not the same, seems a rigorous way to conduct the performance analysis. Moreover, comparing the NCO approach with the current state-of-the-art is a must, not with the purpose of invalidating the NCO algorithm, but to put it into perspective.

\subsection{Training Data \& Transferability}

In the optimization field, COAs are tested on different functions and/or instances of a given problem. In order to measure and compare their performance, common testbeds are required. Unfortunately, as real-world problems (instances) are difficult to obtain, most comparisons are made using benchmarks of artificial instances available online. Looking at the literature, most works in the NCO area use randomly generated instances for training, following a common trend in combinatorial optimization. For example, in the case of the TSP, a grid of cities is generated by sampling uniformly at random in the unit square \cite{Bello2016,Deudon2018,Joshi2020}. Given that instance benchmarks are generally synthetic data, and are rarely examples of real problems, it seems reasonable to compare COAs and NCO models by using instance generators of the desired type, and the same generator should be used in both training and validation of the algorithms. In the following, we distinguish two strategies that are interesting for training NCO models: \begin{enumerate} \item \textbf{Train using instances from an instance generator}. As mentioned above, generators can create a large set of instances. They can be divided into two types, based on the instances they generate. The most common generators use uniform distributions to generate instances. Conversely, the second type of generator aims to replicate a specific kind of instance, such as instances from a specific real-world problem. This can be achieved either by using a known distribution or, although not trivial, by sampling from real-world instances. \item \textbf{Train using a real-world benchmark}. As the training of NCO models requires large amounts of instances, this setup is limited to the case in which previously known real instances are within reach. \end{enumerate} Whenever random generators are used for training, it is quite common to include an intensification phase called active search \cite{Bello2016}, where the rewards obtained from the evaluation dataset are used to further tune the parameters of the pre-trained model. However, in a practical case, performing this procedure on a converged network can be time and memory intensive \cite{Hottung2021}. All in all, a model that learns a policy needs to be able to transfer the obtained knowledge to new instances, and in this respect the training strategy has a big influence. The ability of random generators to draw instances from the desired target distribution is mainly what conditions the transferability of the model. Namely, uniform-distribution random generators are simple to create, but they may lack transferability to real problem instances, which cannot be correctly described by simple distributions. Besides that, models trained with small sets, or models with an active search learning phase, are more likely to overfit and, therefore, to limit their transferability. Therefore, NCO practitioners need to find a balance between the diverse and large instance sets given by random generators and smaller sets that include, or are closer to, the real instances.
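For the LOP studied later in this paper, both strategies reduce to a few lines of NumPy. The sketch below is our own illustration: the uniform generator matches the one described in the experimental section, while the mimicking generator subsamples entries from a real benchmark matrix, in the spirit of what is done later for the LOLIB.

\begin{verbatim}
import numpy as np

def random_lop_instance(n, rng=np.random.default_rng()):
    # Uniform generator: each entry of B is drawn i.i.d. from U(0, 1).
    return rng.random((n, n))

def subsampled_instance(B_real, n, rng=np.random.default_rng()):
    # Mimicking generator: fill a smaller n x n matrix with entries
    # sampled uniformly at random from a real benchmark instance.
    return rng.choice(B_real.ravel(), size=(n, n))
\end{verbatim}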
\subsection{Computational Cost \& Scalability}

NCO models need to be trained for several epochs and, once the training has properly converged, they are able to solve a large number of instances by inference in a short period of time. Conversely, COAs face each instance individually and start the optimization procedure from scratch, without any knowledge transferred from instance to instance. As the number of instances to be solved increases, the impact of the training time on the total computation time decreases, whilst the time saved due to parallelization increases. It is not clear whether the training time needs to be considered in the computational cost comparison or not, as this depends on how often the model has to be updated (re-trained). Even though the current literature tends to omit it \cite{Bello2016,Kool2018}, providing training times gives an intuition of how costly obtaining the observed performance is. Another issue when comparing the computational efficiency of the available algorithms comes from the different programming languages used to code them and the hardware on which they run. While COAs are generally written in C/C++ and deployed on CPUs, NCO models are mostly implemented in Python and use libraries optimized to carry out parallelized training and inference on GPUs. In order to perform a fair comparison, we would have to implement them in the same programming language and run them on similar hardware, which is neither natural nor efficient. So it seems reasonable to compare the algorithms as implemented and executed in the programming languages and hardware infrastructure to which the final practitioner will have easy access, without requiring large overheads. Concerning the affordability of optimization algorithms, it has been broadly reported that the use of exact methods is intractable for very large NP-hard instances, as the required computation time grows exponentially. Similarly, there is a fairly high memory/computation cost when training NCO models, which grows with both the training batch size and the instance size. For this reason, an analysis of the time and memory affordability of the algorithm can give an intuition of how feasible it is to solve a certain instance.

\subsection{Generalization to different instance sizes}

The generalization capability of a model shows how an increase (or decrease) in the instance size influences the performance of models trained with smaller (or larger) instances. In general, the larger the instance size, the harder it is for an NCO model to solve it. However, most works in the field only report results on instances with up to 100 nodes \cite{Bello2016, Kool2018, Deudon2018}. This may not be enough for a problem such as the TSP, where the optimal solution of an instance with 100 cities can be found in 0.22 seconds using exact algorithms \cite{Bello2016, Applegate2006}. In fact, in the TSP, which has become the main playground of NCO algorithms, solving larger instances is necessary in order to see whether NCO methods are able to improve their performance with respect to exact algorithms. Nevertheless, it is senseless to talk about generalization when working with fixed input-size models, such as sequence-to-sequence models \cite{Vinyals2015}, which can only be used for the instance size they have been trained for.
These models are not very efficient, since a training period is required every time a new instance size needs to be solved. Except for the particular cases in which the task consists of solving many equally sized instances, fixed input-size models should be replaced by other architectures or strategies that enable the computation of variable instance sizes. Even though GNNs solve this problem, they may lack the ability to efficiently process very large graphs. In fact, a common strategy when solving graph problems is to consider only a subset of promising nodes in the computation \cite{Manchanda2019}, for example using the k-nearest neighbor graph instead of the whole graph in each iteration. This trick is valid for the TSP, since the notion of a neighborhood is direct there \cite{Khalil2017, Cappart2021}. However, extending it to other problems may not be that easy, requiring ad-hoc designs. \vspace{5mm} As we have seen, each of the points mentioned above may lead to profound discussions with many inter-dependencies and implications. In line with the purpose of this paper, in the following we approach a problem that, as far as we know, has not been tackled using NCO models. In fact, we will illustrate the application of NCO and compare it to COAs. To do so, we propose a series of experiments and try to answer the different questions that arise in the comparison of these two paradigms.

\section{Case study: Linear Ordering Problem} \label{problem_definition}

The Linear Ordering Problem (LOP) \cite{Ceberio2015} is a classical COP. In particular, the LOP is a permutation problem that, in 1979, was proven to be NP-hard by Garey and Johnson. Since then, and due to its applicability in fields such as machine translation \cite{Tromble2009}, economics \cite{Leontief1986}, corruption perception \cite{Achatz2006} and rankings in sports or other tournaments \cite{Anderson2021, Cameron2021}, the LOP has gained popularity, and it is easy to find a wide variety of works that have dealt with it \cite{Anderson2021_2}. Given a matrix $B = [b_{i j}]_{n \times n}$, the goal in the LOP is to find a simultaneous permutation of rows and columns such that the sum of the upper-triangle entries is maximized (see Fig. \ref{fig:lop}-a). The objective function is defined formally as in Eq. \eqref{eq:lop_eqn}, where $\pi$ represents the permutation that simultaneously re-orders the rows and columns of the original matrix and $n$ is the problem size. \begin{equation} \label{eq:lop_eqn} f(\pi) = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} b_{\pi_i \pi_j} \end{equation} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{draft_lop.png} \caption{Example of an LOP instance of size $n=5$. \textbf{a}) The LOP instance matrix of size $n=5$ ordered by the identity permutation $\pi_e = (1 \mspace{8mu}2 \mspace{8mu}3\mspace{8mu} 4 \mspace{8mu}5)$. Entries of the matrix contributing to the objective function are highlighted in grey; the sum of the entries in the upper triangle gives the objective value, which is 51. For this instance, the optimal solution is given by the permutation $\pi = (4 \mspace{8mu}1 \mspace{8mu}2 \mspace{8mu}5 \mspace{8mu}3)$, with an objective value of 60. \textbf{b}) Equivalent complete graph with edge weights; only the edges of the first node are shown for clarity. \textbf{c}) Identity permutation incorporated into the solution graph.} \label{fig:lop} \end{figure}
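For concreteness, Eq. \eqref{eq:lop_eqn} can be evaluated in a few lines of NumPy; the sketch below is our own illustration, using 0-based indices and an arbitrary function name.

\begin{verbatim}
import numpy as np

def lop_objective(B, pi):
    # Reorder rows and columns simultaneously according to pi,
    # then sum the entries strictly above the diagonal.
    M = B[np.ix_(pi, pi)]
    return float(np.triu(M, k=1).sum())

# e.g., the identity permutation of a size-5 instance:
# lop_objective(B, list(range(5)))
\end{verbatim}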
An alternative formalization of the LOP is to define it as a graph problem. Let $D_n = (V_n, E_n)$ denote the complete digraph of $n$ nodes, where for every pair of nodes $i$ and $j$ there is a directed edge $(i, j)$ from $i$ to $j$ and a directed edge $(j, i)$ from $j$ to $i$ (see Fig. \ref{fig:lop}-b). A tournament $T$ in $E_n$ consists of a subset of edges containing, for every pair of nodes $i$ and $j$, exactly one of their two directed edges. The LOP can be formulated as the problem of finding an acyclic tournament, which corresponds to a linear ordering: the node ranked first is the one without incoming edges in $T$, the second node is the one with exactly one incoming edge (from the node ranked first), and so on; the node ranked last is the one without outgoing edges in $T$ (see Fig. \ref{fig:lop}-c). The objective of the graph problem is to find an acyclic tournament that gives a ranking of the nodes maximizing $\sum_{(i,j) \in T} b_{ij}$, where $b_{ij}$ is the weight of the directed edge $(i, j)$. That is, \begin{equation} \label{eq:linear_prog} \begin{aligned} \max \quad & \sum_{(i,j) \in E_n} b_{ij}x_{ij}\\ \textrm{s.t.} \quad & x_{ij} + x_{ji} = 1, \quad \textrm{for all} \; i, j \in V_n, \; i<j\\ &x_{ij} + x_{jk} + x_{ki} \leq 2, \quad \textrm{for all} \; i<j, \; i<k, \; j\neq k\\ &x_{ij} \in \{0, 1\}, \quad \textrm{for all} \; i, j \in V_n \\ \end{aligned} \end{equation}
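Eq. \eqref{eq:linear_prog} can be prototyped with any off-the-shelf ILP modeler. The following sketch uses the open-source PuLP package (our choice for illustration; the experiments later in the paper use SCIP instead) and, for brevity, states the 3-cycle constraints over all ordered triples, which introduces redundant but harmless duplicates.

\begin{verbatim}
import itertools
import pulp

def solve_lop_ilp(B):
    n = len(B)
    prob = pulp.LpProblem("LOP", pulp.LpMaximize)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    x = pulp.LpVariable.dicts("x", pairs, cat="Binary")
    prob += pulp.lpSum(B[i][j] * x[(i, j)] for (i, j) in pairs)
    for i, j in itertools.combinations(range(n), 2):
        prob += x[(i, j)] + x[(j, i)] == 1              # tournament
    for i, j, k in itertools.permutations(range(n), 3):
        prob += x[(i, j)] + x[(j, k)] + x[(k, i)] <= 2  # acyclicity
    prob.solve()
    # The node ranked first wins all its n-1 pairwise comparisons.
    wins = [sum(int(x[(i, j)].value()) for j in range(n) if j != i)
            for i in range(n)]
    return sorted(range(n), key=lambda i: -wins[i])
\end{verbatim}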
\section{An NCO model for the LOP} \label{architecture}

As mentioned in the introduction, we have designed an NCO model for the LOP, so that we can run illustrative experiments that enrich the discussion about the aspects identified in Section \ref{analysis}. In particular, we present an autoregressive end-to-end model that returns, for each item of the LOP, the probability of being chosen (added to the solution) at each step. Thus, starting from an empty solution, the model is iteratively asked for an item (the one with the highest probability) until a complete solution is obtained. Of course, this is the most straightforward way of applying the model, but it could be used in many other ways: for instance, it could be combined with other metaheuristics, such as a local search departing from the final solution provided by the model, or a population-based metaheuristic whose initial population is obtained by sampling the output probability vector. However, the goal of this paper is to analyse the behavior, advantages and disadvantages of pure end-to-end models, and thus the model is applied in its basic form, consequently showing a limited performance. The proposed model has two main modules: an encoder and a decoder. The problem (instance to be solved) is represented as a graph, and the encoder extracts information from that graph, particularly node and edge features, generating high-dimensional vectors called node- and edge-embeddings. Next, those embeddings are passed to a decoder that outputs the mentioned probability vector. At each step, node features are updated according to the item(s) already placed in the previous iteration(s), until the complete permutation (solution) is obtained (see Fig. \ref{fig:gnn}). \begin{figure*} \centering \includegraphics[width=0.82\textwidth]{Model_architecture.png} \caption{The NN model architecture, composed of an encoder (message-passing GNN) and a decoder (Multi-Head Attention mechanism).} \label{fig:gnn} \end{figure*}

\subsection{Graph Features}

Graph features, that is, node and edge features, are used to provide the model with information about the problem to solve, together with the current state (the item(s) already placed in the solution). For example, in routing problems such as the TSP, node features are used to reflect the absolute position of a city, while edge features represent the distance between two cities. Unfortunately, unlike in the TSP, in the case of the LOP there does not appear to be information tied to a particular node, and therefore instance information will be provided only via edge features (edge weights). Node features will be used to encode the current state. Given an LOP instance matrix of size $n$, $b_{ij}$ and $b_{ji}$ represent the weights of the directed edges between the pair of nodes $i$ and $j$, i.e., their relative information. Note that $b_{ij}$ contributes to the objective value when node $i$ appears before $j$ in the permutation. On the contrary, if $j$ is placed before $i$, the term contributing to the objective or fitness value is $b_{ji}$. Therefore, we will use the difference between both values as a one-dimensional compact edge feature: $y_{ij} = - y_{ji} = b_{ij} - b_{ji}$. Edge features are arranged as a matrix that gathers pairwise precedence information for every pair of nodes (see Fig. \ref{fig:gnn} - Graph Features). As mentioned before, the model architecture is composed of an encoder and a decoder. A linear projection of node and edge features forms node- and edge-embeddings, and these are fed to the encoder, which has $L$ layers. In each layer, each node gathers information from the rest of the nodes via their connecting edges, forming node embeddings, while edges simultaneously gather information from the nodes they connect, producing edge embeddings (see Fig. \ref{fig:gnn} - Encoder). Then, in a second step, the embeddings are passed to the decoder, which applies an attention mechanism (explained later) to produce the probabilities of selecting each node and appending it to the partial solution. Finally, the feature of the selected node is updated and the process repeats (see Fig. \ref{fig:gnn} - Decoder).

\subsection{Encoder}

Based on previous references \cite{Joshi2020}, we decided to use a message-passing Graph Neural Network as the encoder. The GNN gathers node ($x_i$) and edge ($y_{ij}$) features from the graph (previous step), and those feature vectors are projected linearly to produce \textit{d}-dimensional representations called embeddings. The linear projections are shown in Eq. \eqref{eq:embeddings}, where $A_x$ $\in \mathbb{R}^{2 \times d}$ and $A_y$, $B_x$, $B_y$ $\in \mathbb{R}^{1 \times d}$ are learnable parameters, and $h_i^{l=1}$ and $e_{ij}^{l=1}$ denote the node and edge embeddings of the first layer ($l=1$), respectively\footnote{Note that the learnable parameters do not depend on the instance size ($n$); instead, they are reused $n$ times for the node projection and $n\times n$ times for the edge projection, making the model input-size invariant.}. \begin{equation} \label{eq:embeddings} \begin{split} h_i^{l=1} = x_i^T * A_x + B_x \\ e_{ij}^{l=1} = y_{ij}^T * A_y + B_y \\ \end{split} \end{equation} The encoding process consists of several message-passing neural network layers. The first layer takes the node $h_i^{l=1}$ and edge $e_{ij}^{l=1}$ embeddings. In each layer, information from neighboring nodes is aggregated and, therefore, in a GNN of \textit{L} layers, the features of neighbors \textit{L} hops away are taken into account for each node.
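The feature construction and the projections of Eq. \eqref{eq:embeddings} admit a direct rendering in PyTorch; the sketch below is our own and, since the two components of the node feature $x_i$ are not detailed above, it uses a placed-flag together with a normalized placement position as a plausible stand-in.

\begin{verbatim}
import torch

def initial_embeddings(B, placed, d=128):
    # B: (n, n) instance matrix; placed: (n,) 0/1 flags of the current state.
    n = B.shape[0]
    y = (B - B.T).unsqueeze(-1)                     # y_ij = b_ij - b_ji, (n, n, 1)
    pos = torch.arange(n, dtype=B.dtype) / n        # assumed 2nd node feature
    x = torch.stack([placed.to(B.dtype), pos], -1)  # (n, 2) node features
    node_proj = torch.nn.Linear(2, d)               # role of A_x, B_x
    edge_proj = torch.nn.Linear(1, d)               # role of A_y, B_y
    return node_proj(x), edge_proj(y)               # h: (n, d), e: (n, n, d)
\end{verbatim}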
The node $h_{i}$ and edge $e_{ij}$ embeddings at layer $l$ are defined using an \textit{anisotropic} message-passing scheme, as in \cite{Joshi2020}: \begin{equation} \label{eq:node_feat} h_{i}^{l+1} = h_{i}^{l} + ReLU\left(BN\left(W_1^l h_i^l + \sum_{j \in \mathbb{N}_i} (\sigma(e^l_{ij}) \odot W_2^l h_j^l)\right)\right) \end{equation} \begin{equation} \label{eq:edge_feat} e_{ij}^{l+1} = e_{ij}^{l} + ReLU\left(BN\left(W_3^l e_{ij}^l + W_4^l h_{i}^l + W_5^l h_{j}^l\right)\right) \end{equation} where $W_1^l$, $W_2^l$, $W_3^l$, $W_4^l$ and $W_5^l$ $\in \mathbb{R}^{d \times d}$ are learnable parameters, $BN$ denotes the batch normalization layer, $\sigma$ is the sigmoid function, $\odot$ is the Hadamard product and $\mathbb{N}_i$ is the neighborhood of node $i$. In the case of a fully connected graph, as in the LOP, the neighborhood consists of every other node in the graph. The node embeddings in the last layer, $h^L$, are combined to produce the general graph representation (Eq. \eqref{eq:graph_embed}). We follow a common practice, taking the mean value over the node representations. \begin{equation} \label{eq:graph_embed} h_G = \frac{1}{n} \sum_{i=1}^n h_i^L \end{equation}
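A PyTorch rendering of one such layer could look as follows (a sketch of ours, not the authors' code: it omits batching and applies batch normalization across the node/edge dimension for simplicity).

\begin{verbatim}
import torch
import torch.nn as nn

class AnisotropicLayer(nn.Module):
    """One message-passing layer (Eqs. 4-5); single instance, full graph."""
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=False)
        self.W3 = nn.Linear(d, d, bias=False)
        self.W4 = nn.Linear(d, d, bias=False)
        self.W5 = nn.Linear(d, d, bias=False)
        self.bn_h = nn.BatchNorm1d(d)
        self.bn_e = nn.BatchNorm1d(d)

    def forward(self, h, e):  # h: (n, d), e: (n, n, d)
        n, d = h.shape
        gate = torch.sigmoid(e)                            # sigma(e_ij)
        msg = (gate * self.W2(h).unsqueeze(0)).sum(dim=1)  # sum_j sigma(e_ij) * W2 h_j
        h_new = h + torch.relu(self.bn_h(self.W1(h) + msg))
        e_in = self.W3(e) + self.W4(h).unsqueeze(1) + self.W5(h).unsqueeze(0)
        e_new = e + torch.relu(self.bn_e(e_in.reshape(n * n, d))).reshape(n, n, d)
        return h_new, e_new
\end{verbatim}

After $L$ such layers, the graph embedding of Eq. \eqref{eq:graph_embed} is simply the mean of the final node embeddings, e.g., \texttt{h.mean(dim=0)}.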
\subsection{Decoder}

The decoder produces the probability values that are used to make a decision about the next item to place in the partial solution. To that end, the node embeddings of the last layer and the graph representation from Eq. \eqref{eq:graph_embed} are provided to the decoder. Those node embeddings form a context vector named Query (see Fig. \ref{fig:gnn} - Decoder), which is used by an attention mechanism \cite{Kool2018} to obtain a probability distribution over the set of items. The attention mechanism is a weighted message-passing process, where the message values acquired from the neighbors are weighted by the compatibility between the node query and the key of the neighbor. Each query vector ($Q$) is matched against a set of keys ($K$) using the dot product to measure the compatibility. In this case, the keys are the node embeddings of the last encoding layer. As noted in \cite{Vaswani2017}, having multiple attention heads ($M=8$ is suggested) allows nodes to receive different messages from different neighbors, and this strategy, called the Multi-Head Attention (MHA) mechanism, turns out to be beneficial. In order to build the mentioned context vector, or query, we concatenate the graph embedding $h_G$ from Eq. \eqref{eq:graph_embed} and the embeddings of the already placed nodes. This can be seen in Eq. \eqref{eq:context1}, where $[ , ]$ denotes the concatenation operation and $h_{P} = \frac{1}{n_{placed}} \sum_{i \in \pi} h_i^L$ is the aggregation of the embeddings of the already placed nodes. \begin{equation} \label{eq:context1} \hat{h}_t^c = W_c [h_G, h_{P}] \end{equation} The context vector $\hat{h}_t^c$ gives additional intuition about the current state of the solution. Eq. \eqref{eq:context2} shows the query (Q), keys (K) and values (V) used in the MHA. \begin{equation} \label{eq:context2} h_t^c= \mathrm{MHA}(Q=\hat{h}_t^c, K =\{h_1^L, ..., h_n^L\}, V =\{h_1^L, ..., h_n^L\}) \end{equation} Finally, a second attention mechanism, between the refined context $h_t^c$ and the node embeddings $h_i^L$, produces the logits $u^c_{j}$ of the non-placed nodes: \begin{equation} \label{eq:logits} u^c_{j} = \left\{ \begin{array}{ll} C \cdot \mathrm{tanh} \left( \frac{(W_Q h_t^c)^T \cdot (W_K h_j^L)}{\sqrt{d}} \right) & \mathrm{if\ } j \neq \pi_{t'}\;\; \forall t' < t \\ - \infty & \mathrm{otherwise} \\ \end{array} \right. \end{equation} where the $\mathrm{tanh}$ function is used to keep the logits within $[-C, C]$ ($C=10$). The logits at the current step $t$ are normalized using the softmax function to produce the probabilities $p_{i}$ used to select the next item $i$ to place in the partial solution: \begin{equation} \label{eq:probs} p_{i} = \frac{e^{u^c_{i}}}{\sum_j e^{u^c_{j}}} \end{equation}

\subsection{Learning}

The model is trained via the REINFORCE algorithm \cite{Williams1992}. Given an instance $s$, the output of the model with weights $\theta$ is a probability distribution $p_\theta (\pi | s)$. The training is performed by minimizing the loss function \begin{equation} \label{eq:loss} \mathcal{L}(\theta | s) = \mathbb{E}_{p_\theta (\pi | s)} [-(R(\pi) - b(s)) \log p_\theta (\pi | s)] \end{equation} via gradient descent, where $R(\pi) = f(\pi)$ is the reward function, which in this case equals the objective value of the LOP instance for a solution $\pi$, and $b(s)$ is a baseline value used to reduce the gradient variance and increase the learning speed. In order to produce the baseline, we make use of a method called self-critical sequence training (SCST) \cite{Rennie2017}. We make the model greedy, letting it take only the actions with maximum probability, and then use the resulting reward as the baseline. As a result, only samples from the model that outperform the greedy rollout are given a positive reward.
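Stripped of the sampling and greedy rollouts, the SCST loss of Eq. \eqref{eq:loss} boils down to a few lines. The following PyTorch fragment is a sketch of ours in which the rollouts are assumed to be computed elsewhere.

\begin{verbatim}
import torch

def reinforce_loss(log_probs, sample_rewards, greedy_rewards):
    # log_probs:      (batch,) sum over steps of log p_theta(pi_t | s)
    # sample_rewards: (batch,) objective values f(pi) of sampled solutions
    # greedy_rewards: (batch,) rewards of greedy rollouts, i.e., b(s)
    advantage = sample_rewards - greedy_rewards  # R(pi) - b(s)
    return -(advantage.detach() * log_probs).mean()
\end{verbatim}

Only the sampled trajectories that beat the greedy rollout receive a positive advantage; the greedy rollout itself would be evaluated under \texttt{torch.no\_grad()}, so that no gradient flows through the baseline.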
\section{Experimentation} \label{experiments}

In the following, we illustrate the experimental application of the end-to-end model described in the previous section, and we consider a number of algorithms for the LOP (including the state-of-the-art) in order to compare them. As the goal is not to propose a state-of-the-art algorithm, but to address the questions discussed in Section \ref{analysis}, we conduct a set of experiments targeting each of the mentioned aspects.

\subsection{General setting}

\textbf{Instances}. As described in Section \ref{analysis}, we distinguish two main strategies for obtaining the instances needed to train the models: instance generators and benchmarks. In the case of the LOP, the most evident way of creating a generator is to randomly sample each element of the matrix $B$ from a uniform distribution on $(0, 1)$. Regarding benchmarks, the LOLIB \cite{Reinelt2002} is the most commonly used LOP library; it is composed of real-world instances (\textit{IO}, \textit{SGB} and \textit{XLOLIB}) and randomly generated instances that try to mimic real-world data (\textit{RandB}, \textit{MB}, \textit{RandA1} and \textit{RandA2}). Both strategies will be adopted for the experiments.

\textbf{Algorithms}. Among the set of conventional algorithms to solve the LOP, we distinguish three groups: exact methods, constructive heuristics and metaheuristics. In each group we selected the algorithms that compose the state-of-the-art, and the stopping criterion for each algorithm is set in a fair way, so that the real performance of each algorithm can be exploited. Among the constructive heuristics, the algorithm by Becker \cite{Becker1967} is the best performing one and, being a deterministic constructive algorithm, it is run until a solution is given. Regarding exact methods, \textit{SCIP} \cite{Achterberg2009} is one of the fastest non-commercial exact solvers; it is run until the optimal solution is found, with a maximum time of 12 hours per instance. The implementations of the constructive heuristic and the exact solver follow the respective handbooks. We also consider two of the state-of-the-art metaheuristics: a Memetic Algorithm (MA) \cite{Lugo2021} and a Variable Neighborhood Search (CD-RVNS) \cite{Santucci2020}, both publicly available\footnote{Codes available at \url{https://github.com/sgpceurj/Precedences_LOP} and \url{https://github.com/carlossegurag/LOP_MA-EDM}}. CD-RVNS is stopped once $1000n^2$ objective function evaluations have been computed, where $n$ is the instance size, while MA is given the same time budget as CD-RVNS. This is enough to let the metaheuristics converge, as seen in \cite{Santucci2020, Lugo2021}. Finally, as NCO methods, we include the model described in Section \ref{architecture} (denoted GNN) and an active search procedure \cite{Bello2016} (GNN-AS), which consists of performing extra training steps using the evaluation set (the instances to be solved).

\textbf{Hardware}. Models are trained on four \textit{Nvidia RTX 2070} GPUs, with a cumulative memory of 32GB. The NCO algorithm is implemented in \textit{Python 3.8}, while the conventional algorithms are written in \textit{C++}. Experiments that do not need a GPU are run on a cluster of 55 nodes, each equipped with two \textit{Intel Xeon X5650} CPUs and 64GB of memory.

\textbf{Training}. We train four different GNN models using instances of sizes $n=20, 30, 40$ and $50$. Each model is trained for 200 epochs with 100 batches per epoch and a batch size of 128, 128, 64 and 32 instances, respectively. The batch size of the larger models needs to be reduced so that the GPU memory is not overloaded. For the AS procedure, the model parameters are trained for 200 additional epochs.

\subsection{Performance Analysis} \label{performance}

This first experiment has been designed to measure the performance of the end-to-end model compared to that of the algorithms described above. For this purpose, a set of 1280 random instances is created for each size $n$ (20, 30, 40 and 50), and all the algorithms are run to solve them. Results are shown in Table \ref{table:performance}. For $n\leq40$, the exact algorithm provides the best solution (the optimum), but from $n=50$ onwards it cannot provide any result within the budget of 12 hours. The metaheuristics are competitive (in fact, MA provides the best results for $n=50$), and the GNN model outputs good quality solutions, with a gap between 0.3\% and 0.5\%. Moreover, when combined with active search (GNN-AS), its performance improves (gaps of 0.1\%-0.2\%), but at the cost of a notably larger computational effort. \begin{table}[!t] \centering \caption{Analysis of the performance.
Gap (\%) to the optimal or best known value.} \label{table:performance} \begin{tabular}{l|r|r|r|r|} \hline Method & n=20 & n=30 & n=40 & n=50\\ [0.5ex] \hline Exact (ILP) & 0.00\% & 0.00\% & 0.00\% & - \\ Becker & 3.40\% & 3.44\% & 3.35\% & 3.27\% \\ MA & 4.9e-5\% & 2.6e-5\% & 1.6e-4\% & 0.00\% \\ CD-RVNS & 4.2e-4\% & 7.9e-4\% & 2.2e-3\% & 0.014\% \\ GNN & 0.29\% & 0.37\% & 0.44\% & 0.51\% \\ GNN-AS & 0.11\% & 0.16\% & 0.19\% & 0.22\% \\ \hline \end{tabular} \end{table}

\subsection{Computational Cost \& Scalability}

\begin{table*}[!t] \centering \caption{Execution times.} \label{table:times} \begin{tabular}{l|c|c|c|c|c|c|c|} \hline Method & n=20 & n=30 & n=40 & n=50 & n=100 & n=200 & n=1000\\ [0.5ex] \hline Exact (ILP) & 0.29s & 25.3s & 752s & - & - & - & - \\ Becker & 0.03s & 0.04s & 0.08s & 0.12s & 0.48s & 1.9s & 2.2m \\ MA & 0.09s & 0.22s & 0.43s & 0.69s & 3.4s & 18.2s & 2.1h \\ CD-RVNS & 0.09s & 0.22s & 0.43s & 0.69s & 3.4s & 18.2s & 2.1h \\ GNN & 0.07s (4h) & 0.08s (9h) & 0.17s (14h) & 0.19s (29h) & 0.38s & 2.1s & 3.4m \\ GNN-AS & 25m (4h) & 1.1h (9h) & 2.8h (14h) & 6.6h (29h) & - & - & - \\ \hline \end{tabular} \end{table*}

Together with performance, practitioners must take into account the execution time required by an algorithm to obtain a solution. In optimization contexts where time restrictions are present, this aspect is crucial. Table \ref{table:times} shows the execution times for the different algorithms and several instance sizes. The exact algorithm quickly becomes unusable, while the constructive heuristic (Becker) scales successfully. The time required by the metaheuristics grows quadratically with $n$, which can become a bottleneck for larger sizes, but for the sizes tested in this work the execution time is reasonable. Regarding the GNN, we first focus on the training time, the most time-demanding step, which takes several hours (up to 29 hours for $n=50$) and is unaffordable for sizes larger than 50 (at least with the available hardware). In our opinion, the training time is relevant only in scenarios that require the model to be updated continuously, and could be ignored otherwise. Once trained, the model shows a very fast response time, just a few minutes for the largest size tested\footnote{Since the model is size-invariant, the GNN trained with $n=50$ has been used to solve problems of sizes $n=100$, $200$ and $1000$. This aspect will be explained later in Section \ref{scalability}.}. Finally, the GNN-AS setup, as it includes an additional training step using the instance(s) to be solved, requires an extra computational effort (from 25 minutes to 6.6 hours), which makes this approach non-competitive and unaffordable for $n>50$. To complete the experiment, an analysis of memory consumption has been conducted. Becker, as well as the metaheuristics, shows a linear growth, which makes the memory requirements affordable. However, the memory consumption of GNNs grows polynomially with the model size and the batch size. Fig. \ref{fig:memory} shows the memory usage curves as a function of the model size, where different curves are plotted for commonly used batch sizes. It can be seen that the training phase of GNN models is highly memory-intensive, which quickly limits their applicability.
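Peak-memory figures such as those in Fig. \ref{fig:memory} can be gathered with PyTorch's built-in counters; the following fragment is a sketch of ours, assuming training runs on a CUDA device.

\begin{verbatim}
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one full training epoch here ...
peak_gib = torch.cuda.max_memory_allocated() / 2**30
print(f"peak GPU memory: {peak_gib:.2f} GiB")
\end{verbatim}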
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{gpu_memory_training.png} \caption{GPU memory used by different model sizes during training.} \label{fig:memory} \end{figure}

\subsection{Generalization to different instance sizes} \label{scalability}

Considering the huge computational resources required by NN-based models to solve large size instances, size-invariant models are particularly valuable, that is, models that can be applied to instance sizes different from the ones they were trained on. However, even though such models are usable, they may lack the ability to generalize the learnt knowledge to larger and more complex instances while guaranteeing a competitive performance \cite{Joshi2020}. Our notion of size-invariance does not rely on the k-nearest-neighbor strategy mentioned before for the case of the TSP, as that is an ad-hoc approach for that particular problem. In the case of the LOP, when selecting the next item, it is hard to define a subset of promising nodes, contrary to the TSP, where the proximity of the cities is tightly related to their probability of being chosen. In the LOP, every item has a certain impact on the others, and therefore the whole graph needs to be considered to rank the nodes. Thus, the size-invariance used here is more general, indicating that the model works node-wise (not instance-wise), and therefore it can be applied to a number of nodes (instance sizes) different from that used for training, just by iterating more times. That being said, we conduct experiments to investigate the ability of the GNN model to generalize to different instance sizes. We report results provided by models trained with instances of $n=20, 30, 40$ and $50$, and evaluated on instances of $n=20, 30, 40, 50, 100, 200, 400, 700$ and $1000$. \begin{table*}[!t] \centering \caption{Performance gap to the optimal or best known value of GNN models. GNN-20 refers to the model trained with instances of size $n=20$.} \label{table:generalization} \begin{tabular}{l|c|c|c|c|c|c|c|c|c|} \hline Method & n=20 & n=30 & n=40 & n=50 & n=100 & n=200 & n=400 & n=700 & n=1000\\ [0.5ex] \hline GNN-20 & 0.29\% & 0.40\% & 0.47\% & 0.54\% & 0.78\% & 0.96\% & 0.93\% & 0.86\% & 0.79\% \\ GNN-30 & 0.32\% & 0.37\% & 0.42\% & 0.48\% & 0.63\% & 0.81\% & 0.82\% & 0.77\% & 0.76\% \\ GNN-40 & 0.36\% & 0.41\% & 0.44\% & 0.48\% & 0.55\% & 0.65\% & 0.71\% & 0.66\% & 0.63\% \\ GNN-50 & 0.46\% & 0.46\% & 0.48\% & 0.51\% & 0.59\% & 0.64\% & 0.67\% & 0.63\% & 0.60\% \\ \hline \end{tabular} \end{table*} In view of the results (see Table \ref{table:generalization}), the GNN model shows a good generalization capability, as the difference with the best solution worsens only slightly (gaps from roughly 0.5\% to 0.6\%). Regarding generalization, the more complex models, such as GNN-40 or GNN-50, show in general a higher performance than the simpler ones (GNN-20 and GNN-30), which is the behaviour one would expect. In addition to the performance, it is important to remember again the quick response time of these models, an aspect that can be relevant in some scenarios. Additionally, we compare the performance of the GNNs with the rest of the proposals as a function of the instance size. Fig. \ref{fig:generalization} illustrates how the performance gap between the best performing algorithm (MA) and the GNN model increases from $n=20$ to $n=200$, remains constant from $n=200$ to $n=400$ and slightly decreases for larger sizes, which speaks for the generalization capacity of the GNN models.
Regarding the constructive algorithm (Becker), it consistently reduces its gap with respect to all the other algorithms, even though the improvement decelerates for very large instances\footnote{Although not shown in the figure, Becker obtains a gap of 0.92\% for $n = 2000$.}. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{generalization.png} \caption{Gap to the best known objective value as a function of the instance size for the analysed algorithms.} \label{fig:generalization} \end{figure}

\subsection{Training Data \& Transferability}

In order to analyse the transferability of the learnt model to other types of instances, and given that our purpose is to solve a certain target set of instances, we consider three different training setups: \begin{enumerate} \item The model is trained with random instances obtained from generators, and the target instances are then solved using the model. \item The model is trained in two steps: first with random instances obtained from generators, and then an additional training (active search) is performed using the target instance or set of instances. The model returns the best solution(s) found during this active search phase. In this setup, 100 extra epochs are dedicated to the active search. \item The model is trained using the target instance or set of target instances, without any previous training. \end{enumerate} For the first and second cases we use uniformly distributed instances, and instances from the LOLIB benchmark constitute the target set. For the third case and for the active search procedure, as directly using the instances from the LOLIB is intractable due to their large size, we created smaller instances by sampling, uniformly at random, discrete values from the real benchmark instances in order to form a matrix (instance) of the desired size. Table \ref{table:performance_lolib} gathers the results obtained for the different setups. Even though LOLIB instances are heterogeneous regarding their origin and the procedure used to create them, training the model with random instances (setups 1 and 2) is notably better than training the model directly with the particular set of instances we want to solve. However, it can be observed that GNNs trained with random instance generators generally perform better on random instances (\textit{RandA1}, \textit{RandA2}, \textit{RandB} and \textit{MB}) than on real-world instances (\textit{IO}, \textit{SGB} and \textit{XLOLIB}). So, the use of generators is advisable, but these generators should be able to produce instances as close as possible to the ones we want to solve. Finally, as seen previously, including active search (setup 2) is really helpful, providing the best performing models (gaps between 0.5\% and 3.72\% with respect to the best known solutions). This experiment has also been designed to give an intuition about the transferability between different LOLIB benchmarks. That is, considering the computational effort required to train a model, can this model be applied successfully to other types of instances? In this regard, interesting results have been found. For all the sets (except \textit{XLOLIB}), there are at least two models that, trained with different sets, improve the results of the model trained using the very set of instances that we want to solve. Moreover, the model trained using \textit{XLOLIB} shows excellent transferability properties, obtaining the lowest gaps for all the remaining sets (except for \textit{IO}).
Thus, transferability seems to be a positive characteristic of these GNN models, which should be studied further and also extended, as the whole study itself, to other problem types. \begin{table*}[!t] \centering \caption{Analysis of the performance on 7 types of instances (with different instance sizes in brackets) from the LOLIB benchmark. The gap is computed with respect to the best known value given by the Memetic Algorithm \cite{Lugo2021}. GNN-AS refers to the active search procedure applied to the model trained with instances of size 40.} \label{table:performance_lolib} \begin{tabular}{l |r|r|r|r|r|r|r|} \hline Method &IO (44) & RandB (50) & SGB (75) & MB (100) & RandA1 (150) & RandA2 (200) & XLOLIB (250)\\ [0.5ex] \hline MA & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\ Becker & 7.14\% & 7.49\% & 4.17\% & 4.41\% & 6.59\% & 1.53\% & 7.73\% \\ \hline (1) GNN20 & 5.21\% & 1.27\% & 4.80\% & 0.54\% & 2.47\% & 0.93\% & 7.78\% \\ (1) GNN30 & 5.24\% & 1.09\% & 9.07\% & 0.43\% & 1.99\% & 0.87\% & 7.93\% \\ (1) GNN40 & 5.24\% & 1.02\% & 6.87\% & 0.36\% & 1.65\% & 0.82\% & 6.27\% \\ (1) GNN50 & 5.97\% & 1.29\% & 10.28\% & 0.39\% & 1.65\% & 1.32\% & 6.29\% \\ \hline (2) GNN-AS & 0.63\% & 0.50\% & 2.07\% & 0.63\% & 1.23\% & 0.74\% & 3.72\% \\ \hline (3) GNN-IO & 4.10\% & 4.80\% & 12.00\% & 4.50\% & 5.60\% & 1.80\% & 12.00\% \\ (3) GNN-RandB & 10.00\% & 1.70\% & 15.00\% & 0.97\% & 3.40\% & 1.40\% & 30.00\% \\ (3) GNN-SGB & 4.10\% & 6.80\% & 17.00\% & 10.00\% & 6.80\% & 12.00\% & 12.00\% \\ (3) GNN-MB & 7.50\% & 1.50\% & 5.60\% & 1.50\% & 3.10\% & 1.50\% & 11.00\% \\ (3) GNN-RandA1 & 5.60\% & 2.30\% & 13.00\% & 1.50\% & 3.80\% & 1.40\% & 8.50\% \\ (3) GNN-RandA2 & 3.40\% & 5.00\% & 13.00\% & 3.60\% & 6.20\% & 2.30\% & 13.00\% \\ (3) GNN-XLOLIB & 5.20\% & 1.30\% & 4.80\% & 0.54\% & 2.50\% & 0.93\% & 7.80\% \\[1ex] \hline \end{tabular} \end{table*}

\section{Discussion and Future Work} \label{discussion}

Throughout the experimentation section, we have tested different aspects and properties of the end-to-end model in order to analyse its behaviour and competitiveness. First, we observed that NCO models are not general purpose algorithms (at least not yet). Even though there are NCO models that can be suitable for a particular set of problems (e.g., GNNs for graph-based problems), they lack the capacity to adapt to all kinds of problems with the ease of metaheuristics. This may be because we are still in the early stages of the area, and NCO development requires prior knowledge of the problem as well as advanced skills in the DL-optimization area. We made an effort to propose a good end-to-end model for optimizing the LOP, trying to find the most competitive training strategies. Although the conducted experiments showed that the NCO model obtains a remarkable performance when compared to the constructive heuristic, it is still not able to beat the state-of-the-art methods (such as MA or CD-RVNS). Moreover, NN-based models have a serious drawback regarding the training time and the memory requirements of larger models. So, an obvious question arises: if metaheuristics are more competitive and easier to design, then what is the point of using NCO models? First, we are in the early stages of NN-based models for optimization, and the performance gap with respect to the state-of-the-art approaches does not seem substantial, which encourages further research on this kind of model.
Secondly, looking at Table \ref{table:times}, the end-to-end model is able to provide a solution in a few minutes for the largest size ($n=1000$), while the metaheuristics require a couple of hours. Thus, in environments where a fast response is required (assuming some performance loss), these models are an interesting option, for example, in online decision-making optimization problems (e.g., logistics). However, it must also be noted that the training is computationally very expensive, so, in order to be efficient, the model should not need to be re-trained frequently, as it would otherwise lose its competitiveness (fast response). In this regard, it is also worth noting the valuable advantage of node-wise models, such as the one designed in this work. They can be trained on $n$-sized instances and later be applied to instances of size $m \gg n$, maintaining a constant gap, or even reducing it, with respect to the state-of-the-art metaheuristics, which usually suffer to a greater extent as the instance size increases. Regarding the training process, there is another aspect that must be considered: do we always need a large set of instances of the problem at hand in order to train a well-performing model? As observed in the experiments, the effectiveness of the model is greatly influenced by the quantity and diversity of the instances used for training. While thousands of instances are required for training, existing benchmarks do not generally contain enough instances. Here, random instance generators come into play. However, implementing random instance generators that produce samples with characteristics similar to the target scenarios is usually challenging. Nevertheless, we have shown that, when ad-hoc generators are not available, a successful alternative is to employ uniform random generators to train a baseline model and, if possible, incorporate active search techniques to extend the training to the particular target instance(s) to be solved. In summary, the experiments conducted in this work suggest that NCO models have certain capabilities that could make them a valuable choice for designing smarter metaheuristic algorithms. With this purpose in mind, recent reviews \cite{Bengio2021, Talbi2021} identify different works \cite{Hudson2021, daCosta2020, Wu2021} that employ NCO models (based on different NN variants and approaches) to choose the most adequate parameters and operators for certain metaheuristic paradigms. However, this cannot be considered a completely new line, since other techniques, such as Bayesian Optimization, have already been used for designing hyper-heuristics whose parameters can be adapted to a particular problem. Analysing whether NCO provides better results than such techniques is still a pending matter. In our opinion, there exists another relevant research topic which falls in line with the NCO model proposed in this article. The output of the NCO model developed for this work is a probability vector that is used to guide the construction of a solution for the LOP, i.e., to decide the item to place in the next empty position of the solution. That is, the model has been designed to make low-level decisions, and it has shown quite good abilities to make the right ones. Hence, including such a low-level module inside metaheuristics, even though the correct answer is not always guaranteed, could be really helpful to guide them in a smarter fashion.
For example, a possible way forward is to use NCO models to improve the search of trajectory-based metaheuristics, e.g., local search (LS) methods. These schemes usually employ quadratic-size neighborhood structures that are computationally intensive to process. In such cases, given a problem instance and a solution, it would be very valuable to have a model able to propose the most promising neighborhood operation (or operations) to choose. Note that this is challenging since, in addition to the instance, the model must encode the solution (received as input) at which the LS algorithm stands at each step. Although few, there are some works that can inspire research in this direction \cite{Chen2019}. Another research line relates to investigating the quantity and quality of the set of instances required to train the model, including their properties and characteristics, and the information and relations that the model is able to extract.

\section{Conclusion} \label{conclusion}

In this paper we conducted a critical analysis of Neural Combinatorial Optimization algorithms and their incorporation into the conventional optimization framework. The analysis consists of four interrelated axes: the performance of the algorithm, the computational cost, the training instances and the generalization ability. In addition, we discussed guidelines to facilitate the rigorous comparison of NCO approaches with COAs. In order to give the conducted analysis some practical grounding, we proposed a new learning-based algorithm composed of a graph neural network and an attention mechanism. We selected the Linear Ordering Problem (LOP) and guided the reader through the process of implementation and evaluation of the proposed model. We compared the NCO method with a diverse set of algorithms, including an exact solver \cite{Achterberg2009}, a classical constructive heuristic \cite{Becker1967} and two state-of-the-art metaheuristics \cite{Lugo2021,Santucci2020}. Finally, we discussed the results, pointing out future research lines in the field of end-to-end models, which can be a promising paradigm towards the design of more efficient optimization methods.

\section*{Acknowledgments}

Andoni Irazusta Garmendia acknowledges a predoctoral grant from the Basque Government (ref. PRE\_2020\_1\_0023). This work has been partially supported by the Research Groups 2019-2021 (IT1244-19) and the Elkartek Program (KK-2020/00049, SIGZE, KK-2021/00065) from the Basque Government, and the PID2019-104933GB-10 and PID2019-106453GA-I00/AEI/10.13039/501100011033 research projects from the Spanish Ministry of Science. \vfill \pagebreak
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Global fields and their valuations} \label{gfatv} In this section, we introduce some notation, and give a brief review of the theory of valuations on global fields. \bd Let ${\mathbb K}$ be a field. A {\em valuation} on ${\mathbb K}$ is a mapping $\varphi:{\mathbb K}\to {\mathbb R}^{+}$, such that \begin{eqnarray*} (i) &&\mbox{$\varphi(\alpha)=0$ if and only if $\alpha=0$,}\\ (ii) &&\mbox{$\varphi(\alpha\beta)=\varphi(\alpha)\varphi(\beta)$,}\\ (iii) &&\mbox{$\varphi(\alpha+\beta)\leq \varphi(\alpha)+\varphi(\beta)$,} \end{eqnarray*} for all $\alpha,\beta\in {\mathbb K}$. A valuation $\varphi$ on ${\mathbb K}$ is said to be {\em archimedean} if \begin{eqnarray*} \varphi(\alpha+\beta)>\max(\varphi(\alpha),\varphi(\beta)), \end{eqnarray*} for some $\alpha,\beta\in {\mathbb K}$. \eop \ed We define an equivalence relation $\sim$ on the set of valuations on ${\mathbb K}$, by declaring that $\varphi\sim \phi$ if and only if \begin{eqnarray*} \varphi(\alpha)<1 \Leftrightarrow \phi(\alpha)<1, \mbox{ for all $\alpha\in {\mathbb K}$.} \end{eqnarray*} We denote by $\sum_{\mathbb K}$ the set of equivalence classes under this relation.\\ \\ It is easily verified that whenever a valuation $\varphi$ is archimedean, $\varphi\sim\phi$ implies that the valuation $\phi$ is also archimedean. We say that an element $P\in\sum_{\mathbb K}$ is {\em archimedean} if $P$ contains an archimedean valuation, and we denote by $\sum_{\mathbb K}^{\infty}$ the set of archimedean elements in $\sum_{\mathbb K}$. \\ \\ \bd \label{knoll} By a {\em global field} we mean either \begin{eqnarray*} - &&\mbox{a finite extension of the field ${\mathbb Q}$, or}\\ - &&\mbox{a finite extension of a field ${\mathbb F}_q(t)$ of rational functions in an}\\ &&\mbox{indeterminate $t$ over a finite field ${\mathbb F}_q$.} \end{eqnarray*} For a global field ${\mathbb K}$, we set \begin{eqnarray*} {\mathbb K}_0&=&\left\{ \begin{array}{lcl} {\mathbb Q}, &&\mbox{ if $\chara({\mathbb K})=0$, }\\ {\mathbb F}_{q}(t), &&\mbox{ if $\chara({\mathbb K})>0$}. \end{array} \right . \end{eqnarray*} \eop \ed \bnote If ${\mathbb K}$ is a global field of characteristic zero, the extension ${\mathbb K}/{\mathbb K}_0$ is separable (cf. {\em \cite{Lang}}, Corollary $6.12$ in \S V:$6$). For a global field ${\mathbb K}$ of positive characteristic, this is not always the case for an arbitrary choice of $t$. However, there is at least one choice of $t$ which makes the extension ${\mathbb K}/{\mathbb K}_0$ separable (cf. {\em \cite{Lang}}, Proposition $4.9$ in \S VIII:$4$), and in the sequel, we shall assume that such a choice is made in Definition \ref{knoll}. \eop \enote Let ${\mathbb K}$ be a global field, and choose a representative $\phi_P\in P$, for each $P\in \sum_{\mathbb K}$. Set \begin{eqnarray*} &&A_P=\{\alpha\in {\mathbb K}; \phi_P(\alpha)\leq 1\},\\ &&M_P=\{\alpha\in {\mathbb K}; \phi_P(\alpha)<1\}. \end{eqnarray*} Denote by $\widehat{{\mathbb K}}_P$ the completion of ${\mathbb K}$ with respect to $\phi_P$. We define a function $N:\sum_{\mathbb K}\to [1,\infty)\subset {\mathbb R}$, by letting \begin{eqnarray*} N(P)&=&\left\{ \begin{array}{lcl} e^{\dim_{\mathbb R}\widehat{{\mathbb K}}_P}, &&\mbox{ if $P\in\sum_{\mathbb K}^{\infty}$, }\\ \#\left(A_P/M_P\right), &&\mbox{ if $P\in \sum_{\mathbb K}\setminus\sum_{\mathbb K}^{\infty}$}. \end{array} \right . \end{eqnarray*} Indeed, it is easily seen that both $\widehat{{\mathbb K}}_P$ and $A_P/M_P$ are independent of the choice of $\phi_{P}\in P$. 
Since ${\mathbb K}$ is global, the residue field $A_P/M_P$ is finite for all $P\in\sum_{\mathbb K}\setminus \sum_{\mathbb K}^{\infty}$ (cf. \cite{Rein}). For $P\in \sum_{\mathbb K}^{\infty}$, the completion $\widehat{{\mathbb K}}_P$ is either ${\mathbb R}$ or ${\mathbb C}$. Hence $N(P)<\infty$ for all $P\in\sum_{\mathbb K}$. \\ \\ We define the integers $S_{i}({\mathbb K})=i\cdot \#\left(\{P\in\sum_{{\mathbb K}}^{\infty}; \log N(P)=i\}\right)$, $i\in\{1,2\}$, determined by the archimedean valuations on ${\mathbb K}$. \bnote With this notation, $S_{1}({\mathbb K})=S_{2}({\mathbb K})=0$ if $\chara({\mathbb K})>0$. If $\chara({\mathbb K})=0$, then $S_{1}({\mathbb K})$ is the number of real embeddings of ${\mathbb K}$, and $S_{2}({\mathbb K})$ is the number of complex embeddings of ${\mathbb K}$. \eop \enote For $P\in\sum_{\mathbb K}\setminus\sum_{\mathbb K}^{\infty}$, we define the normalized valuation $\varphi_P\in P$ by requiring that \begin{eqnarray*} -\log_{N(P)}\varphi_{P}({\mathbb K})={\mathbb Z}\cup \{{\infty}\}. \end{eqnarray*} If $P\in \sum_{\mathbb K}^{\infty}$, there is an embedding $\theta_P:{\mathbb K}\to \widehat{{\mathbb K}}_P$ corresponding to $P$, unique up to complex conjugation. We let $|\cdot|$ be the usual absolute value on $\widehat{{\mathbb K}}_P$ ($={\mathbb R}$ or ${\mathbb C}$), and define \begin{eqnarray*} \varphi_{P}=|\theta_P|^{\dim_{\mathbb R}\widehat{\mathbb K}_P}. \end{eqnarray*} We have the following product formula (cf. \cite{Rein}). \bth \label{pf} If ${\mathbb K}$ is a global field, then \begin{eqnarray*} \prod_{P\in\sum_{\mathbb K}}\varphi_P(\alpha)=1, \end{eqnarray*} for all $\alpha\in {\mathbb K}^{*}$. \eop \eth Consider a global field ${\mathbb L}$, and assume that ${\mathbb K}$ is another global field such that the extension ${\mathbb L}/{\mathbb K}$ is finite and separable. To each element $Q\in\sum_{\mathbb L}$, we shall now associate an integer $e_Q$ and a real number $r_Q$, depending on this extension. \\ \\ For $Q\in\sum_{\mathbb L}$, we denote by $P_Q$ the element in $\sum_{\mathbb K}$ that contains the restriction $\varphi_{Q}|_{\mathbb K}$. We let $B_{P_Q}$ be the integral closure of $A_{P_Q}$ in ${\mathbb L}$, and denote by $\widehat{B}_{P_Q}$ and $\widehat{A}_{P_Q}$ the corresponding completed rings. \newpage \noindent \bd \label{rami} The {\em ramification index} of $Q$ relative to the extension ${\mathbb L}/{\mathbb K}$ is the integer \begin{eqnarray*} e_Q&=&\left\{ \begin{array}{lcl} [\widehat{{\mathbb L}}_Q:\widehat{{\mathbb K}}_{P_Q}], &&\mbox{ if $Q\in\sum_{\mathbb L}^{\infty}$, }\\ (\varphi_Q({\mathbb L}^{*}):\varphi_{P_Q}({\mathbb K}^{*})), &&\mbox{ if $Q\in \sum_{\mathbb L}\setminus\sum_{\mathbb L}^{\infty}$}. \end{array} \right . \end{eqnarray*} For $Q\in \sum_{\mathbb L}^{\infty}$, we define \begin{eqnarray*} r_Q&=&\left\{ \begin{array}{lcl} -\log\log N(Q), &&\mbox{ if $e_Q\neq 1$,}\\ 0, &&\mbox{ otherwise}. \end{array} \right . \end{eqnarray*} For $Q\in\sum_{\mathbb L}\setminus\sum_{\mathbb L}^{\infty}$, we define $r_Q$ to be the exponent of $\widehat{M}_Q$ in the different of $\widehat{B}_{P_Q}$ over $\widehat{A}_{P_Q}$ (cf. {\em \cite{Serr}}, \S $3$ in Chapter III). \eop \ed \bnote \label{discriminant} When $\chara({\mathbb K})=0$, we obtain with this definition, applied to the extension ${\mathbb K}/{\mathbb K}_0={\mathbb K}/{\mathbb Q}$, \begin{eqnarray*} \prod_{Q\in\sum_{\mathbb K}\setminus\sum_{\mathbb K}^{\infty}}N(Q)^{r_Q}=|\disc_{\mathbb K}|, \end{eqnarray*} where $\disc_{\mathbb K}$ denotes the discriminant of the number field ${\mathbb K}$.
This follows from Proposition $6$ and Proposition $10$ in Chapter III of {\em \cite{Serr}}. \eop \enote Throughout the paper, we employ the conventions of letting empty sums equal $0$, and letting empty products equal $1$. \\ \\ \section{Divisors on global fields}\label{dogfatrr} In this section, we define and study divisors on global fields. In particular, we consider the set of multiples of a divisor $D$, and relate its cardinality to the degree of $D$ (cf. Theorem \ref{rr}). \newpage \noindent \bd \label{divdef} Let ${\mathbb K}$ be a global field. A {\em divisor} $D$ on ${\mathbb K}$ is a formal finite sum \begin{eqnarray*} D=\sum_{P\in \sum_{\mathbb K}}a_P\cdot P, \end{eqnarray*} where $a_P\in {\mathbb R}$ if $P\in \sum_{\mathbb K}^{\infty}$, and $a_P\in {\mathbb Z}$ otherwise. The {\em degree} of $D$ is the real number \begin{eqnarray*} \deg D=\prod_{P\in \sum_{\mathbb K}}N(P)^{a_P}. \end{eqnarray*} The divisor $D$ is said to be {\em principal} if there exists an $\alpha\in{\mathbb K}^{*}$, such that \begin{eqnarray*} N(P)^{a_P}=\varphi_{P}(\alpha), \end{eqnarray*} for all $P\in\sum_{\mathbb K}$. \eop \ed We may now state Theorem \ref{pf} by saying that a principal divisor has degree $1$. It follows that, in the group of divisors modulo the kernel of the degree homomorphism \begin{eqnarray*} \deg:\{\mbox{divisors on } {\mathbb K}\}\to {\mathbb R}_{>0}, \end{eqnarray*} the identity element is the class containing the principal divisors. The inverse of the class containing a divisor $D=\sum a_P\cdot P$, is the class containing the divisor \begin{eqnarray*} -D=\sum (-a_P)\cdot P. \end{eqnarray*} \bd \label{muldef} Let $D=\sum a_P\cdot P$ be a divisor on a global field ${\mathbb K}$. The {\em space of multiples of $D$} is the set \begin{eqnarray*} H^{0}(D)=\{\alpha\in{\mathbb K}; \varphi_P(\alpha)\leq N(P)^{a_P}, \mbox{ for all $P\in\sum_{\mathbb K}$}\}. \end{eqnarray*} We denote by $h^{0}(D)$ the cardinality of $H^{0}(D)$. \eop \ed \newpage \noindent \bth \label{rr} Let $D$ be a divisor on a global field ${\mathbb K}$. \\ $(i)$ There exists a divisor $\omega_{\mathbb K}$, depending only on ${\mathbb K}$, such that \begin{eqnarray*} \frac{1}{C(S_{1}({\mathbb K}),S_{2}({\mathbb K}))}\leq \frac{h^{0}(D)}{h^{0}(\omega_{\mathbb K}-D)}\cdot \frac{\sqrt{\deg \omega_{\mathbb K}}}{\deg D}\leq C(S_{1}({\mathbb K}),S_{2}({\mathbb K})), \end{eqnarray*} with \begin{eqnarray*} C(S_{1}({\mathbb K}),S_{2}({\mathbb K}))=\frac{6^{S_{1}({\mathbb K})+S_{2}({\mathbb K})}\cdot (S_{1}({\mathbb K})+S_{2}({\mathbb K}))!}{2^{S_{1}({\mathbb K})}\cdot (\pi/2)^{S_{2}({\mathbb K})}}. \end{eqnarray*} $(ii)$ There exists a function $i:\{\mbox{divisors on } {\mathbb K}\}\to {\mathbb R}$, such that $i(\cdot)\to 1$ when $\deg \cdot\to \infty$, and \begin{eqnarray*} \frac{h^{0}(D)}{i(D)}\cdot\frac{\sqrt{\deg \omega_{\mathbb K}}}{\deg D}=2^{S_{1}({\mathbb K})}\cdot (2\pi)^{S_{2}({\mathbb K})/2}. \end{eqnarray*} \eth \prf Assume first that $\chara({\mathbb K})=0$, and denote by ${\cal O}_{\mathbb K}$ the integral closure of ${\mathbb Z}$ in ${\mathbb K}$. Consider the ${\cal O}_{\mathbb K}$-module $\homo_{\mathbb Z}({\cal O}_{\mathbb K}, {\mathbb Z})$, metrized by defining \\$|\trace|_P=\log N(P)$, for $P\in \sum_{\mathbb K}^{\infty}$ (cf. \cite{GiSo}, \S$2.4$).
\\ If a divisor $\omega_{\mathbb K}$ on ${\mathbb K}$ is chosen such that \begin{eqnarray*} \deg\omega_{\mathbb K}=\frac{|\disc_{\mathbb K}|}{2^{S_{2}({\mathbb K})}}, \end{eqnarray*} the corresponding metrized ${\cal O}_{\mathbb K}$-module will be isometrically isomorphic to \\$\homo_{\mathbb Z}({\cal O}_{\mathbb K}, {\mathbb Z})$, metrized as above (cf. \cite{Neu}, Theorem $4.5$ in Chapter III). \\Hence one obtains $(i)$ from Theorem $2$ in \cite{GiSo}. However, note Remark \ref{correction} on the value of $C(S_{1}({\mathbb K}),S_{2}({\mathbb K}))$.\\ \\ For a divisor $D=\sum a_P\cdot P$, denote by $\chi(D)$ the Euler-Minkowski characteristic (cf. \cite{Neu}, Definition $3.1$ in \S $3$ of Chapter III) of the fractional ideal \begin{eqnarray*} \prod_{P\in\sum_{\mathbb K}\setminus\sum_{\mathbb K}^{\infty}}({\cal O}_{\mathbb K}\cap M_{P})^{-a_P}. \end{eqnarray*} Setting \begin{eqnarray*} i(D)=\frac{h^{0}(D)\cdot e^{-\chi(D)}}{2^{S_{1}({\mathbb K})}(\pi)^{S_{2}({\mathbb K})/2}}, \end{eqnarray*} one obtains $(ii)$ as a slight reformulation of Theorem $3.9$ (Chapter III, \S $3$) in \cite{Neu}.\\ \\ Now assume that $\chara({\mathbb K})>0$. In this case $S_{1}({\mathbb K})=S_{2}({\mathbb K})=0$, and \begin{eqnarray*} \log_q h^{0}(D)=\dim_{{\mathbb F}_q}H^{0}(D). \end{eqnarray*} \newpage \noindent Let $g_{\mathbb K}$ be the genus of the complete non-singular curve determined by ${\mathbb K}$, and choose $\omega_{\mathbb K}$ from the class of divisors of degree $q^{2g_{\mathbb K}-2}$. Then $(i)$ is a multiplicative formulation of the Riemann-Roch theorem (cf. \cite{Rose}, Theorem $5.4$ in Chapter $5$). Setting $i(D)=h^{0}(\omega_{\mathbb K}-D)$, $(ii)$ follows from $(i)$, since $h^{0}(\omega_{\mathbb K}-D)=1$ whenever $\deg D>\deg\omega_{\mathbb K}$ (cf. \cite{Rose}, Corollary $4$ in Chapter $5$). \eop \bnote \label{correction} We make a minor correction to the proof of Theorem $2$ in {\em \cite{GiSo}}. Numbers in bold-face refer to lines or pages in {\em \cite{GiSo}}. The quantity $C(r_1 , r_2 , N)$ is defined on line {\em (}{\bf 26}{\em )} {\em (}pg. {\bf 355}{\em )} as \begin{eqnarray*} -\log\mu(K^{*})+N (r_1+2r_2) \log (6), \end{eqnarray*} where $\mu(K^{*})$ is the euclidean volume of the set of $({\bf y}_i,{\bf z}_j)\in ({\mathbb R}^{N})^{r_1}\times ({\mathbb C}^{N})^{r_2}$ such that \begin{eqnarray*} \sum_{i=1}^{r_1}|{\bf y}_i| + 2\sum_{j=1}^{r_2}|{\bf z}_j|\leq 1. \end{eqnarray*} However, the value of $C(r_1,r_2,N)$ stated in Theorem $2$ in {\em \cite{GiSo}} is incorrect, due to a missing minus sign in the computation of $\mu(K^{*})$ on line {\em (}{\bf 24}{\em )} {\em (}pg. {\bf 355}{\em )}. The correct value is \begin{eqnarray*} C(r_1,r_2,N)&=&\log\left(\frac{6^{N(r_1+2r_2)}}{\mu(K^{*})}\right)\\ &=&\log\left(\frac{(N(r_1+2r_2))!\cdot 2^{2Nr_2}\cdot 6^{N(r_1+2r_2)}}{(V(B_N) N!)^{r_1}\cdot (V(B_{2N}) (2N)!)^{r_2}}\right). \end{eqnarray*} The value of $C(S_{1}({\mathbb K}),S_{2}({\mathbb K}))$ in Theorem \ref{rr} is simply $e^{C(r_1,r_2,1)}$. \eop \enote \section{A canonical divisor}\label{atrht} In this section, we describe a divisor $\omega_{\mathbb K}'$ that is determined by the global field ${\mathbb K}$, and show that Theorem \ref{rr} holds with $\omega_{\mathbb K}=\omega_{\mathbb K}'$. We also show that for a finite separable extension ${\mathbb L}/{\mathbb K}$ of global fields, the corresponding divisors $\omega_{{\mathbb L}}'$ and $\omega_{{\mathbb K}}'$ satisfy a Riemann-Hurwitz type formula (cf. 
Theorem \ref{rh}).\\ \\ We begin by considering a divisor that is determined by a finite separable extension of global fields. Recall the real numbers $r_Q$ from Definition \ref{rami}.\newpage \noindent \bd Let ${\mathbb L}/{\mathbb K}$ be a finite separable extension of global fields. The {\em ramification divisor} relative to the extension ${\mathbb L}/{\mathbb K}$ is the divisor \begin{eqnarray*} R_{{\mathbb L}/{\mathbb K}}=\sum_{Q\in \sum_{\mathbb L}}r_Q\cdot Q. \end{eqnarray*} \eop \ed If $\chara({\mathbb K})>0$, we denote by $P_0$ a fixed element of $\sum_{{\mathbb K}_0}\setminus\sum_{{\mathbb K}_0}^{\infty}$ such that $N(P_0)=q$. If $\chara({\mathbb K})=0$, we let $P_0$ be an arbitrary fixed element of $\sum_{{\mathbb K}_0}\setminus\sum_{{\mathbb K}_0}^{\infty}$. In both cases, we denote by $S_0$ the set of $P\in\sum_{\mathbb K}$ such that the restriction $\varphi_P|_{{\mathbb K}_0}$ is contained in $P_0$.\\ \\ If $\chara({\mathbb K})=0$, we choose in addition an element $P_{\infty}\in\sum_{\mathbb K}^{\infty}$, and set \begin{eqnarray*} a_{\infty} =\sum_{P\in S_0}2e_P\log_{N(P_{\infty})}N(P), \end{eqnarray*} where $e_P$ denotes the ramification index of $P$ relative to the extension ${\mathbb K}/{\mathbb K}_0$ (cf. Definition \ref{rami}).\\ \\ Recall that the extension ${\mathbb K}/{\mathbb K}_0$ is separable by definition (cf. Remark \ref{knoll}), and consider the divisor \begin{eqnarray*} \omega_{\mathbb K}'=R_{{\mathbb K}/{\mathbb K}_0}-\sum_{P\in S_0}2e_P\cdot P + a_{\infty}\cdot P_{\infty}. \end{eqnarray*} \bpr Theorem \ref{rr} holds with $\omega_{\mathbb K}=\omega_{\mathbb K}'$.\epr \prf It suffices to verify that \begin{eqnarray*} \deg \omega_{\mathbb K}'&=& \left\{ \begin{array}{lcl} |\disc_{\mathbb K}|\cdot 2^{-S_{2}({\mathbb K})}, &&\mbox{ if $\chara(\mathbb K)=0$, }\\ q^{2g_{\mathbb K}-2}, &&\mbox{ if $\chara({\mathbb K})>0$}. \end{array} \right . \end{eqnarray*} If $\chara({\mathbb K})=0$, one has \begin{eqnarray*} \deg{\omega_{\mathbb K}'}=e^{-S_{2}({\mathbb K})\log 2}\prod_{Q\in\sum_{\mathbb K}\setminus\sum_{\mathbb K}^{\infty}}N(Q)^{r_Q}=\frac{|\disc_{\mathbb K}|}{2^{S_{2}({\mathbb K})}}, \end{eqnarray*} where Remark \ref{discriminant} is used to obtain the last equality.\\ \\ If $\chara({\mathbb K})>0$, one has $g_{{\mathbb K}_0}=0$. Hence \begin{eqnarray*} \deg{R_{{\mathbb K}/{\mathbb K}_0}}=q^{2g_{\mathbb K}-2+2[{\mathbb K}:{\mathbb K}_0]}, \end{eqnarray*} by the Riemann-Hurwitz formula for function fields (cf. \cite{Rose}, Theorem $7.16$ in Chapter $7$). \\ \newpage \noindent Since the extension ${\mathbb K}/{\mathbb K}_0$ is separable, one has \begin{eqnarray*} \prod_{P|P_0} N(P)^{e_P}=q^{[{\mathbb K}:{\mathbb K}_0]}, \end{eqnarray*} for any choice of $P_0\in \sum_{{\mathbb K}_0}$ such that $N(P_0)=q$ (cf. \cite{Serr}, Proposition $10$ in \S$4$ of Chapter I). This completes the proof. \eop \bth \label{rh} If ${\mathbb L}/{\mathbb K}$ is a finite separable extension of global fields, and if ${\mathbb L}_0={\mathbb K}_0$, then \begin{eqnarray*} \deg{\omega_{\mathbb L}'}=\deg{\omega_{\mathbb K}'}^{[{\mathbb L}:{\mathbb K}]}\cdot\deg{R_{{\mathbb L}/{\mathbb K}}}. \end{eqnarray*} \eth \prf Assume first that $\chara({\mathbb K})=0$. Denote the restrictions of $R_{{\mathbb L}/{\mathbb K}}$, $\omega_{\mathbb L}'$ and $\omega_{\mathbb K}'$ to the archimedean classes by $R_{{\mathbb L}/{\mathbb K}}^{\infty}$, $\omega_{\mathbb L}^{\infty}$ and $\omega_{\mathbb K}^{\infty}$, respectively.
Note that by the construction of $R_{{\mathbb L}/{\mathbb K}}$, $\omega_{\mathbb L}'$ and $\omega_{\mathbb K}'$: \begin{eqnarray*} (i) &&\mbox{$\log_{1/4}\deg R_{{\mathbb L}/{\mathbb K}}^{\infty}$ is the number of elements $Q\in\sum_{\mathbb L}^{\infty}$ with}\\ &&\mbox{$\log N(Q)=2$ extending elements $P\in\sum_{\mathbb K}^{\infty}$ with $\log N(P)=1$,}\\ (ii) &&\mbox{${[{\mathbb L}:{\mathbb K}]}\cdot\log_{1/4}\deg \omega_{\mathbb K}^{\infty}$ is the number of elements $Q\in\sum_{\mathbb L}^{\infty}$ with}\\ &&\mbox{$\log N(Q)=2$ extending elements $P\in\sum_{\mathbb K}^{\infty}$ with $\log N(P)=2$,}\\ (iii) &&\mbox{$\log_{1/4}\deg \omega_{\mathbb L}^{\infty}$ is the total number of elements $Q\in\sum_{\mathbb L}^{\infty}$ with}\\&&\mbox{$\log N(Q)=2$.} \end{eqnarray*} From these remarks, we obtain the equality \begin{eqnarray*} \deg{\omega_{\mathbb L}^{\infty}}=\deg{\omega_{\mathbb K}^{\infty}}^{[{\mathbb L}:{\mathbb K}]}\cdot\deg{R_{{\mathbb L}/{\mathbb K}}^{\infty}}. \end{eqnarray*} The corresponding equality for $R_{{\mathbb L}/{\mathbb K}}-R_{{\mathbb L}/{\mathbb K}}^{\infty}$, $\omega_{\mathbb L}'-\omega_{\mathbb L}^{\infty}$ and $\omega_{\mathbb K}'-\omega_{\mathbb K}^{\infty}$ follows from the transitivity of the different in a tower of finite separable extensions of fields (cf. \cite{Serr}, Proposition $8$ in \S $4$ of Chapter III).\\ \\ When $\chara({\mathbb K})>0$, the statement in the theorem is a multiplicative formulation of the Riemann-Hurwitz formula for function fields (cf. \cite{Rose}, Theorem $7.16$ in Chapter $7$). \eop \newpage \noindent
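For concreteness, we verify Theorem \ref{rh} in the simplest non-trivial case (a routine computation, included here as a sanity check): ${\mathbb K}={\mathbb Q}$ and ${\mathbb L}={\mathbb Q}({\rm i})$, so that $[{\mathbb L}:{\mathbb K}]=2$, $|\disc_{\mathbb L}|=4$ and $S_{2}({\mathbb L})=2$. The different of ${\mathbb Z}[{\rm i}]$ over ${\mathbb Z}$ is the ideal $(2)=\left((1+{\rm i})^{2}\right)$, so the class $Q_{2}$ above $2$ has $r_{Q_{2}}=2$ and $N(Q_{2})=2$, contributing $N(Q_{2})^{r_{Q_{2}}}=4$ to $\deg R_{{\mathbb L}/{\mathbb K}}$. The unique complex class $Q_{\infty}$ extends the real class of ${\mathbb Q}$ with $e_{Q_{\infty}}=2$, so $r_{Q_{\infty}}=-\log\log N(Q_{\infty})=-\log 2$ and $N(Q_{\infty})^{r_{Q_{\infty}}}=e^{-2\log 2}=1/4$, while $r_{Q}=0$ for all other classes. Hence \begin{eqnarray*} \deg R_{{\mathbb L}/{\mathbb K}}=4\cdot\frac{1}{4}=1, \qquad \deg \omega_{\mathbb L}'=\frac{|\disc_{\mathbb L}|}{2^{S_{2}({\mathbb L})}}=1, \qquad \deg \omega_{\mathbb K}'=\frac{|\disc_{\mathbb K}|}{2^{S_{2}({\mathbb K})}}=1, \end{eqnarray*} and indeed $\deg{\omega_{\mathbb L}'}=\deg{\omega_{\mathbb K}'}^{[{\mathbb L}:{\mathbb K}]}\cdot\deg{R_{{\mathbb L}/{\mathbb K}}}=1$.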
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sec:Introduction}Introduction} Informally, the Kolmogorov complexity of a finite binary string is the length of the shortest string from which the original can be losslessly reconstructed by an effective general-purpose computer such as a particular universal Turing machine $U$. Hence it constitutes a lower bound on the length to which a lossless compression program can compress the string. Formally, the {\em conditional Kolmogorov complexity} $C(x|y)$ is the length of the shortest input $z$ such that the universal Turing machine $U$ on input $z$ with auxiliary information $y$ outputs $x$. The {\em unconditional Kolmogorov complexity} $C(x)$ is defined by $C(x|\epsilon)$ where $\epsilon$ is the empty string (of length 0). Let $t$ be a total recursive function. Then, the {\em time-bounded conditional Kolmogorov complexity} $C^t(x|y)$ is the length of the shortest input $z$ such that the universal Turing machine $U$ on input $z$ with auxiliary information $y$ outputs $x$ within $t(n)$ steps, where $n$ is the length in bits of $x$. The {\em time-bounded unconditional Kolmogorov complexity} $C^t(x)$ is defined by $C^t(x|\epsilon)$. For an introduction to the definitions and notions of Kolmogorov complexity (algorithmic information theory) see~\cite{LiVi08}. \subsection{Related Work}\label{sect.related} Already in 1968, J. Barzdins~\cite{Barzdins1968} obtained a result known as {\em Barzdins's lemma}, probably the first result in resource-bounded Kolmogorov complexity, of which the lemma below quotes the items that are relevant here. Let $\chi$ denote the characteristic sequence of an arbitrary recursively enumerable (r.e.) subset $A$ of the natural numbers. That is, $\chi$ is an infinite sequence $\chi_{1}\chi_{2}\ldots$ where bit $\chi_{i}$ equals $1$ if and only if $i\in A$. Let $\chi_{1:n}$ denote the first $n$ bits of $\chi$, and let $C(\chi_{1:n}| n)$ denote the conditional Kolmogorov complexity of $\chi_{1:n}$, given the number $n$. \begin{lemma}\label{lem.barzdins} {\rm (i)} For every characteristic sequence $\chi$ of a r.e. set $A$ there exists a constant $c$ such that for all $n$ we have $C(\chi_{1:n}| n)\leq \log n+ c$. {\rm (ii)} There exists a r.e. set $A$ with characteristic sequence $\chi$ such that for every total recursive function $t$ there is a constant $c_t$ with $0<c_{t}<1$ such that for all $n$ we have $C^{t}(\chi_{1:n}| n)\geq c_{t} n$. \end{lemma} Barzdins actually proved this statement in terms of D.W. Loveland's version of Kolmogorov complexity \cite{Lo69}, which is a slightly different setting. He also proved that there is a r.e. set such that its characteristic sequence $\chi=\chi_{1}\chi_{2}\ldots $ satisfies $C(\chi_{1:n}) \geq \log n$ for every $n$. Kummer \cite{Ku96}, Theorem 3.1, solving the open problem in Exercise 2.59 of the first edition of \cite{LiVi08}, proved that there exists a r.e. set such that its characteristic sequence $\zeta=\zeta_{1}\zeta_{2}\ldots$ satisfies $C(\zeta_{1:n}) \geq 2 \log n -c$ for some constant $c$ and infinitely many $n$. The converse of item (i) does not hold. To see this, consider a sequence $\chi= \chi_{1}\chi_{2}\ldots $ and a constant $c' \geq 2$, such that for every $n$ we have $C(\chi_{1:n}| n)\geq n-c' \log n$. By item (i), $\chi$ cannot be the characteristic sequence of a r.e. set. Transform $\chi$ into a new sequence $\zeta=\chi_{1}\alpha_{1}\chi_{2}\alpha_{2}\ldots$ with $\alpha_{i}=0^{2^{i}}$, a string of $0$s of length $2^{i}$. While obviously $\zeta$ cannot be the characteristic sequence of a r.e.
set, there is a constant $c$ such that for every $n$ we have that $C(\zeta_{1:n}| n)\leq \log n+ c$. Item (i) is easy to prove and item (ii) is hard to prove. Putting items (i) and (ii) together, there is a characteristic sequence $\chi$ of a r.e. set $A$ whose initial segments are both logarithmically compressible and time-bounded linearly incompressible, for every total recursive time bound. Below, we identify the natural numbers with finite binary strings according to the pairing $ ( \epsilon , 0), (0,1), (1,2), (00,3), (01,4), \ldots , $ where $\epsilon$ again denotes the empty string. \subsection{Present Results} \begin{theorem} Let $k_0, k_1$ be positive integer constants and $t$ a total recursive function. {\rm (i)} A constant fraction of all strings $x$ of length $n$ with $C(x|n) \leq k_0 \log n$ satisfies $C^{t}(x| n)\geq n-k_{1}$. {\rm (}Lemma~\ref{thm:tincompressible}{\rm )}. {\rm (ii)} Let $t(n) \geq cn$ for $c > 1$ sufficiently large. A constant fraction of all strings $x$ of length $n$ with $C(x|n) \leq k_0 \log n$ satisfies $C^{t}(x| n) \leq k_0 \log n$ {\rm (}Lemma~\ref{thm:tcompressible}{\rm )}. {\rm (iii)} There exist uncountably many {\rm (}actually $2^{\aleph_0}${\rm )} infinite binary sequences $\omega$ such that $C(\omega_{1:n}|n) \leq \log n$ and $C^t(\omega_{1:n}|n) \geq \frac{1}{4} n-\log n$ for every $n$; moreover, there exists a countably infinite number of {\rm (}that is $\aleph_0${\rm )} recursive infinite binary sequences $\omega$ {\rm (}hence $C(\omega_{1:n}|n)=O(1)${\rm )} such that $C^t(\omega_{1:n}|n) \geq \frac{1}{4} n-\log n$ for every $n$ {\rm (}Lemma~\ref{lem.infinite}{\rm )}. \end{theorem} Note that the order of quantification in Barzdins's lemma is ``there exists a r.e. set such that for every total recursive function $t$ there exists a constant $c_t$.'' In contrast, in item (iii) we prove ``there is a positive constant such that for every total recursive function $t$ there is a sequence $\omega$.'' While Barzdins's lemma proves the existence of a single characteristic sequence of a r.e. set that is time-bounded linearly incompressible, in item (iii) we prove the existence of uncountably many sequences that are logarithmically compressible over the initial segments, and the existence of a countably infinite number of recursive sequences, such that all those sequences are time-bounded linearly incompressible. We generalize item (i) in Corollaries~\ref{cor:cortincompressible} and \ref{cor.general}. Section~\ref{sec:Preliminaries} presents preliminaries. Section~\ref{sec:Main-Result} gives the results on finite strings. Section~\ref{sect.infinite} gives the results on infinite sequences. Finally, conclusions are presented in Section~\ref{sec:Conclusions}. The proofs for the results are different from Barzdins's proofs. \section{\label{sec:Preliminaries}Preliminaries} A (binary) program is a concatenation of instructions, and an instruction is merely a string. Hence, we may view a program as a string. A program and a Turing machine (or machine for short) are used synonymously. The length in bits of a string $x$ is denoted by $|x|$. If $m$ is a natural number, then $|m|$ is the length in bits of the $m$th binary string in length-increasing lexicographic order, starting with the empty string $\epsilon$. We also use the notation $|S|$ to denote the cardinality of a set $S$.
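To make this pairing concrete (a routine observation, spelled out as an example): the strings of length $\ell$ represent the numbers $2^{\ell}-1, \ldots , 2^{\ell+1}-2$, so that, with $\log$ denoting the binary logarithm as usual in this setting, \[ |m| = \lfloor \log (m+1) \rfloor . \] For instance, $m=4$ corresponds to the string $01$, and indeed $|4|=2$.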
Consider a standard enumeration of all Turing machines $T_{1}$, $T_{2}$, $\ldots .$ Let $U$ denote a universal Turing machine such that for every $y\in\{0,1\}^{*}$ and $i\geq1$ we have $U(i,y)=T_{i}(y)$. That is, for all finite binary strings $y$ and every machine index $i\geq1$, we have that $U$'s execution on inputs $i$ and $y$ results in the same output as that obtained by executing $T_{i}$ on input $y$. Let $t$ be a total recursive function. Fix $U$ and define that $C(x| y)$ equals $\min\{ |p|: p\in\{0,1\}^{*}\:\textrm{and}\: U(p,y)=x\}$. For the same fixed $U$, define that $C^{t}(x| y)$ equals $\min\{ |p|:\, p\in\{0,1\}^{*}\:\textrm{and}\; U(p,y)=x\;\textrm{in $t(|x|)$ steps}\} $. (By definition, the sets over which the minimum is taken are countable and nonempty.) \section{Finite Strings}\label{sec:Main-Result} \begin{lemma} \label{thm:tincompressible} Let $k_0,k_1$ be positive integer constants and $t$ be a total recursive function. There is a positive constant $c_t$ such that for sufficiently large $n$ the strings $x$ of length $n$ satisfying $C^{t}(x| n)\geq n-k_{1}$ form at least a $c_t$-fraction of the strings $y$ of length $n$ satisfying $C(y|n) \leq k_0 \log n$. \end{lemma} \begin{proof} The proof is by diagonalization. We use the following algorithm with inputs $t,n,k_1$ and a natural number $m$. {\bf Algorithm} ${\cal A}(t,n,k_1,m)$ {\bf Step 1.} Using the universal reference Turing machine $U$, recursively enumerate a finite list of all binary programs $p$ of length $|p|< n-k_{1}$. There are at most $2^{n}/2^{k_{1}}-1$ such programs. Execute each of these programs on input $n$. Consider the set of all programs that halt within $t(n)$ steps and which output precisely $n$ bits. Call the set of these outputs $B$. Note that $|B| \leq 2^{n}/2^{k_{1}}-1$ and it can be computed in time $O(2^{n} t(n)/2^{k_{1}})$. {\bf Step 2.} Output the $(m+1)$th string of length $n$, say $x$, in the lexicographic order of all strings in $\{0,1\}^n \setminus B$ and halt. If there is no such string then halt with output $\perp$. {\bf End of Algorithm} Because of the selection process in Step 1, $|\{0,1\}^n \setminus B| \geq 2^n - 2^n/2^{k_1}+1$ and every $x \in \{0,1\}^n \setminus B$ has time-bounded complexity \begin{equation}\label{eq.gt} C^t(x|n) \geq n-k_1. \end{equation} For $|m| \leq k_0 \log n -c$, where the constant $c$ is defined below, and provided $\{0,1\}^n \setminus B$ is sufficiently large, that is, \begin{equation}\label{eq.eq} n^{k_0}/2^c \leq 2^n \left(1-\frac{1}{2^{k_{1}}}\right)+1, \end{equation} there are at least $n^{k_0}/2^c$ strings $x$ of length $n$ that will be output by the algorithm. Call this set $D$. Each string $x \in D$ satisfies \begin{equation}\label{eq.lt} C(x|t,n,k_1,{\cal A},p) \leq |m| \leq k_0 \log n -c. \end{equation} Since we can describe the fixed $t,k_0,k_1,{\cal A}$, a program $p$ to reconstruct $x$ from these data, and the means to tell them apart, in an additional constant number of bits, say $c$ bits (in this way the quantity $c$ can be deduced from the conditional), it follows that $ C(x|n) \leq k_0 \log n $. For given $k_0,k_1$, and $c$, inequality (\ref{eq.eq}) holds for every sufficiently large $n$. For such sufficiently large $n$, the cardinality of the set of strings of length $n$ satisfying both $C(x|n) \leq k_0 \log n$ and $C^t(x|n) \geq n-k_1$ is at least $ |D| \geq n^{k_0}/2^c. $ Since the number of strings $x$ of length $n$ satisfying $C(x|n) \leq k_0 \log n$ is at most $\sum_{i=0}^{k_0 \log n} 2^i < 2n^{k_0}$, the lemma follows with $c_t = 1/2^{c+1}$.
\end{proof} \begin{corollary} \label{cor:cortincompressible} \rm Let $k_0$ be a positive integer constant and $t$ be a total recursive function. For every sufficiently large natural number $n$, the set of strings $x$ of length $n$ such that $C^t(x|n) > k_0 \log n$ is a positive constant fraction of the strings $y$ of length $n$ satisfying $C(y|n) \leq k_0 \log n$. \end{corollary} We can generalize Lemma~\ref{thm:tincompressible}. Let $t$ be a total recursive function, and $f,g$ be total recursive functions such that (\ref{eq.functions}) below is satisfied. \begin{corollary}\label{cor.general} \rm For every sufficiently large natural number $n$, the set of strings $x$ of length $n$ that satisfy both $C(x|n) \leq f(n)$ and $C^t(x|n) \geq g(n)$ is a positive constant fraction of the strings $y$ of length $n$ satisfying $C(y|n) \leq f(n)$. \end{corollary} \begin{proof} Use a similar algorithm ${\cal A}(t,n,g,m)$ with $|p| < g(n)$ in Step 1, and $|m| \leq f(n)- c$ in the analysis. Require \begin{equation}\label{eq.functions} 2^{f(n)-c} \leq 2^n - 2^{g(n)}+1. \end{equation} \end{proof} \begin{lemma} \label{thm:tcompressible} Let $t$ be a total recursive function with $t(n) \geq cn$ for some $c > 1$ and $k_0$ be a positive integer constant. There is a positive constant $c_t$ such that for every sufficiently large natural number $n$, the set of strings $x$ of length $n$ satisfying $C^t(x|n) \leq k_0 \log n$ is at least a $c_t$-fraction of the set of strings $y$ of length $n$ satisfying $C(y|n) \leq k_0 \log n$. \end{lemma} \begin{proof} We use the following algorithm that takes positive integers $n,m$ as inputs and computes a string $x$ of length $n$ satisfying $C^t(x|n) \leq k_0 \log n$. {\bf Algorithm} ${\cal B}(n,m)$ Output the string $0^{n-|m+1|} (m +1)$ (where $|m+1|$ is the length of the string representation of $m+1$) and halt. {\bf End of Algorithm} Let $k_0$ be a positive integer and $c$ a positive integer constant chosen below. Consider strings $x$ that are output by algorithm ${\cal B}$ and that satisfy $C^t(x|n,{\cal B},p) \leq |m| \leq k_0 \log n -c$ with $c$ the number of bits to contain descriptions of ${\cal B}$ and $k_0$, a program $p$ to reconstruct $x$ from these data, and the means to tell the constituent items apart. Hence, $C^t(x|n) \leq k_0 \log n$. The running time of algorithm ${\cal B}$ is $O(n)$, since the output strings are of length $n$ and to output the $m$th string with $m \leq 2^{k_0 \log n -c}$ we simply take the binary representation of $m$ and pad it with nonsignificant 0s to length $n$. Obviously, the strings that satisfy $C^t(x|n)\leq k_0 \log n$ are a subset of the strings that satisfy $C(x|n)\leq k_0 \log n$. There are at least $n^{k_0}/2^c$ strings of the first kind while there are at most $2n^{k_0}$ strings of the second kind. Setting $c_t = 1/2^{c+1}$ finishes the proof. \end{proof} It is well known that if we flip a fair coin $n$ times, that is, given $n$ random bits, then we obtain a string $x$ of length $n$ with Kolmogorov complexity $C(x|n) \geq n-c$ with probability at least $1- 2^{-c}$. Such a string $x$ is algorithmically random. We can also get by with fewer random bits to obtain resource-bounded algorithmic randomness from compressible strings. \begin{lemma} Let $a,b$ be constants as in the proof below.
Given the set of strings $x$ of length $n$ satisfying $C(x|n) \leq k_0 \log n$, a total recursive function $t$, the constant $k_1$ as before, and $O(ab \log n)$ fair coin flips, we obtain a set of $O(ab)$ strings of length $n$ such that with probability at least $1-1/2^b$ one string $x$ in this set satisfies $C^t(x|n) \geq n-k_1$. \end{lemma} \begin{proof} By Lemma~\ref{thm:tincompressible}, a $c_t$-fraction of the set $A$ of strings $x$ of length $n$ that have $C(x|n) \leq k_0 \log n$ also have $C^t(x|n) \geq n-k_1$. Therefore, by choosing, uniformly at random, a constant number $a$ of strings from the set $A$ we increase (e.g. by means of a Chernoff bound \cite{LiVi08}) the probability that (at least) one of those strings cannot be compressed below $n-k_1$ in time $t(n)$ to at least $\frac{1}{2}$. To choose any one string from $A$ requires $O(\log n)$ random bits by dividing $A$ into two equal-size parts and repeating this with the chosen half, and so on. The selected $a$ elements take $O(a \log n)$ random bits. Applying the previous step $b$ times, the probability that at least one of the $ab$ chosen strings cannot be compressed below $n-k_1$ bits in time $t(n)$ is at least $1- 1/2^{b}$. \end{proof} \section{From Finite Strings to Infinite Sequences}\label{sect.infinite} We prove a result reminiscent of Barzdins's lemma, Lemma~\ref{lem.barzdins}. In Barzdins's version, characteristic sequences $\omega$ of r.e. sets are considered which by Lemma~\ref{lem.barzdins} have complexity $C(\omega_{1:n}|n) \leq \log n +c$. Here, we consider a wider class of sequences whose initial segments are logarithmically compressible (such sequences are not necessarily characteristic sequences of r.e. sets as explained in Section~\ref{sect.related}). \begin{lemma}\label{lem.infinite} Let $t$ be a total recursive function. {\rm (i)} There are uncountably many {\rm (}actually $2^{\aleph_0}${\rm )} sequences $\omega = \omega_1 \omega_2 \ldots$ such that both $C(\omega_{1:n}| n) \leq \log n$ and $C^{t}(\omega_{1:n}| n) \geq \frac{1}{4}n-\log n$ for every $n$. {\rm (ii)} The set in item {\rm (i)} contains a countably infinite number of {\rm (}that is $\aleph_0${\rm )} recursive sequences $\omega = \omega_1 \omega_2 \ldots$ such that $C^{t}(\omega_{1:n}| n) \geq \frac{1}{4}n-\log n$ for every $n$. \end{lemma} \begin{proof} (i) Let $g(n)=\frac{1}{2}n- \log n$. Let $c \geq 2$ be a constant to be chosen later, $m_i = c2^i$, $B(i),C(i),D(i) \subseteq \{0,1\}^{m_i}$ for $i=0,1, \ldots$, and $C(-1)= \{\epsilon\}$. The $C$ sets are constructed so that they contain the target strings in the form of a binary tree, where $C(i)$ contains all target strings of length $m_i$. The $B(i)$ sets correspond to forbidden prefixes of length $m_i$. The $D(i)$ sets consist of the set of strings of length $m_i$ with prefixes in $C(i-1)$ from which the strings in $C(i)$ are selected. {\bf Algorithm} ${\cal C}(t,g)$: {\bf for} $i :=0,1, \ldots $ {\bf do} {\bf Step 1.} Using the universal reference Turing machine $U$, recursively enumerate the finite list of all binary programs $p$ of length $|p|< g(m_i)$ with $m_i = c2^i$ and the constant $c$ defined below. There are at most $2^{g(m_i)}-1$ such programs. Execute each of these programs on all inputs $m_i+j$ with $0 \leq j < m_i$.
Consider the set of all programs with input $m_i+j$ that halt with output $x=yz$ within $t(|x|)$ steps, with $|x|=m_i+j$, $y \in C(i-1)$ (then $|y| = m_{i-1}$ for $i > 0$ and $|y|=0$ for $i=0$), and $z$ a binary string such that $x$ satisfies $m_{i} \leq |x| < m_{i+1}$. There are at most $m_i (2^{g(m_i)}-1)$ such $x$'s. Let $B(i)$ be the set of the $m_i$-length prefixes of these $x$'s. Then, $|B(i)| \leq m_i (2^{g(m_i)}-1)$ and it can be computed in time $O(m_i2^{g(m_i)} t(m_{i+1}))$. Note that if $u \in \{0,1\}^{m_i} \setminus B(i)$ then $C^t(uw| \; |uw|) \geq g(|u|)$ for every $w$ such that $|uw| < m_{i+1}$. {\bf Step 2.} Let $C(i-1)=\{x_1,x_2, \ldots ,x_h\}$ and $D(i)= (C(i-1)\{0,1\}^* \bigcap \{0,1\}^{m_{i}}) \setminus B(i)$. {\bf for} $l:=1, \ldots ,h$ {\bf do} {\bf for} $k:=0,1$ {\bf do} put the $k$th string with initial segment $x_l$, in the lexicographic order of $D(i)$, in $C(i)$. If there is no such string then halt with output $\perp$. {\bf od} {\bf od} {\bf od} {\bf End of Algorithm} Clearly, $C(i) \{0,1\}^* \subseteq C(i-1)\{0,1\}^*$ for every $i=0,1, \ldots .$ Therefore, if \begin{equation}\label{eq.inter} \bigcap_{i=0}^{\infty} C(i) \{0,1\}^{\infty} \neq \emptyset, \end{equation} then the elements of this intersection constitute the infinite sequences $\omega$ in the statement of the lemma. \begin{claim}\label{claim} \rm With $g(m_i) = \frac{1}{2} m_i - \log m_i$, we have $|C(i)| = 2^{i+1}$ for $i=0,1, \ldots$. \end{claim} \begin{proof} The proof is by induction. Recall that $m_i=c 2^i$ with the constant $c \geq 2$. {\em Base case}: $|C(0)|=2$ since $C(-1) = \{\epsilon\}$ and $|D(0)| \geq 2^{m_0} - m_0 (2^{g(m_0)} -1) \geq 2$. {\em Induction}: Assume that the lemma is true for every $0 \leq j <i$. Then, every string in $C(i-1)$ has two extensions in $C(i)$, since for every string in $C(i-1)$ there are $2^{m_i - m_{i-1}}$ extensions available of which at most $|B(i)| \leq m_i (2^{g(m_i)}-1)$ are forbidden. Namely, $2^{m_i - m_{i-1}} - |B(i)| \geq 2^{m_i/2} - 2^{g(m_i) + \log m_i} + m_i \geq 2$. Hence it follows that the binary $k$-choice can always be made in Step 2 of the algorithm for every $l$. Therefore $|C(i)| = 2^{i+1}$. \end{proof} Let a constant $c_1$ account for the constant number of bits to specify the functions $t,g$, the algorithm ${\cal C}$, and a reconstruction program that executes the following: We can specify every initial $m_i$-length segment of a particular $\omega$ in the set on the lefthand side of (\ref{eq.inter}) by running the algorithm ${\cal C}$ using the data represented by the $c_1$ bits, $m_i$, and the indexes $k_j\in\{0,1\}$ of the strings in $D(j)$ with initial segment in $C(j-1)$, $0 \leq j \leq i$, that form a prefix of $\omega$. Therefore, \[ C(\omega_{1:m_i}|m_i) \leq c_1 +i+1. \] Setting $c=2^{c_1+1}$ yields $C(\omega_{1:m_i}|m_i) \leq \log c +i = \log m_i $. By the choice of $B(i)$ in the algorithm we know that $C^t(\omega_{1:m_i+j}|m_i+j) \geq g(m_i)$ for every $j$ satisfying $0 \leq j < m_i$. Because $2m_i = m_{i+1}$, for every $n$ satisfying $m_i \leq n < m_{i+1}$, $C^t (\omega_{1:n}|n) \geq \frac{1}{2}m_i - \log m_i \geq \frac{1}{4}n - \log n$. Since this holds for every $i=0,1, \ldots ,$ item (i) is proven with $C^t(\omega_{1:n}|n) \geq \frac{1}{4}n -\log n$ for every $n$. The number of $\omega$'s concerned equals the number of paths in an infinite complete binary tree, that is, $2^{\aleph_0}$. (ii) This is the same as item (i) except that we always take, for example, $k_i=0$ (no binary choice) in Step 2 of the algorithm.
In fact, we can specify an arbitrary computable 0--1 valued function to choose the $k_i$'s. There is a countably infinite number of (that is $\aleph_0$) such functions. The specification of every such function $\phi$ takes $C(\phi)$ bits. Hence we do not have to specify the successive $k_i$ bits, and $C(\omega_{1:n}|n) = c_1 +1+C(\phi)=O(1)$ with $c_1$ the constant in the proof of item (i). Trivially, still $C^t(\omega_{1:m_i+j}|m_i+j) \geq g(m_i)$ for every $j$ satisfying $0 \leq j < m_i$. Since this holds for every $i=0,1, \ldots ,$ item (ii) follows as in the proof of item (i). \end{proof} \section{\label{sec:Conclusions}Conclusions} We have proved the items promised in the abstract. In Lemma~\ref{lem.infinite} we iterated the proof method of Lemma~\ref{thm:tincompressible} to prove a result which is reminiscent of Barzdins's lemma (Lemma~\ref{lem.barzdins}), relating compressibility and time-bounded incompressibility of infinite sequences in another manner. Alternatively, we could have studied space-bounded incompressibility. It is easily verified that the results also hold when the time bound $t$ is replaced by a space bound $s$ and the time-bounded Kolmogorov complexity is replaced by space-bounded Kolmogorov complexity. \section*{Acknowledgement} We thank the referees for their comments and references, for pointing out an error in the original proof of Lemma~\ref{thm:tincompressible}, and for noting that the argument used there is both independent of and close to that used to prove Theorem 3.2 in \cite{AFMV}.
{ "redpajama_set_name": "RedPajamaArXiv" }